id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2309.11804 | FGFusion: Fine-Grained Lidar-Camera Fusion for 3D Object Detection | Lidars and cameras are critical sensors that provide complementary
information for 3D detection in autonomous driving. While most prevalent
methods progressively downscale the 3D point clouds and camera images and then
fuse the high-level features, the downscaled features inevitably lose low-level
detailed information. In this paper, we propose Fine-Grained Lidar-Camera
Fusion (FGFusion) that makes full use of multi-scale features of image and point
cloud and fuse them in a fine-grained way. First, we design a dual pathway
hierarchy structure to extract both high-level semantic and low-level detailed
features of the image. Second, an auxiliary network is introduced to guide
point cloud features to better learn the fine-grained spatial information.
Finally, we propose multi-scale fusion (MSF) to fuse the last N feature maps of
image and point cloud. Extensive experiments on two popular autonomous driving
benchmarks, i.e. KITTI and Waymo, demonstrate the effectiveness of our method. | Zixuan Yin, Han Sun, Ningzhong Liu, Huiyu Zhou, Jiaquan Shen | 2023-09-21T06:24:59Z | http://arxiv.org/abs/2309.11804v1 | # FGFusion: Fine-Grained Lidar-Camera Fusion for 3D Object Detection
###### Abstract
Lidars and cameras are critical sensors that provide complementary information for 3D detection in autonomous driving. While most prevalent methods progressively downscale the 3D point clouds and camera images and then fuse the high-level features, the downscaled features inevitably lose low-level detailed information. In this paper, we propose Fine-Grained Lidar-Camera Fusion (FGFusion) that makes full use of multi-scale features of image and point cloud and fuses them in a fine-grained way. First, we design a dual pathway hierarchy structure to extract both high-level semantic and low-level detailed features of the image. Second, an auxiliary network is introduced to guide point cloud features to better learn the fine-grained spatial information. Finally, we propose multi-scale fusion (MSF) to fuse the last N feature maps of image and point cloud. Extensive experiments on two popular autonomous driving benchmarks, i.e. KITTI and Waymo, demonstrate the effectiveness of our method.
Keywords: Lidar-Camera Fusion, Fine-grained Fusion, Multi-scale Feature, Attention Pyramid.
## 1 Introduction
3D object detection is a crucial task in autonomous driving[1, 8]. In recent years, lidar-only methods have made significant progress in this field. However, relying solely on point cloud data is insufficient because lidar only provides low-resolution shape and depth information. Researchers therefore hope to leverage multiple modalities of data to improve detection accuracy. Among these modalities, vehicle-mounted cameras provide high-resolution shape and texture information that is complementary to lidar, so the fusion of point cloud data with RGB images has become a research hotspot.
In the early stages of fusion method research, researchers naturally assumed that the performance of fusion methods would be better than that of lidar-only methods, because the essence of fusion methods is to add RGB information as an auxiliary to lidar-only methods. Therefore, the performance of the model should be at least as good as before, rather than declining[22]. However, this is not always the case.
There are two reasons for the performance decline: 1) a suitable method for aligning the data of the two modalities has not yet been found, and 2) the features of the two modalities used in the fusion are too coarse. Regarding the first issue, fusion methods have evolved from the initial post-fusion[3, 10] and point-level fusion[22, 23] methods to today's more advanced feature fusion[2, 12, 24] methods. However, the second problem has not yet been solved. Specifically, lidar-only methods are mainly divided into one-stage methods[31, 14, 26] and two-stage methods[4, 18, 19, 20]. Usually, the performance of two-stage methods is better than that of one-stage methods because the features extracted by the first stage can be refined in the second stage. However, most current fusion methods focus on how to fuse features more effectively and ignore the process of refining the fused features.
To solve the above problems, we utilize fine-grained features to improve the model accuracy and propose an efficient multi-modal fusion strategy called FGFusion. Specifically, since both image and point cloud data inevitably lose detailed features and spatial information during the downscaling process, we design different feature refinement schemes for the two modalities. First, for image data, we exploit a dual-path pyramid structure, designing a top-down feature path and a bottom-up attention path to better fuse high-level and low-level features. For point cloud data, inspired by SASSD[7], we construct an auxiliary network with point-level supervision to guide the intermediate features from different stages of the 3D backbone to learn the fine-grained spatial structures of point clouds. In the fusion stage, we select the same number of feature maps from the feature pyramids of images and point clouds respectively, and fuse them by cross-attention. The fused feature pyramids can then be passed into modern task prediction head architectures[28, 2].
In brief, our contributions can be summarized as follows:
* We design different feature refinement schemes for camera image and point cloud data, in order to fuse high-level abstract semantic information and low-level detailed features.
* We design a multi-level fusion strategy for point clouds and images, which fully utilizes the feature pyramids of the two modalities in the fusion stage to improve the model accuracy.
* We verify our method on two mainstream autonomous driving point cloud datasets (KITTI and Waymo), and the experimental results prove the effectiveness of our method.
## 2 Related Work
### LiDAR-only 3D Detection
Lidar-only methods are mainly divided into point-based methods and voxel-based methods. Among them, point-based methods such as PointNet[16] and PointNet++[17] are the earliest neural networks directly applied to point clouds. They directly process unordered raw point clouds and extract local features
through max-pooling. Based on their work, voxel-based and pillar-based methods have been derived. They transform the original point cloud into a Euclidean feature space and then use standard 2D or 3D convolution to calculate the features of the BEV plane. Representative methods include VoxelNet[31], SECOND[25], PointPillars[11], etc.
The development of lidar-only methods subsequently follows two different trends. Like 2D object detection, they are divided into one-stage and two-stage methods. One-stage methods[31, 14, 26] directly regress category scores and bounding boxes in a single stage; the network is relatively simple and has fast inference speed. Two-stage methods[4, 18, 19, 20] usually generate region proposals in the first stage and then refine them in the second stage. The accuracy of two-stage methods is usually higher than that of one-stage methods, because the second stage can capture more detailed and distinctive features, but the cost is a more complex network structure and higher computational cost.
### Fusion-based 3D Detection
Because point cloud data is sparse and contains only spatial structural information, researchers have proposed to complement point clouds with RGB images. Early methods[3, 15] use result-level or proposal-level post-fusion strategies, but the fusion granularity is too coarse, resulting in performance inferior to that of lidar-only methods.
PointPainting[22] is the first to utilize the hard correlation between LiDAR points and image pixels for fusion. It projects the point clouds onto the images through a calibration matrix and enhances each LiDAR point with the semantic segmentation score of the image. PointAugmenting[23] builds on PointPainting and proposes using features extracted from 2D object detection networks instead of semantic segmentation scores to enhance LiDAR points. Feature-level fusion methods point out that the hard association between points and pixels established by the calibration matrix is unreliable. DeepFusion[12] uses cross-attention to fuse point cloud features and image features. TransFusion[2] uses the prediction of point cloud features as a query for image features and then uses a transformer-like architecture to fuse features.
It can be seen that whether using semantic segmentation scores or image features obtained from pre-trained networks, or directly querying and fusing at the feature level, these methods essentially fuse only the high-level features with the richest semantic information while ignoring low-level detailed information.
## 3 FGFusion
### Motivations and Pipeline
The previous fusion methods only exploit high-level features, ignoring the important fact that detailed feature representations are lost in the downsampling process. For example, PointPainting[22] directly makes use of pixel-wise semantic segmentation scores as image features to decorate point cloud data, which
only uses the result of the last feature map and ignores multi-scale information. PointAugmenting[23] utilizes the last feature map with the richest semantic information to decorate point cloud data, but discards all the others that contain low-level detailed information. DeepFusion[12] is a feature-level fusion method, which improves the accuracy compared to point-level fusion methods such as PointAugmenting, but the essence is the same, as shown in Fig. 1.
We notice that in some 2D object detection tasks, such as small object detection and fine-grained image recognition, multi-scale techniques are often used to extract fine-grained features. In 3D object detection, point cloud data is suitable for capturing spatial structural features, but small targets and fine details are easily missed due to its sparsity. Therefore, we hope to fuse point cloud and image data in a multi-scale way to make up for the shortcomings of point clouds. To achieve this goal, we fuse the features of point cloud and image at multiple levels instead of only using the last feature map generated by the backbone network. In addition, to extract finer features, we design a dual-path pyramid structure in the image branch and add an auxiliary network to guide convolutional feature perception of object structures in the point cloud branch.
To summarize, our proposed fine-grained fusion pipeline is shown in Fig. 2. For the image branch, we exploit a 2D backbone and a dual-path pyramid structure to obtain the attention pyramid. For the point cloud branch, the raw points are fed into an existing 3D backbone to obtain the lidar features, and an auxiliary network simultaneously guides the learning of these features. Finally, we fuse the image and point cloud features at different levels and attach an identically designed head to each fused layer of features to obtain the final results.
Figure 1: Most point-level fusion methods[22, 23] and feature-level fusion methods[12, 2] only use the last layer of image or point cloud features for fusion, while our FGFusion performs fusion at multiple feature scales, fully utilizing low-level detail information to improve model accuracy.
### Camera Stream Architecture
In general, the input image will be processed by a convolutional neural network to obtain a feature representation with high-level semantic information. However, many low-level detailed features will be lost, which is insufficient for robust fusion. In order to retain the fine-grained features, inspired by the FPN network[13], we design a top-down feature path to extract features of different scales.
Let \(\{B_{1},B_{2},...,B_{l}\}\) represent the feature maps obtained after the input image passes through the backbone and \(l\) represent the number of convolutional blocks. The general method is to directly use the output of the last block \(B_{l}\) for fusion, but we hope to make full use of each \(B_{i}\). Since making full use of every block of the network would inevitably bring huge cost overheads, we only select the last \(N\) outputs to generate the corresponding feature pyramid. The final feature pyramid obtained can be denoted as \(\{F_{l-N+1},F_{l-N+2},...,F_{l}\}\).
After obtaining the feature pyramid, we design a bottom-up attention path which includes spatial attention and channel attention. Spatial attention is used to locate the identifiable regions of the input image at different scales. It can be represented as:
\[A_{i}^{s}=\sigma(K*F_{i}), \tag{1}\]
where \(\sigma\) is the sigmoid activation function, \(*\) represents the deconvolution operation, and \(K\) represents the convolution kernel. Channel attention is used to add associations between channels and pass low-level detailed information layer by layer to higher levels:
\[A_{i}^{c}=\sigma(W_{b}\cdot ReLU(W_{a}\cdot GAP(F_{i}))), \tag{2}\]

where \(\cdot\) represents element-wise multiplication, \(W_{a}\) and \(W_{b}\) represent the weight parameters of two fully connected layers, and GAP(\(\cdot\)) represents global average pooling. In order to transmit low-level detailed information to high-level features, \(A_{i}^{c}\) needs to be added to \(A_{i-1}^{c}\) and then downsampled twice to generate a bottom-up path.

Figure 2: An overview of the FGFusion framework. FGFusion consists of 1) a dual pathway hierarchy structure with a top-down feature pathway and a bottom-up attention pathway, hence learning both high-level semantic and low-level detailed feature representations of the images, 2) an auxiliary network to guide point cloud features to better learn the fine-grained spatial information, and 3) a fusion strategy that can fuse the two modalities in a multi-scale way.
After obtaining the attention pyramid, a bottom-up attention path can be generated in combination with the spatial pyramid. Specifically, we first add the spatial attention \(A_{i}^{s}\) and the channel attention \(A_{i}^{c}\), and then perform an element-wise product with \(F_{i}\) in the feature pyramid to obtain \(F_{i}^{\prime}\):
\[F_{i}^{\prime}=F_{i}\cdot(A_{i}^{s}+\alpha A_{i}^{c}). \tag{3}\]
Finally, \(\{F_{l-N+1}^{\prime},F_{l-N+2}^{\prime},...,F_{l}^{\prime}\}\) can be obtained for subsequent classification.
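To make Eqs. (1)-(3) concrete, the following is a minimal PyTorch sketch of the attention refinement applied to a single pyramid level. The kernel size, the channel-reduction ratio in the fully connected layers, and the value of \(\alpha\) are illustrative assumptions rather than values specified here, and collapsing the spatial attention to a single channel is likewise an assumption.

```python
import torch
import torch.nn as nn

class AttentionRefine(nn.Module):
    """Sketch of Eqs. (1)-(3): spatial + channel attention for one pyramid level F_i."""

    def __init__(self, channels, reduction=16, alpha=0.5):
        super().__init__()
        # Spatial attention (Eq. 1): a deconvolution collapsing channels to one map.
        self.spatial = nn.ConvTranspose2d(channels, 1, kernel_size=3, padding=1)
        # Channel attention (Eq. 2): GAP followed by two fully connected layers.
        self.fc_a = nn.Linear(channels, channels // reduction)
        self.fc_b = nn.Linear(channels // reduction, channels)
        self.alpha = alpha  # weight on the channel attention term in Eq. (3)

    def forward(self, feat):                      # feat: (B, C, H, W)
        a_s = torch.sigmoid(self.spatial(feat))   # (B, 1, H, W), Eq. (1)
        gap = feat.mean(dim=(2, 3))               # (B, C), global average pooling
        a_c = torch.sigmoid(self.fc_b(torch.relu(self.fc_a(gap))))  # (B, C), Eq. (2)
        a_c = a_c.unsqueeze(-1).unsqueeze(-1)     # (B, C, 1, 1) for broadcasting
        return feat * (a_s + self.alpha * a_c)    # F_i' in Eq. (3)

refined = AttentionRefine(channels=64)(torch.randn(2, 64, 32, 32))
print(refined.shape)  # torch.Size([2, 64, 32, 32])
```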
### LiDAR Stream Architecture
Our framework can use any network that converts point clouds into multi-scale feature pyramids as our lidar stream. At the same time, inspired by SASSD[7], we design an auxiliary network, which contains a point-wise foreground segmentation head and a center estimation head, to guide the backbone CNN to learn the fine-grained structure of point clouds at different stages of intermediate feature learning. It is worth noting that the auxiliary network can be detached after training, so no additional computation is introduced during inference.
### Multi-scale Fusion Module
Now we have obtained the attention pyramid of the image and the feature pyramid of the point cloud separately. In order to fully fuse the two modalities, we take the last \(N\) layers of features of both for fusion, rather than just using the last layer, as shown in Fig. 3. Through the point cloud feature pyramid, we can obtain a multi-scale point cloud BEV feature map \(\{F_{l-N+1}^{B},F_{l-N+2}^{B},...,F_{l}^{B}\}\). Following TransFusion[2], we use two transformer decoding layers to fuse the two modalities: the first decodes object queries into initial bounding box predictions using the LiDAR information, and the second performs LiDAR-camera fusion by attentively fusing object queries with useful image features. Finally, each fusion feature can generate corresponding prediction results, and the final prediction is obtained through post-processing.

Figure 3: The multi-scale fusion module first compresses the point cloud features into BEV features, and then uses TransFusion[2] to fuse the last N layers of BEV features and image features separately to obtain the prediction results of each layer. Finally, post-processing is performed to obtain the final results.
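The following is a minimal sketch of how such a multi-scale fusion module might be wired up in PyTorch, with one pair of transformer decoder layers per fused pyramid level in the spirit of TransFusion[2]. All hyperparameters (embedding size, number of heads, number of queries) and the flattening of feature maps into token sequences are our own illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Per-level TransFusion-style fusion: object queries are first decoded against
    lidar BEV features, then refined by cross-attending to image features."""

    def __init__(self, d_model=128, nhead=4, num_levels=3, num_queries=200):
        super().__init__()
        self.queries = nn.Embedding(num_queries, d_model)
        self.lidar_decoders = nn.ModuleList(
            [nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_levels)])
        self.fusion_decoders = nn.ModuleList(
            [nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_levels)])

    def forward(self, bev_feats, img_feats):
        # bev_feats, img_feats: lists of N tensors with shape (B, d_model, H, W)
        outputs = []
        for level, (bev, img) in enumerate(zip(bev_feats, img_feats)):
            bev_seq = bev.flatten(2).transpose(1, 2)    # (B, H*W, d_model)
            img_seq = img.flatten(2).transpose(1, 2)
            q = self.queries.weight.unsqueeze(0).expand(bev.size(0), -1, -1)
            q = self.lidar_decoders[level](q, bev_seq)   # initial boxes from lidar
            q = self.fusion_decoders[level](q, img_seq)  # lidar-camera fusion
            outputs.append(q)                            # per-level query features
        return outputs   # a prediction head per level would be attached here

bev = [torch.randn(1, 128, 16, 16) for _ in range(3)]
img = [torch.randn(1, 128, 24, 40) for _ in range(3)]
fused = MultiScaleFusion()(bev, img)
print(len(fused), fused[0].shape)  # 3 torch.Size([1, 200, 128])
```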
## 4 Experiments
We evaluate our proposed FGFusion on two datasets, KITTI[6] and Waymo[21], and conduct extensive ablation experiments.
### Datasets
The KITTI dataset contains 7481 training samples and 7518 testing samples of autonomous driving scenes. As common practice, we divide the training data into a training set containing 3712 samples and a validation set containing 3769 samples. Following the requirements of the KITTI object detection benchmark, we conduct experiments on three categories (cars, pedestrians, and cyclists) and evaluate the results using the average precision (AP) with an IoU threshold of 0.7.
The Waymo Open Dataset contains 798 training sequences, 202 validation sequences and 150 testing sequences. Each sequence has about 200 frames, which contain lidar points, camera images, and labeled 3D bounding boxes. We use official metrics, i.e., Average Precision (AP) and Average Precision weighted by Heading (APH), to evaluate the performance of different models and report the results of LEVEL1 (L1) and LEVEL2 (L2) difficulty levels.
### Implementation Details
For the KITTI dataset, the voxel size is set to (0.05m, 0.05m, 0.1m). Since KITTI only provides annotations for the front camera's field of view, the detection ranges of the X, Y and Z axes are set to [0, 70.4m], [-40m, 40m], and [-3m, 1m], respectively. The image size is set to 448 \(\times\) 800. For the Waymo dataset, the voxel size is set to (0.1m, 0.1m, 0.15m). The detection range of the X and Y axes is [-75.2m, 75.2m], and the detection range of the Z axis is [-2m, 4m].
We choose TransFusion-L and the DLA34 of the pre-trained CenterNet as the 3D and 2D backbone networks, respectively. Following TransFusion[2], our training consists of two stages: 1) First we train the 3D backbone with the first decoder layer and FFN for 20 epochs. It only requires point clouds as input, and the last BEV feature map is used to produce initial 3D bounding box predictions. 2) Then we train the LiDAR-camera fusion and image-guided query initialization
module for another 6 epochs. In this stage, the last three feature maps of the 3D and 2D backbones are fused separately. The advantage of this two-step training scheme over joint training is that the auxiliary network, as well as data augmentation methods designed for pure point cloud inputs, can be used in the first stage only. For post-processing, we use NMS with a threshold of 0.7 for Waymo and 0.55 for KITTI to remove redundant boxes.
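For reference, the implementation details above can be collected into a single configuration sketch; the key names are our own and do not correspond to any released codebase.

```python
# Hypothetical configuration dictionary summarizing the settings described above.
FGFUSION_CONFIG = {
    "kitti": {
        "voxel_size": (0.05, 0.05, 0.1),            # meters
        "range_x": (0.0, 70.4), "range_y": (-40.0, 40.0), "range_z": (-3.0, 1.0),
        "image_size": (448, 800),
        "nms_threshold": 0.55,
    },
    "waymo": {
        "voxel_size": (0.1, 0.1, 0.15),
        "range_x": (-75.2, 75.2), "range_y": (-75.2, 75.2), "range_z": (-2.0, 4.0),
        "nms_threshold": 0.7,
    },
    "backbone_3d": "TransFusion-L",
    "backbone_2d": "DLA34 (pre-trained CenterNet)",
    "stage1_epochs": 20,        # lidar-only training
    "stage2_epochs": 6,         # lidar-camera fusion training
    "num_fusion_levels": 3,     # last three feature maps are fused
}
```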
### Experimental Results and Analysis
**KITTI.** To prove the effectiveness of our method, we compare the average precision (AP) of FGFusion with some state-of-the-art methods on the KITTI dataset. As shown in Table 1, the mAP of our proposed FGFusion is the highest among all methods. KITTI divides all objects into three difficulty levels (easy, moderate and hard) based on the size of the object, occlusion status and truncation level; the higher the difficulty level, the harder the object is to detect. Our method leads for multiple categories across difficulty levels and has higher accuracy than all other methods at the hard level of all three categories, which demonstrates that our method can effectively fuse fine-grained features.
In lidar-only methods, the accuracy of one-stage methods such as SECOND[25] and PointPillars[11] is lower than that of two-stage methods such as PV-RCNN[18]. In the easy and moderate difficulty levels of the car category, our FGFusion is competitive with Voxel-RCNN[5], the best-performing lidar-only method, and surpasses it by 0.98% AP at the hard level. Among fusion methods, early works such as MV3D[3] and AVOD[10] perform worse than lidar-only methods. However, the recently proposed CAT-Det[30] achieves higher overall accuracy than lidar-only methods in all three categories, reaching 75.42 mAP, which is slightly lower than that of our method.

\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Modality} & \multirow{2}{*}{mAP} & \multicolumn{3}{c|}{Car} & \multicolumn{3}{c|}{Pedestrian} & \multicolumn{3}{c}{Cyclist} \\ \cline{4-12}
 & & & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline
SECOND[25] & L & 68.06 & 88.61 & 78.62 & 77.22 & 56.55 & 52.98 & 47.73 & 80.58 & 67.15 & 63.10 \\
PointPillars[11] & L & 66.53 & 86.46 & 77.28 & 74.65 & 57.75 & 52.29 & 47.90 & 80.05 & 62.68 & 59.70 \\
PointRCNN[19] & L & 70.67 & 88.72 & 78.61 & 77.82 & 62.72 & 53.85 & 50.25 & 86.84 & 71.62 & 65.59 \\
PV-RCNN[18] & L & 73.27 & 92.10 & 84.36 & 82.48 & 64.26 & 56.67 & 51.91 & 88.88 & 71.95 & 66.78 \\
Voxel-RCNN[5] & L & - & **92.38** & **85.29** & 82.86 & - & - & - & - & - & - \\ \hline
MV3D[3] & L+C & - & 71.29 & 62.68 & 56.56 & - & - & - & - & - & - \\
AVOD[10] & L+C & - & 84.41 & 74.44 & 68.65 & - & 58.80 & - & - & 49.70 & - \\
F-PointNet[15] & L+C & 65.58 & 83.76 & 70.92 & 63.65 & 70.00 & 61.32 & 53.59 & 77.15 & 56.49 & 53.37 \\
3D-CVF[29] & L+C & - & 89.67 & 79.88 & 78.47 & - & - & - & - & - & - \\
EPNet[9] & L+C & 70.97 & 88.76 & 78.65 & 78.32 & 66.74 & 59.29 & 54.82 & 83.88 & 65.60 & 62.70 \\
CAT-Det[30] & L+C & 75.42 & 90.12 & 81.46 & 79.15 & **74.08** & **66.35** & 58.92 & 87.64 & 72.82 & 68.20 \\ \hline
FGFusion(Ours) & L+C & **77.05** & **92.38** & 84.96 & **83.84** & 72.63 & 65.07 & **59.21** & **90.33** & **74.19** & **70.84** \\ \hline
\end{tabular}
\end{table}
Table 1: Performance comparison on the KITTI _val_ set with AP calculated by 40 recall positions.
#### 4.3.3 Waymo.
Compared with the KITTI dataset, the Waymo dataset is larger and more diverse, and hence more challenging. To verify our proposed FGFusion, we also conduct experiments on the Waymo dataset and compare it with some state-of-the-art methods. Table 2 shows that our FGFusion is better than other methods for both vehicle and pedestrian categories in LEVEL2 difficulty, which is the main metric for ranking in the Waymo 3D detection challenge. Compared with PV-RCNN[18], the best-performing lidar-only method, FGFusion improves the APH of vehicle recognition by 4.93% and that of pedestrian recognition by 18.53%, which proves that our fusion method is more advantageous in small object detection.
### Ablation study
We conduct a series of experiments on Waymo to demonstrate the effectiveness of each component in our proposed FGFusion, including the attention pyramid of the image branch (AP), the auxiliary network of the point cloud branch (AN), and the multi-scale fusion module (MSF).
#### 4.4.1 Effect of each component.
As shown in Table 3, our FGFusion is 2.92% and 3.2% higher than the baseline in APH for the two categories, vehicles and pedestrians, respectively. Specifically, the multi-scale fusion module brings improvements of 1.74% and 1.93% over the baseline on the two categories, which confirms the effectiveness of our proposed fine-grained fusion strategy. Adding the attention pyramid or the auxiliary network further brings improvements of (0.7%, 0.87%) and (0.61%, 0.51%), respectively. This indicates that the finer the fused features, the higher the accuracy the model can achieve, which is consistent with our expectation.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Modality} & \multicolumn{2}{c|}{Vehicle(AP/APH)} & \multicolumn{2}{c}{Pedestrian(AP/APH)} \\ \cline{3-6}
 & & L1 & L2 & L1 & L2 \\ \hline
SECOND[25] & L & 72.27/71.69 & 63.85/63.33 & 68.70/58.18 & 60.72/51.31 \\
PointPillars[11] & L & 71.60/71.00 & 63.10/62.50 & 70.60/56.70 & 62.90/50.20 \\
PV-RCNN[18] & L & 77.51/76.89 & 68.98/68.41 & 75.01/65.65 & 66.04/57.61 \\
CenterPoint[28] & L & - & -/66.20 & - & -/62.60 \\
3D-MAN[27] & L & 74.50/74.00 & 67.60/67.10 & 71.70/67.70 & 62.60/59.00 \\
PointAugmenting[23] & L+C & 67.40/- & 62.70/- & 75.04/- & 70.60/- \\
DeepFusion[12] & L+C & 80.60/80.10 & 72.90/72.40 & **85.80/83.00** & 78.70/76.00 \\ \hline
FGFusion(Ours) & L+C & **81.92/81.44** & **73.85/73.34** & 85.73/82.85 & **78.81/76.14** \\ \hline
\end{tabular}
\end{table}
Table 2: Performance comparison on the Waymo _val_ set for 3D vehicle (IoU = 0.7) and pedestrian (IoU = 0.5) detection.
#### 4.4.2 Number of feature layers selected for fusion.
The number of fusion features for point clouds and images is the key hyperparameter of our multi-scale fusion module. In order to determine the optimal value, we conduct experiments on the Waymo dataset without using attention pyramids or auxiliary networks. As shown in Table 4, the more feature layers used, the higher the accuracy the model can achieve. This is because high-level features have rich semantic information while low-level features retain complementary detailed information; the more feature layers used for fusion, the less information is lost during downsampling. From the experimental results, it is clear that using two or three layers of features for fusion brings significant improvements to model accuracy, while the improvement is greatly reduced once the number of fusion layers reaches four. It is worth noting that the more fusion layers used, the more weights the cross-attention model needs to train during fusion. To balance model accuracy and computational cost, we use three layers of features for fusion in our experiments.
## 5 Conclusion
In this paper, we propose a novel multimodal network FGFusion for 3D object detection in autonomous driving scenarios. We design fine-grained feature extraction networks for both the point cloud branch and the image branch, and fuse features from different levels through a pyramid structure to improve detection accuracy. Extensive experiments are conducted on the KITTI and Waymo datasets, and the experimental results show that our method can achieve better performance than some state-of-the-art methods.
\begin{table}
\begin{tabular}{c|c c} \hline Feature Num. & Vehicle & Pedestrian \\ \hline
1 & 70.42 & 72.94 \\
2 & 71.62 (+1.20) & 74.05 (+1.11) \\
3 & 72.16 (+0.54) & 74.81 (+0.76) \\
4 & 72.32 (+0.16) & 75.01 (+0.20) \\ \hline \end{tabular}
\end{table}
Table 4: Performance comparison on Waymo _val_ set with APH in L2 difficulty using different number of features for fusion.
\begin{table}
\begin{tabular}{c c c|c c} \hline \hline
MSF & AP & AN & Vehicle & Pedestrian \\ \hline
 & & & 70.42 & 72.94 \\
✓ & & & 72.16 & 74.87 \\
✓ & ✓ & & 72.86 & 75.74 \\
✓ & & ✓ & 72.77 & 75.38 \\
✓ & ✓ & ✓ & 73.34 & 76.14 \\ \hline
\end{tabular}
\end{table}
Table 3: Effect of each component in FGFusion on Waymo _val_ set with APH in L2 difficulty. |
2309.05859 | Demystifying Statistical Matching Algorithms for Big Data | Statistical matching is an effective method for estimating causal effects in
which treated units are paired with control units with ``similar'' values of
confounding covariates prior to performing estimation. In this way, matching
helps isolate the effect of treatment on response from effects due to the
confounding covariates. While there are a large number of software packages to
perform statistical matching, the algorithms and techniques used to solve
statistical matching problems -- especially matching without replacement -- are
not widely understood. In this paper, we describe in detail commonly-used
algorithms and techniques for solving statistical matching problems. We focus
in particular on the efficiency of these algorithms as the number of
observations grows large. We advocate for the further development of statistical
matching methods that impose and exploit ``sparsity'' -- by greatly restricting
the available matches for a given treated unit -- as this may be critical to
ensure scalability of matching methods as data sizes grow large. | Sanjeewani Weerasingha, Michael J. Higgins | 2023-09-11T22:48:15Z | http://arxiv.org/abs/2309.05859v1 | # Demystifying Statistical Matching Algorithms for Big Data
###### Abstract
Statistical matching is an effective method for estimating causal effects in which treated units are paired with control units with "similar" values of confounding covariates prior to performing estimation. In this way, matching helps isolate the effect of treatment on response from effects due to the confounding covariates. While there are a large number of software packages to perform statistical matching, the algorithms and techniques used to solve statistical matching problems--especially matching without replacement--are not widely understood. In this paper, we describe in detail commonly-used algorithms and techniques for solving statistical matching problems. We focus in particular on the efficiency of these algorithms as the number of observations grows large. We advocate for the further development of statistical matching methods that impose and exploit "sparsity"--by greatly restricting the available matches for a given treated unit--as this may be critical to ensure scalability of matching methods as data sizes grow large.
## 1 Introduction
Consider an observational study where each unit is given exactly one of two treatment conditions: treatment or control. When confounding variables--those that are correlated with both treatment and response--are present, failure to account for this confounding may lead to significant bias in treatment effect estimates (Rosenbaum et al., 2010). For instance, in a study assessing the effect of smoking on heart disease, confounders include poor diet and exercise habits, as both variables are correlated with an increased incidence of heart disease and a higher likelihood of smoking.
Statistical matching is a technique designed to isolate the effect of treatment in the presence of confounders. In statistical matching, treated units are matched with control units with similar values for confounding covariates. Treatment effect estimates can then be obtained by taking, for example, the average of the differences in response between the treated and matched control units. Statistical matching plays an essential role in conducting research work in many subject areas, such as medicine, economics, and political science, since experiments are not always practical or ethical to conduct.
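To fix ideas, a minimal sketch of this matching estimator (with made-up numbers and illustrative variable names) is:

```python
import numpy as np

y_treated = np.array([5.1, 3.8, 4.4])        # responses of treated units (hypothetical)
y_control = np.array([4.0, 4.2, 3.1, 3.9])   # responses of control units (hypothetical)
pairs = [(0, 2), (1, 0), (2, 3)]             # (treated index, matched control index)

# Average of within-pair differences in response between treated and matched controls.
effect_estimate = np.mean([y_treated[i] - y_control[j] for i, j in pairs])
print(effect_estimate)
```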
With advances in computing, the volume of observational data has increased dramatically. For example, Electronic Health Records (EHR) collect valuable clinical information that researchers can use to guide patient care. EHRs include information on patient demographics, progress notes, problem lists, medications, vital signs, past medical history, etc. (Gliklich et al., 2019). With this surge in available data, there is a significant need for matching methods that can be applied under big data settings.
We aim to provide a detailed description of the available techniques and tools for statistical matching, thereby adding some clarity to the black box of statistical matching and possibly enlightening the path towards future advances. We have particular focus on issues of
the scalability of matching algorithms--the ability to successfully apply matching algorithms as the number of units under study becomes large.
This chapter is organized as follows. Section 2.1 reviews background material and notation for statistical matching. Matching problems are well-studied optimization problems in operations research; in particular, bipartite (statistical) matching can be viewed as a version of the linear assignment problem. Section 2.2 therefore discusses related material from the optimization literature, and Section 2.3 discusses how to model a bipartite matching problem as a network flow problem within an optimization framework. Section 2.4 explores why matching on a sparse graph is important and reviews existing approaches for solving minimum cost maximum matching on a sparse graph. Finally, Section 2.6 demystifies the matching algorithms themselves, drawing on material from optimization theory that is helpful for understanding the algorithms used in statistical matching.
## 2 Problem Setup for Statistical Matching
Consider an observational study on \(N\) units, numbered 1 through \(N\). For each unit \(i\), we observe a response \(y_{i}\), a treatment status \(T_{i}\in\{0,1\}\)--where \(T_{i}=1\) denotes that \(i\) is given treatment and \(T_{i}=0\) denotes that \(i\) is given control--and a \(p\)-dimensional vector of confounding covariates \(\mathbf{x}_{i}=(x_{i1},x_{i2},\ldots,x_{ip})\). Let \(N_{T}\) denote the number of treated units, numbered 1 through \(N_{T}\), and let \(N_{C}\) denote the number of control units, numbered 1 through \(N_{C}\). For ease of exposition, we assume \(N_{T}\leq N_{C}\).
Between each treated unit \(i\) and control unit \(j\), a dissimilarity measure \(w_{ij}\) may be computed on the confounding covariates \(\mathbf{x}\), where smaller values of \(w_{ij}\) indicate that \(i\) and \(j\) have more similar values of confounding covariates. Common choices of \(w_{ij}\) include the standardized Euclidean and Mahalanobis distances and the absolute difference in estimated propensity scores (Imbens and Rubin, 2015). Intuitively, matching aims to find, for each
treated unit \(i\), one or more control units \(j\) that have "small dissimilarity."
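As an illustration of how such dissimilarities might be computed in practice, the sketch below builds a treated-by-control matrix of Mahalanobis distances with SciPy. The use of the pooled sample covariance and the array names are assumptions made for the example.

```python
import numpy as np
from scipy.spatial.distance import cdist

def mahalanobis_dissimilarities(X_treated, X_control):
    """Return an N_T x N_C matrix of Mahalanobis distances w_ij
    computed on the confounding covariates."""
    X_all = np.vstack([X_treated, X_control])
    VI = np.linalg.inv(np.cov(X_all, rowvar=False))   # inverse covariance matrix
    return cdist(X_treated, X_control, metric="mahalanobis", VI=VI)

# Example with simulated covariates (hypothetical data).
rng = np.random.default_rng(0)
W = mahalanobis_dissimilarities(rng.normal(size=(5, 3)), rng.normal(size=(8, 3)))
print(W.shape)   # (5, 8): one dissimilarity per treated-control pair
```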
As Rosenbaum (1989) astutely noted, in considering the problem of statistical matching, there is a large body of literature--historically in the field of operations research--on similar types of matching problems from which to draw inspiration. In deliberately vague terms, these problems often start by assuming a mathematical graph and aim to select connected pairs of units within the graph in an "optimal" way. Hence, to make statistical matching problems more precise and to help draw connections between statistical matching and matching problems in the operations research literature, we describe these problems in terms of graph theory. For simplicity, we focus on the 1:1 matching case, where each treated unit is allowed to be matched to, at most, one control unit (Savje et al., 2021), and extend our approach to more complicated matching schemes (e.g. 1:\(k\) matching, full matching, generalized full matching, cardinality matching) when appropriate (Hansen, 2007).
For statistical matching, units under study are represented as a graph \(G=(V,E)\); each node \(i\) in the node set \(V\) represents a unit under study (hence, \(|V|=N\)), and edges \(ij\) in the edge set \(E\) are drawn between two nodes if their corresponding units are allowed to be matched with each other. Each edge \(ij\) has a non-negative cost \(w_{ij}\) equal to the dissimilarity between the corresponding units \(i\) and \(j\). The resulting graph is a _bipartite_ graph; the node set \(V\) can be partitioned into two groups \(V_{T}\) and \(V_{C}\)--those nodes that correspond to treated and control units respectively--and edges are only allowed to connect a node from \(V_{T}\) to one in \(V_{C}\) (e.g. you cannot match a treated unit to another treated unit nor a control unit to another control unit). When initializing a matching problem, it is common to make minimal assumptions on which units can be matched to each other, thereby allowing the matching algorithm to completely determine which matches are appropriate. In terms of the graph \(G\), this corresponds to the assumption that edges \(ij\in E\) exist between each pair of units \(i\in V_{T},j\in V_{C}\)--that is, \(G\) is a _complete_ bipartite graph. For ease of exposition, we may refer to nodes as units and edge costs as dissimilarities throughout this paper.
A 1:1 statistical matching is a subset of edges \(M\subset E\) such that each treated node
is _incident_ to, at most, one edge \(ij\in M\)--that is, each node \(i\in V_{T}\) is the endpoint of at most one edge in \(M\). If \(ij\in M\), then the control unit \(j\) is matched to the treated unit \(i\) before performing analyses. Since the power of a study is most affected by the number of observations included in the study, often we aim to select a matching \(M\) with a large cardinality \(|M|\); often, we require the cardinality to be maximized.
For a given dataset, there may be a large number of candidate matchings \(\mathbb{M}\) from which to choose. Hence, many statistical matching algorithms aim to select a matching \(M^{\dagger}\in\mathbb{M}\) that is optimal with respect to some _objective function_. The commonly used objective function in statistical matching is to minimize total dissimilarity, or cost, between treatment and control pairs in the matched sample; matching algorithms aim to minimize the total cost
\[M^{\dagger}=\operatorname*{arg\,min}_{M\in\mathbb{M}}\sum_{ij\in M}w_{ij}. \tag{1}\]
Other objectives used in matching include: minimizing the maximum cost within a match (Savje et al., 2021); maximizing the minimum \(p\)-value across tests of the null hypothesis that covariate distributions between treated and matched control groups are equivalent (Diamond and Sekhon, 2013); and maximizing the number of matched pairs subject to constraints on the difference in sample moments between treated and matched control groups (Zubizarreta, 2012; Zubizarreta et al., 2014).
Matching can be performed _with replacement_--multiple treated units are allowed to be matched to the same control units--or _without replacement_. The statistical problem of finding a matching without replacement and the operations research problem of finding a bipartite matching are equivalent. Hence, significant progress on the statistical matching problem can be made by importing well-studied ideas from the optimization literature.
Before we discuss optimal methods for performing matching without replacement, we take a couple of brief detours. First, we describe greedy matching, which is computationally efficient but can suffer from arbitrarily poor performance when matching without replacement. Then, we discuss in full detail the problem of matching with replacement. Statistical matching with replacement is a well-understood problem--greedy algorithms can often obtain an optimal matching straightforwardly and efficiently--but it may be inappropriate to use under certain settings.
## 3 Greedy Matching
Greedy algorithms provide a simple and intuitive solution for statistical matching problems. Greedy matching algorithms match each treated unit with the eligible control unit that is most similar (with respect to the dissimilarity measure \(w\)). Simple implementations of greedy algorithms can terminate quickly. Specifically, for each treated unit, the problem of finding the most similar control unit requires \(O(N)\) time, and this problem is solved a maximum of \(O(N)\) times, leading to a worst-case total runtime of \(O(N^{2})\)(Cormen et al., 2022), outside of the cost of computing the dissimilarities \(w\). Thus, greedy matching is computationally inexpensive enough for most studies using observational data.
However, when matching without replacement, greedy matching may have significant drawbacks. When the selected dissimilarity measure does not satisfy the triangle inequality (_i.e._ for any three units \(i,j,k\), \(w_{ik}\leq w_{ij}+w_{jk}\)), the total cost of a 1:1 greedy matching can be infinitely bigger than that for an optimal matching (Rosenbaum, 1989). Even when the dissimilarity measure satisfies the triangle inequality, the difference in total cost between 1:1 greedy matching and optimal matching may worsen as data sizes get large--to be exact, the difference may be as large as \(O(N^{\log_{2}(3/2)})\approx O(N^{0.58})\)(Reingold and Tarjan, 1981; Agarwal and Sharathkumar, 2014). Additionally, a greedy matching may have a smaller cardinality than an optimal matching (Rosenbaum, 1989), and the matching quality may depend on the order in which treatment units are selected for matching (Dehejia and Wahba, 2002).
A 1:1 greedy matching algorithm proceeds as follows.
1. **(Initialize)** Set the greedy matching \(M=\emptyset\).
2. **(Select treated node)** Select unit \(i\in V_{T}\) (for example, at random).
3. **(Find control match)** Of all eligible control units \(j\), find the unit \(j^{\dagger}\in V_{C}\) that is the most similar to \(i\): \[j^{\dagger}=\operatorname*{arg\,min}_{j:ij\in E}w_{ij}.\] (2) Match \(i\) to \(j^{\dagger}\): Set \(M\xleftarrow{\mathit{set}}M\cup ij^{\dagger}\). If no matches are possible, skip to Step 4.
4. **(Remove matches)** Set \(V_{T}\xleftarrow{\mathit{set}}V_{T}\setminus\{i\}\). If matching without replacement, and if a control match \(j^{\dagger}\) was found in Step 3, set \(V_{C}\xleftarrow{\mathit{set}}V_{C}\setminus\{j^{\dagger}\}\) and \(E\xleftarrow{\mathit{set}}E\setminus\{ij^{\dagger}:i\in V_{T}\}\).
5. **(Terminate)** If \(V_{T}=\emptyset\), stop. The matching \(M\) is a greedy matching. Otherwise, return to Step 2.
A greedy 1:\(k\) matching is performed by choosing the \(k\) most similar units to the treated unit \(i\) in Step 3 of the algorithm. The performance of greedy matching without replacement highly depends on the order in which treated units are selected for matching in Step 2. Improved methods for choosing treated nodes--for example, finding the edge \(ij\in E\) with the largest cost \(w_{ij}\), and choosing the treated unit \(i\) incident to this edge--often come with an increased computational cost.
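To make the procedure concrete, the following is a minimal Python sketch of 1:1 greedy matching on a precomputed dissimilarity matrix. The random ordering of treated units and the `with_replacement` flag are illustrative choices, not requirements of the algorithm.

```python
import numpy as np

def greedy_match(W, with_replacement=False, rng=None):
    """1:1 greedy matching. W is an N_T x N_C dissimilarity matrix;
    returns a list of (treated, control) index pairs."""
    rng = rng or np.random.default_rng()
    n_t, n_c = W.shape
    available = np.ones(n_c, dtype=bool)     # eligible control units
    matches = []
    for i in rng.permutation(n_t):           # Step 2: select treated units (here, at random)
        if not available.any():
            break
        costs = np.where(available, W[i], np.inf)
        j = int(np.argmin(costs))            # Step 3: most similar eligible control
        matches.append((int(i), j))
        if not with_replacement:             # Step 4: remove the matched control
            available[j] = False
    return matches

# Example on a small random dissimilarity matrix (hypothetical data).
W = np.random.default_rng(1).uniform(size=(4, 6))
print(greedy_match(W))
```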
## 4 Statistical Matching with Replacement
Matching with replacement permits different treated units to be matched to the same control unit. The biggest advantage of matching with replacement is computational cost. Greedy matching is almost always used to perform matching with replacement as it is optimal for a number of commonly used objective functions--including the total cost and the maximum cost--under this setting.
There may be additional instances where, in practice, matching with replacement outperforms without replacement. For example, matching with replacement may perform better in
practice when the distribution of confounding covariates between treated and control groups have little overlap (Dehejia and Wahba, 2002). It can also be used to estimate the average treatment effect for the treated (ATT) when the number of treated units is greater than the number of control units--matching without replacement would necessarily leave some treated units unmatched, thereby changing the estimand. Monte Carlo simulations have suggested that matching with replacement can provide reliable treatment effect estimates if control units are reused for matches infrequently, and suggest that covariate distributions between treated and control groups are more similar for 1:\(k\) matching with replacement than without replacement, \(k>1\)(Bottigliengo et al., 2021).
However, there may also be some drawbacks with matching with replacement. First and foremost, there is no way to easily control how many times one control unit is used in a match. For a given study, it may be possible that many treatment units are matched to a single control unit. In this case, the response of the control unit will disproportionately influence the estimate of the treatment effect, thereby inflating the standard error of the matching estimator. Moreover, simulation results suggest that 1:1 matching without replacement usually yields a smaller difference in sample means between treated and matched control groups than with replacement (Bottigliengo et al., 2021). Matching with replacement is rarely used in certain areas of study--for example, in the biomedical sciences--where matching without replacement appears to be more effective (Austin and Small, 2014).
## 5 Optimal Statistical Matching Without Replacement
In 1:1 matching without replacement, each control unit is included in at most one pair in the matched sample. Hence, once a control unit is selected for matching, that control unit is no longer eligible for consideration as a potential match for subsequent treatment units. This substantially increases the difficulty of finding an optimal match, for example, with respect to the total cost objective. Thankfully, these types of statistical matching
problems are well studied, though most of this work originates from the field of operations research--Rosenbaum (1989) first identified the connection between statistical matching and this optimization literature.
The most common statistical matching optimization problem is to find a matching \(M^{\dagger}\) that minimizes the total cost given that it contains as many matched pairs as possible--or more precisely, under the constraint that the cardinality of \(M^{\dagger}\) is maximized. In the statistical matching literature, these matchings are simply called _optimal matchings_(Rosenbaum, 1989). In the optimization literature, this problem is known as the _linear unbalanced assignment problem_ (LUAP)(Bijsterbosch and Volgenant, 2010; Burkard et al., 2012).
### The Linear Assignment Problem
We begin with a simplification of LUAP--the _linear assignment problem (LAP)_--in which we aim to find an optimal matching \(M^{\dagger}\) when the number of treated units is equal to the number of control units, that is, \(N_{T}=N_{C}=N/2\). In full generality, LAP can be formulated as an _integer linear programming_ problem (ILP). However, as we will see, LAP can be solved for small matching problems using a pen and paper.
The ILP formulation of LAP associates each edge \(ij\in E\) with a binary variable \(z_{ij}\). These binary variables _induce_ a matching \(M\): if \(z_{ij}=1\), then the match \(ij\in M\), and if \(z_{ij}=0\), then \(ij\notin M\). LAP aims to find, across all possible vectors of _variables_\(\mathbf{z}=(z_{ij})_{ij\in E}\), a vector \(\mathbf{z}^{\dagger}\) that satisfies
\[\mathbf{z}^{\dagger}=\operatorname*{arg\,min}_{\mathbf{z}}\sum_{ij\in E}w_{ ij}z_{ij}\]
under the constraints that
\[\sum_{i\in V_{T}}z_{ij} =1\ \forall\ j\in V_{C},\] \[\sum_{j\in V_{C}}z_{ij} =1\ \forall\ i\in V_{T}, \tag{3}\] \[z_{ij}\in\{0,1\}\ \forall\ ij\in E. \tag{4}\]
The \(\mathbf{z}^{\dagger}\) is known as an _optimal solution_, and the _value_ of the ILP is the value of the objective evaluated at \(\mathbf{z}^{\dagger}\). Any \(\mathbf{z}\) that satisfies the constraints--but does not necessarily minimize the objective--is simply called a _solution_. This problem is an _integer_ programming problem as the variables \(\mathbf{z}\) are integer-valued, and is _linear_ because both the objective function and the constraints are linear combinations of the \(\mathbf{z}\) variables.
Note that the constraints for LAP ensure that each treated unit is matched to exactly one control unit and _vice versa_. In other words, every unit under study is covered by exactly one edge. This type of matching is known as a _perfect matching_; hence, LAP is also known as the _minimum cost (or weight) perfect matching problem_.
#### 5.1.1 Solving Integer Linear Programming Problems
There are, broadly speaking, two kinds of approaches for solving these types of ILP matching problems. The first approach is to work directly on the integer program. A common technique is to relax the integer constraint on the variables \(\mathbf{z}\) to allow \(z_{ij}\) to take values within the entire interval \([0,1]\). This relaxation results in a standard linear programming (LP) problem, which can be solved in polynomial time [Khachiyan and Porkolab, 2000]. After this relaxation, additional constraints--for example, blossom inequalities [Edmonds, 1965b,a]--can be iteratively added to the LP to force a solution with 0-1-valued variables.
A particularly interesting instance of the LP relaxation approach occurs when all costs \(w_{ij}\) are integer-valued. In this case, the _integrality theorem_[Dasgupta et al., 2008] ensures the
existence of an optimal solution \(\mathbf{z}^{\dagger}\) to the LP satisfying \(z_{ij}^{\dagger}\in\{0,1\}\). Hence, a standard linear program solver--for example, the simplex method (Nelder and Mead, 1965; Dantzig, 1990)--can exactly solve the original ILP. In practice, this is a quite common setting; edge costs are often multiplied by a large power of 10 and rounded to the nearest integer before the optimization problem is initialized. While this necessarily yields an approximation to the original statistical matching problem, such a matching tends to be acceptable in practice.
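As a small illustration of the LP-relaxation approach, the sketch below solves the relaxed LAP with SciPy's `linprog`. For the assignment polytope, basic optimal solutions are 0-1 valued, so rounding the solver's output typically recovers a matching; in degenerate cases with ties the output could in principle be fractional, so treat this as a sketch rather than a robust implementation. The problem size and costs are made up.

```python
import numpy as np
from scipy.optimize import linprog

def lap_lp_relaxation(W):
    """Solve the LP relaxation of the linear assignment problem.
    W is an n x n cost matrix; returns an n x n 0-1 matrix inducing a matching."""
    n = W.shape[0]
    c = W.ravel()                           # objective: sum_ij w_ij z_ij
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1      # each treated unit matched exactly once
    for j in range(n):
        A_eq[n + j, j::n] = 1               # each control unit matched exactly once
    b_eq = np.ones(2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    return np.round(res.x).reshape(n, n)

W = np.random.default_rng(2).integers(1, 100, size=(4, 4)).astype(float)
print(lap_lp_relaxation(W))                 # a permutation matrix: the induced matching
```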
Primal-dual methods provide another technique to solve ILP problems. In _very_ crude terms, the dual of an optimization problem is also an optimization problem, but the roles of the variables and the costs are switched and the objective function is "flipped"--for example, the dual of a minimization problem is a maximization problem (Bachem et al., 1992). Duality allows for quick computation of both lower and upper bounds to the objective of an optimization problem; for example, a solution to a minimization problem yields an upper bound on the objective, and a solution to the dual of this problem yields a lower bound on this objective. Additionally, under certain conditions, the value of the optimization problem and the value of its dual will be the same--a property known as _strong duality_. When strong duality holds, an arbitrarily good solution can be found by iteratively switching between the original problem and dual problem, where the solution for the dual problem helps improve the solution for the original optimization problem and _vice versa_(Fang and Gong, 2017).
The second approach is to iteratively manipulate characteristics of the matching graph \(G\)--for example, edge costs, cycles, minimum cuts, or shortest paths (Kovacs, 2015)--until an optimal solution is found. For example, an instance of LAP with \(N/2\) treated and control units can be solved using the Hungarian algorithm (Kuhn, 1955; Munkres, 1957; Dutta and Pal, 2015). This algorithm can be viewed as performing a series of manipulations on the \(N/2\times N/2\) cost matrix \(W\)--the entry in the \(i\)th row and \(j\)th column of \(W\) is the cost \(w_{ij}\). For small instances of LAP, these manipulations can be performed using a pen and paper. We now describe this implementation of the Hungarian algorithm in detail.
#### 5.1.2 Hungarian Algorithm for Solving LAP
The Hungarian algorithm builds an optimal match through selecting entries of the \(N/2\times N/2\) cost matrix \(W\)[17]; if the entry in the \(i\)th row and \(j\)th column of \(W\) is selected, then the edge \(ij\) is added to the optimal matching \(M^{\dagger}\), and a cost of \(w_{ij}\) is incurred. Additionally, from Konig [1931], in order for the match to be perfect, the selected entries must not be coverable by fewer than \(N/2\) lines.
The algorithm works by iteratively adding and subtracting costs from the matrix \(W\) to obtain a modified cost matrix \(W^{\dagger}\). These operations are performed in such a way to ensure three properties: the optimal solution in \(W^{\dagger}\) is the same as that in \(W\); the costs in \(W^{\dagger}\) are non-negative; and that the optimal solution in \(W^{\dagger}\) has a total cost of 0. Hence, the matching can be verified as optimal through inspection; it is optimal if and only if the selected entries of \(W^{\dagger}\) are all 0 and cannot be covered by fewer than \(N/2\) horizontal or vertical lines.
The algorithm proceeds as follows. For brevity, we do not go into detail about how to cover 0 entries with lines in Step 4 or technical proofs as to why repeated applications of Step 6 will lead to convergence of the algorithm. See Dutta and Pal (2015) for a rigorous discussion.

Figure 1: (a). A complete bipartite graph with \(N_{T}=N_{C}=5\). (b). The cost matrix of (a), where a smaller value of \(w_{ij}\) indicates that \(i\) and \(j\) have more similar values of covariates.
1. **(Initialize)** Begin with an \(N/2\times N/2\) cost matrix \(W\).
2. **(Subtract the minimum of each row)** For each row \(i\), find the smallest entry of \(W\) in row \(i\). Subtract all costs in row \(i\) by this entry to form a new cost matrix \(W^{r}\). Note, \(W^{r}\) will have at least one 0 entry within each row.
3. **(Subtract the minimum of each column)** Similarly, for each column \(j\), find the smallest entry of \(W^{r}\) in column \(j\). Subtract all costs in column \(j\) by this entry to form a cost matrix \(W^{c}\). Now, each row and each column of \(W^{c}\) has at least one 0 entry.
4. **(Cover all zeroes)** Cover all zeroes of \(W^{c}\) with as few horizontal and vertical lines as possible. Let \(L\) denote the total number of lines required. If \(L=N/2\), set \(W^{\dagger}=W^{c}\) and go to Step 7. Otherwise, proceed to Step 5.
5. **(Partition entries)** Partition entries of \(W^{c}\) into three components: Those entries that are uncovered by a line \(W^{c0}\); those that are covered by exactly one line \(W^{c1}\); and those that are covered by two lines \(W^{c2}\).
6. **(Find the minimum uncovered cell value)** Find the smallest cost of an entry in \(W^{c0}\). Subtract this cost from all entries in \(W^{c0}\) and add it to all entries in \(W^{c2}\) to obtain a new cost matrix \(W^{\sigma}\). Go to Step 4 with \(W^{c}=W^{\sigma}\).
7. **(Find optimal matching)** Choose a set of entries \(ij\) such that \(W^{\dagger}_{ij}=0\) for all entries and no entries occur in the same row or column, and let \(M^{\dagger}\) denote this set. Then, \(M^{\dagger}\) is an optimal matching.
The Hungarian algorithm requires \(O(N^{3})\) runtime to terminate. The majority of this runtime is devoted to verifying the existence of an optimal solution, for example, by drawing the minimum number of lines through the matrix needed to cover all zeroes.
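In practice one rarely carries out these matrix manipulations by hand. The sketch below instead calls SciPy's `linear_sum_assignment`, which solves LAP with a modified Jonker-Volgenant algorithm rather than the classical Hungarian method described above, and is shown here simply as a convenient way to obtain and check an optimal matching on a small cost matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# A small square cost matrix of hypothetical dissimilarities.
W = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])

row_ind, col_ind = linear_sum_assignment(W)      # optimal 1:1 assignment
matching = list(zip(row_ind.tolist(), col_ind.tolist()))
total_cost = W[row_ind, col_ind].sum()
print(matching, total_cost)                      # [(0, 1), (1, 0), (2, 2)] 5.0
```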
### The Linear Unbalanced Assignment Problem
The linear unbalanced assignment problem (LUAP) is a extension of LAP in which \(N_{T}<N_{C}\). The ILP formulation of LUAP is identical to that for LAP except that the constraint (3) changes to
\[\sum_{j\in V_{C}}z_{ij}\leq 1\ \forall\ i\in V_{T}. \tag{5}\]
Note that all optimal matching problems are either equivalent to LAP or LUAP.
LUAP can straightforwardly be reduced to LAP by creating \(N_{C}-N_{T}\) "dummy" treated nodes and setting the cost between these dummy nodes and any control node to be \(w^{+}=\max_{ij}w_{ij}+1\). This forces an instance where there are an equal number of "treated" and control nodes. The choice of costs ensures that an optimal solution \(\mathbf{z}^{\dagger}\) for the original LUAP can be obtained by taking the optimal solution for the LAP reduction and selecting only the \(N_{T}\) variables that are associated with an edge incident to a node \(i\in V_{T}\)--swapping one of these edges with one incident to a dummy node will only increase the objective. A similar transformation can be performed to prevent certain units from being paired together--that is, between \(i^{\prime}\in V_{T}\) and \(j^{\prime}\in V_{C}\) if \(i^{\prime}j^{\prime}\notin E\). In this case, we may set \(w^{+}_{i^{\prime}j^{\prime}}=\max_{ij\in E}w_{ij}+1\) prior to solving the LUAP.
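The reduction just described can be written in a few lines. The sketch below pads the rectangular cost matrix with dummy treated rows whose cost exceeds every real cost, solves the resulting square LAP, and keeps only the matches involving real treated units; it illustrates the reduction itself, not any particular package's implementation. (SciPy's solver also accepts rectangular matrices directly, so the explicit padding is shown purely to mirror the construction in the text.)

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_luap(W):
    """W is an N_T x N_C cost matrix with N_T <= N_C.
    Returns (treated, control) pairs of an optimal matching of all treated units."""
    n_t, n_c = W.shape
    w_plus = W.max() + 1.0                               # cost for dummy treated nodes
    padded = np.vstack([W, np.full((n_c - n_t, n_c), w_plus)])
    rows, cols = linear_sum_assignment(padded)           # square LAP on the padded matrix
    return [(int(i), int(j)) for i, j in zip(rows, cols) if i < n_t]  # drop dummy rows

W = np.random.default_rng(3).uniform(size=(3, 6))
print(solve_luap(W))
```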
Since LUAP can be reduced to LAP, it follows that the Hungarian algorithm can be used to solve instances of LUAP as well. However, the addition of dummy nodes may substantially increase the total runtime of the Hungarian algorithm (\(O(N^{3})\)), especially if a large number of dummy nodes are added. Additionally, adding dummy nodes may substantially increase memory requirements--the cost matrix \(W\) requires \(O(N^{2})\) space to store. Thus, attempting to solve LUAP using the Hungarian algorithm may not be an efficient approach for matching under big data settings, and historically, other approaches have been used to solve LUAP
and optimal matching problems.
### Maximum Cardinality Matching
While methods for solving LUAP directly can be implemented to find an optimal matching, in the statistical matching literature, this optimization problem has historically been broken into two separate subproblems:
1. **(Maximum cardinality matching)** Find the maximum cardinality \(m^{\dagger}\) across all possible matchings.
2. **(Minimum cost matching)** Find the matching that has the smallest total cost under the constraint that the matching contains \(m^{\dagger}\) matched pairs.
We now describe these subproblems in detail, beginning with maximum cardinality matching.
For any bipartite graph \(G=((V_{T},V_{C}),E)\), the maximum cardinality matching problem (MaxCard) is to find a matching in \(G\) such that the cardinality of the matching \(|M|\) is as large as possible. MaxCard may be formulated as an ILP where the aim is to find an optimal solution \(\mathbf{z}^{\dagger}\) satisfying
\[\mathbf{z}^{\dagger}=\operatorname*{arg\,max}_{\mathbf{z}}\sum_{ij\in E}z_{ij}\]
under the constraints that
\[\sum_{i\in V_{T}}z_{ij} \leq 1\ \forall\ j\in V_{C},\] \[\sum_{j\in V_{C}}z_{ij} \leq 1\ \forall\ i\in V_{T},\] \[z_{ij} \in\{0,1\}\ \forall\ ij\in E. \tag{6}\]
Note, the matching \(M^{\dagger}\) induced by such a \(z^{\dagger}\) satisfies \(|M^{\dagger}|=m^{\dagger}\).
Some available matching methods work directly on this objective. One notable example, _cardinality matching_(Zubizarreta et al., 2014), solves this ILP with an additional constraint
ensuring that, for example, the differences in sample means of confounding covariates between treated and matching control groups are within some pre-specified tolerance threshold. After finding a matching \(M^{\dagger}\) that maximizes the cardinality, an optimal matching on all units incident to an edge in \(M^{\dagger}\) is obtained before estimating treatment effects.
A traditional approach for solving MaxCard is to first transform this problem into a network flow problem. Algorithms designed to find maximum flows can then be applied to solve the original MaxCard problem. These maximum flow algorithms are often computationally efficient, and thus, may be scalable to large observational studies. We now describe a solution using this approach--the Ford-Fulkerson algorithm--in detail [Ford and Fulkerson, 1957].
#### 5.3.1 Ford-Fulkerson for Solving MaxCard
We begin by reducing MaxCard to a maximum flow problem. To do this, we first transform \(G\) to a digraph \(G^{\prime}=(V^{\prime},E^{\prime})\)--that is, each edge in \(E^{\prime}\) is now directed. Specifically, we allow edges in \(E^{\prime}\) to travel from a node in \(V_{T}\) to a node in \(V_{C}\), but not the other direction: for \(i\in V_{T}\), \(j\in V_{C}\), and \(ij\in E\), we have \(\vec{ij}\in E^{\prime}\), but \(\vec{ji}\notin E^{\prime}\). We then add a _source_ node \(s\) and a _sink_ node \(t\) to \(G^{\prime}\), and we connect these nodes to \(G^{\prime}\) by adding edges traveling from the source node to each node in \(V_{T}\) and edges traveling from each node in \(V_{C}\) to the sink node: for \(i\in V_{T}\), \(\vec{si}\in E^{\prime}\), and for \(j\in V_{C}\), \(\vec{jt}\in E^{\prime}\). Finally, we assign each edge \(e\in E^{\prime}\) a _capacity_ \(c_{e}\) equal to 1. Figure 2 details this transformation.
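As a concrete illustration of this transformation (a sketch assuming the networkx library; the edge list is made up), the following code builds the directed flow network \(G^{\prime}\) from a small bipartite matching graph, computes a maximum flow, and recovers the induced matching.

```python
import networkx as nx

# Made-up bipartite matching graph: treated nodes 'T1','T2'; control nodes 'C1','C2','C3'.
edges = [("T1", "C1"), ("T1", "C2"), ("T2", "C2"), ("T2", "C3")]

Gp = nx.DiGraph()
for i, j in edges:
    Gp.add_edge(i, j, capacity=1)      # treated -> control edges
for i in {i for i, _ in edges}:
    Gp.add_edge("s", i, capacity=1)    # source -> treated
for j in {j for _, j in edges}:
    Gp.add_edge(j, "t", capacity=1)    # control -> sink

flow_value, flow = nx.maximum_flow(Gp, "s", "t")
matching = [(i, j) for i, j in edges if flow[i][j] == 1]   # edges saturated by the flow
print(flow_value, matching)
```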
A _flow_ on the digraph \(G^{\prime}\) from the source \(s\) to the sink \(t\) is a real-valued function \(f\) on each edge \(e\in E^{\prime}\) satisfying the following conditions:
1. For any edge \(e\in E^{\prime}\) : \(0\leq f(e)\leq c_{e}\). If \(f(e)=c_{e}\), we say that the flow is _saturated_ on that edge.
2. For any node \(j\in V^{\prime}\backslash\{s,t\}\), the total flow into the node \(j\) is same as the total flow out of the node. That is, \[\sum_{i:\vec{ij}\in E^{\prime}}f(\vec{ij})=\sum_{k:\vec{jk}\in E^{\prime}}f( \vec{jk}).\] (7)
The value \(|f|=\sum_{i:\vec{si}\in E^{\prime}}f(\vec{si})\) is the _total flow_ out from the source \(s\), and hence, from (7), the total flow entering into \(t\) is \(|f|\). Under this setup, each flow \(f\) on \(G^{\prime}\)_induces_ a matching \(M_{f}\subset E\) obtained by selecting the edges that the flow saturates:
\[M_{f}=\left\{ij\in E:i\in V_{T},j\in V_{C},f(\vec{ij})=1\right\}. \tag{8}\]
The maximum flow problem is to find a flow \(f^{\dagger}\) that maximizes the total flow into \(t\). If all capacities are integers--as is the case with MaxCard--it is possible to find such an \(f^{\dagger}\) with integer values for all edges: \(f(e)\in\mathbb{N}\cup\{0\}\ \forall\ e\in E^{\prime}\)(Dasgupta et al., 2008). Upon finding such a maximum flow \(f^{\dagger}\), a maximum cardinality matching \(M^{\dagger}\) is a matching induced by this flow: \(M^{\dagger}=M_{f^{\dagger}}\).
The Ford-Fulkerson algorithm (FFA) is commonly used to solve maximum flow problems. Intuitively, FFA works by starting from an initial flow \(f\) and iteratively finding paths of edges in \(G^{\prime}\) from \(s\) to \(t\) that will lead to increases in the total flow of \(f\).
FFA is most easily described through the introduction of residual graphs. Given the maximum flow digraph \(G^{\prime}=(V^{\prime},E^{\prime})\) and a flow \(f\), the _residual_ graph \(H=(V^{\prime},E_{f})\) is a digraph on the nodes \(V^{\prime}\). For each edge \(\vec{ij}\in E^{\prime}\), there is a "forward" \(\vec{ij}\) and a "backward" \(\vec{ji}\) version of this edge in the residual edge set \(E_{f}\):
\[E_{f}=\left\{\vec{ij}\cup\vec{ji}:\vec{ij}\in E^{\prime}\right\} \tag{9}\]
Figure 2: (a) A bipartite graph with five treated and control units. (b) The network flow graph for (a).
For MaxCard in particular, forward edges \(\vec{ij}\) have a _residual capacity_ of \(\delta(\vec{ij})=1-f(\vec{ij})\), which denotes the unused capacity on edge \(\vec{ij}\). Backward edges \(\vec{ji}\) have capacity \(\delta(\vec{ji})=f(\vec{ij})\), which denotes how much the flow on edge \(\vec{ij}\) can be suppressed. That is,
\[\delta(\vec{ij})=\left\{\begin{array}{ll}1-f(\vec{ij}),&\vec{ij}\in E^{\prime},\\ f(\vec{ji}),&\vec{ji}\in E^{\prime}.\end{array}\right. \tag{10}\]
Once the residual graph \(H\) is constructed, FFA finds paths \(P=\left\{\overrightarrow{si_{1}},\overrightarrow{i_{1}i_{2}},\ldots,\overrightarrow{i_{\ell-1}i_{\ell}},\overrightarrow{i_{\ell}t}\right\}\) from \(s\) to \(t\) within this residual graph such that the capacity \(\delta(\vec{ij})>0\) for each edge \(\vec{ij}\in P\). These paths are called _augmenting paths_. The current flow \(f\) can then be improved by adding flow to the forward edges and decreasing flow on the backward edges along this path.
Rigorously, FFA for MaxCard is performed as follows:
1. **(Initialize flow)** Set \(f(\vec{ij})=0\) for all edges \(\vec{ij}\in E^{\prime}\).
2. **(Update residual graph)** Update the residual graph \(H=(V^{\prime},E_{f})\) with capacities given in (10).
3. **(Find augmenting path or terminate)** Find an augmenting path \(P=\left\{\overrightarrow{si_{1}},\overrightarrow{i_{1}i_{2}},\ldots,\overrightarrow{i_{\ell-1}i_{\ell}},\overrightarrow{i_{\ell}t}\right\}\) from \(s\) to \(t\) such that \(\delta(\vec{ij})=1\) for all edges \(\vec{ij}\in P\). If no such path exists, stop.
4. **(Augment the flow)** Update the flow \(f\) along all edges \(\vec{ij}\in P\): \[\begin{array}{ll}f(\vec{ij})\longleftarrow f(\vec{ij})+1,&\vec{ij}\in P,\ \vec{ij}\in E^{\prime},\\ f(\vec{ij})\longleftarrow f(\vec{ij})-1,&\vec{ji}\in P,\ \vec{ij}\in E^{\prime}.\end{array}\] (11)
Return to Step 2.
Each iteration of FFA increases the flow of \(f\) by 1. For general maximum flow problems, finding an augmenting path takes \(O(|E^{\prime}|)\) time. Moreover, if all capacities in the maximum flow problem are integer-valued, then the flow \(f\) at termination in Step 3 is a maximum flow, and reaching this flow requires, at most, \(|f^{\dagger}|\) iterations. In particular, for MaxCard, the maximum cardinality \(m^{\dagger}\leq|V_{T}|<N\), and so, total runtime of FFA is bounded by \(O(N|E|)\leq O(N^{3})\). Moreover, when the graph is _sparse_--that is, when the number of edges is proportional to the number of nodes--this runtime is reduced to \(O(N^{2})\). In practice, FFA tends to be computationally efficient enough for most statistical matching applications.
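The sketch below is an illustrative implementation of FFA for MaxCard, not code from the original; it works directly on an adjacency-list representation and mirrors Steps 1-4 above, repeatedly searching for an augmenting path with a depth-first search and updating the matching when one is found.

```python
def max_cardinality_matching(adj, n_treated):
    """adj[i] lists the control units that treated unit i may be matched to."""
    match_control = {}                 # control j -> treated i currently matched to j

    def augment(i, visited):
        # Try to find an augmenting path starting at treated unit i.
        for j in adj[i]:
            if j in visited:
                continue
            visited.add(j)
            # j is free, or the treated unit matched to j can be re-matched elsewhere.
            if j not in match_control or augment(match_control[j], visited):
                match_control[j] = i
                return True
        return False

    cardinality = 0
    for i in range(n_treated):
        if augment(i, set()):
            cardinality += 1           # each successful augmentation raises |f| by 1
    return cardinality, {i: j for j, i in match_control.items()}

# Made-up sparse adjacency list: treated units 0..2, control units 'a'..'c'.
adj = {0: ["a", "b"], 1: ["b"], 2: ["b", "c"]}
print(max_cardinality_matching(adj, 3))
```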
Figure 3: (a) The flow network \(G\) and initial flow \(f\), with edge labels (capacity, flow). (b) The residual graph for (a), with the augmenting path \(p\) in blue and residual capacities \(\delta\); consider the reverse edge \(C_{2}-T_{1}\) and select the path \(s-T_{2}-C_{2}-T_{1}-C_{1}-t\). (c) The flow in \(G\) that results from augmenting along path \(p\) by its residual capacity. (d) The residual network induced by the flow in (c); no path from \(s\) to \(t\) can be found using only edges with \(\delta=1\).
### Minimum Cost Matching
Recall that, in the matching graph \(G=(V,E)\), each edge \(ij\in E\) has a cost \(w_{ij}\geq 0\). The general form of a minimum cost matching problem (MinCost) is to find a matching \(M^{\dagger}\) that minimizes the total cost (1) under a constraint that \(M^{\dagger}\) has sufficiently large cardinality. Constraints on the cardinality of the matching prevent a trivial optimal solution of \(M^{\dagger}\) containing no matched pairs.
As with MaxCard, MinCost can be formulated as an ILP. For any size of matching \(m\), we aim to find an optimal solution \(z^{\dagger}\) satisfying
\[\mathbf{z}^{\dagger}=\operatorname*{arg\,min}_{\mathbf{z}}\sum_{ij\in E}w_{ij} z_{ij}\]
under the constraints that
\[\sum_{i\in V_{T}}z_{ij}\leq 1\ \forall\ j\in V_{C},\] \[\sum_{j\in V_{C}}z_{ij}\leq 1\ \forall\ i\in V_{T},\] \[\sum_{i\in V_{T}}\sum_{j\in V_{C}}z_{ij}\geq m,\] \[z_{ij}\in\{0,1\}\ \forall\ ij\in E. \tag{12}\]
The constraint \(\sum_{i\in V_{T}}\sum_{j\in V_{C}}z_{ij}\geq m\) ensures that the optimal matching \(M^{\dagger}\) satisfies \(|M^{\dagger}|\geq m\) (and, in fact, \(|M^{\dagger}|=m\), as any extra edges in \(M^{\dagger}\) can be removed without an increase in the total cost). In practice, optimal matching problems will set the cardinality to \(m^{\dagger}\), the maximum cardinality possible for a match.
#### 5.4.1 Cycle Canceling for Solving MinCost
Apart from the linear programming approach, there are a variety of approaches for solving MinCost. We discuss one of these approaches--cycle canceling--while noting that other
approaches, including cost-scaling, relaxation, and simplex approaches, may also yield relatively efficient solutions for MinCost.
As with MaxCard, cycle-canceling approaches for solving MinCost begin by transforming the problem into an optimal flow problem. The digraph \(G^{\prime}=(V,E^{\prime})\) described in Section 5.3.1 is constructed. For completeness, costs \(w\) are defined on all edges \(\vec{ij}\in E^{\prime}\) by setting \(w_{si}=1\) for all \(i\in V_{T}\) and \(w_{jt}=1\) for all \(j\in V_{C}\). FFA approaches can then be used to find an initial flow \(f_{0}\) satisfying \(|f_{0}|=m\). Finally, the residual graph \(H=(V,E_{f_{0}})\) is constructed, and costs \(w^{H}\) are assigned to each edge \(\vec{ij}\in E_{f_{0}}\) as follows:
\[w^{H}_{ij}=\left\{\begin{array}{ll}w_{ij},&\vec{ij}\in E^{\prime}\text{ and }f_{0}(\vec{ij})=1,\\ w_{ij},&\vec{ji}\in E^{\prime}\text{ and }f_{0}(\vec{ji})=0,\\ -w_{ij},&\text{otherwise.}\end{array}\right. \tag{13}\]
That is, costs are positive for forward edges that are used in the flow from \(s\) to \(t\) and for backward edges not used in this flow; costs are negative otherwise.
Searching for negative cycles and canceling them with a cycle canceling algorithm will then find the minimum cost matching. A _cycle_ \(C\) in the residual graph \(H\) is a path that begins and ends at the same node: \(C=\left\{\overrightarrow{i_{1}i_{2}},\overrightarrow{i_{2}i_{3}},\ldots,\overrightarrow{i_{\ell-1}i_{\ell}},\overrightarrow{i_{\ell}i_{1}}\right\}\). A _negative cycle_ is a cycle \(C^{-}\) in which the sum of the costs along edges in the cycle is negative: \(\sum_{\vec{ij}\in C^{-}}w_{ij}<0\). It can be shown that the matching \(M\) induced by a flow \(f\) is a minimum cost matching if and only if there are no negative cycles within the corresponding residual graph [Klein, 1967]. There are a variety of methods for finding negative cycles, including the Bellman-Ford algorithm and minimum-mean cycle approaches.
For MinCost specifically, each cycle within the residual graph will have the same number of forward edges traveling from a treated unit to a control unit as backward edges traveling from a control unit to a treated unit. Once a negative cycle \(C^{-}\) is found, the flow is updated by pushing flow forward through the backward edges in \(C^{-}\) and preventing flow from traveling through the forward edges in \(C^{-}\). The matching induced by the updated flow will have the
same cardinality as the matching with the original flow but will have a smaller total cost. The process of updating the flow from the negative cycle is called _cycle canceling_.
Rigorously, cycle canceling for MinCost is performed as follows:
1. **(Initialize flow)** Find initial flow \(f\) on \(G^{\prime}=(V,E^{\prime})\) with total flow \(|f|=m\). Define costs \(w\) on all edges \(E^{\prime}\) as previously described.
2. **(Update residual graph)** Update the residual graph \(H=(V,E_{f})\) with costs given in (13).
3. **(Find negative cycle or terminate)** Find a cycle \(C^{-}=\left\{\overrightarrow{i_{1}i_{2}},\overrightarrow{i_{2}i_{3}}\ldots, \overrightarrow{i_{\ell-1}i_{\ell}},\overrightarrow{i_{\ell}i_{1}}\right\}\) satisfying \(\sum_{\vec{i}\vec{j}\in C^{-}}w_{ij}<0\). If no such cycle exists, stop.
4. **(Update the flow)** Update the flow \(f\) along all edges \(\vec{ij}\in C\) as follows: \[\begin{array}{ll}f(\vec{ij})\longleftarrow f(\vec{ij})+1,&\vec{ij}\in C ^{-},\ \vec{ij}\in E^{\prime},\\ f(\vec{ij})\longleftarrow f(\vec{ij})-1,&\vec{ji}\in C^{-},\ \vec{ij}\in E^{ \prime}.\end{array}\] (14) Return to Step 2.
As mentioned before, each iteration of the cycle canceling algorithm will find a flow with the same total flow but a smaller total cost. Standard approaches for finding negative cycles require \(O(N|E^{\prime}|)\) time (Goldberg and Tarjan, 1989). However, unlike with MaxCard, there may not be a restrictive upper bound on the number of iterations required to find an optimal solution. If all costs are integer-valued, cycle canceling algorithms can terminate in \(O(N|E^{\prime}|\sum_{\vec{ij}\in E^{\prime}}w_{ij})\) iterations as each iteration will reduce the total cost by at least 1 (Kovacs, 2015). Additionally, some algorithms have been developed for MinCost that are guaranteed to terminate in polynomial time with respect to \(N\), even if costs are not integer-valued. The most well-known of these algorithms, minimum mean-cycle canceling (Goldberg and Tarjan, 1989; Radzik and Goldberg, 1994), requires at most \(O(N|E^{\prime}|^{2})\leq O(N^{5})\) iterations, leading to a total runtime of \(O(N^{2}|E^{\prime}|^{3})\leq O(N^{8})\). This is substantially more computationally complex than FFA. Again, ensuring sparsity in the matching graph can dramatically reduce the runtime--down to \(O(N^{5})\) for sparse graphs.
More recent approaches for solving MinCost may yield improvements to the total run time. However, despite these developments, current state-of-the-art algorithms for solving MinCost still require significantly more computation than those for solving MaxCard. Consequently, solving MinCost tends to be the computational bottleneck for statistical matching algorithms.
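In practice, MinCost instances are usually handed to an existing minimum-cost-flow solver rather than a hand-written cycle-canceling routine. The sketch below is illustrative only: it assumes the networkx library, whose max_flow_min_cost routine relies on a network simplex solver rather than cycle canceling, uses made-up integer costs, and sets the costs on source and sink edges to zero, which does not change which matching is optimal since every unit of flow uses exactly one of each.

```python
import networkx as nx

# Made-up bipartite matching problem with integer costs w_ij.
costs = {("T1", "C1"): 4, ("T1", "C2"): 1,
         ("T2", "C2"): 2, ("T2", "C3"): 5}

G = nx.DiGraph()
for (i, j), w in costs.items():
    G.add_edge(i, j, capacity=1, weight=w)     # treated -> control, cost w_ij
for i in {i for i, _ in costs}:
    G.add_edge("s", i, capacity=1, weight=0)   # source edges carry no cost here
for j in {j for _, j in costs}:
    G.add_edge(j, "t", capacity=1, weight=0)   # sink edges carry no cost here

flow = nx.max_flow_min_cost(G, "s", "t")       # maximum flow of minimum total cost
matching = [(i, j) for (i, j) in costs if flow[i][j] == 1]
total_cost = sum(costs[e] for e in matching)
print(matching, total_cost)
```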
## 6 Scaling down data in statistical matching
We have previously emphasized that potential gains in computational efficiency can be obtained by imposing sparsity in the matching graph \(G\). Thus, as observational studies grow in size, the use of matching methods that perform a pre-processing step to sufficiently sparsify \(G\) prior to matching seems critical. Ideally, the sparsification should be performed in a way to ensure that the matching solution on the sparse graph is similar to that on the original matching graph. While some matching methods that include this sparsification step have been developed, overall, there is still a substantial need for additional research in this area.
Figure 4: (a) The original flow network \(G\) with initial flow \(f\) and dissimilarities \(w\). (b) The residual graph for (a), with the augmenting path \(p\) colored in blue; consider the reverse edges \(C_{1}-T_{1}\) and \(C_{2}-T_{2}\), and select the path \(C_{1}-T_{1}-C_{2}-T_{2}-C_{1}\), which has negative cost \(-w_{11}+w_{12}-w_{22}+w_{21}<0\). (c) The flow in \(G\) that results from augmenting along path \(p\).
We now detail the logistics of matching on a sparse graph and give some examples of current techniques for imposing sparsity in the matching graph.
### Matching on a Sparse Graph
Matching graphs \(G=(V,E)\) can often be expressed as an \(N_{T}\times N_{C}\) cost matrix \(W\)--similar to the one constructed in Section 5.1. Cost matrices are easy to store as data and provide all the necessary information to perform a standard statistical matching algorithm.
The cost matrix \(W\) from a graph \(G\) is constructed as follows. If the edge \(ij\in E\), then \(W_{ij}=w_{ij}\). If \(ij\notin E\), then \(W_{ij}=\infty\) (or, in practice, is set to a number larger than any \(w_{ij}\) for \(ij\in E\)). For this latter case, the large cost prevents algorithms from matching unit \(i\) to \(j\) instead of to \(j^{\prime}\) if \(ij\notin E\) and \(ij^{\prime}\in E\) (provided both are possible). It requires \(O(N^{2})\) memory to store a cost matrix.
Note that, if \(G\) is a complete bipartite graph, then \(W\) will only have finite entries, and if \(G\) is a _dense_ graph--that is, if the number of edges is proportional to \(N^{2}\)--then a significant proportion of entries will be finite. However, if \(G\) is a sparse graph--that is, if the number of edges is proportional to \(N\)--then most of the entries of \(W\) are infinite. That is, the bulk of the \(O(N^{2})\) memory required to store \(W\) will be devoted to storing infinite values which will not be used when optimizing the matching algorithm. Figure 5 provides an example of a sparse graph.
Instead, when matching problems are sparse, _adjacency lists_ tend to be the preferred object for storing the information in \(G\). For every node \(i\), an adjacency list stores a vector \(v_{i}\) containing all nodes \(j\) which are incident to \(i\). Edge costs can be stored, for example, within a second vector \(v_{i}^{w}\), where the \(\ell\) th entry of \(v_{i}^{w}\) is the cost between \(i\) and the node in the \(\ell\) th entry of \(v_{i}\). If, on average, each node is incident to \(k\) other nodes, then the memory
required to store the adjacency list is proportional to \(Nk\) rather than \(N^{2}\), a substantial saving when the graph is sparse. Figure 6 compares the adjacency matrix, cost matrix, and adjacency list representations for the sparse graph in Figure 5. Adjacency lists also make it straightforward to encode restrictions on which treated and control units are allowed to be matched, and hence, to encode these types of constraints within matching problems.
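A minimal illustration of the two storage schemes (with made-up data) is given below: the dense cost matrix stores an entry for every treated-control pair, with infinite costs for disallowed pairs, while the adjacency-list representation stores only the edges that are actually present.

```python
import numpy as np

INF = np.inf

# Dense cost matrix for 3 treated x 4 control units; disallowed pairs get an infinite cost.
W = np.full((3, 4), INF)
W[0, 0], W[0, 1] = 2.0, 5.0
W[1, 1] = 1.0
W[2, 2], W[2, 3] = 3.0, 4.0            # O(N^2) entries, mostly infinite when sparse

# Equivalent adjacency-list representation: only O(N k) entries are stored.
adj = {0: [0, 1], 1: [1], 2: [2, 3]}                   # v_i : control units incident to i
adj_costs = {0: [2.0, 5.0], 1: [1.0], 2: [3.0, 4.0]}   # v_i^w : corresponding costs

n_dense = W.size
n_sparse = sum(len(v) for v in adj.values())
print(n_dense, n_sparse)
```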
Methods to find common support prior to matching may also be useful in reducing the total computational cost of matching. Regions of common support are often much smaller than the entire population of units under study, and ensuring common support will often lead to a dramatic reduction in the number of control units (and possibly, the number of treated units) prior to matching. However, most common support methods are not designed to impose sparsity--often, it is assumed that every treatment-control pair within the region of common support may be matched together--and additional steps are necessary to induce sparsity in the matching problem.
Figure 6: (a). Adjacency matrix, (b). Cost matrix, and (c) Adjacency list for the sparse graph in Figure 5
## 7 Software for Statistical Matching
There are a variety of software packages available for performing statistical matching without replacement, especially for the R programming language. Commonly used R packages include Matching[Sekhon, 2008], MatchIt[Stuart et al., 2011], and optmatch[Hansen, 2007]. Additionally, a recently developed package,
rcbalance[Pimentel et al., 2022], is explicitly designed to solve sparse matching problems, and allows users to input the statistical matching problem as an adjacency list.
Under the hood, however, most of these packages tend to use the same handful of algorithms to solve optimal matching problems. Historically, the most commonly-used algorithm has been the Relax-IV algorithm [Bertsekas et al., 1994]. This algorithm solves the matching problem using a coordinate ascent procedure on the dual of the assignment problem (see Section 5.1) [Bertsekas, 1981, Bertsekas and Tseng, 1988b,a] where an initial solution is obtained via an auction algorithm [Bertsekas et al., 1992]. This algorithm is free to use for academic research purposes, but requires special permission for non-research or commercial uses. Additionally, this algorithm has been largely unchanged since 1994.
The LEMON (Library for Efficient Modeling and Optimization in Networks) solver library has grown in recent popularity [Dezso et al., 2011]. LEMON can solve a wide variety of optimization problems on graphs, and in particular, has four efficient implementations for solving instances of MinCost: cycle cancelling, network simplex, cost scaling, and capacity scaling. These implementations appear to perform competitively when compared to other implementations [Kovacs, 2015]. Of particular note, LEMON is free and has a very permissive license that allows its use for both academic and commercial purposes.
Some statistical matching packages--for example,
MatchIt and designmatch[Zubizarreta et al., 2018]--allow for the use of proprietary optimization libraries to solve the matching problem. The most commonly used libraries include Gurobi[Gurobi Optimization, 2021] and CPLEX[CPLEX, 2009]. Like LEMON, these libraries are designed to efficiently solve a wide variety of linear and integer programming
problems, not just those related to MinCost or LUAP. However, these libraries are not free to use outside of academic purposes.
Finally, a potentially useful algorithm for solving statistical matching problems is the CS2 (cost-scaling 2) algorithm (Goldberg, 1997), a type of push-relabel algorithm. Simulation studies have shown this algorithm to be one of the most efficient available at solving MinCost (Kovacs, 2015). CS2 appears to have been free to download and use for academic purposes, and some implementations of this algorithm can be found with a Google search.
## 8 Statistical Matching on Massive Data Moving Forward
Overall, there appears to be a need for further development and implementation of algorithms for solving optimal statistical matching problems. Ideally, these algorithms should be tailored to take advantage of properties particular to the optimal matching problem--for example, if solving MinCost, that all edges have a capacity of 1. These algorithms may also benefit from smart choices of the dissimilarity measure. For example, additional approaches may be available if the edge costs satisfy the triangle inequality (Hochbaum and Shmoys, 1986).
Finally, as statistical matching problems continue to grow in scale, the computational complexity of these problems will necessitate statistical matching techniques that impose sparsity on the matching problem. Algorithms designed and implemented to exploit sparsity of the matching graph--for example, that in Axiotis et al. (2022)--seem ideal for these types of matching problems.
|
2309.17285 | Efficient Large Scale Medical Image Dataset Preparation for Machine
Learning Applications | In the rapidly evolving field of medical imaging, machine learning algorithms
have become indispensable for enhancing diagnostic accuracy. However, the
effectiveness of these algorithms is contingent upon the availability and
organization of high-quality medical imaging datasets. Traditional Digital
Imaging and Communications in Medicine (DICOM) data management systems are
inadequate for handling the scale and complexity of data required to be
facilitated in machine learning algorithms. This paper introduces an innovative
data curation tool, developed as part of the Kaapana open-source toolkit, aimed
at streamlining the organization, management, and processing of large-scale
medical imaging datasets. The tool is specifically tailored to meet the needs
of radiologists and machine learning researchers. It incorporates advanced
search, auto-annotation and efficient tagging functionalities for improved data
curation. Additionally, the tool facilitates quality control and review,
enabling researchers to validate image and segmentation quality in large
datasets. It also plays a critical role in uncovering potential biases in
datasets by aggregating and visualizing metadata, which is essential for
developing robust machine learning models. Furthermore, Kaapana is integrated
within the Radiological Cooperative Network (RACOON), a pioneering initiative
aimed at creating a comprehensive national infrastructure for the aggregation,
transmission, and consolidation of radiological data across all university
clinics throughout Germany. A supplementary video showcasing the tool's
functionalities can be accessed at https://bit.ly/MICCAI-DEMI2023. | Stefan Denner, Jonas Scherer, Klaus Kades, Dimitrios Bounias, Philipp Schader, Lisa Kausch, Markus Bujotzek, Andreas Michael Bucher, Tobias Penzkofer, Klaus Maier-Hein | 2023-09-29T14:41:02Z | http://arxiv.org/abs/2309.17285v1 | # Efficient Large Scale Medical Image Dataset Preparation for Machine Learning Applications
###### Abstract
In the rapidly evolving field of medical imaging, machine learning algorithms have become indispensable for enhancing diagnostic accuracy. However, the effectiveness of these algorithms is contingent upon the availability and organization of high-quality medical imaging datasets. Traditional Digital Imaging and Communications in Medicine (DICOM) data management systems are inadequate for handling the scale and complexity of data required to be facilitated in machine learning algorithms. This paper introduces an innovative data curation tool, developed as part of the Kaapana1 open-source toolkit, aimed at streamlining the organization, management, and processing of large-scale medical imaging datasets. The tool is specifically tailored to meet the needs of radiologists and machine learning researchers. It incorporates advanced search, auto-annotation and efficient tagging functionalities for improved data curation. Additionally, the tool facilitates quality control and review, enabling researchers to validate image and segmentation quality in large datasets. It also plays a critical role in uncovering potential biases in datasets by aggregating and visualizing metadata, which is essential for developing robust machine learning models. Furthermore, Kaapana is integrated within the Radiological Cooperative Network (RACOON), a pioneering initiative aimed at creating a comprehensive national infrastructure for the aggregation, transmission, and consolidation of radiological data across all university clinics throughout Germany.
Footnote 1: [https://github.com/kaapana/kaapana](https://github.com/kaapana/kaapana)
A supplementary video showcasing the tool's functionalities can be accessed at [https://bit.ly/MICCAI-DEMI2023](https://bit.ly/MICCAI-DEMI2023).
Keywords:Medical Imaging Data Curation Machine Learning Kaapana Dataset Preperation Quality Control Bias Detection
## 1 Introduction
In recent years, the development and application of machine learning algorithms in medical imaging have emerged as an instrumental component in advancing healthcare and diagnostic accuracy [1]. This advancement, however, depends heavily on the availability and organization of high-quality medical imaging datasets [2, 3]. The Digital Imaging and Communications in Medicine (DICOM) standard, commonly adopted for storing medical images, encapsulates both image data and vital metadata, including image modality, acquisition device manufacturer, and patient information like age and gender [4, 5]. This metadata holds considerable value in the development of robust medical imaging machine learning algorithms [6]. Traditional DICOM data management systems, while effective for individual scans or patients, struggle to efficiently handle the scale and complexity of the data needed to be facilitated in machine learning algorithms [7]. The demand for superior data curation tools is crucial for advancing the field of medical imaging [8, 9]. Despite recent progress in medical imaging data curation, existing solutions exhibit certain limitations. Some tools, while useful for data curation, are either proprietary, ill-equipped to handle large-scale medical datasets, or fail to fully exploit the benefits of DICOM headers [10].
Figure 1: Screenshot of the curation tool integrated into Kaapana. The gallery view displays series thumbnails accompanied by customizable metadata, providing a comprehensive visual overview. The sidebar showcases the metadata of the current selection, enabling swift detection of potential biases based on the DICOM metadata. This layout illustrates the tool’s user-friendly interface and its capabilities in efficient data curation and bias detection.
Additionally, while there are automated approaches to enhance the data curation process, they are not conveniently integrated into a user-friendly tool [11, 12, 13].
In response to these challenges, we have developed an innovative data curation tool as part of the Kaapana open-source toolkit [14, 15]. Kaapana is designed for advanced medical data analysis, especially in radiological and radiotherapeutic imaging, facilitating AI-driven workflows and federated learning approaches. By enabling on-site data processing and ensuring seamless integration with clinical IT infrastructures, it aims to address challenges in multi-center data acquisition and offers tools for standardized data processing workflows, distributed method development, and large-scale multi-center studies. Building up on Kaapana, our tool is designed to streamline the organization, management, and processing of large-scale medical imaging datasets, catering specifically to the needs of radiologists and machine learning researchers.
Our contribution is threefold: Our data management tool facilitates (1) efficient data curation by advanced search, auto-annotation and tagging, (2) quality control and review and (3) dataset bias detection by metadata visualization.
Kaapana is a constituent of the Radiological Cooperative Network (RACOON), an initiative to establish a nationwide infrastructure for collecting, transferring, and pooling radiological data across all German university clinics. Integrating our tool in Kaapana paves the way for its imminent deployment across all German university clinics, facilitating clinical validation.
## 2 Methodology
The essence of our methodology is to extend the capabilities of Kaapana, an open-source toolkit designed for medical data analysis and platform provisioning, by incorporating a comprehensive tool for managing, curating, and processing large-scale medical imaging datasets.
### Technical Infrastructure
Our system benefits from Kaapana's robust technical infrastructure. Vue.js, a versatile JavaScript framework, powers the frontend, ensuring user-friendly, dynamic, and responsive web interfaces. FastAPI, a high-performance web framework, forms the backbone of the backend, enabling efficient communication with the frontend.
The persistence layer is three-fold, each serving a unique purpose. The dcm4chee Picture Archiving and Communication System (PACS) stores the original DICOM images, safeguarding their integrity and availability. For efficient management of large datasets, the DICOM Header is converted to JSON and stored in OpenSearch, a powerful open-source search engine known for its quick querying abilities. PostgreSQL, an open-source object-relational database system, forms the mapping layer, establishing connections between data and respective
datasets, hence facilitating effective categorization and retrieval. The full technical infrastructure utilized is visualized in Fig. 2.
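A sketch of how such a metadata pipeline could look is given below; it assumes the pydicom and opensearch-py packages, is not the actual Kaapana implementation, and the index name and connection settings are placeholders.

```python
import pydicom
from opensearchpy import OpenSearch

# Hypothetical connection settings; not Kaapana's actual configuration.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

def index_dicom(path, index_name="dicom-metadata"):
    """Read a DICOM file, convert its header to JSON, and store it in OpenSearch."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)   # metadata only, skip pixel data
    doc = ds.to_json_dict()                                # DICOM header as a JSON dict
    series_uid = ds.SeriesInstanceUID                      # use the series UID as document id
    client.index(index=index_name, id=series_uid, body=doc)

index_dicom("example_ct_slice.dcm")
```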
While our focus has primarily been on DICOM data, our solution also demonstrates flexibility in accommodating other formats. Kaapana is capable of transforming images in the Neuroimaging Informatics Technology Initiative (NIfTI) data format into DICOMs. These transformed images can then be curated. It's important to note, however, that metadata extraction is not possible from the NIfTI format; only the image data is preserved in the transformation. Nevertheless, this flexibility in data handling further extends the applicability of our tool in a variety of medical imaging contexts.
### Graphical User Interface
The graphical user interface is seamlessly integrated into Kaapana's Vue.js frontend. Throughout the development process, which was conducted in close collaboration with radiologists, it was highlighted that varying use cases necessitate distinct user interface requirements. Consequently, the user interface has been designed to be highly adaptable, offering an array of customizable settings to cater to diverse needs. Overall, the user interface consists of a three-part layout visualized in Figure 3.
**Search.** A sophisticated full-text search function, supporting wildcard search and free-text filtering, assists users in efficiently locating specific items based on image metadata. Additionally, it provides autocomplete functionality, streamlining the search process.
Figure 2: Illustration of the technical infrastructure, highlighting the frontend powered by Vue.js, the backend using FastAPI, and the three-fold data persistence layer consisting of dcm4chee, OpenSearch, and PostgreSQL. The arrows visualize the communication between the components.
**Gallery View.** The gallery view provides a visual display of DICOM series, presenting them in a thumbnail format along with customizable metadata. The thumbnail creation is in compliance with the DICOM standard [4, 5], which accommodates a broad spectrum of image modalities, including but not limited to, Structured Reports (SR), CT, or Magnetic Resonance Imaging (MRI). Given the current interest in segmentation algorithms within the medical imaging community [16], our tool automatically generates thumbnails for DICOM-SEGs or RTStructs that illustrate the segmentation superimposed on the original image. The gallery view also includes a multi-selection feature that facilitates bulk operations (see Fig. 1).
**Sidebar.** The sidebar serves a dual function: as a metadata dashboard and a detail view. The configurable metadata dashboard aggregates and displays comprehensive metadata distributions based on the current selection in the gallery view. These metadata distributions are interactive, allowing for selection and zooming for detailed examination. They can also be downloaded as charts or CSV files, providing flexibility in data analysis and sharing.
In the detail view mode, activated upon series selection, it showcases an interactive 3D visualization of the chosen DICOM series using the integrated (adjusted) OHIF Viewer [7] next to a searchable table with the series' metadata, including the DICOM Headers.
Figure 3: Filtering for series containing lower lung lobes. The gallery view presents thumbnails with superimposed segmentations, while one selected series is opened in the sidebar for interactive 3D volume visualization with segmentations.
### Machine Learning Integration
Kaapana is capable of executing state-of-the-art machine learning algorithms robustly. It is already equipped with a robust body part regression algorithm, allowing automatic assignment of which body part is covered in a given CT image [17, 13]. Since one of Kaapana's major strengths is its easy extendability, we integrated TotalSegmentator [12] to further extend the automatic data curation capabilities. TotalSegmentator is based on nnUNet [18], an automatically adapting semantic segmentation method, which allows segmenting 104 anatomical structures (27 organs, 59 bones, 10 muscles, 8 vessels) from CT images. This integration significantly enhances the automatic annotation capabilities. Furthermore, users can filter for these body parts or anatomical structures and speed up their curation process even further.
### Data Management and Workflow Execution
Our tool incorporates robust data management and workflow execution capabilities. Users can perform various actions on multiple selected series simultaneously, such as adding or removing series from a dataset and initiating workflows. An intuitive tagging system, with shortcut and autocomplete support, streamlines data annotation and categorization.
## 3 Results
Our data curation tool, integrated into Kaapana, provides a comprehensive and intuitive interface for managing, organizing, and processing extensive medical imaging datasets, thereby contributing significantly to efficient dataset curation for machine learning algorithms. Here, we highlight potential applications of our tool through a series of illustrative examples:
### Dataset Management, Auto-Annotation and Tagging
Radiologists frequently handle vast collections of medical images, encompassing multiple patients, studies, and imaging modalities [9]. A common scenario involves a radiologist tasked with organizing thousands of CT and MRI scans acquired over several years for a large-scale study. Concurrently, in large-scale medical imaging studies curating and annotating an extensive collection of CT scans presents a formidable challenge. This requires a meticulous analysis of thousands of scans for visible disease symptoms, a process that is both labor-intensive and time-consuming.
Our tool offers a solution to these challenges with its gallery-style view, multi-select functionality, and advanced search features. Radiologists can swiftly sift through images, categorizing them into different datasets based on various attributes, such as patient demographics, study type, or imaging modality. The
tool's advanced search functionality enables efficient image curation by allowing filters for DICOM metadata or algorithm outcomes, such as body part or anatomical structure.
These machine-assisted annotations provide an initial dataset that radiologists can validate and refine, significantly reducing the manual labor required and streamlining the annotation process. Furthermore, the gallery view, coupled with tagging functionality, enhances the organization of the curated and annotated dataset. This integrated approach to data organization, management, and annotation significantly alleviates the burden on radiologists and accelerates the preparation of data for machine learning applications.
### Quality Control and Review
Our tool is particularly beneficial in scenarios where researchers need to validate the quality of images and segmentations in large medical imaging datasets, such as those obtained from multi-center studies. The tool's gallery and detail views can be effectively utilized to swiftly pinpoint images with poor quality or erroneous segmentations. An illustration of this capability is evident in the lower row of Fig. 4, where a multi-organ segmentation algorithm was applied to CT images but yielded subpar results. While 2D thumbnails may not always suffice for quality control of 3D segmentation algorithms, they can significantly expedite the quality control process in certain cases.
For instances where thumbnails fall short, the detail view allows researchers to navigate through the 3D volumes for a more comprehensive quality assurance.
Figure 4: Showcasing the gallery view’s ability to handle various DICOMs and visually inspecting problematic series, such as the noisy series (top row, fourth from left) and the adjacent patient report. The radiologist can then exclude those problematic series. The lower row emphasizes the tool’s capacity to quickly spot low-quality segmentations of a 3D CT image.
Moreover, as Magudia et al.[9] highlight, quality control for DICOM Headers is particularly crucial in multi-center studies due to data heterogeneity. Our tool caters to this need by displaying and allowing filtering of metadata.
### Uncovering Potential Bias in Datasets
Dataset biases, such as disparities in patient demographics or variations in scanner types and configurations, can profoundly influence the performance of machine learning models [6]. Such biases may lead to models that exhibit excellent performance during training and validation phases but falter in real-world applications due to an over-dependence on biased features. For example, a model predominantly trained on data from a specific scanner may struggle to generalize to images produced by other scanners [19]. Our tool can play a pivotal role in identifying these biases through its metadata dashboard. By aggregating and visualizing the metadata of selected items, researchers can discern patterns or inconsistencies that could signal potential biases. The visualization of the metadata distribution from a subset of the LIDC-IDRI Dataset's CT scans, as shown in Fig. 1, underscores the tool's ability to detect such biases [20]. A machine learning model trained on this dataset might inadvertently learn the skewed distribution of convolution kernels or scanners, which could result in failure on unseen data which does not represent the learned distribution.
By offering early detection of bias, the tool enables researchers to implement corrective strategies, such as data augmentation or bias mitigation techniques. This enhances the generalizability and resilience of the machine learning models developed, ensuring they perform optimally across varied scenarios.
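As a simple illustration of this kind of metadata inspection (a sketch with made-up metadata, not output from the tool), the distribution of DICOM attributes such as the scanner manufacturer or convolution kernel can be tabulated to reveal a skewed dataset.

```python
import pandas as pd

# Made-up metadata extracted from DICOM headers of a CT dataset.
meta = pd.DataFrame({
    "Manufacturer":      ["GE", "GE", "GE", "GE", "Siemens", "GE", "GE", "Philips"],
    "ConvolutionKernel": ["STANDARD"] * 6 + ["B30f", "C"],
    "PatientSex":        ["M", "M", "M", "F", "M", "M", "M", "M"],
})

# Normalized frequency of each attribute value; large imbalances hint at dataset bias.
for column in meta.columns:
    print(meta[column].value_counts(normalize=True), "\n")
```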
## 4 Discussion and Conclusion
The development of an efficient data curation tool as part of the Kaapana open-source toolkit, as presented in this paper, addresses a critical need in the field of medical imaging. The availability and organization of high-quality medical imaging datasets are paramount for the successful application of machine learning algorithms in healthcare. The tool's integration with Kaapana provides a robust infrastructure for managing, curating, and processing large-scale medical imaging datasets.
One of the significant contributions of this tool is the streamlined annotation process. By employing advanced search functionality and auto-annotation capabilities through machine learning algorithms such as TotalSegmentator and Body Part Regression, the tool significantly reduces the manual labor required for image curation. Moreover, the tool's ability to support quality control and review mechanisms is vital for ensuring the reliability of datasets, especially in multi-center studies. The integration of a metadata dashboard is particularly noteworthy, as it enables the detection of potential biases in datasets. Furthermore, the open-source nature of the tool promotes collaboration and sharing among researchers, which is essential for advancing medical imaging research.
By leveraging Kaapana's federated learning capabilities, in future work curated datasets can be used in downstream federated learning use cases, enabling a collaborative approach to machine learning that respects data privacy and locality constraints. While the use cases demonstrate the utility of the tool, quantifying its enhancements remains a primary focus for future work. Furthermore, integrating even more advanced algorithms for automatic image annotation could further improve the efficiency and accuracy of the tool. Another potentially promising advancement could be the integration of Electronic Health Record (EHR) data, which plays a crucial role in the process of creating datasets.
These future directions aim to ensure that the Kaapana data curation tool remains at the forefront of medical imaging research, catering to the evolving needs of radiologists and machine learning researchers.
Funded by "NUM 2.0" (FKZ: 01KX2121)
|
2309.11192 | The Impact of Surface Passivation on Kapitza Resistance at the Interface
between a Semiconductor and Liquid Nitrogen | Cooling electronic devices to cryogenic temperatures (< 77 K) is crucial in
various scientific and engineering domains. Efficient cooling involves the
removal of heat generated from these devices through thermal contact with
either a liquid cryogen or a dry cryostat cold stage. However, as these devices
cool, thermal boundary resistance, also known as Kapitza resistance, hinders
the heat flow across thermal interfaces, resulting in elevated device
temperatures. In transistors, the presence of passivation layers like Silicon
Nitride (SiN) introduces additional interfaces that further impede heat
dissipation. This paper investigates the impact of passivation layer thickness
on Kapitza resistance at the interface between a solid device and liquid
nitrogen. The Kapitza resistance is measured using a capacitance thermometer
that has been passivated with SiN layers ranging from 0 to 240 nm. We observe
that Kapitza resistance increases with increasing passivation thickness. | Babak Mohammadian, Mark A. McCulloch, Thomas Sweetnam, Valerio Gilles, Lucio Piccirillo | 2023-09-20T10:25:47Z | http://arxiv.org/abs/2309.11192v2 | The Impact of Surface Passivation on Kapitza Resistance at the Interface between a Semiconductor and Liquid Nitrogen
###### Abstract
Cooling electronic devices to cryogenic temperatures (\(<\) 77 K) is crucial in various scientific and engineering domains. Efficient cooling involves the removal of heat generated from these devices through thermal contact with either a liquid cryogen or a dry cryostat cold stage. However, as these devices cool, thermal boundary resistance, also known as Kapitza resistance, hinders the heat flow across thermal interfaces, resulting in elevated device temperatures. In transistors, the presence of passivation layers like Silicon Nitride (SiN) introduces additional interfaces that further impede heat dissipation. This paper investigates the impact of passivation layer thickness on Kapitza resistance at the interface between a solid device and liquid nitrogen. The Kapitza resistance is measured using a capacitance thermometer that has been passivated with SiN layers ranging from 0 to 240 nm. We observe that Kapitza resistance increases with increasing passivation thickness.
**Keywords:** Kapitza Resistance, Passivation layer, Self-heating
## 1 Introduction
To ensure the linearity and reliability of electronic devices operating at high power and frequency, effective cooling is required [1, 2, 3]. In transistors, insufficient heat dissipation in the active channel leads to significant temperature elevations, commonly referred to as self-heating, which adversely impacts the device's performance [4, 5]. Intensive
efforts have been undertaken to effectively understand and mitigate the challenge of self-heating, particularly at cryogenic temperature [6, 7, 8, 9]. For example, the simulations reported in [10] show that heat dissipation from the active channel of a transistor is limited by the lack of states in phonon black-body radiation. Consequently, even when devices are cooled below 1 K, the temperature within the active region remains 10 to 20 K. For transistors, this phenomenon causes the noise figure to plateau below 20 K [11].
Further, to avoid damage and contamination, transistors are commonly coated with a nitride or oxide passivation layer. It is reported in [12] that at room temperature, the self-heating in the active channel of the power transistor is affected by the passivation layer's thickness and the thermal conductivity of materials utilized in the device. However, the effect of surface passivation on thermal dispersion has not been measured at cryogenic temperatures. In this paper, we present preliminary results showing the impact of passivation layer thickness on the Kapitza resistance between a gold-plated quartz substrate and liquid nitrogen. In section 2, we will discuss Kapitza resistance and how it can impede phonon transmission at material interfaces. In sections 3 and 4, capacitor fabrication and thermometer calibration are presented, and related plots are discussed. Finally, our experimental setup and how we use a capacitance thermometer to measure the Kapitza resistance are presented in section 5.
## 2 Kapitza Resistance
When heat flows from a solid device to a liquid cryogen, there will be a temperature difference in the interfacial area. This temperature difference is due to the acoustic impedance mismatch between the materials, which impedes the flow of the phonons across the interface, resulting in a measurable resistance known as Kapitza resistance [13, 14, 15]. The Kapitza resistance, \(R_{K}\), for constant thermal flux at the interface is given by:
\[R_{K}=A\cdot\frac{\Delta T}{\dot{Q}}(\mathrm{m}^{2}\mathrm{K}/\mathrm{W}) \tag{1}\]
where \(\Delta T\) is the temperature difference across the liquid-solid interface, \(A\) is the area of the interface, and \(\dot{Q}\) is the total heat flow.
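Evaluating Eq. (1) is straightforward; the short sketch below uses made-up values for the interface area, heat flow, and temperature difference purely to illustrate the calculation.

```python
def kapitza_resistance(delta_T, area, q_dot):
    """Kapitza resistance R_K = A * dT / Q (m^2 K / W), Eq. (1)."""
    return area * delta_T / q_dot

# Hypothetical example values: 1 cm^2 interface, 10 mW of heating, 0.5 K temperature step.
area = 1e-4      # m^2
q_dot = 10e-3    # W
delta_T = 0.5    # K
print(kapitza_resistance(delta_T, area, q_dot))   # -> 5e-3 m^2 K / W
```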
In the case of transistors, when transistors are cooled with liquid cryogens, we hypothesize that the Kapitza resistance hinders the phonon heat dissipation from the active region, resulting in it being hotter than its surroundings [10].
## 3 The Fabrication
To investigate the effect of the surface passivation layer on the Kapitza resistance at the interface of the sample and liquid nitrogen, we have fabricated parallel plate quartz capacitors that act as thermometers [16]. Unlike certain materials that experience significant fluctuations in dielectric constants (\(\varepsilon\)) with temperature, quartz provides a stable dielectric constant as well as high hardness and minimal thermal contraction stability under temperature variation [17]. Also, since its dielectric constant is
temperature-dependent in the region of interest (60-100 K), it is chosen as the dielectric of the capacitor in this experiment. To fabricate the capacitors, the quartz wafer is diced into \(20\times 20\) mm\({}^{2}\) chips with a substrate thickness tolerance and cutting deviation of \(\pm 20\)\(\mu\)m and \(\pm 0.1\) mm, respectively. The samples are then patterned with a laser writer and coated with (50 \(\pm\) 2) nm layers of Au on both sides, using an Electron Beam Evaporator. Further, to vary the heat flux across the Au-SiN interface, one side of the capacitor acts as a heater, using a meandering path that is patterned with photo-lithography and a lift-off process [16]. Also, a thin border electrode (guard electrode) is used to protect the chip from stray and fringe capacitances. In the second phase of fabrication, passivation layers of SiN with thicknesses ranging from 0 to 240 nm are deposited over both sides of the capacitor by Plasma-Enhanced Chemical Vapor Deposition (PECVD). Fig. 1 illustrates the fabricated samples.
To use the fabricated capacitor as a reliable thermometer, it needs to be calibrated. Therefore, the variation of the dielectric constant of quartz should be measured with respect to temperature to provide a calibration plot.
## 4 Calibration
A cryostat equipped with a two-stage mechanical cooler (dry cooling) is used to calibrate the dielectric constant of the samples against temperatures from 60 to 100 K. A copper sample holder is used to provide a thermal contact for the sample in the cooling process, Fig. 2.
It contains two heaters and a calibrated temperature sensor (Lakeshore Cernox: CX-1030-CU-HT-0.3L). To measure the capacitance, two pairs of 0.1 mm-thick shielded copper wires are indium soldered at the allocated bond pads on the top and bottom of the samples. The copper wires are then soldered to the SMA connectors, which are connected to a Keysight LCR meter, Model: E4980AL, using stainless steel coaxial cables. The assembly is then attached to the 4 K stage of a cryostat to conduct the calibration process.
Figure 1: The capacitor model, consisting of a quartz dielectric, gold plates, an embedded heater, and different thicknesses of SiN (top and bottom sides). Fabricated samples: Unpassivated and 60nm of SiN. The 3 small rectangles in the 60 nm samples are unpassivated to allow the capacitance measurement and bias wires to be attached.
The calibration plots for cooling are shown in Fig. 3. Since the experiment is based on the immersion of samples in liquid nitrogen, the acquired calibration data requires a correction to compensate for the slightly different electromagnetic environment seen by the capacitor when immersed in the liquid nitrogen dewar.
Fig. 3(a) shows that the dielectric constant in liquid nitrogen (wet cooling) is higher than that measured in the cryostat (dry cooling). Therefore, we apply a linear shift to the calibration to correct for the change in the environment, which can be seen in Fig. 3(b). In order to validate this phenomenon, we conducted an experiment involving the immersion of five separate samples (ranging from 0 to 240 nm) in liquid nitrogen, measured the capacitance with the LCR meter, and averaged the readings. The effect was repeatable
Figure 3: Calibration plot: (a) The calibration plot in dry cooling (cryostat). The red marker shows the measured dielectric constant at 77 K (the reference point in dry cooling), while the green marker illustrates the measured dielectric constant in liquid nitrogen (77K). (b) Adjusted calibration plot from 60K to 100K due to the change in measuring environment (liquid nitrogen reference point).
Figure 2: The copper sample holder, with the central sample epoxied in place and connected to the outer SMA connectors with copper wires. The heaters allow the temperature of the sample to be varied, with the calibrated thermometer (Cernox) providing an accurate temperature value.
with the measured dielectric constant at 4.61741 \(\pm\) 7.78E-05 and assumed to be linear across the temperature region of interest.
## 5 Kapitza Resistance Measurement
To derive the Kapitza resistance using Eq.1, it is required to measure the surface temperature while applying constant power for different thicknesses of passivation.
To minimize noise and the impact of the test fixture, a 4-wire capacitance measurement, as explained in Section 4, is performed. Also, to provide the DC bias, two bond pads are allocated at each end of the meander and connected to a Keithley source meter, model: 2401, via coaxial cables. In this way, the applied DC current makes the meander a controllable heater (\(\sim\) 14 \(\Omega\)). The wired sample is then placed on a Polytetrafluoroethylene (PTFE) sample holder (thickness: 1.5 mm), using plastic washers, Fig. 4(a). A low pass filter with a cutoff frequency of 100 kHz is also used to reduce unwanted noise interference (mostly from the source meter at 80 kHz). The set-up can be seen in Fig. 4(b).
To measure the Kapitza resistance, each sample undergoes immersion in the liquid nitrogen bath for 20 minutes to ensure it is properly thermalized. The volume of the liquid nitrogen is chosen in a way that applying heat to the sample does not warm the reservoir significantly. Also, the generated heat by the LCR meter into the sample is in the order of nW, which is negligible.
For each sample, 200 readings of the capacitance at a frequency of 200 kHz are acquired and averaged. This is repeated at six different power levels, ranging from 2.04 mW to 8.28 mW, corresponding to a current sweep from 10 to 20 mA in steps of 2 mA. The dielectric constant is then calculated, and the corresponding temperature value is found from the fit in Fig. 3(b). The Kapitza resistance is then calculated using Eq. 1 and shown in Fig. 5.
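The post-processing chain just described (average the LCR readings, invert the calibration fit, convert the bias current to heater power, and apply Eq. (1)) can be sketched as follows. The electrode geometry and the linear calibration slope below are assumed placeholder values, not the ones extracted in this work; only the 14 \(\Omega\) heater resistance and the 77 K reference dielectric constant are taken from the text.

```python
import numpy as np

EPS0 = 8.854e-12             # vacuum permittivity, F/m
A_PLATE = 15e-3 * 15e-3      # effective electrode area, m^2 (assumed)
D_QUARTZ = 0.5e-3            # quartz dielectric thickness, m (assumed)
A_INTERFACE = 20e-3 * 20e-3  # sample face wetted by the liquid nitrogen, m^2
R_HEATER = 14.0              # meander resistance, Ohm (from the text)
CAL_SLOPE, CAL_EPS_77K = 2.0e-4, 4.61741   # assumed linear calibration eps_r(T)

def surface_temperature(cap_readings):
    """Average the 200 capacitance readings (in F) and invert the linear calibration."""
    eps_r = np.mean(cap_readings) * D_QUARTZ / (EPS0 * A_PLATE)
    return 77.0 + (eps_r - CAL_EPS_77K) / CAL_SLOPE

def kapitza_resistance(cap_readings, bias_current):
    """Eq. (1) with Q = I^2 * R set by the DC bias and the bath held at 77 K."""
    q_dot = bias_current**2 * R_HEATER
    delta_t = surface_temperature(cap_readings) - 77.0
    return A_INTERFACE * delta_t / q_dot
```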
Figure 4: (a) PTFE-based sample holder. (b) Measurement setup, comprising the measurement schematic of 4-wire measurement of capacitance and DC biasing of the heater
The data in Fig. 5 shows that for a constant power, as the passivation thickness increases, there is a corresponding rise in the Kapitza resistance. Therefore, this supports our assumption that the passivation layer has a direct impact on the self-heating of the active regions in cryogenic electronics.
## 6 Conclusion
Self-heating behavior of the active channel of transistors puts constraints on the performance of these devices. Since the surface of transistors is commonly coated with a passivation layer, thermal dissipation can be restricted by this layer. In this paper, we presented an experimental setup to evaluate the effect of surface passivation on the Kapitza resistance at the interface of a solid and liquid nitrogen. We found that the Kapitza resistance increased for thicker passivation layers, which enhances the self-heating effect. The results suggest that, since the Kapitza resistance grows with the passivation layer thickness, reducing this thickness may mitigate self-heating in transistors; further work is underway to investigate this.
## Acknowledgments
This project received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 811312 for the project Astro-Chemical Origins (ACO) and UKRI ST/X006344/1.
We wish to acknowledge the support of the National Graphene Institute team, Dr. Lee Hague, Andrew Brook, Robert Howard, Matthew Whitelegg, and Dr. Kunal Lulla, offering suggestions in the fabrication process.
Figure 5: Kapitza resistance values for different passivation layers and applied power. Considering a constant power for each sample, by increasing the thickness of the passivation layer, the Kapitza resistance increases. |
2302.14398 | A spin model for intrinsic antiferromagnetic skyrmions on a triangular
lattice | Skyrmions are prospected as the potential future of data storage due to their
topologically protected spin structures. However, traditional ferromagnetic
(FM) skyrmions experience deflection when driven with an electric current,
hindering their usage in spintronics. Antiferromagnetic (AFM) skyrmions,
consisting of two FM solitons coupled antiferromagnetically, are predicted to
have a zero Magnus force, making them promising candidates for spintronic
racetrack memories. Currently, they have been stabilized in synthetic AFM
structures, i.e. multilayers hosting FM skyrmions, which couple
antiferromagnetically through a non-magnetic spacer, while recent
first-principles simulations predict their emergence in an intrinsic form,
within a row-wise AFM single monolayer of Cr deposited on a PdFe bilayer grown
on Ir(111) surfaces. The latter material forms a triangular lattice, where
single and interlinked AFM skyrmions can be stabilized. Here, we explore the
minimal Heisenberg model enabling the occurrence of such AFM solitons and the
underlying phase diagrams by accounting for the interplay between the
Dzyaloshinskii-Moriya and Heisenberg exchange interactions, as well as the
magnetic anisotropy and impact of magnetic field. By providing the fundamental
basis to identify and understand the behavior of intrinsic AFM skyrmions, we
anticipate our model to become a powerful tool for exploring and designing new
topological magnetic materials to conceptualize devices for AFM spintronics. | Amal Aldarawsheh, Moritz Sallermann, Muayad Abusaa, Samir Lounis | 2023-02-28T08:28:35Z | http://arxiv.org/abs/2302.14398v2 | # A spin model for intrinsic antiferromagnetic skyrmions on a triangular lattice
###### Abstract
Skyrmions are prospected as the potential future of data storage due to their topologically protected spin structures. However, traditional ferromagnetic (FM) skyrmions experience deflection when driven with an electric current, hindering their usage in spintronics. Antiferromagnetic (AFM) skyrmions, consisting of two FM solitons coupled antiferromagnetically, are predicted to have a zero Magnus force, making them promising candidates for spintronic racetrack memories. Currently, they have been stabilized in synthetic AFM structures, i.e. multilayers hosting FM skyrmions, which couple antiferromagnetically through a non-magnetic spacer, while recent
first-principles simulations predict their emergence in an intrinsic form, within a row-wise AFM single monolayer of Cr deposited on a PdFe bilayer grown on Ir(111) surfaces. The latter material forms a triangular lattice, where single and interlinked AFM skyrmions can be stabilized. Here, we explore the minimal Heisenberg model enabling the occurrence of such AFM solitons and the underlying phase diagrams by accounting for the interplay between the Dzyaloshinskii-Moriya and Heisenberg exchange interactions, as well as the magnetic anisotropy and impact of magnetic field. By providing the fundamental basis to identify and understand the behavior of intrinsic AFM skyrmions, we anticipate our model to become a powerful tool for exploring and designing new topological magnetic materials to conceptualize devices for AFM spintronics.
**Keywords: Intrinsic antiferromagnetic skyrmions, spin model, single and interchained AFM skyrmions, triangular lattice, thermal stability, phase diagram, antiferromagnetism, topology.**
## Introduction
Since their early observation [1, 2, 3, 4], skyrmions, which are magnetic textures with unique properties, have garnered the attention of the condensed matter community. They are seen as potential bit representatives for future spintronic devices [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] due to their nontrivial topological twists and exotic properties [16, 17, 18, 19]. Skyrmion-based racetrack memory devices are expected to remarkably reduce the power consumption in data flow compared to domain walls [20, 21]. However, ferromagnetic (FM) skyrmions are afflicted with various drawbacks that limit their optimal utilization such as: their
sensitivity to stray fields [21, 22, 23, 24], their susceptibility to dipolar interactions [25], and their complex response to applied currents leading to unwanted deflections [18], which can become even more involved in the presence of defects [8, 9, 26, 27, 28, 29, 30, 31]. In contrast, antiferromagnetic (AFM) skyrmions have several advantages over their FM counterparts: the stray field cancels out [32], they are immune to the Magnus force [23, 33, 34, 35, 36, 37], they offer the potential for ultrafast dynamics [38], and they are able to overcome hole defects [39].
Several recent theoretical studies have investigated the realization of individual skyrmions or their periodic arrangement, assuming a square [33, 34, 35, 36, 37, 23, 38, 39, 40] or a honeycomb lattice [36].
On the experimental side, synthetic AFM skyrmions were unveiled in multilayers, where FM films host regular FM skyrmions with an interfilm coupling of AFM nature through various spacers [41, 42, 43, 40, 44], while complex topological AFM objects were found in a bulk phase [45]. In Ref. [46], we predicted the emergence of intrinsic single and interchained AFM skyrmions on a triangular lattice of a row-wise AFM (RW-AFM) Cr layer deposited on PdFe/Ir(111). The latter substrate became over the last decade a perfect test bed for a plethora of phenomena pertaining to FM skyrmions [4, 6, 7, 8, 9, 47, 48, 49, 50, 51, 52]. The goal of the current work is to introduce a Heisenberg model that incorporates the essential magnetic interactions required to produce AFM skyrmions on a triangular lattice. We perform atomistic spin simulations on the basis of the Landau-Lifshitz-Gilbert (LLG) equations as implemented in the Spirit code [53]. We consider the interplay between the exchange interactions, Dzyaloshinskii-Moriya interactions (DMI), the magnetic anisotropy and the impact of an external magnetic field to establish the phase diagrams of the intrinsic AFM skyrmions
while inspecting their stability via simulations based on the geodesic nudged elastic band method (GNEB) [53, 54, 55]. Our model offers a robust approach to comprehend the behavior of AFM skyrmions in a triangular lattice with the aim of understanding the required ingredients for their stabilization and to create novel materials and devices for AFM spintronics.
## Results
### AFM system
We examine a single layer of an antiferromagnetic spin system on a triangular lattice using the Heisenberg Hamiltonian,
\[H=-\sum_{<i,j>}J_{ij}\:\mathbf{S}_{i}\cdot\mathbf{S}_{j}-\sum_{<i,j>}\mathbf{D} _{ij}\cdot(\mathbf{S}_{i}\times\mathbf{S}_{j})-K\sum_{i}(S_{i}^{z})^{2}-\sum_{ i}h_{i}S_{i}^{z}, \tag{1}\]
where \(i\) and \(j\) are site indices, each carrying a magnetic moment, and \(\mathbf{S}\) is the unit vector of the magnetic moment. \(J\) is the Heisenberg exchange coupling strength, being negative for an AFM interaction, \(\mathbf{D}_{ij}\) is the DMI vector, and \(K\) is the magnetic anisotropy energy per atom, favoring an out-of-plane orientation if positive. \(h_{i}=\mu_{i}B\) describes the Zeeman coupling to the atomic spin moment \(\mu\) at site \(i\), assuming \(\mu=1\)\(\mu_{B}\) and an out-of-plane field.
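For concreteness, a minimal sketch of evaluating Eq. (1) for a given spin configuration is shown below. The neighbour lists and the per-bond DMI vectors are assumed to be precomputed for the triangular lattice; this is not the Spirit implementation used for the simulations, only an illustration of the energy functional.

```python
import numpy as np

def spin_energy(spins, exchange_bonds, dmi_bonds, K, B):
    """spins: (N, 3) array of unit vectors S_i; exchange_bonds: list of (J_ij, i, j);
    dmi_bonds: list of (D_ij vector, i, j); K: easy-axis anisotropy; B: field (mu = 1)."""
    E = 0.0
    for J, i, j in exchange_bonds:                 # -sum_ij J_ij S_i . S_j
        E -= J * np.dot(spins[i], spins[j])
    for D, i, j in dmi_bonds:                      # -sum_ij D_ij . (S_i x S_j)
        E -= np.dot(D, np.cross(spins[i], spins[j]))
    E -= K * np.sum(spins[:, 2] ** 2)              # -K sum_i (S_i^z)^2
    E -= B * np.sum(spins[:, 2])                   # -sum_i h_i S_i^z with h_i = mu*B
    return E
```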
### Phase diagrams
We start our investigations by determining the conditions to form a RW-AFM spin state. Such a phase was observed experimentally in Mn/Re(0001) [56, 57]. As established in Ref. [58], the minimum set of Heisenberg exchange interactions involves the couplings to the first, second, and third nearest neighbors, \(J_{1}\), \(J_{2}\) and \(J_{3}\), respectively, as shown in the upper inset of Fig. 1a. The latter figure illustrates the underlying phase diagram. We find four regions, hosting either a Néel, FM, spin-spiral, or RW-AFM spin state. The dark blue color indicates the region of interest, where the magnetic moments are distributed into four sublattices L1, L2, L3 and L4 as shown in the lower inset of Fig. 1a. \(J_{3}\) mediates the magnetic interaction within each sublattice and must be positive, i.e. favoring a FM alignment, to enable the stabilization of the RW-AFM state. If it is too weak with respect to \(J_{1}\) or if it is of antiferromagnetic nature, either spin spirals or a Néel state are favored, depending on the strength of \(J_{2}\). We observe that the RW-AFM configuration occupies a larger phase area when \(J_{2}\) is of AFM nature.
In the RW-AFM state, \(J_{3}\) is thus positive, which together with the DMI vector \(\textbf{D}_{3}\) that is connecting the third n.n. similarly to \(J_{3}\), enable the formation of sublattice FM skyrmions. \(\textbf{D}_{3}\) lies in-plane and is perpendicular to the bond connecting neighboring atoms as shown in Supplementary Fig. 1. The AFM interaction among the FM skyrmions is mediated by \(J_{1}\) such that the presence of \(J_{2}\) is not requested. As predicted in Ref. [46], the single AFM skyrmion consists of FM skyrmions present in two sublattices (L1 and L2) with the other two sublattices remaining collinear, while for the double AFM skyrmions, the building blocks FM skyrmions reside in each of the four sublattices (L1, L2, L3 and L4) as illustrated in Fig. 1b-c.
After setting the base for the magnetic interactions needed to realize our AFM solitons, we inspect the range of parameters (\(J_{2}\), \(J_{3}\), \(D_{3}\) and \(K\)) normalised to the absolute value of \(J_{1}\), within which the single and double interchained AFM skyrmions can be stabilized (Figs. 1d-g). The building blocks of the AFM solitons are FM skyrmions. The region hosting the skyrmions, color coded in terms of their radius, is sandwiched between the RW-AFM and stripe domains phases.
Thus the impact of the underlying interactions is similar to what is expected from the FM topological objects. For instance increasing \(J_{3}\) (Figs. 1d-e, with \(K/|J_{1}|\) and \(D_{3}/|J_{1}|\) equal to 0.024 and 0.03 respectively), which defines the FM interaction among the spins of the FM skyrmions, or the magnetic anisotropy energy \(K\) (Fig. 1f-g, with \(J_{2}\) and \(J_{3}\) equal to -0.2 and 0.2 respectively) shrinks the size of the spin-texture by ultimately leading to its annihilation, while the DM interaction \(D_{3}\) induces the opposite behavior (Fig. 1f-g). Interestingly, \(J_{2}\) counteracts \(J_{3}\) by amplifying the skyrmion size, which at some point can be deformed into stripe domains. For completeness, snapshots of skyrmions, labelled from A to L in Figs. 1, are presented in Supplementary Fig. 2.
It is worth mentioning that the size of the single AFM skyrmion is smaller than those participating in the formation of the interchained magnetic textures (see for example the radius given in Figs. 1b-c), which impacts on the details of the phase diagrams. On the one hand, the window in which the double AFM skyrmions are stabilised while varying \(J_{2}\) and \(J_{3}\) is larger than that of the single magnetic objects (Figs.1d-e). On the other hand, the single skyrmion phase seems wider and shifted to the upper region of the diagram while tuning \(D_{3}\) and \(K\).
### Response to external magnetic fields
The stability of skyrmions when exposed to an external magnetic field is an essential aspect for their utilization in future spintronics. Here, we investigate the response of the single and double AFM skyrmions to a magnetic field perpendicular to the lattice. Within our model, as theoretically expected [46, 59, 60, 61], and in contrast to their FM counterparts, the size of the AFM skyrmions increases with the external magnetic field, until its magnitude approaches a critical value (\(B_{c}\)), after which, the skyrmion deforms into the stripe domain phase.
It has been shown that the single and interlinked AFM skyrmions formed with the realistic interactions among Cr atoms withstand high magnetic fields [46]. At the model level, the critical value of the normalised magnetic field (\(\mu B_{c}/|J_{1}|\)) can be enhanced by increasing the anisotropy magnitude, as depicted in Fig. 2a, for both single and double AFM skyrmions. In contrast, the DMI lowers the highest magnetic field survived by the AFM solitons, as shown in Fig. 2c. Various formulas have been proposed to describe the impact of the DMI and anisotropy magnitude on the radius of FM skyrmions [19, 62, 63, 64]. Inspired by Ref. [64], and utilizing the fact that \(|J_{1}|\gg D_{3},K\), our results on the dependence of the AFM skyrmion radius \(R\) on the anisotropy (Fig. 2b) and DMI (Fig. 2d) when the external field is switched off can be fitted with \(R_{0}=a+b\frac{D_{3}}{K}\left(1+c\frac{D_{3}^{2}}{|J_{1}|K}\right)\), where \(a\), \(b\) and \(c\) are fitting parameters.
Upon application of the magnetic field, we found that the form proposed in Ref. [60] has to be amended with a linear field-dependent term. After a Taylor expansion in the regime where the field is smaller than the rest of the magnetic interactions, we find
\(R=a+b\frac{D_{3}}{K}\left(1+c\frac{D_{3}^{2}}{|J_{1}|K}\right)\left(1+\alpha \frac{B}{|J_{1}|}+\beta\frac{B^{2}}{|J_{1}|^{2}}+\gamma\frac{B^{3}}{|J_{1}|^{ 3}}\right)\), where \(\alpha\), \(\beta\) and \(\gamma\) are additional fitting parameters, grasps reasonably the dependencies reported in Fig. 2e-f (with \(D_{3}/|J_{1}|=0.03\) and \(K/|J_{1}|=0.023\)).
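A sketch of how this field-dependent radius formula can be fitted is given below. The data array is a placeholder standing in for the simulated radii of Fig. 2e-f, and all couplings are normalised to \(|J_{1}|=1\); the parameter values used to generate the placeholder curve are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

D3, K = 0.03, 0.023   # D_3/|J_1| and K/|J_1| used in Fig. 2e-f

def radius_model(B, a, b, c, alpha, beta, gamma):
    """R = a + b*(D3/K)*(1 + c*D3^2/K) * (1 + alpha*B + beta*B^2 + gamma*B^3), |J_1| = 1."""
    return a + b * (D3 / K) * (1.0 + c * D3**2 / K) * (
        1.0 + alpha * B + beta * B**2 + gamma * B**3)

B_grid = np.linspace(0.0, 1.0, 11)                               # normalised field mu*B/|J_1|
R_data = radius_model(B_grid, 0.5, 1.2, 2.0, 0.6, 0.3, 0.1)      # placeholder "data"
params, _ = curve_fit(radius_model, B_grid, R_data, p0=np.ones(6))
```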
Overall, the magnetic interactions reducing (increasing) the size of the skyrmions, as the magnetic anisotropy (DMI) does, enable an enhanced (reduced) stability with respect to an external magnetic field.
### Thermal stability of AFM skyrmions
Now we turn to the stability of the AFM skyrmions against thermal fluctuations, by calculating the energy barrier which is needed for the collapse of the single and double interchained AFM skyrmions into the RW-AFM ground state utilizing the GNEB method [53, 54, 55]. To inspect their stability, we calculate the energy barrier for both single and double interlinked AFM skyrmions assuming \(J_{2}/|J_{1}|\) = -0.2, \(J_{3}/|J_{1}|\) = 0.2, \(D_{3}/|J_{1}|\) = 0.03, and \(K/|J_{1}|\) = 0.024. The barrier is determined by the energy difference between the local minimum magnetic state hosting the AFM skyrmion and its relevant saddle point, which lies on the path of minimum energy connecting the skyrmion configuration to the RW-AFM ground state. In the absence of external magnetic field, the double AFM skyrmions with radius of 1.95 nm, has an energy barrier of 0.67 meV, which translates to \(\approx\) 7.8 K, while for the single AFM skyrmion with radius of 1.6 nm, the energy barrier is 0.055 meV (\(\approx\) 0.64 K). For both cases, the major key for the stability of the AFM skyrmions is the DMI which contributes with \(\Delta E_{\rm{DMI}}\) = 15.66 meV to the energy barrier of the double AFM skyrmion and 4.33 meV for the single case, while the anisotropy and exchange interactions prefer the collapse of the AFM solitons by contributing with \(\Delta E_{K}\) = -9.21 meV (-2.53 meV), and \(\Delta E_{J}\) = -5.79 meV (-1.71 meV) for double (single) AFM skyrmions. Moreover, we addressed another important aspect, the impact of the magnetic field, by carrying out a systematic study with results illustrated in Fig. 3. The thermal stability is obviously enhanced with the magnetic field, which impacts more efficiently the double than the single AFM skyrmion (Fig. 3a). For \(\mu B/|J_{1}|\) = 1, the energy barrier of the double (single) AFM skyrmions increased to 0.81 meV (0.12 meV) \(\approx\) 9.4 K (1.3 K). By increasing the magnetic field, the skyrmions expand (Fig. 3f), which in contrast to the DMI and Zeeman contributions (Figs. 3c, d) is disfavored by those of the
exchange and anisotropy (Figs. 3b, e). Snapshots of the various states prospected in defining the energy barriers are presented in Supplementary Fig. 3.
## Discussion
Inspired by our recent findings on the emergence of single and interchained AFM skyrmions on a triangular lattice, we propose here a spin model with the minimum set of magnetic interactions needed to realize such intriguing solitons. They form in a RW-AFM state, which can be decomposed into four sublattices. The exchange interaction within each sublattice, mediating the coupling between third nearest neighbors, is of FM nature and, together with the associated DMI and out-of-plane anisotropy, permits the formation of FM skyrmions within the sublattices. The first n.n. interaction has to be AFM to impose the emergence of AFM skyrmions. We identified the phase diagrams of the latter entities as well as their dependencies on the magnitude of the various magnetic interactions and sensitivity to an external magnetic field. We expect our work to facilitate the search and the identification of single or overlapping AFM skyrmions while contributing to the detailed understanding of their various properties, which is a cornerstone in the field of topological antiferromagnetism and its potential use in devices for information technology.
## 1 Acknowledgements
This work was supported by the Federal Ministry of Education and Research of Germany in the framework of the Palestinian-German Science Bridge (BMBF grant number 01DH16027) and the Deutsche Forschungsgemeinschaft (DFG) through SPP 2137 "Skyrmionics" (Project LO 1659/8-1). The authors gratefully acknowledge the computing time granted through JARA on the supercomputer JURECA at Forschungszentrum Jülich.
## 2 Author Contributions
S.L. initiated, designed and supervised the project. A.A. performed the simulations with support and supervision from M.S. A.A., M.S., M.A., and S.L. discussed the results. A.A. and S.L. wrote the manuscript to which all co-authors contributed.
## 3 Competing Interests.
The authors declare no competing interests.
|
2309.03596 | Non-equilibrium time evolution in the sine-Gordon model revisited | We study the non-equilibrium dynamics of the quantum sine-Gordon model
describing a pair of Josephson-coupled one-dimensional bosonic
quasi-condensates. Motivated by experimentally accessible quench procedures
where the zero mode of the quasi-condensates is weakly coupled to finite
momentum modes, we develop a novel Hamiltonian truncation scheme relying on a
mini-superspace treatment of the zero mode (MSTHA). We apply this method to
simulate the time evolution after both weak and strong quantum quenches,
injecting a low or high energy density into the system, and demonstrate that
MSTHA accurately captures the dynamics from the hard core boson limit to the
experimentally relevant weakly interacting regime for sufficiently mild
quenches. In the case of high energy densities, MSTHA breaks down for weak
interaction but still extends the range of validity of previous Hamiltonian
truncation schemes. We also compare these results to the semiclassical
truncated Wigner approximation (TWA) and establish that the dynamics can be
well approximated by the semiclassical description in the weakly interacting
regime realised in the experiments. In addition, we clarify the importance of
the phononic modes depending on the sine-Gordon interaction strength. | Dávid Szász-Schagrin, Izabella Lovas, Gábor Takács | 2023-09-07T09:42:32Z | http://arxiv.org/abs/2309.03596v2 | # Non-equilibrium time evolution in the sine-Gordon model revisited
###### Abstract
We study the non-equilibrium dynamics of the quantum sine-Gordon model describing a pair of Josephson-coupled one-dimensional bosonic quasi-condensates. Motivated by experimentally accessible quench procedures where the zero mode of the quasi-condensates is weakly coupled to finite momentum modes, we develop a novel Hamiltonian truncation scheme relying on a mini-superspace treatment of the zero mode (MSTHA). We apply this method to simulate the time evolution after both weak and strong quantum quenches, injecting a low or high energy density into the system, and demonstrate that MSTHA accurately captures the dynamics from the hard core boson limit to the experimentally relevant weakly interacting regime for sufficiently mild quenches. In the case of high energy densities, MSTHA breaks down for weak interaction but still extends the range of validity of previous Hamiltonian truncation schemes. We also compare these results to the semiclassical truncated Wigner approximation (TWA) and establish that the dynamics can be well approximated by the semiclassical description in the weakly interacting regime realised in the experiments. In addition, we clarify the importance of the phononic modes depending on the sine-Gordon interaction strength.
## I Introduction
The sine-Gordon model is a paradigmatic example of integrable quantum field theories and also an effective description of the low-energy physics of numerous physical systems, such as, e.g., spin chains [1; 2; 3], circuit quantum electrodynamics [4; 5], and bosonic and fermionic Hubbard models [6; 7; 8; 9]. Due to its integrability, many equilibrium properties of the model are exactly known, ranging from exact results on scattering amplitudes and form factors to expectation values of local observables [6; 10; 11; 12; 13; 14; 15].
Recently, it also attracted considerable interest in the context of non-equilibrium dynamics due to an experimental realisation with two Josephson-coupled one-dimensional bosonic quasi-condensates [16; 17; 18; 19; 20]. In the experiment, ultra-cold atoms are trapped in an elongated double-well potential, limiting the physics to one spatial dimension. The effective description of the system can be obtained using bosonisation [21], predicting that the anti-symmetric modes of the double-well potential realise the sine-Gordon model, weakly coupled to a Luttinger liquid accounting for symmetric modes. These considerations suggested new possibilities for experimentally observing the out-of-equilibrium dynamics of the sine-Gordon model, an idea gaining further experimental support by demonstrating that correlations in thermal equilibrium can be described in terms of the classical thermal sine-Gordon model [22; 23]. However, non-equilibrium phenomena observed in the experiments point to dynamics beyond the sine-Gordon model [24; 25; 26; 27]. These results show the relevance of coupling terms to additional degrees of freedom, such as the symmetric modes or the transverse modes in the quasi one-dimensional geometry. Identifying the simplest theoretical model accounting for the experimental observations remains an outstanding open question, the resolution of which requires efficient numerical methods that yield reliable predictions for the experimental protocols.
In parallel with and motivated by these experimental developments, several theoretical approaches have been developed along different lines of approach. A paradigmatic and experimentally relevant framework to out-of-equilibrium dynamics in quantum many-body systems is provided by the framework of quantum quench [28; 29; 30]. In this scenario, the system is initially in equilibrium, prepared in the ground state of some pre-quench Hamiltonian. It is then driven out of equilibrium by a sudden change of some parameters, leading to a subsequent evolution governed by a different (post-quench) Hamiltonian. Several avenues can be explored to describe the time evolution of the sine-Gordon model after a quantum quench. These include semiclassical approximations, such as the mean-field approximation [31; 32] or the truncated Wigner approximation (TWA) [25; 33; 34; 35]. However, semiclassical approaches are, in general, uncontrolled approximations that need to be validated against some complementary description of the quantum dynamics to test their validity. An alternative is a form factor expansion relying on the exactly known spectrum and local operator matrix elements (form factors) of the model [36; 37]; however, this runs into serious difficulties in the experimentally relevant attractive regime [15]. Another way to describe non-equilibrium behaviour is provided by generalised hydrodynamics [38; 39], the application of which needs
an effective description of thermodynamic states in the model which was resolved only very recently [40; 41].
In this work, we consider an alternative approach to non-equilibrium dynamics provided by the framework of Hamiltonian truncation, a family of numerical approaches to low-dimensional quantum field theories. It was initially developed to describe relevant perturbations of simple conformal field theories [42] and later extended to the sine-Gordon model [43]. Recently it was applied to describe non-equilibrium time evolution, both in perturbed minimal conformal field theories [44] and in the sine-Gordon model [25; 45]; however, previous approaches were limited to parameters away from the experimentally available range. Aiming at overcoming this difficulty, here we introduce a novel truncated Hamiltonian formulation of the sine-Gordon model, which makes use of the so-called minisuperspace approach originally introduced in the context of \(\varphi^{4}\) field theory [46; 47]. The main idea behind this approach is that in the limit of large Luttinger parameter \(K\) relevant for the experiment, the coupling of the zero mode to the non-zero modes is weak. Therefore, solving the zero mode in a numerically exact way and including the non-zero modes afterwards is natural. We compare the results of the new minisuperspace-based truncated Hamiltonian approximation (MSTHA) to TWA for verification and testing the conditions and the range of validity for both approaches.
We find that the MSTHA is well suited for simulating mild quantum quenches inserting a small energy density. For these protocols, MSTHA allows us to obtain reliable, well-converged results even in the experimentally relevant weakly interacting limit, a regime inaccessible by previous implementations of TCSA. In contrast, for stronger quenches, MSTHA continues to show good convergence properties in the limit of strong interactions but breaks down with decreasing interaction strength, a limitation similar to the one observed in previous TCSA simulations. In contrast to TCSA, semiclassical approaches are expected to become more reliable for stronger quenches or weaker interactions. In accordance with these general expectations, we find that TWA yields a considerable error for weak quenches in the limit of strong interactions, considerably overestimating the damping of quantum oscillations. However, the performance of TWA improves rapidly with decreasing interaction strength, and TWA shows excellent agreement with the essentially exact MSTHA results for moderate interactions. Similarly, larger quenches with a higher energy density render TWA results more reliable, and a direct comparison with well-converged MSTHA reveals considerable errors only close to the limit of hard-core repulsion. These results establish MSTHA and TWA as powerful complementary approaches for simulating sine-Gordon dynamics. Moreover, by considering the mode-resolved occupation numbers for various quench protocols, we take a step towards identifying the most relevant degrees of freedom for the dynamics, an essential ingredient for constructing a simple theoretical model accounting for experimental observations.
The outline of the paper is as follows. In section II we briefly review the sine-Gordon model, and in section III, we describe the MSTHA and its implementation, together with a brief review of the TWA. Section IV contains our results regarding the time evolution from two different classes of initial states, corresponding to mild and strong quenches with small and high energy density, respectively, together with a comparison to the TWA description. We discuss the results and draw our conclusions in Section V. Some technical details are relegated to the Appendix to make the main exposition easier to follow.
## II Brief summary of the the sine-Gordon model
The classical sine-Gordon model is defined by the following action,
\[\mathcal{S}^{\rm cl}_{\rm sG}=\int dt\int dx\left[\frac{1}{2}(\partial_{t} \varphi)^{2}-\frac{1}{2}(\partial_{x}\varphi)^{2}+\lambda\cos\beta\varphi \right], \tag{1}\]
describing the continuum limit of a one-dimensional chain of torsion-coupled pendula.
It has topologically charged soliton/anti-soliton excitations with mass
\[M_{\rm cl}=\frac{8\sqrt{\lambda}}{\beta}\,, \tag{2}\]
and spatially localised oscillating configurations parametrised by a continuous parameter \(\varepsilon\) called breathers with mass
\[m_{\varepsilon}=\frac{16\varepsilon\sqrt{\lambda}}{\beta}. \tag{3}\]
At the quantum level, the classical field \(\varphi\) is replaced by the field operator \(\hat{\varphi}\) and its dynamics is governed by the Hamiltonian:
\[\hat{H}_{\rm sG}=\int dx:\left(\frac{1}{2}(\partial_{t}\hat{\varphi})^{2}+ \frac{1}{2}(\partial_{x}\hat{\varphi})^{2}-\lambda\cos\beta\hat{\varphi} \right):, \tag{4}\]
where the semicolon denotes normal ordering relative to the modes of the \(\lambda=0\) massless free boson. The spectrum of the breathers becomes discrete:
\[m_{n}=2M\sin\frac{\pi\xi n}{2},\quad\xi=\frac{\beta^{2}}{8\pi-\beta^{2}}\,, \tag{5}\]
where \(M\) is the quantum soliton mass. Integrability allows to determine the exact relation between the mass scale given by, say, the first breather mass \(m_{1}\) and \(\lambda\)[48]:
\[\lambda=\left(2\sin\frac{\pi\xi}{2}\right)^{2\Delta-2}\frac{2\Gamma(\Delta)}{ \pi\Gamma(1-\Delta)}\left(\frac{\sqrt{\pi}\Gamma\left(\frac{1}{2-2\Delta} \right)m_{1}}{2\Gamma\left(\frac{\Delta}{2-2\Delta}\right)}\right)^{2-2\Delta} \tag{6}\]
where
\[2\Delta=\frac{\beta^{2}}{4\pi} \tag{7}\]
is the anomalous dimension of the cosine operator. All physical quantities can then be parameterised in units of the mass scale \(m_{1}\). We note that another common parametrization relies on the Luttinger parameter
\[K=\frac{\pi}{\beta^{2}}\,, \tag{8}\]
with \(K=1\) corresponding to hard-core repulsion between bosons, and \(K\) increasing with decreasing sine-Gordon interaction strength, such that \(K\to\infty\) upon approaching the non-interacting field theory limit.
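The parametrisation above can be collected into a small helper that, given the Luttinger parameter \(K\), returns \(\Delta\), \(\xi\), the soliton and breather masses in units \(m_{1}=1\), and the coupling \(\lambda\) from Eq. (6). This is only a sketch of the bookkeeping implied by Eqs. (5)-(8), not part of any of the numerical methods discussed below.

```python
import numpy as np
from scipy.special import gamma as G

def sg_parameters(K):
    """Translate the Luttinger parameter K into sine-Gordon data, with m_1 = 1."""
    beta2 = np.pi / K                          # Eq. (8)
    Delta = beta2 / (8 * np.pi)                # Eq. (7): 2*Delta = beta^2 / (4*pi)
    xi = beta2 / (8 * np.pi - beta2)           # Eq. (5)
    M = 1.0 / (2 * np.sin(np.pi * xi / 2))     # soliton mass from m_1 = 2 M sin(pi*xi/2)
    n_b = int(np.ceil(1 / xi)) - 1             # breathers exist for n < 1/xi
    breathers = [2 * M * np.sin(np.pi * xi * n / 2) for n in range(1, n_b + 1)]
    lam = ((2 * np.sin(np.pi * xi / 2)) ** (2 * Delta - 2)
           * 2 * G(Delta) / (np.pi * G(1 - Delta))
           * (np.sqrt(np.pi) * G(1 / (2 - 2 * Delta))
              / (2 * G(Delta / (2 - 2 * Delta)))) ** (2 - 2 * Delta))   # Eq. (6)
    return {"Delta": Delta, "xi": xi, "M": M, "breathers": breathers, "lambda": lam}

print(sg_parameters(K=1.56)["breathers"][:3])
```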
In a finite spatial volume \(L\), observing that the sine-Gordon field is an angular variable of period \(\frac{2\pi}{\beta}\), it is natural to consider the following quasi-periodic boundary conditions:
\[\hat{\varphi}(x+L,t)=\hat{\varphi}(x,t)+\frac{2\pi}{\beta}m\,, \tag{9}\]
with \(m\in\mathbb{Z}\) giving the so-called winding number a.k.a. the topological charge. We only consider the sector \(m=0\) in the following, so the field satisfies ordinary periodic boundary conditions.
The Hamiltonian can be considered as a perturbation of the compactified massless free boson in finite volume with the Hamiltonian
\[\hat{H}_{\rm FB}=\frac{1}{2}\int_{0}^{L}dx:\left[(\partial_{t}\hat{\varphi})^ {2}+(\partial_{x}\hat{\varphi})^{2}\right]: \tag{10}\]
Expanding the field in Fourier modes
\[\hat{\varphi}(x,t)=\hat{\varphi}_{0}+\frac{1}{L}\hat{\pi}_{0}t+\frac{i}{\sqrt{4\pi}}\sum_{k\neq 0}\frac{1}{k}\left[a_{k}e^{i\frac{2\pi}{L}k(x-t)}+\bar{a}_{k}e^{-i\frac{2\pi}{L}k(x+t)}\right], \tag{11}\]
the free part of the Hamiltonian (4) can be written as
\[\hat{H}_{\rm FB}=\frac{2\pi}{L}\left(\frac{\hat{\pi}_{0}^{2}}{4\pi}+\sum_{k>0 }a_{-k}a_{k}+\sum_{k>0}\bar{a}_{-k}\bar{a}_{k}-\frac{1}{12}\right). \tag{12}\]
Here \(\hat{\pi}_{0}\) is the zero mode of the momentum canonically conjugate to \(\hat{\varphi}\),
\[\hat{\pi}(x,t)=\partial_{t}\hat{\varphi}(x,t);\quad\hat{\pi}_{0}=\int_{0}^{L} dx\hat{\pi}(x,t)\,, \tag{13}\]
while the \(a_{k}\) and \(\bar{a}_{k}\) with negative (positive) \(k\) are the left and right bosonic creation (annihilation) operators satisfying the commutation relations
\[[\hat{\varphi}_{0},\hat{\pi}_{0}]=i;\quad[a_{k},a_{l}]=[\bar{a}_{k},\bar{a}_{ l}]=k\delta_{k+l,0}. \tag{14}\]
Therefore the sine-Gordon Hamiltonian (4) takes the form
\[\hat{H}_{\rm sG}=\frac{2\pi}{L}\left(\frac{\hat{\pi}_{0}^{2}}{4 \pi}+\sum_{k>0}a_{-k}a_{k}+\sum_{k>0}\bar{a}_{-k}\bar{a}_{k}-\frac{1}{12}\right) \\ -\frac{\lambda}{2}\int_{0}^{L}:\left(e^{i\beta\hat{\varphi}}+e^{- i\beta\hat{\varphi}}\right):. \tag{15}\]
## III Simulating the time evolution
### Truncated Conformal Space Approach
The main idea of TCSA is to use the eigenstates of the massless free boson in a finite volume \(L\) as a computational basis and truncate it by imposing an upper energy cutoff. Since the matrix elements of the exponential operators can be explicitly computed, the Hamiltonian (4) can be represented by a finite matrix, reducing the determination of the spectrum and time evolution of expectation values of observables to a numerical linear algebra problem. However, the results obtained through TCSA differ from the exact results by the so-called _truncation errors_. For relevant perturbations, the truncation errors decrease with increasing energy cutoff, and renormalisation group methods can improve the convergence [49; 50; 51; 52].
The Hilbert space of the massless free boson consists of Fock modules \(\mathcal{F}_{\nu}\)
\[\mathcal{H}_{\rm FB}=\bigoplus_{\nu\in\mathbb{Z}}\mathcal{F}_{\nu}, \tag{16}\]
with
\[\mathcal{F}_{\nu}=\left\{\left|\psi\right\rangle=\prod_{k>0}a_{-k}^{r_{k}} \bar{a}_{-k}^{\bar{r}_{k}}\left|\nu\right\rangle\left|r_{k},\bar{r}_{k}\in \mathbb{N}^{+}\right\}\right. \tag{17}\]
built upon zero mode plane wave states defined as
\[\left|\nu\right\rangle=e^{i\nu\beta\hat{\varphi}_{0}}\left|0\right\rangle. \tag{18}\]
It is useful to further decompose the Fock modules into different momentum sectors parameterised by a quantum number \(s\in\mathbb{Z}\) as
\[\mathcal{F}_{\nu}=\bigoplus_{s\in\mathbb{Z}}\mathcal{F}_{\nu}^{(s)} \tag{19}\]
where
\[\mathcal{F}_{\nu}^{(s)}=\left\{\left|\psi\right\rangle=\prod_{k>0}a_{-k}^{r_{k }}\bar{a}_{-k}^{\bar{r}_{k}}\left|\nu\right\rangle\left|\sum kr_{k}-\sum k \bar{r}_{k}=s\right\}\right.\,, \tag{20}\]
with fixed total spatial momentum \(2\pi s/L\). In our simulations, we only need the zero-momentum sector, i.e., \(s=0\).
The Hilbert space is then usually truncated by introducing an upper limit on the unperturbed energy of the massless free boson basis vectors [43]
\[\begin{split}\mathcal{H}^{\text{trm.}}_{\text{FB}}=\text{span} \bigg{\{}&\prod_{k>0}a^{r_{k}}_{-k}\bar{a}^{\tilde{r}_{k}}_{-k}\ket{ \nu}\\ &\bigg{|}\frac{(\nu\beta)^{2}}{4\pi}+\sum_{k>0}k(r_{k}+\bar{r}_{ k})<e_{\text{cut}}\bigg{\}}\,.\end{split} \tag{21}\]
The disadvantage of this truncation procedure is that for small \(\beta\), i.e., in the limit of large \(K\) corresponding to a weakly interacting quantum field, it includes a large number of Fock modules, which severely limits the method's applicability in the experimental regime [25].
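To make this counting explicit, the sketch below enumerates the truncated zero-momentum basis of Eq. (21) as triples \((\nu,\text{left partition},\text{right partition})\). It only illustrates how the number of states grows with the cutoff and with decreasing \(\beta\) (large \(K\)); it is not the implementation used in this work, and the numerical values passed to it are arbitrary.

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """All integer partitions of n, i.e. chiral oscillator contents at level n."""
    if n == 0:
        return ((),)
    max_part = n if max_part is None else max_part
    result = []
    for k in range(min(n, max_part), 0, -1):
        result.extend((k,) + rest for rest in partitions(n - k, k))
    return tuple(result)

def truncated_basis(e_cut, beta2_over_4pi):
    """States of Eq. (21) in the s = 0 sector, labelled (nu, left, right)."""
    basis = []
    nu = 0
    while nu**2 * beta2_over_4pi < e_cut:
        level = 0
        while nu**2 * beta2_over_4pi + 2 * level < e_cut:   # s = 0: equal chiral levels
            for left, right in product(partitions(level), partitions(level)):
                basis.append((nu, left, right))
                if nu:                                       # include the -nu partner
                    basis.append((-nu, left, right))
            level += 1
        nu += 1
    return basis

print(len(truncated_basis(e_cut=12, beta2_over_4pi=0.16)))   # count grows fast as beta -> 0
```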
### The mini-superspace based THA
To go beyond the TCSA detailed in the previous subsection, we note that the bosonic field \(\hat{\varphi}(x,t)\) can be decomposed into homogeneous (zero mode) and inhomogeneous (oscillator modes) parts
\[\hat{\varphi}(x,t)=\hat{\varphi}_{0}(t)+\tilde{\varphi}(x,t)\,. \tag{22}\]
Neglecting the contribution of oscillator modes, the single mode description of the model describes a quantum pendulum:
\[\hat{H}_{\text{QP}}=\frac{1}{2L}\hat{\pi}_{0}^{2}-\lambda L\left(\frac{L}{2 \pi}\right)^{2\Delta}\cos(\beta\hat{\varphi}_{0})\,, \tag{23}\]
(for the volume dependence see Appendix A). The full sine-Gordon model itself corresponds to a quantum pendulum coupled to a set of non-linear, interacting phononic modes:
\[\hat{H}_{\text{sG}}=\frac{1}{2}\int_{0}^{L}dx:\left[(\partial_{t} \hat{\varphi}_{0})^{2}+(\partial_{t}\tilde{\varphi})^{2}+(\partial_{x}\tilde{ \varphi})^{2}\right]:\] \[-\frac{\lambda}{2}\int_{0}^{L}dx:\left[e^{i\beta\hat{\varphi}_{0} }e^{i\beta\tilde{\varphi}}+e^{-i\beta\hat{\varphi}_{0}}e^{-i\beta\tilde{ \varphi}}\right]: \tag{24}\]
In the experimental parameter regime of weak interactions, \(\beta\) is small, so the inter-mode coupling is expected to be weak. As a result, it is reasonable to introduce a different approximation to the dynamics, in which the zero-mode dynamics is first solved in a (numerically) exact way, and the coupling to the non-zero modes is taken into account at the next stage, which is known as the mini-superspace approach [46; 47]. The usefulness of this approach can also be understood by looking at the truncated Hamiltonian approximation as a variational method: optimizing the variational basis allows for more precise computation of spectral quantities and expectation values.
The first step consists of constructing the single (zero) mode Hamiltonian (23) (the quantum pendulum) on the plane wave basis \(\{\ket{\nu}\}\) (18) with some appropriate truncation. Diagonalisation of (23) yields the energy spectrum and eigenvectors of the pendulum
\[\hat{H}_{\text{QP}}\ket{n}=\varepsilon_{n}\ket{n}\quad n\in\mathbb{N} \tag{25}\]
as a function of the truncation of the basis \(\{\ket{\nu}\}\). With a high enough truncation, it turns out that the energy levels converge very fast to an essentially exact result.
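A sketch of this first step is shown below: on the plane-wave basis \(|\nu\rangle\), \(\hat{\pi}_{0}\) is diagonal with eigenvalue \(\nu\beta\), while \(\cos\beta\hat{\varphi}_{0}\) connects \(\nu\) to \(\nu\pm 1\), so the pendulum Hamiltonian (23) is a tridiagonal matrix. The volume-dependent prefactor is copied verbatim from Eq. (23); this is an illustration rather than the production code.

```python
import numpy as np

def pendulum_eigensystem(nu_max, beta, lam, L, Delta):
    """Diagonalise the quantum pendulum, Eq. (23), on |nu>, nu = -nu_max..nu_max."""
    nu = np.arange(-nu_max, nu_max + 1)
    dim = nu.size
    H = np.diag((nu * beta) ** 2 / (2.0 * L))            # pi_0^2/(2L), pi_0|nu> = nu*beta|nu>
    hop = -0.5 * lam * L * (L / (2 * np.pi)) ** (2 * Delta)
    H += hop * (np.eye(dim, k=1) + np.eye(dim, k=-1))    # cos(beta*phi_0) shifts nu by +-1
    energies, vectors = np.linalg.eigh(H)
    return energies, vectors    # eigenpairs (epsilon_n, |n>) entering Eq. (25)
```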
In the next step, one computes a numerically exact matrix representation of the operators \(\hat{\pi}_{0}^{2}\) and \(e^{\pm i\beta\hat{\varphi}_{0}}\) on the eigenbasis \(\{\ket{n}\}\). The matrix elements of the non-zero mode parts can be computed separately, and their handling can be made more efficient by exploiting the factorisation of the oscillator modes into left- and right-moving sectors. This reduces the memory requirements of the method and enables higher truncation values, similar to the chirally factorised TCSA developed by Horvath et al. [53]. As a final step, the sine-Gordon Hamiltonian can be assembled from the finite zero and non-zero mode matrix pieces according to (24) by simple matrix operations.
Truncation now depends on two parameters: \(n_{\text{max}}\) describing the truncation of the zero mode space and \(\ell_{\text{cut}}\) giving the truncation of the non-zero modes,
\[\begin{split}\mathcal{H}^{\text{trm.}}_{\text{FB}}=\text{span} \bigg{\{}&\prod_{k>0}a^{r_{k}}_{-k}\bar{a}^{\tilde{r}_{k}}_{-k} \ket{n}\bigg{|}n\leq n_{\text{max}}\text{ and }\\ &\sum_{k>0}k(r_{k}+\bar{r}_{k})\leq\ell_{\text{cut}}\bigg{\}}\,. \end{split} \tag{26}\]
Time evolution in the TCSA is computed using the Bessel-Chebyshev method [53; 44]. The validity of the results is maintained through monitoring of the norm of the time-evolved state \(\ket{\Psi(t)}\)
\[\ket{\Psi(t)}=e^{-i\hat{H}_{\text{sG}}t}\ket{\Psi_{0}}. \tag{27}\]
where the initial state \(\ket{\Psi_{0}}\) depends on the quench protocol. For the quantum quenches considered in this paper, it is specified in the next section.
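The Chebyshev propagation can be sketched as follows for a truncated Hamiltonian stored as a dense matrix; the expansion order and the way the spectral bounds are obtained here are illustrative choices, not the settings used for the results below.

```python
import numpy as np
from scipy.special import jv

def chebyshev_evolve(H, psi0, t, n_terms=80):
    """Approximate exp(-i*H*t)|psi0> via the Chebyshev-Bessel expansion."""
    e_min, e_max = np.linalg.eigvalsh(H)[[0, -1]]
    a, b = (e_max - e_min) / 2.0, (e_max + e_min) / 2.0
    H_tilde = (H - b * np.eye(len(H))) / a                  # spectrum rescaled to [-1, 1]
    phi_prev, phi = psi0, H_tilde @ psi0                    # T_0|psi0>, T_1|psi0>
    psi_t = jv(0, a * t) * phi_prev + 2 * (-1j) * jv(1, a * t) * phi
    for n in range(2, n_terms):
        phi_prev, phi = phi, 2 * H_tilde @ phi - phi_prev   # Chebyshev recursion
        psi_t = psi_t + 2 * (-1j) ** n * jv(n, a * t) * phi
    return np.exp(-1j * b * t) * psi_t
```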
Before applying the method to non-equilibrium time evolution, the zero mode spectrum was cross-checked by comparing it with a solution of the quantum pendulum Schrödinger equation using the shooting method. In addition, we compared the time evolution of the system truncated to its zero mode to a numerical solution of the coordinate space Schrödinger equation for the time evolution. The fully assembled MSTHA was verified by checking the spectrum against the predictions of the exact \(S\)-matrix sine-Gordon theory and by comparing it to previous TCSA results for the time evolution for quench protocols where they were available. The convergence of the method can also be checked by comparing results for different values of the cutoff; examples are given in Appendix C.
In our subsequent simulation of time evolution, we consider two observables: The expectation value of the cosine of the phase field,
\[\left\langle:\cos\beta\hat{\varphi}:\right\rangle, \tag{28}\]
and the Fourier transform of the phase-phase correlator
\[\langle\hat{\varphi}_{k}\hat{\varphi}_{-k}\rangle=\] \[\frac{1}{4\pi}\left\langle\frac{1}{k^{2}}\left(a_{-k}a_{k}+\bar{a} _{-k}\bar{a}_{k}-a_{k}\bar{a}_{k}-a_{-k}\bar{a}_{-k}+k\right)\right\rangle\,. \tag{29}\]
Both are experimentally accessible observables, the first one characterizing the phase coherence between Josephson-coupled one-dimensional bosonic quasi-condensates, which has already been measured for various quench protocols. The latter gives information on the mode-resolved occupation numbers, allowing us to identify the finite momentum modes that contribute substantially to the dynamics.
### Truncated Wigner approximation
The TWA is implemented using the lattice regularisation of the sine-Gordon model [25]
\[\hat{H}_{\text{Lat}}=\frac{a}{2}\sum_{j=1}^{N}\left((\partial_{t }\hat{\varphi}_{j})^{2}+\frac{(\hat{\varphi}_{j}-\hat{\varphi}_{j-1})^{2}}{a^ {2}}\right)\] \[-\frac{\lambda a}{\mathcal{N}}\sum_{j=1}^{N}\cos\beta\hat{ \varphi}_{j}, \tag{30}\]
with lattice constant \(a=L/N\), and the discretised scalar field variables related to the continuum filed via \(\hat{\varphi}_{j}=\hat{\varphi}(x=ja)\). The canonically conjugate momentum variables are given by
\[\hat{\pi}_{j}=a\partial_{t}\hat{\varphi}_{j}, \tag{31}\]
and satisfy \([\hat{\varphi}_{j},\hat{\pi}_{j^{\prime}}]=i\delta_{j,j^{\prime}}\). Normal ordering of the cosine operator is accounted for by a coefficient \(\mathcal{N}\) determined from the Baker-Campbell-Hausdorff formula,
\[\cos\beta\hat{\varphi}_{i}=\mathcal{N}:\cos\beta\hat{\varphi}_{i}:, \tag{32}\]
expressed as [25]
\[\mathcal{N}=\exp\left(-\frac{\pi\Delta}{N}\right)\prod_{n=1}^{N/2-1}\exp\left(-\frac{2\pi\Delta}{N\sin\frac{\pi n}{N}}\right). \tag{33}\]
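A sketch of evaluating the normal-ordering factor of Eq. (33) numerically (the example arguments in the comment are illustrative):

```python
import numpy as np

def normal_ordering_coefficient(N, Delta):
    """Eq. (33): relates the bare and normal-ordered cosine on an N-site lattice."""
    n = np.arange(1, N // 2)
    return np.exp(-np.pi * Delta / N) * np.prod(
        np.exp(-2 * np.pi * Delta / (N * np.sin(np.pi * n / N))))

# e.g. normal_ordering_coefficient(256, 0.0046) for Delta = 1/(8K) at K = 27
```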
The Fourier modes of the discretised scalar field are defined as
\[\hat{\varphi}_{k\neq 0}=\frac{1}{N}\sum_{j=1}^{N}e^{i\frac{2\pi}{N}kj}\hat{ \varphi}_{j}\,. \tag{34}\]
The expectation value of their correlator in the ground state is given by
\[\langle 0|\hat{\varphi}_{k}\hat{\varphi}_{-k}|0\rangle=\frac{1}{4N\sin(\pi k/N)}\,, \tag{35}\]
reducing to the correlator (29) in the continuum limit \(N\to\infty\).
In the TWA, the time evolution of operator expectation values is expressed in terms of the Wigner function, defining a quasi-probability distribution in phase space:
\[W(\underline{\varphi},\underline{\pi})=\frac{1}{(2\pi)^{2N}}\int d\underline{\varphi}^{\prime}\,\langle\underline{\varphi}-\underline{\varphi}^{\prime}/2|\hat{\rho}|\underline{\varphi}+\underline{\varphi}^{\prime}/2\rangle\,e^{-i\underline{\varphi}^{\prime}\cdot\underline{\pi}}\,. \tag{36}\]
Here \(\hat{\rho}\) is the density operator corresponding to the state of the system at \(t=0\), and we have introduced the usual vector notation for phase space coordinates
\[\underline{\varphi}=\left\{\varphi_{j}|j=1,...,N\right\},\quad\underline{ \pi}=\left\{\pi_{j}|j=1,...,N\right\}. \tag{37}\]
Given an initial state \(|\Psi_{0}\rangle\), the corresponding Wigner function can be computed from the density operator \(\hat{\rho}_{\Psi_{0}}=\left|\Psi_{0}\right\rangle\left\langle\Psi_{0}\right|\). The TWA approximates the time evolution through an ensemble of classical trajectories, obtained by evolving fluctuating initial conditions \(\{\underline{\varphi},\underline{\pi}\}\), distributed according to the Wigner quasi-probability distribution, with the classical equations of motion. In practice, the calculation is performed through classical Monte Carlo averaging. The Wigner function \(W\) is often positive semi-definite [54], allowing to generate a sufficiently large set of random initial conditions \(\{\underline{\varphi},\underline{\pi}\}\) distributed according to \(W\). The time evolution of observables is then computed by averaging over the classical trajectories determined by these initial conditions. A detailed discussion of the TWA implementation and parameter matching with the truncated Hamiltonian approximation has been described previously [25], and we refrain from repeating it here.
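Structurally, the TWA therefore reduces to the loop sketched below: draw a phase-space sample from the Wigner function (the sampling routine is assumed to be supplied for the given initial state), integrate the classical lattice equations of motion, and average the observable over trajectories. The leapfrog integrator and its step size are illustrative choices, not the ones used for the results presented here.

```python
import numpy as np

def classical_step(phi, pi, a, lam_eff, beta, dt):
    """One kick-drift-kick step for the classical limit of Eq. (30);
    lam_eff stands for lambda divided by the normal-ordering factor of Eq. (33)."""
    def force(f):
        return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / a \
               - lam_eff * a * beta * np.sin(beta * f)
    pi = pi + 0.5 * dt * force(phi)
    phi = phi + dt * pi / a
    pi = pi + 0.5 * dt * force(phi)
    return phi, pi

def twa_average(sample_wigner, observable, n_traj, n_steps, dt, a, lam_eff, beta):
    """Monte Carlo average of observable(phi, pi) over classical trajectories."""
    acc = np.zeros(n_steps)
    for _ in range(n_traj):
        phi, pi = sample_wigner()                  # one {phi_j, pi_j} initial configuration
        for step in range(n_steps):
            phi, pi = classical_step(phi, pi, a, lam_eff, beta, dt)
            acc[step] += observable(phi, pi)
    return acc / n_traj
```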
## IV Time evolution in sine-Gordon quenches
We now turn to the non-equilibrium time evolution after quantum quenches in the sine-Gordon field theory. Setting the energy unit as \(m_{1}=1\), we define the dimensionless volume parameter as \(l=m_{1}L\). Time is measured using the variable \(\nu_{1}t\) where
\[\nu_{1}=\frac{m_{1}}{2\pi} \tag{38}\]
is the frequency associated with the rest mass of the lightest breather. Given the relation to the experimental setup discussed in Appendix B, connection with the experiments is facilitated by characterising the strength of interactions via the aforementioned Luttinger parameter \(K\), Eq. (8).
Here, we present results by simulating time evolution in the dimensionless volume \(l=10\). Finite size effects from excitations travelling around the volume limit the evolution time to \(m_{1}t<l\). However, lower volumes are less computationally demanding, and we also find that the time range allowed by this choice is suitable for a
detailed comparison of the two methods. We also performed a few computations in larger volumes up to \(l=18\) and found that all the conclusions drawn in this paper remained unchanged.
Below, we consider two different types of quantum quenches. To demonstrate the power of the MSTHA, we first focus on weak quenches inducing a small energy density in Sec. IV.1. Here the initial state is close to the quantum pendulum ground state associated with the post-quench Hamiltonian, such that the basis used in MSTHA is well-suited for representing the time evolution of the state, in contrast to previous implementations of TCSA. As a result, MSTHA yields reliable, well-converged results for a wide range of interaction strengths, from hard core repulsion to the experimentally relevant weakly interacting limit, substantially extending the quench protocols accessible within the framework of Hamiltonian truncation.
For completeness, in Sec. IV.2, we revisit strong quenches from the ground state of the unperturbed (\(\lambda=0\)) free bosons, i.e., two decoupled one-dimensional quasi condensates in the experimental setup, to finite \(\lambda\) / Josephson coupling. These protocols were already studied relying on previous TCSA implementations [25], formulated in terms of the eigenstates of the massless free boson limit, a natural choice for representing the initial state. For these strong quenches, both TCSA and MSTHA suffer from similar limitations, yielding well-converged results for strong interactions but breaking down in the experimentally relevant weakly interacting regime. Nevertheless, we find that MSTHA still shows improved convergence properties.
### Quantum quenches starting from the quantum pendulum ground state
Here, we consider weak quantum quenches starting from the quantum pendulum ground state, corresponding to a small injected energy density. More precisely, the initial state corresponds to the zero mode being in its ground state,
\[\ket{\Psi_{\mathrm{QP}}}=\ket{n=0}, \tag{39}\]
while all other modes are in the ground state of the respective oscillator. The time evolution can be interpreted by suddenly switching on the coupling between the zero-mode pendulum and the non-zero modes corresponding to phononic excitations. This scenario is expected to be optimal for the MSTHA since the implementation uses the pendulum eigenstate basis for the zero-modes, and the energy injected by the quench into the system is small, increasing the reliability of the truncated approximation.
We also compare the MSTHA to the TWA approach. To this end, the Wigner function of the initial state can be decomposed into a product of the part corresponding to the zero mode and the one coming from the oscillator modes. The zero mode part can be obtained simply from the numerically computed ground state wave function of the pendulum Hamiltonian (23),
\[W_{0}\left(\varphi_{0},\pi_{0}\right)=\frac{1}{2\pi}\int_{-\pi}^{\pi}d\varphi_{0}^{\prime}\,\langle\varphi_{0}-\varphi_{0}^{\prime}/2|n=0\rangle\,\langle n=0|\varphi_{0}+\varphi_{0}^{\prime}/2\rangle\,e^{-iN\varphi_{0}^{\prime}\pi_{0}}. \tag{40}\]
Here, the matrix element \(\langle\varphi|n=0\rangle\) is just the ground state wave function of the pendulum (23) in position space. For the non-zero modes, the Wigner function takes the form of a simple Gaussian [25]
\[W_{\mathrm{osc}}=\prod_{k>0}\frac{4}{\pi^{2}}\exp\left\{-\sigma _{k}^{2}\varphi_{k}\varphi_{-k}-\frac{4\pi_{k}\pi_{-k}}{\sigma_{k}^{2}}\right\}\] \[\sigma_{k}^{2}=4N\sin\frac{\pi k}{N}\to 4\pi k\quad\text{for }N \rightarrow\infty\,. \tag{41}\]
We note that the quench protocol discussed here, coupling a quantum pendulum to a bath of massless modes, does not have direct experimental relevance. From the experimental point of view, a potentially more realistic state would have the non-zero modes in the ground state of appropriate massive oscillator modes. The present choice, where they are described as gapless modes of the conformal boson, overestimates their contributions and is motivated by two considerations. First, it leads to a technical simplification since the above state has a simpler representation in terms of the MSTHA. Second, one of our goals is to gauge whether these modes play a significant role in the dynamics and determine how much they alter the quantum pendulum dynamics. As mentioned above, this step is crucial for finding the simplest theoretical description of the experiments, where the time evolution is potentially affected by many additional degrees of freedom, including the symmetric and transverse modes. To establish the relevant degrees of freedom, it is, therefore, acceptable to consider a slightly modified quench protocol that overestimates the effects of finite momentum modes, rendering the gapless nature of these modes secondary. The implementation of massive modes lies outside the scope of the paper.
Fig. 1 displays the time evolution of the expectation value of the cosine of the phase field and the phase-phase correlator starting from the initial state (39) for several interaction strengths \(K\) and for a dimensionless volume \(l=10\) as computed by the MSTHA and the TWA. The MSTHA data is computed using truncation values of \(n_{\mathrm{max}}=9,7,11\) and \(7\) and \(\ell_{\mathrm{max}}=20,20,20\) and \(24\) for \(K=1,1.56,4\) and \(27\), respectively. The largest value is the one directly relevant in the experimental context.
The MSTHA results converge well with the truncation and can be considered numerically exact. Since the quantum pendulum is initially in its ground state, a small number of zero mode basis states is enough for the MSTHA to converge. In contrast, the TWA does
not allow for a reliable estimate of its accuracy; however, due to the numerically exact nature of the MSTHA, the deviation of the TWA from the MSTHA results can be considered the error involved in the TWA approximation.
We can see that the two methods agree well for large values of \(K\), where quantum effects are expected to be small, which is reasonable given the TWA's semiclassical nature. However, for smaller values of \(K\) where quantum fluctuations are enhanced, the TWA differs from the numerically exact results of the MSTHA. This is also expected in light of the initial state (39): insufficient energy is injected into the system to accommodate higher occupation numbers in the oscillator modes, enhancing the inherently quantum nature of their dynamics. The difference in the time evolution is that the TWA overestimates the dephasing of the condensates, as shown by the results in Fig. 1.
For the case studied in this section, truncating the Hamiltonian to the zero-mode part (23) results in trivial time evolution since the initial state is its eigenstate. Therefore, the strength of interaction between the zero-mode pendulum and the phononic modes can be deduced from the dynamic range of the cosine expectation value in Fig. 1, which decreases substantially for large \(K\) and becomes very small at the experimentally relevant value \(K=27\), indicating that the zero-mode is very weakly coupled to the phononic excitations.
### Quantum quenches starting from the free massless boson vacuum
Here, we consider the quenches from the ground state of the unperturbed (\(\lambda=0\)) free boson,
\[|\Psi_{\rm FB}\rangle=|\nu=0\rangle \tag{42}\]
which is more directly relevant to the experiment than the previous one. It can be realised by cooling the atoms in the presence of a large barrier to obtain two uncoupled identical condensates and then introducing Josephson tunnel-coupling via lowering the barrier to achieve a desired finite \(\lambda\). The time evolution starting from this initial state was previously studied using TWA and TCSA [25]. The drawback of that study was that the original version of TCSA (sketched in Subsection III.1) was limited to rather small values \(K\) far away from the experimentally realised weak coupling regime.
Figure 1: The time-dependent expectation value of \(:\cos\beta\hat{\varphi}:\) (top row) and the phase-phase correlator \(\langle\hat{\varphi}_{k}\hat{\varphi}_{-k}\rangle\) (bottom row) for various values of \(K\) for dimensionless volume \(l=10\), starting from the initial state (39). Joined markers correspond to TWA, while solid lines show the MSTHA results. For the two larger values of \(K\), the difference between the two approximations is entirely invisible. The dashed red line corresponds to the (numerically) exact solution of the zero-mode quantum pendulum dynamics.
The state (42) can be easily implemented in the MSTHA by expanding the plane wave state \(\ket{\nu=0}\) in the eigenstates of the zero-mode pendulum Hamiltonian (23):
\[\ket{\Psi_{\rm FB}}=\ket{\nu=0}=\sum_{n=-N}^{N}C_{n}\ket{n} \tag{43}\]
Accurately representing this state requires more vectors for larger values of \(K\), which foreshadows that MSTHA has difficulties capturing the time evolution for large values of \(K\). While this is similar to the original TCSA [25], we still find that the mini-superspace representation substantially improves the situation.
In the TWA, the Wigner distribution again factorises into a zero-mode part
\[W_{0}\{\varphi_{0},\pi_{0}\}=\frac{\theta(\varphi_{0}+\pi)\theta(\pi-\varphi_ {0})}{2\pi}\delta_{\pi_{0},0} \tag{44}\]
with the oscillator part identical to (41). The above zero-mode part corresponds to a uniform distribution of initial phases \(\varphi_{0}\) in the range \([-\pi,\pi]\) together with a definite value \(\pi_{0}=0\).
The results of the TWA and MSTHA simulations for the time evolution of \(\langle:\cos\beta\hat{\varphi}:\rangle\) and \(\langle\hat{\varphi}_{k}\hat{\varphi}_{-k}\rangle\) following a quantum quench from the initial state (42) are shown in Fig. 2. Simulations were performed for \(l=10\) and several Luttinger parameters \(K\). As noted above, contrary to the case where the system is initialised in the ground state of the quantum pendulum, the conformal vacuum (42) spans a large subspace of the quantum pendulum eigenbasis, requiring larger cutoff values in the mini-superspace: \(n_{\rm max}=11,17,35\) and \(225\) for \(K=1,1.56,4\) and \(27\), respectively (as before, the largest value is the one relevant for the experimental realisation). The zero-mode cutoff values are chosen so the dynamics remains unchanged by increasing the cutoff \(n_{\rm max}\). For the oscillator modes, we used the truncations \(\ell_{\rm max}=26,20,28\) and \(20\) for \(K=1,1.56,4\) and \(27\), respectively. For the couplings \(K=1\) and \(1.56\), the simulations converged with high accuracy, and in the latter case, they also matched the TWA results. However, for \(K=4\) the MSTHA simulations involving larger Luttinger parameters converged less well. Nevertheless, we found that the results matched the TWA results very well. For \(K=27\), MSTHA failed to converge for the accessible truncation levels, pointing to the need to include higher excitations in the oscillation modes, making the use of MSTHA computationally extremely demanding.
Again, the TWA fails to describe the time evolution
Figure 2: The time-dependent expectation value of \(:\cos\beta\hat{\varphi}:\) (top row) and the phase-phase correlator \(\langle\hat{\varphi}_{k}\hat{\varphi}_{-k}\rangle\) (bottom row) for various values of \(K\) for dimensionless volume \(l=10\), starting from the state (42). Joined markers correspond to TWA, while solid green lines show the MSTHA results. The dashed red lines correspond to the (numerically) exact solution of the zero-mode quantum pendulum dynamics.
for the strongly interacting regime, as evidenced by its deviation from the MSTHA, which can be considered numerically exact. Nevertheless, TWA shows improved performance due to the high energy density induced by the quench. In particular, the TWA becomes much better for larger \(K\), and in fact very accurate for \(K\gtrsim 2\), making it a reliable description in the experimental regime. We also note that the TWA gives very good results even for \(K=1.56\), indicated by the minimal disagreement with the MSTHA data. This contrasts with the quenches from the quantum pendulum ground state, where the small energy injected in the quench forbids the accumulation of large occupations in the oscillator modes, amplifying the difference between the quantum and the semiclassical dynamics. In quenches starting from the ground state of the massless free boson, the system is initialised in a very highly excited state, as indicated by the large values of \(n_{\text{max}}\) required to represent the time-evolving state. When the interaction between the zero-mode pendulum and the phononic modes is switched on at time \(t=0\), a large amount of energy is transferred into the oscillator modes, resulting in mode occupation numbers seen in Fig. 2, which are much higher compared to those in Fig. 1. The occupation of these modes grows with \(K\), and their presence decoheres the zero mode dynamics, which, together with the suppression of quantum fluctuations, accounts for the good agreement with the semiclassical TWA results. However, as \(K\) decreases, the effects of quantum fluctuations grow, and the occupation numbers of the oscillator modes decrease, which explains the growing deviation between the semiclassical TWA and the full quantum dynamics obtained from the MSTHA.
Similarly to the previous case, the red dashed lines in Fig. 2 show time evolution considering only the zero-mode dynamics governed by the quantum pendulum Hamiltonian (23). Again, we find that the zero mode dominates the dynamics for large \(K\); however, even at the very large \(K\), which is characteristic of the experiment, the oscillating modes are seen to influence the dynamics substantially as time progresses. This is fully consistent with the energy transfer to the oscillating modes, which leads to a substantial increase in their occupation number, counteracting the effect of their weaker coupling to the zero mode.
## V Conclusions
This work investigated the non-equilibrium time evolution induced by quantum quenches in the sine-Gordon model. Besides being a paradigmatic example of integrable quantum field theories, the sine-Gordon model also describes the low-energy dynamics of two Josephson-coupled one-dimensional bosonic quasi-condensates.[16; 17; 18; 19; 20] However, the experimental system has many additional degrees of freedom, which are not accounted for in the sine-Gordon description. Simulating the physical system realised in the experiment is still an open question, and progress requires the identification of the relevant degrees of freedom.
Motivated by the fact that the coupling between the zero and non-zero modes of the sine-Gordon field is weak in the experimentally available parameter range, a naturally occurring question is the importance of non-zero modes for the dynamics. To address this issue, we introduced the mini-superspace-based truncated Hamiltonian approximation (MSTHA), an improvement of the truncated conformal space approach (TCSA) used in earlier studies[25; 43]. It consists of solving the zero-mode dynamics in a numerically exact way and then including the non-linearly interacting phononic modes. Apart from making the distinction between the zero and non-zero modes explicit, it also efficiently improves the previous versions of the THA, allowing for the simulations in the weakly interacting regime closer to the experiments. In addition, we used the semiclassical truncated Wigner approximation[33; 34] (TWA) as an alternative approach, a simple and wide-spread method that has been applied for various sine-Gordon quenches. Comparison to MSTHA allows for studying the accuracy and limitations of the TWA, for which accuracy is hard to control directly.
We considered time evolution from two classes of initial states, corresponding to small and large energy densities, respectively. We find that for the mild quench protocol, starting in the ground state of the quantum pendulum, the MSTHA yields essentially (numerically) exact results regardless of the Luttinger parameter \(K\), even in the weakly interacting limit relevant to the experiments, a region inaccessible by previous implementations of the THA[25]. For the stronger quenches initiated in the ground state of the free massless boson, the MSTHA results converge for smaller \(K\), corresponding to strong inter-mode interactions, but become less reliable with increasing \(K\), when the coupling between the modes is weak.
We established that (as generally expected) the TWA performs well in the weakly interacting regime, even for the mild quench protocol, indicated by the virtually non-existent difference from the MSTHA results. However, this difference grows as the strength of the interaction increases, leading to the breakdown of the TWA close to \(K=1\), corresponding to hard-core repulsion between atoms. While this trend remains unchanged for stronger quenches as well, it is found that the reliability of the TWA increases with the strength of the quench, pushing its breakdown to smaller values of \(K\) compared to mild quenches. This latter effect is intuitively expected since the TWA is a semiclassical approximation, which is expected to improve with higher excitations in the modes.
Our findings establish the TWA and MSTHA as powerful numerical methods for studying non-equilibrium dynamics in the sine-Gordon model, depending on the initial state and the strength of the inter-mode interaction, controlled by the Luttinger parameter \(K\). For weak quenches, or for strong quenches in the strongly interacting regime (small \(K\)), the MSTHA can provide reliable results for the dynamics, while for strong quenches or weak interactions (large \(K\)), the TWA proves reliable for studying the time evolution. Overall, our results establish the TWA and MSTHA as powerful complementary approaches for studying non-equilibrium time evolution in the sine-Gordon model in the weakly interacting parameter range accessible in the experiments, with the choice of method dependent on the initial energy density of the system.
Moreover, we find that the effect of the nonzero modes, a.k.a. the phononic degrees of freedom, diminishes when the interaction becomes weaker (i.e., for large \(K\)) and has a limited effect on the time evolution for mild quenches. For stronger quenches, the contribution of the phononic modes becomes weaker for the initial transient; however, even in the experimentally relevant large \(K\) regime, it eventually appears when the occupation number of the phononic modes becomes large.
###### Acknowledgements.
We thank S. Erne and J. Schmiedmayer for useful discussions and D. Horvath for sharing his TCSA results to verify our numerics. This work was supported by the National Research, Development and Innovation Office (NKFIH) through the OTKA Grant ANN 142584. DSz was also partially supported by the National Research Development and Innovation Office of Hungary via the scholarship UNKP-22-3-II-BME-30, while GT was partially supported by the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004). I.L. acknowledges support from the Gordon and Betty Moore Foundation through Grant GBMF8690 to UCSB and the National Science Foundation under Grant No. NSF PHY-1748958.
## Appendix A Matrix elements on the CFT basis
For the practical evaluation of matrix elements of the exponential operators in the computational basis, time is continued to Euclidean signature by setting \(t=i\tau\), and then the resulting space-time cylinder is mapped on the conformal plane of variable \(z\) using [53]
\[z=\exp\left\{\frac{2\pi}{L}(\tau-ix)\right\}\quad,\quad\bar{z}=\exp\left\{ \frac{2\pi}{L}(\tau+ix)\right\}\,. \tag{10}\]
The exponential operator on the cylinder is related to the one defined on the plane by
\[:e^{i\beta\nu\hat{\varphi}}:^{\text{cyl}}=\left(\frac{2\pi|z|}{L}\right)^{2 \Delta_{\nu}}:e^{i\beta\nu\hat{\varphi}}:^{\text{pl}} \tag{11}\]
with
\[\Delta_{\nu}=\frac{\nu^{2}\beta^{2}}{8\pi} \tag{12}\]
Therefore, the matrix elements of the integrated exponential operator can then be computed as
\[\int\limits_{0}^{L}dx\left\langle\Psi^{\prime}|:\exp\left\{i\nu\beta\hat{\varphi}(0,x)\right\}:^{\text{cyl}}|\Psi\right\rangle=L\left(\frac{2\pi}{L}\right)^{2-2\Delta_{\nu}}\left\langle\Psi^{\prime}|:\exp\left\{i\nu\beta\hat{\varphi}(1,1)\right\}:^{\text{pl}}|\Psi\right\rangle\delta_{s_{\Psi},s_{\Psi^{\prime}}}\,. \tag{13}\]
Implementation of the above matrix element requires the computation of
\[\left\langle\Psi^{\prime}|:\exp\left\{i\mu\beta\hat{\varphi}(1,1)\right\}:^{ \text{pl}}|\Psi\right\rangle\,, \tag{14}\]
which is a straightforward task described in detail in previous works [53; 25].
### Pendulum quantum mechanics
Implementation of the mini-superspace for the sine-Gordon model requires the construction of the quantum pendulum Hamiltonian
\[\hat{H}_{\text{QP}}=\frac{1}{2L}\hat{\pi}_{0}^{2}-\lambda L\left(\frac{L}{2 \pi}\right)^{2\Delta}\cos(\beta\hat{\varphi}_{0})\;. \tag{15}\]
The free part
\[\hat{H}_{\text{FQM}}=\frac{1}{2L}\hat{\pi}_{0}^{2} \tag{16}\]
admits solutions \(|\nu\rangle\) in the form of plane waves:
\[|\nu\rangle=\sqrt{\frac{\beta}{2\pi}}e^{i\beta\nu\varphi_{0}} \tag{17}\] \[\frac{1}{2L}\hat{\pi}_{0}^{2}\left|\nu\right\rangle=\frac{(\nu \beta)^{2}}{2L}\left|\nu\right\rangle\;, \tag{18}\]
with the canonical momentum operator given in coordinate representation as
\[\hat{\pi}_{0}=\frac{1}{i}\partial_{\varphi_{0}}\;. \tag{19}\]
The states \(\{|\nu\rangle\}\) are created by the exponential operators from the vacuum
\[|\nu\rangle=e^{i\beta\nu\hat{\varphi}_{0}}\left|0\right\rangle\;,\quad|0 \rangle=\frac{\beta}{2\pi} \tag{20}\]
and therefore, the zero-mode exponential operators act as ladder operators on the plane wave basis:
\[e^{\pm i\mu\beta\hat{\varphi}_{0}}\left|\nu\right\rangle=|\nu\pm\mu\rangle\;\,. \tag{21}\]
Employing a simple truncation of the plane wave basis by only keeping states \(\{|\nu\rangle\},\nu\in[-\nu_{\text{max}},\nu_{\text{max}}]\) results in a representation of the operators \(\hat{\pi}_{0}^{2}\) and \(\exp\{\pm i\beta\nu\hat{\varphi}_{0}\}\)
by finite matrices, with the quantum pendulum Hamiltonian becoming a tridiagonal matrix.
Numerical diagonalisation of this finite matrix is straightforward and leads to the spectrum of the quantum pendulum
\[\hat{H}_{\rm QP}\left|n\right>=\varepsilon_{n}\left|n\right>\,, \tag{101}\]
which we cross-checked by numerically solving the coordinate-space Schrödinger equation with the shooting method.
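For illustration, a minimal Python sketch of this construction (not the code used for the results of this paper) is given below: the kinetic term is diagonal on the plane-wave basis with entries \((\nu\beta)^{2}/2L\), the cosine acts as a ladder operator, and the resulting tridiagonal matrix is diagonalised numerically.

```python
import numpy as np

def pendulum_spectrum(L, lam, beta, nu_max):
    """Sketch: spectrum of the truncated quantum pendulum Hamiltonian on the
    plane-wave basis {|nu>, nu = -nu_max, ..., nu_max}. The kinetic part is
    diagonal, (nu*beta)^2/(2L); the cosine term couples |nu> to |nu +/- 1>."""
    Delta = beta**2 / (8.0 * np.pi)                       # anomalous dimension
    nu = np.arange(-nu_max, nu_max + 1)
    H = np.diag((nu * beta) ** 2 / (2.0 * L))
    off = -lam * L * (L / (2.0 * np.pi)) ** (2 * Delta) / 2.0
    H += np.diag(np.full(2 * nu_max, off), 1) + np.diag(np.full(2 * nu_max, off), -1)
    return np.linalg.eigh(H)                              # eigenvalues and eigenvectors |n>
```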
## Appendix B Low-energy description of a pair of coupled bosonic quasi-condensates
The Hamiltonian of a one-dimensional bosonic quasi-condensate is given by
\[\hat{H}_{\rm QC}=\int dz\ \hat{\psi}^{\dagger}(z)\left[-\frac{\hbar^{2}}{2m}\partial_{z}^{2}+V(z)-\mu\right]\hat{\psi}(z)+\frac{g}{2}\int dz\ \hat{\psi}^{\dagger}(z)\hat{\psi}^{\dagger}(z)\hat{\psi}(z)\hat{\psi}(z) \tag{102}\]
where the \(\hat{\psi}\) are bosonic field operators satisfying \([\hat{\psi}(z),\hat{\psi}^{\dagger}(z^{\prime})]=\delta(z-z^{\prime})\), \(V(z)\) is a longitudinal trap potential, \(\mu\) is the chemical potential and \(g\) is an effective one-dimensional interaction coupling. The strength of the interaction is characterised by the parameter
\[\gamma=\frac{mg}{\hbar^{2}\rho_{0}}\,, \tag{103}\]
where \(\rho_{0}\) is the longitudinal density of atoms. Introducing the bosonisation in terms of density \(\hat{\rho}(z)\) and phase \(\hat{\theta}(z)\) fields,
\[\hat{\psi}(z)=\sqrt{\hat{\rho}(z)}e^{i\hat{\theta}(z)};\quad\hat{\rho}=\rho_{ 0}+\delta\hat{\rho}\,, \tag{104}\]
the density fluctuations \(\delta\hat{\rho}\) and the phase field \(\hat{\theta}\) obey the commutation relations \([\hat{\theta}(z),\delta\hat{\rho}(z^{\prime})]=i\delta(z-z^{\prime})\). Substituting (104) to (102) and expanding to second order in density and phase fluctuations yields a low-energy effective field theory in the form of the Tomonaga-Luttinger-liquid Hamiltonian
\[\hat{H}_{\rm TLL}=\frac{\hbar}{2\pi}\int dz\ \left[\nu_{N}\pi^{2}\delta\hat{ \rho}^{2}+\nu_{J}(\partial_{z}\hat{\theta})^{2}\right]. \tag{105}\]
Here the density/phase-stiffness \(\nu_{N/J}\) can be expressed in terms of the parameters of the condensate,
\[\nu_{J}=\frac{\pi\hbar\rho_{0}}{m},\qquad\nu_{N}=\frac{1}{\pi\hbar}\partial_{ \rho_{0}}\mu\stackrel{{\gamma\ll 1}}{{\approx}}\frac{g}{\pi\hbar} \tag{106}\]
Due to the spatial dependence of the background density \(\rho_{0}\) (inherited from the trapping potential \(V(z)\)), these parameters generally carry a \(z\)-dependence, which we ignore from now on, focusing on a homogeneous system. Introducing the Luttinger parameter \(\tilde{K}\) and sound velocity \(c\) as
\[\tilde{K}=\sqrt{\frac{\nu_{J}}{\nu_{N}}},\qquad c=\sqrt{\nu_{J}\nu_{N}}, \tag{107}\]
results in the following form of the Tomonaga-Luttinger Hamiltonian,
\[\hat{H}_{\rm TLL}=\frac{\hbar c}{2}\int dz\ \left[\frac{\pi}{\tilde{K}} \delta\hat{\rho}^{2}+\frac{\tilde{K}}{\pi}(\partial_{z}\hat{\theta})^{2}\right]. \tag{108}\]
For a pair of bosonic one-dimensional quasi-condensates loaded into a double-well potential, a finite potential barrier induces a coupling between the condensates through tunnelling, described by the Hamiltonian
\[\hat{H}_{\rm QCP}=\sum_{j=1,2}\int dz\ \hat{\psi}^{\dagger}_{j}(z)\left[-\frac{\hbar^{2}}{2m}\partial_{z}^{2}+V(z)-\mu_{j}\right]\hat{\psi}_{j}(z)+\frac{g}{2}\int dz\ \hat{\psi}^{\dagger}_{j}(z)\hat{\psi}^{\dagger}_{j}(z)\hat{\psi}_{j}(z)\hat{\psi}_{j}(z)-\hbar J\int dz\ \left[\hat{\psi}^{\dagger}_{1}\hat{\psi}_{2}+\hat{\psi}^{\dagger}_{2}\hat{\psi}_{1}\right], \tag{109}\]
with tunnelling amplitude \(J\). Setting \(\mu_{1}=\mu_{2}=\mu\), introducing bosonisation via
\[\hat{\psi}_{j}(z)=\sqrt{\hat{\rho}_{j}(z)}e^{i\hat{\theta}_{j}(z)};\quad\hat{ \rho}=\rho_{0}+\delta\hat{\rho}_{j} \tag{110}\]
and expanding to second order in the fluctuations we arrive at
\[\hat{H}_{\rm QCP}=\hat{H}_{\rm TLL,\ 1}(\tilde{K})+\hat{H}_{\rm TLL,\ 2}( \tilde{K})+\hat{H}_{J}(\hat{\theta}_{1}-\hat{\theta}_{2})\,, \tag{111}\]
where we explicitly indicated the Luttinger parameter. Since the coupling Hamiltonian \(\hat{H}_{J}\) only depends on the relative phase of the two quasi-condensates, it is advantageous to perform a change of variables to common and relative degrees of freedom as
\[\delta\hat{\rho}_{c} =\delta\hat{\rho}_{1}+\delta\hat{\rho}_{2}\] \[\delta\hat{\rho}_{r} =\frac{\delta\hat{\rho}_{1}-\delta\hat{\rho}_{2}}{2}\] \[\hat{\theta}_{c} =\frac{\hat{\theta}_{1}+\hat{\theta}_{2}}{2}\] \[\hat{\theta}_{r} =\hat{\theta}_{1}-\hat{\theta}_{2}.\]
The TLL Hamiltonians can be rearranged as
\[\hat{H}_{\rm TLL,\ 1}(\tilde{K})+\hat{H}_{\rm TLL,\ 2}(\tilde{K})=\hat{H}_{\rm TLL,\ c}(K_ {c})+\hat{H}_{\rm TLL,\ r}(K_{r}) \tag{112}\]
with \(K_{c}=2\tilde{K}\) and \(K_{r}=\tilde{K}/2\), while expanding to second order in density fluctuations we obtain
\[\hat{H}_{J} =-\hbar J\int dz\ \left[2\rho_{0}+\delta\hat{\rho}_{c}\right]( \cos\hat{\theta}_{r}-1)+\frac{\hbar J}{\rho_{0}}\delta\hat{\rho}_{r}^{2}\cos\hat{ \theta}_{r}\] \[\approx-2\hbar J\rho_{0}\int dz\ \cos\hat{\theta}_{r}\,. \tag{113}\]
Here, the second line was obtained by neglecting the density fluctuations, resulting in a decoupling of common and relative degrees of freedom:
\[\hat{H}_{c} =\frac{\hbar c}{2}\int dz\ \left[\frac{\pi}{2\tilde{K}}\delta\hat{ \rho}_{c}^{2}+\frac{2\tilde{K}}{\pi}(\partial_{z}\hat{\theta}_{c})^{2}\right] \tag{133}\] \[\hat{H}_{r} =\frac{\hbar c}{2}\int dz\ \left[\frac{2\pi}{\tilde{K}}\delta\hat{ \rho}_{r}^{2}+\frac{\tilde{K}}{2\pi}(\partial_{z}\hat{\theta}_{r})^{2}\right] -2\hbar J\rho_{0}\int dz\ \cos\hat{\theta}_{r} \tag{134}\]
showing that the relative phase field obeys sine-Gordon dynamics. To further simplify the Hamiltonian, we choose units in which \(\hbar=c=1\) and introduce a boson field and its canonical momentum defined as
\[\hat{\varphi}=\beta^{-1}\hat{\theta}_{r}\quad,\quad\hat{\pi}=\beta\delta\hat{ \rho}_{r} \tag{135}\]
where
\[\beta=\sqrt{\frac{2\pi}{\tilde{K}}}\,, \tag{136}\]
which leads to the usual form of the sine-Gordon Hamiltonian
\[\hat{H}_{\rm sG}=\frac{1}{2}\int dz\left[\hat{\pi}^{2}+(\partial_{z}\hat{ \varphi})^{2}\right]-\lambda\int dz:\cos\beta\hat{\varphi}:\,. \tag{137}\]
We note that the normal ordering results in a redefinition of the coupling \(\lambda\) and accounts for its anomalous dimension \(\Delta=\beta^{2}/8\pi\), manifesting in \(\lambda\) having units of \([\text{energy}]^{2-2\Delta}\).
We also note that in the main text, we consider only the physics of the relative degrees of freedom, and so we drop the subscript of \(K_{r}\) and refer to the relative Luttinger parameter simply as \(K\).
## Appendix C Examples of the convergence of MSTHA
This section contains representative data illustrating the convergence of the MSTHA. In Fig. 3, the time evolution of the cosine of the phase field is shown as computed by the MSTHA starting from the massless free boson ground state (42). It is apparent that for small values of \(K\), the MSTHA quickly converges, whereas for larger \(K\) the convergence is much slower, as displayed in Fig. 3.
2310.20238 | Study of speaker localization with binaural microphone array incorporating auditory filters and lateral angle estimation | Speaker localization for binaural microphone arrays has been widely studied for applications such as speech communication, video conferencing, and robot audition. Many methods developed for this task, including the direct path dominance (DPD) test, share common stages in their processing, which include transformation using the short-time Fourier transform (STFT), and a direction of arrival (DOA) search that is based on the head related transfer function (HRTF) set. In this paper, alternatives to these processing stages, motivated by human hearing, are proposed. These include incorporating an auditory filter bank to replace the STFT, and a new DOA search based on transformed HRTF as steering vectors. A simulation study and an experimental study are conducted to validate the proposed alternatives, and both are applied to two binaural DOA estimation methods; the results show that the proposed method compares favorably with current methods. | Yanir Maymon, Israel Nelken, Boaz Rafaely | 2023-10-31T07:43:12Z | http://arxiv.org/abs/2310.20238v1 |

# Study of speaker localization with binaural microphone array incorporating auditory filters and lateral angle estimation
###### Abstract
Speaker localization for binaural microphone arrays has been widely studied for applications such as speech communication, video conferencing, and robot audition. Many methods developed for this task, including the direct path dominance (DPD) test, share common stages in their processing, which include transformation using the short-time Fourier transform (STFT), and a direction of arrival (DOA) search that is based on the head related transfer function (HRTF) set. In this paper, alternatives to these processing stages, motivated by human hearing, are proposed. These include incorporating an auditory filter bank to replace the STFT, and a new DOA search based on transformed HRTF as steering vectors. A simulation study and an experimental study are conducted to validate the proposed alternatives, and both are applied to two binaural DOA estimation methods; the results show that the proposed method compares favorably with current methods.
keywords: Speaker localization, reverberation, binaural microphone arrays, room acoustics
## 1 Introduction
Direction of arrival (DOA) estimation of speakers in a room using a binaural array is a challenging problem which has a wide range of applications in speech enhancement, hearing aids and robot audition. The challenge is exacerbated by coherent reflections that obscure DOA information typically available only in the direct sound. Methods for DOA estimation using binaural arrays have been based on widely-used approaches for source localization. These include estimation of the interaural time difference (ITD), which is extracted from the generalized cross correlation (GCC) [(1)], beamforming based methods [(2)], and subspace methods such as multiple signal classification (MUSIC) [(2)].
In the last decades, new studies proposed techniques to make these methods more robust to reverberation. For example, a GCC based approach that is robust to noise and multipath distortion for ITD estimation [(3)], and the coherent signal subspace method (CSSM) [(4)], which implements focusing and frequency smoothing in order to decorrelate coherent sources. Additionally, another GCC-based approach that employs a Bayesian framework has been developed. This approach utilizes a mixture model along with Bayesian modeling to robustly estimate the directions of multiple speakers in the presence of noise and reverberation [(5)]. Furthermore, this Bayesian methodology has also been extended to more complex microphone arrays, such as coprime arrays [(6)], and spherical arrays [(7)]. Recently, a reverberation-robust method, based on the CSSM and originally developed for spherical arrays, has been proposed, called the direct path dominance (DPD) test [(8)]. The estimation of the DOA is performed by selecting time-frequency (TF) bins that are dominated by the direct sound, thus successfully overcoming the detrimental effect of room reverberation. More recently, an extension for the DPD test for arbitrary arrays, and particularly for binaural arrays, was proposed [(9; 10)]. This extension incorporates a focusing process that does not require an initial DOA estimation, making it usable for reverberant environments.
The methods described above are all based on explicit processing of array data such as correlation matrices. Recently, deep neural network based methods have been developed for sound source localization in general [(11)], and binaural localization in particular [(12; 13; 14)], offering new opportunities for exploiting information in the data. Unlike the DPD, these methods require a full learning phase with labeled data, which makes them less appropriate for some applications.
In summary, the methods presented above for DOA estimation using a binaural array, although showing good performance in many cases, may have limited performance for challenging environments with noise and reverberation. In particular, this paper examines two features of many current methods. First, current methods are mostly based on pre-processing using the fast Fourier transform (FFT). This is in contrast to the human ear, for example, that has filters whose bandwidth increases approximately proportionally to their center frequency [(15)]. Second, many current methods [(16; 17; 13; 14)], simplify the directional search space, and assume, for example, that the sound source is positioned in the horizontal plane (elevation of 0\({}^{\circ}\)), relative to the binaural array, and then only search directly for the source azimuth angle.
In this paper, we develop and investigate alternatives to these
commonly used processing features, showing that improved performance can indeed be achieved. The proposed alternatives can be integrated into many of the state-of-the-art methods presented in this literature review, including in the pre-processing stages of neural network based methods. First, we present a processing framework which allows the incorporation of a complex-valued version of the auditory filter bank in the processing pipeline, replacing the FFT. The new framework, which can be incorporated in a wide range of current methods, shows that improved DOA performance can be achieved in some cases. Second, a new method is proposed to directly estimate the lateral angle in an interaural coordinate system [(18)], by incorporating the characteristics of the cone of confusion [(19)], showing improved performance over standard azimuth and elevation based DOA estimation. An experimental study examines the proposed processing alternatives relative to the current approaches, when applied to a binaural DPD test-based method.
## 2 System model and DPD test
This section briefly presents the system model assumed in this work, and the DPD-test for a binaural array according to [(9)]. While the DPD test-based method presented here can indeed be extended to the localization of multiple speakers, for the sake of simplicity and focus, this paper primarily explores the case of a single speaker. It is important to note that the DPD test based method is incorporated in this paper as an example of a state-of-the-art algorithm that can be applied to a binaural microphone array. Nevertheless, the processing alternatives developed and investigated in this paper can also be applied to other current methods of binaural speaker localization [(20; 21; 22; 23)].
Consider a binaural array, and \(L\) plane waves forming the sound field around the array. Among these waves, one can be a direct sound from the source, while the rest are reflections from the room walls. The binaural signal can be expressed in the time domain as follows:
\[\mathbf{p}(t)=\sum_{i=1}^{L}\mathbf{h}(t,\psi_{i})\ast s_{i}(t)+\mathbf{n}(t), \tag{1}\]
where \(t\) is time, \(\mathbf{p}(t)=[p_{l}(t),p_{r}(t)]^{T}\) is the left and right binaural signals, \(s_{i}(t)\) is the \(i\)'th source signal, \(\mathbf{h}(t,\psi_{i})\) is the impulse response corresponding to the \(i\)'th plane wave direction of \(\psi_{i}=(\theta_{i},\phi_{i})\), where \(\theta_{i}\) and \(\phi_{i}\) are the elevation and the azimuth of the source, respectively, and \(\ast\) denotes convolution. \(\mathbf{n}(t)=[n_{l}(t),n_{r}(t)]^{T}\) is additive sensor noise.
By employing the multiplicative transfer function (MTF) approximation [(24)], the binaural signals can be expressed in the short-time Fourier transform (STFT) domain as follows:
\[\mathbf{p}(\tau,\omega)=\mathbf{H}(\omega,\psi)\mathbf{s}(\tau,\omega)+ \mathbf{n}(\tau,\omega), \tag{2}\]
where \(\tau\) and \(\omega\) are the time frame and frequency indices, respectively. \(\mathbf{p}(\tau,\omega)=[p_{l}(\tau,\omega),p_{r}(\tau,\omega)]^{T}\) is the left and right binaural signals, \(\mathbf{H}(\omega,\psi)=[\mathbf{h}(\omega,\psi_{1}),...,\mathbf{h}(\omega, \psi_{L})]^{T}\) is the \(2\times L\) head related transfer function (HRTF) matrix, and \(\mathbf{s}(\tau,\omega)=[s_{1}(\tau,\omega),...,s_{L}(\tau,\omega)]^{T}\) is the vector of source signals. \(\mathbf{n}(\tau,\omega)=[n_{l}(\tau,\omega),n_{r}(\tau,\omega)]^{T}\) is additive sensor noise.
In order to estimate the DOA of the source representing the direct sound, the next stage is to estimate the spatial spectrum matrix in every TF bin, by averaging over a predefined range in time and frequency. This stage requires a focusing process to eliminate the frequency dependence of the HRTF matrix within the specified averaging frequency range [(10)]. This focusing process, crucial for maintaining spatial information within the smoothed HRTF matrix, involves aligning the HRTF matrices within the averaging window to the HRTF matrix from the center frequency. The alignment is implemented using a focusing matrix \(\mathbf{T}(\omega,\omega_{0})\) that satisfies the following:
\[\mathbf{T}(\omega,\omega_{0})\mathbf{H}(\omega,\psi)=\mathbf{H}(\omega_{0}, \psi). \tag{3}\]
The transformed binaural signal \(\tilde{\mathbf{p}}(\tau,\omega)\) is then obtained by multiplying the original binaural signal \(\mathbf{p}(\tau,\omega)\) by the focusing matrix:
\[\begin{split}\tilde{\mathbf{p}}(\tau,\omega)&= \mathbf{T}(\omega,\omega_{0})\mathbf{p}(\tau,\omega)\\ &=\mathbf{H}(\omega_{0},\psi)\mathbf{s}(\tau,\omega)+\tilde{ \mathbf{n}}(\tau,\omega),\end{split} \tag{4}\]
where \(\tilde{\mathbf{n}}(\tau,\omega)=\mathbf{T}(\omega,\omega_{0})\mathbf{n}(\tau,\omega)\) is the transformed noise.
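One possible way of obtaining a focusing matrix satisfying (3) in practice, sketched below, is a least-squares fit over the full set of measured HRTF directions; this particular construction is an assumption made for the illustration and is not necessarily identical to the one used in [(10)].

```python
import numpy as np

def focusing_matrix(H_w, H_w0):
    """Sketch: 2x2 focusing matrix T(w, w0) minimising ||T H(w) - H(w0)||_F,
    where H_w and H_w0 are 2 x D matrices whose columns are HRTF steering
    vectors over a dense grid of D candidate directions."""
    return H_w0 @ np.linalg.pinv(H_w)   # T = H(w0) H(w)^+
```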
After the focusing process, a smoothing operation is performed. A spatial spectrum matrix, \(\mathbf{R}(\tau,\omega)\), is computed at each time-frequency bin:
\[\mathbf{R}(\tau,\omega)=E[\tilde{\mathbf{p}}(\tau,\omega)\tilde{\mathbf{p}}^ {H}(\tau,\omega)], \tag{5}\]
where \(E[\cdot]\) denotes expectation. This matrix is estimated by averaging \(J_{\tau}\) and \(J_{\omega}\) adjacent time frames and frequency bins, respectively:
\[\hat{\mathbf{R}}(\tau,\omega)=\frac{1}{J_{\tau}J_{\omega}}\sum_{j_{\tau}=0}^{J_{\tau}-1}\sum_{j_{\omega}=0}^{J_{\omega}-1}\tilde{\mathbf{p}}(\tau-j_{\tau},\omega-j_{\omega})\tilde{\mathbf{p}}^{H}(\tau-j_{\tau},\omega-j_{\omega}). \tag{6}\]
In the next stage, the singular-value decomposition (SVD) of \(\hat{\mathbf{R}}(\tau,\omega)\) is computed at each TF bin, in order to find bins that pass the DPD test [(8)]. The SVD operation splits the spatial spectrum matrix into signal and noise subspaces. This partitioning of the data is a fundamental step towards applying subspace methods, particularly the MUSIC algorithm. The SVD of the matrix \(\hat{\mathbf{R}}\) can be expressed as:
\[\hat{\mathbf{R}}=\mathbf{Q}\mathbf{\Sigma}\mathbf{Q}^{\mathbf{H}}=\left[\begin{array}{cc}\mathbf{q_{s}}&\mathbf{q_{n}}\end{array}\right]\left[\begin{array}{cc}\sigma_{\mathbf{s}}&0\\ 0&\sigma_{\mathbf{n}}\end{array}\right]\left[\begin{array}{c}\mathbf{q_{s}}^{H}\\ \mathbf{q_{n}}^{H}\end{array}\right], \tag{7}\]
where, \(\mathbf{q_{s}}\) and \(\mathbf{q_{n}}\) represent the signal and noise subspaces, respectively, and \(\sigma_{\mathbf{s}}\) and \(\sigma_{\mathbf{n}}\) denote their corresponding singular values. The DPD test is then applied as follows:
\[\mathcal{D}=\left\{(\tau,\omega):\frac{\sigma_{s}(\hat{\mathbf{R}}(\tau,\omega) )}{\sigma_{n}(\hat{\mathbf{R}}(\tau,\omega))}\geq\mathcal{T}\mathcal{H}\right\}, \tag{8}\]
where \(\sigma_{s}\) and \(\sigma_{n}\) denote the largest and second largest singular values. \(\mathcal{TH}\) is a threshold, chosen sufficiently larger than one to ensure that \(\hat{\mathbf{R}}\) are dominated by a single source.
In the next stage, a MUSIC spectrum is calculated for every TF bin that passes the DPD test, i.e. for all \((\tau,\omega)\in\mathcal{D}\) the MUSIC spectrum is computed by
\[P(\psi)=\frac{1}{\|\mathbf{q_{n}}^{H}\mathbf{h}(\psi)\|^{2}}, \tag{9}\]
where \(\psi\) represents the direction on a two-dimensional (2D) search grid, and \(\mathbf{h}(\psi)\) is the steering vector in the direction \(\psi\).
The direction \(\psi\) that maximizes the MUSIC spectrum is the DOA estimate for the specific bin. This process leads to a DOA histogram containing all DOA estimates for all TF bins that passed the DPD test, denoted by the set \([\psi_{\mathcal{D}}]\).
In the final stage, the source direction can be estimated by taking the average angle of the DOA histogram or by performing clustering, as suggested in (25). In this paper we will use the first and simpler method,
\[\hat{\psi}=\overline{[\psi_{\mathcal{D}}]}, \tag{10}\]
where \(\overline{\{\}}\) denotes the averaging operation.
Fig. 1 shows a block diagram of the DPD test based algorithm used to estimate the direction angle.
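To make the processing chain concrete, a minimal Python sketch of the per-bin smoothing, DPD test, and MUSIC search is given below. The array shapes, the averaging lengths and the 5% selection rule are illustrative assumptions and do not reproduce the exact parameters of the cited works.

```python
import numpy as np

def dpd_music_doa(P_tf, steering, J_tau=4, J_omega=2, keep_ratio=0.05):
    """Sketch of the DPD-test pipeline of Section 2 (after focusing).
    P_tf:     focused binaural STFT, shape (2, n_frames, n_freqs)
    steering: candidate steering vectors h(psi), shape (2, n_dirs, n_freqs)
    Returns the DOA-grid indices estimated from the selected TF bins."""
    n_frames, n_freqs = P_tf.shape[1], P_tf.shape[2]
    ratios, estimates = [], []
    for t in range(J_tau - 1, n_frames):
        for f in range(J_omega - 1, n_freqs):
            # local spatial correlation matrix, Eq. (6)
            patch = P_tf[:, t - J_tau + 1:t + 1, f - J_omega + 1:f + 1].reshape(2, -1)
            R = patch @ patch.conj().T / patch.shape[1]
            U, s, _ = np.linalg.svd(R)
            ratios.append(s[0] / s[1])                   # singular value ratio, Eq. (8)
            q_n = U[:, 1]                                # noise singular vector
            music_denom = np.abs(q_n.conj() @ steering[:, :, f]) ** 2
            estimates.append(np.argmin(music_denom))     # peak of the MUSIC spectrum, Eq. (9)
    ratios, estimates = np.array(ratios), np.array(estimates)
    threshold = np.quantile(ratios, 1.0 - keep_ratio)    # keep only the top-ratio bins
    return estimates[ratios >= threshold]
```

The source direction can then be obtained by averaging (or clustering) the returned per-bin estimates, as in (10).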
## 3 Replacing FFT with auditory filters
In this section we present the first proposed innovation in the processing pipeline: developing a formulation to enable the incorporation of human-hearing motivated auditory filter banks to replace the FFT.
The following formulation is developed for the auditory filter, so that it can replace the FFT. Note that auditory filters are typically employed to compute signal power in frequency bands, while here a version of the filtering is required that provides both magnitude and phase similar to the FFT.
The starting point is the continuous time STFT, or the windowed Fourier transform (WFT) defined as (26)
\[X(\tau,\omega_{c})=\int_{-\infty}^{\infty}x(t)w(t-\tau)e^{-j\omega_{c}t}dt, \tag{11}\]
where \(x(t)\) is an arbitrary signal and \(w(t)\) is a window function centered around zero. The above equation can be reinterpreted in a form that characterizes a signal passing through a filter. This can be expressed as (27)
\[X(\tau,\omega_{c})=e^{-j\omega_{c}\tau}\int_{-\infty}^{\infty}x(t)f(\tau-t; \omega_{c})dt, \tag{12}\]
where (12) denotes the convolution between the signal \(x(t)\) and a filter \(f(t;\omega_{c})\). The filter \(f(t;\omega_{c})\) is defined as:
\[f(t;\omega_{c})=w(-t)e^{j\omega_{c}t}. \tag{13}\]
Note that (13) can be considered as a one-sided version (over the frequency axis) of the filter defined by the window function \(w(t)\) centered around \(\omega_{c}\). Next, (12) can be formulated as
\[X(\tau,\omega_{c})=e^{-j\omega_{c}\tau}\mathcal{F}^{-1}\{X(\omega)F(\omega; \omega_{c})\}(\tau), \tag{14}\]
where \(\mathcal{F}^{-1}\) denotes the inverse Fourier transform, and \(X(\omega)\) and \(F(\omega;\omega_{c})\) are the Fourier transforms of \(x(t)\) and \(f(t;\omega_{c})\) respectively. Overall, (14) can be interpreted as filtering with a one sided band-pass filter, centered around the positive frequency \(\omega_{c}\), and then shifting the output signal back to baseband to be centered around the origin.
This general formulation allows the incorporation of auditory filter banks to replace the conventional FFT. For this, the filter \(F(\omega;\omega_{c})\) is replaced in this work with a gammatone filter bank, which is designed to model the human auditory system. The filter has been specifically modified to be one-sided in the frequency domain.
The gammatone filters' impulse response is defined as (28)
\[gt(t;\omega_{c})=\begin{cases}t^{n-1}e^{-2\pi b_{\omega_{c}}t}\cos(\omega_{c }t),&\text{if }t\geq 0\\ 0,&\text{otherwise},\end{cases} \tag{15}\]
where \(\omega_{c}\) is the center frequency of the filter in channel \(c\), \(n\) is the filter order, which is set to be 4 in this work, and \(b_{\omega_{c}}\) is the filter bandwidth. The center frequencies and bandwidths of the filters are determined according to the Equivalent Rectangular Bandwidth (ERB) scale (15).
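As an example of how the filter-bank parameters could be generated, the sketch below places the centre frequencies uniformly on the ERB-rate scale and derives the per-channel bandwidths; the Glasberg-Moore ERB formulas and the 4th-order gammatone factor 1.019 are assumptions, since the exact constants are not restated in the text.

```python
import numpy as np

def erb_gammatone_params(n_channels=42, f_lo=60.0, f_hi=6000.0):
    """Sketch: ERB-scale centre frequencies fc, ERB bandwidths, and gammatone
    envelope decay rates b_c."""
    erb_rate = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)        # Hz -> ERB-rate
    erb_rate_inv = lambda e: (10.0 ** (e / 21.4) - 1.0) / 4.37e-3  # ERB-rate -> Hz
    e = np.linspace(erb_rate(f_lo), erb_rate(f_hi), n_channels)
    fc = erb_rate_inv(e)                       # centre frequencies [Hz]
    erb_bw = 24.7 * (4.37e-3 * fc + 1.0)       # ERB bandwidths [Hz]
    b = 1.019 * erb_bw                         # gammatone envelope decay b_{omega_c}
    return fc, erb_bw, b
```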
By keeping the envelope of the gammatone impulse response, a one sided gammatone filter can be constructed as follows:
\[g(t;\omega_{c})=w_{gt}(-t;\omega_{c})e^{j\omega_{c}t}, \tag{16}\]
where
\[w_{gt}(t;\omega_{c})=\begin{cases}t^{n-1}e^{-2\pi b_{\omega_{c}}t},&\text{if }t \geq 0\\ 0,&\text{otherwise},\end{cases} \tag{17}\]
Eq. (14) can now be rewritten for the case of the gammatone filters as follows:
\[X_{AFB}(\tau,\omega_{c})=e^{-j\omega_{c}\tau}\mathcal{F}^{-1}\{X(\omega)G( \omega;\omega_{c})\}(\tau), \tag{18}\]
where \(G(\omega;\omega_{c})\) is the Fourier transform of \(g(t,\omega_{c})\) defined in Eq. (16), and the subscripts AFB is acronyms of Auditory Filter Bank, representing the alternative operation presented here.
Figure 1: Block diagram of the DPD algorithm for DOA estimation
The above formulation is developed in continuous time. Practically, the signals are discrete, and a few modifications are required to adapt the above formulation to a discrete-time signal. Equation (12) can be rewritten for the discrete-time case as follows:
\[X[m,k_{c}]=e^{-j2\pi\frac{k_{c}}{N}m}\sum_{n=0}^{N-1}x[n]f[m-n], \tag{19}\]
where \(x[n]\) is the arbitrary signal of length \(N\), and \(f[n]\) is the discrete version of the filter defined in Eq. (13), of length \(L\). In order to formulate Eq. (19) using FFT, zero padding on the signals to length \(N+L-1\) is required. Define the zero padding signals of \(x[n]\) and \(f[n]\) as \(\tilde{x}[n]\) and \(\tilde{f}[n]\), respectively. Following that, Eq. (18) can be rewritten as
\[X_{AFB}[m,k_{c}]=e^{-j2\pi\frac{k_{c}}{N}m}FFT^{-1}\{\tilde{X}[k]\tilde{G}[k;k_{c}]\}[m], \tag{20}\]
where \(\tilde{X}[k]\) and \(\tilde{G}[k;k_{c}]\) are the FFT of \(\tilde{x}[n]\) and \(\tilde{g}[n;k_{c}]\), respectively, and \(\tilde{g}[n;k_{c}]\) is the zero-padded discrete version of the filter defined in Eq. (16).
Sampling of the auditory filter output signals can be performed in a similar way to the STFT; however, unlike the STFT, each channel has to be sampled at different time intervals, because each channel has a different bandwidth determined by the ERB. Hence, according to the Nyquist sampling theorem, the sampling time interval can be defined as follows (29):
\[\Delta\tau(k_{c})=\frac{1}{2BW_{ERB}(k_{c})} \tag{21}\]
where \(BW_{ERB}(k_{c})\) is the bandwidth of channel \(c\). For the discrete case, sampling is replaced by decimating the signal at the filter output. The decimation factor can be computed as follows:
\[M(k_{c})=\left\lfloor\frac{\Delta\tau(k_{c})}{T_{s}}\right\rfloor, \tag{22}\]
where \(T_{s}\) is the sampling interval of the continuous time signal. Now, the filters' output can be computed as follows:
\[\tilde{X}_{AFB}[m,k_{c}]=X_{AFB}[mM(k_{c}),k_{c}]. \tag{23}\]
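Putting Eqs. (16)-(23) together, a minimal Python sketch of the complex auditory-filter-bank analysis is given below. For simplicity a causal complex gammatone is used in place of the time-reversed window of Eq. (16), and the normalisation is arbitrary; both are assumptions made only for this illustration.

```python
import numpy as np

def afb_analysis(x, fs, fc, erb_bw, n_order=4):
    """Sketch: one-sided gammatone filtering via the FFT (cf. Eq. (20)),
    a shift of each channel back to baseband, and per-channel decimation
    according to Eqs. (21)-(23)."""
    N = len(x)
    t = np.arange(N) / fs
    X = np.fft.fft(x, 2 * N)                              # zero-padded signal spectrum
    channels = []
    for f_c, bw_c in zip(fc, erb_bw):
        b_c = 1.019 * bw_c                                # assumed gammatone decay rate
        g = t ** (n_order - 1) * np.exp(-2 * np.pi * b_c * t) \
            * np.exp(1j * 2 * np.pi * f_c * t)            # complex gammatone impulse response
        g /= np.abs(np.fft.fft(g)).max()                  # crude unit peak-gain normalisation
        y = np.fft.ifft(X * np.fft.fft(g, 2 * N))[:N]     # one-sided band-pass filtering
        y = y * np.exp(-1j * 2 * np.pi * f_c * t)         # shift to baseband
        M = max(1, int(fs // (2 * bw_c)))                 # decimation factor, Eqs. (21)-(22)
        channels.append(y[::M])
    return channels                                       # one decimated complex channel per filter
```

The centre frequencies `fc` and bandwidths `erb_bw` can be taken, for instance, from the ERB-scale sketch given earlier in this section.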
## 4 Lateral angle estimation
The previous section incorporated auditory filter banks in the processing, which were motivated by human hearing. On a similar note, this section incorporates novel lateral angle estimation, also partially motivated by human hearing. Binaural cues contain important information about the azimuth direction of a source and its estimation is often a major goal in source localization. There are a number of options for estimating source azimuth given binaural steering vectors. The first is to perform a full 2D search over both azimuth and elevation, and then extract only the azimuth angle. This option may be computationally expensive due to the extensive search over all directions. Furthermore, this approach may suffer from error due to the less informative elevation cue (30). Another option is to perform a one-dimensional (1D) search for the source's azimuth by assuming the source's elevation is known, e.g., assuming sources are in the horizontal plane [16; 17; 13; 14]. This option may be prone to error if the source's elevation is not accurately provided.
In this section a new localization framework is presented, aiming to overcome the limitations of previous approaches.
The proposed approach is motivated by the human auditory system, which relies on the ITD and the interaural level difference (ILD) as localization cues. The set of source directions with a similar ITD and ILD form a cone, which is known as the cone of confusion (19). Therefore, interaural cues can be used to distinguish between sources at different cones, but not between sources within the same cone. Therefore, the interaural coordinate system, which directly represents the cone of confusion, may be more suitable than the standard spherical polar system. The spherical and interaural coordinate systems are illustrated in Fig. 2. Within the interaural coordinate system, the lateral angle differentiates between cones, and the intraconic angle indicates the position within a cone.
Figure 2: (a) Spherical coordinate system and (b) Interaural coordinate system, both presented over the Cartesian coordinate system

Inspired by human localization, as discussed above, a new method for directly estimating the lateral angle is developed in this section. The HRTF set, which is usually sampled on a spherical coordinate grid (azimuth and elevation), is resampled onto a lateral-intraconic grid. For each lateral angle in the set, a steering matrix is reconstructed with steering vectors representing all intraconic directions. This steering matrix of size \(2\times N\) is defined as
\[\mathbf{H}(\theta^{{}^{\prime}},\omega)\triangleq\left[\begin{array}{cccc}h_{ 1l}(\theta^{{}^{\prime}},\omega)&h_{2l}(\theta^{{}^{\prime}},\omega)&...&h_{Nl}( \theta^{{}^{\prime}},\omega)\\ h_{1r}(\theta^{{}^{\prime}},\omega)&h_{2r}(\theta^{{}^{\prime}},\omega)&...&h_ {Nr}(\theta^{{}^{\prime}},\omega)\end{array}\right], \tag{24}\]
where \(\theta^{{}^{\prime}}\) is the lateral angle, \(N\) is the number of intraconic directions, and \(h_{il}(\theta^{{}^{\prime}},\omega)\) and \(h_{ir}(\theta^{{}^{\prime}},\omega)\) are the left and right HRTFs, respectively, where the subscripts \(l\), \(r\) and \(i\) denote left, right and the intraconic index in the set, respectively. The columns are expected to be similar, but not the same, due to the HRTF similarity within a cone. We aim to find a single steering vector that best represents a specific lateral direction, i.e. a single steering vector for every cone. To do that, we decompose matrix \(\mathbf{H}(\theta^{{}^{\prime}},\omega)\) using SVD, as follows:
\[\mathbf{H}(\theta^{{}^{\prime}},\omega)=\mathbf{U}(\theta^{{}^{\prime}},\omega)\mathbf{S}(\theta^{{}^{\prime}},\omega)\mathbf{V}^{H}(\theta^{{}^{\prime}},\omega), \tag{25}\]
where the matrix \(\mathbf{U}(\theta^{{}^{\prime}},\omega)\) has the following structure:
\[\mathbf{U}(\theta^{{}^{\prime}},\omega)=\left[\begin{array}{cc}\mathbf{u}_{1 }(\theta^{{}^{\prime}},\omega)&\mathbf{u}_{2}(\theta^{{}^{\prime}},\omega) \end{array}\right]. \tag{26}\]
The first column of \(\mathbf{U}(\theta^{{}^{\prime}},\omega)\), \(\mathbf{u}_{1}(\theta^{{}^{\prime}},\omega)\), corresponding to the largest singular value, is the best representation of a lateral direction that is common to all intraconic directions. Therefore, we can use vector \(\mathbf{u}_{1}(\theta^{{}^{\prime}},\omega)\) as a steering vector for a 1D lateral search.
Then, similarly to Eq. (9), the MUSIC spectrum is calculated as follows:
\[P(\theta^{{}^{\prime}})=\frac{1}{\|\mathbf{u}_{n}^{H}\mathbf{u}_{1}(\theta^{{ }^{\prime}})\|^{2}}, \tag{27}\]
where \(\theta^{{}^{\prime}}\) represents the lateral direction in a 1D grid, and \(\mathbf{u}_{1}(\theta^{{}^{\prime}})\) is the lateral steering vector (the time-frequency dependence is omitted for simplicity).
Figure 3 shows a block diagram of the DPD algorithm with the incorporation of the auditory filter bank and the proposed direct lateral angle estimation. The localization process is summarized in Algorithm 1.
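A minimal Python sketch of the lateral steering-vector construction of Eqs. (24)-(26) and the 1D MUSIC search of Eq. (27) is given below; the array layout of the resampled HRTF set is an assumption made for the example.

```python
import numpy as np

def lateral_steering_vectors(H_cones):
    """Sketch: H_cones has shape (n_lateral, 2, N) and holds, for each lateral
    angle, the 2 x N matrix of HRTF steering vectors of all intraconic
    directions on that cone (at a single frequency). Returns, per lateral
    angle, the dominant left singular vector u_1(theta')."""
    u1 = [np.linalg.svd(H, full_matrices=False)[0][:, 0] for H in H_cones]
    return np.stack(u1)                                   # shape (n_lateral, 2)

def lateral_music_spectrum(q_n, u1):
    """Sketch of Eq. (27): 1D MUSIC spectrum over lateral angles for a
    selected TF bin with noise singular vector q_n (length 2)."""
    denom = np.abs(u1.conj() @ q_n) ** 2 + 1e-12
    return 1.0 / denom                                    # peak gives the lateral estimate
```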
## 5 Simulation study
This section studies, through simulations, the performance of two DOA estimation methods for binaural arrays, and compares the results to those obtained when incorporating the proposed auditory processing, under different reverberation and background noise conditions. The effects of the auditory filter bank and the direct lateral search on the accuracy of DOA estimation are investigated. The selected methods for comparison include the DPD method (9), as presented in this paper, and the joint estimation (JE) method proposed by Raspaud (31). These were selected as examples of methods that incorporate STFT computation and angle estimation searches based on HRTFs.
### Simulation setup
The simulation setup includes a single speaker in a room, recorded by a binaural array. The room is rectangular, with varying dimensions, the speaker is represented by a point source, and the binaural array is simulated using a model of HRTFs from the Neumann KU-100 manikin (32).
In order to calculate the microphone signals, the room impulse responses are first computed using the image method (33),
Figure 3: Block diagram of the updated DPD algorithm with the incorporation of the auditory filter bank and the direct lateral angle estimation
and then convolved with a speech signal of duration about 5 s, sampled at 48 kHz. After calculating the binaural signal, a Gaussian white noise source is added, in the form of sensor noise.
As there are no agreed benchmarks for binaural localization, and performance varies greatly based on the environment, we will run 500 simulations under different conditions so that a diverse set of conditions is generated, in order to obtain representative results. In each simulation, the room size, the speaker, and the distance between the speaker and the array are chosen randomly from a defined set of options. The room size options are \(5\times 10\times 8\) m, \(9\times 7\times 5\) m and \(8\times 5\times 3\) m, the speaker set, taken from the TIMIT database (34), includes 2 male and 2 female speakers, and the distance options between the speaker and the array are 0.5, 1 and 2 times the critical distance. The array position is chosen randomly within the boundaries of the room, and the speaker position is determined by the DOA, chosen randomly, and by the speaker-array distance.
Several options of signal-to-noise ratios (SNRs) and reverberation times (\(T_{60}\)) are generated. The SNR ranges from a very low value of -5 dB to a high value of 15 dB in steps of 5 dB, and \(T_{60}\) ranges from a medium reverberation time of 0.4 s to a high reverberation time of 0.8 s in steps of 0.2 s.
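For reference, a minimal Python sketch of the signal generation used in each simulation run is shown below; the per-channel SNR definition and variable names are assumptions, and only the overall procedure (RIR convolution followed by additive white noise) follows the description above.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_binaural(speech, rir_left, rir_right, snr_db, rng=None):
    """Sketch: convolve dry speech with the left/right room impulse responses
    and add white sensor noise at the requested SNR."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.stack([fftconvolve(speech, rir_left), fftconvolve(speech, rir_right)])
    noise_power = np.mean(p ** 2) / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(scale=np.sqrt(noise_power), size=p.shape)
    return p + noise
```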
### Methodology
DOA estimation performance is compared for the two methods, and when incorporating STFT versus auditory filters, and a 2D search versus a 1D lateral search.
For both methods, in the first stage of processing both STFT and auditory filters are applied. The STFT is computed with an FFT size of 1536 samples (32 ms) and a Hanning window with 50% overlap. The auditory filters were computed with 42 frequency channels logarithmically spaced according to the ERB scale, between 60 to 6000 Hz, and sampled in time according to Eq. (21). Performance of the methods based on the auditory filter bank will be referenced as AFB.
The DPD-test method is computed according to Section 2. The spatial correlation matrix is estimated according to Eq. 5 by averaging bins over time and over frequency. The number of bins in time is such that the averaging interval is equal to 64 ms, and the number of bins in frequency is equal to 2. The threshold in Eq. 8 is determined such that 5% of the bins pass the test. In addition, TF bins from frequencies below 1 kHz and beyond 6 kHz were excluded due to poor performance in those frequency regions (10). For every TF bin that passed the test, the MUSIC spectrum is calculated as in Eq. 9, with the spectrum peak providing the DOA estimate.
According to the second DOA estimation method (the JE method), the ITD and the ILD are first estimated from the STFT or from the AFB. The time averaging scheme is the same as in the DPD method. The DOA is estimated by comparing the binaural cues from the STFT or AFB with the corresponding cue computed from the HRTF as a reference. There is no methodology for selecting good TF bins in the JE method because it was not originally designed to be reverberation robust. Therefore, the same bins from the DPD method were used here for the lateral angle estimation.
DOA estimation for both methods is computed in two ways. First, a complete 2D search is performed to estimate both azimuth and elevation, from which the lateral angle is computed (18). Second, as proposed in this paper, a 1D search is performed directly for the lateral angle, with the derived lateral steering vectors, as in Eq. 26.
Finally, the lateral estimation is computed as the mean over all TF bins that passed the test, according to Eq. 10. The root mean squared error (RMSE) is then calculated for each condition of the simulation.
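When the 2D search is used, the azimuth/elevation estimate has to be mapped to a lateral angle; one common conversion is sketched below, with the convention (assumed here, following the usage in this paper) that the frontal direction corresponds to a lateral angle of \(90^{\circ}\) and the poles of the interaural axis to \(0^{\circ}\) and \(180^{\circ}\).

```python
import numpy as np

def lateral_from_az_el(azimuth_deg, elevation_deg):
    """Sketch: lateral angle of the interaural coordinate system from an
    azimuth (measured from the front, in the horizontal plane) and elevation."""
    az, el = np.deg2rad(azimuth_deg), np.deg2rad(elevation_deg)
    return np.rad2deg(np.arccos(np.sin(az) * np.cos(el)))
```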
### STFT vs AFB
In the first study, the performance of the methods when using STFT and AFB is compared. Figure 4 presents examples of STFT and AFB magnitude for clean speech, showing the spread of energy over both frequency and time in both representations.
Fig. 5 presents the RMSE of the lateral angle estimation for both the DPD and JE methods, averaged over all conditions. Figures (a), (d) represent the results for varying SNR;
Figure 4: (a) STFT magnitude of clean speech (b) AFB magnitude of clean speech
(b), (e) for varying \(T_{60}\); and (c), (f) for varying lateral angle direction, for both STFT and AFB. In both cases a 2D search was used. As expected, the figure shows that the error increases with longer reverberation time, lower SNR, and lateral directions away from the front (90\({}^{\circ}\)) (35). The AFB seems to perform better than the STFT - the RMSE for the AFB is significantly lower than for the STFT, especially under worse conditions, i.e., low SNR, high \(T_{60}\) and lateral directions away from the front. In Fig. 5, the upper row (a-c) presents results for the DPD method and the lower row (d-f) presents results for the JE method. Both methods exhibit similar trends. In summary, frequency analysis with AFB seems to outperform the STFT for both methods, motivating the use of AFB for DOA estimation.
### 2D vs 1D angle search
In this study, the performance of the lateral angle estimation with 2D and 1D searches is compared. For this purpose, the lateral steering vectors are computed as in Eq. 26. Figure 7 presents the effective rank (36) of the steering matrix \(\mathbf{H}(\theta^{\prime},\omega)\) defined in Eq. (24) in the lateral angle - frequency domain. The figure shows that the effective rank is close to 1 in most lateral angle - frequency regions, due to the HRTF similarity within a cone. This supports the formulation of lateral angle steering vectors as in Eq. 24, which are based on a rank-1 approximation.
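For completeness, a small sketch of the entropy-based effective rank used above (assumed here to follow the usual definition of (36)) is given below.

```python
import numpy as np

def effective_rank(H):
    """Sketch: effective rank of a matrix, exp of the Shannon entropy of its
    normalised singular values."""
    s = np.linalg.svd(H, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-np.sum(p * np.log(p))))
```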
Fig. 6 presents the RMSE of the lateral angle estimation in a way that is similar to Fig. 5, but for 2D and 1D angle searches. The AFB is used in this case as it outperformed the STFT, as presented in the previous section. In addition to being more computationally efficient due to the dimension reduction in the search grid, the 1D search is shown in the figure to outperform the 2D search with respect to RMSE for the DPD method.
For the JE method, similar results are obtained for the two search methods, while the 1D search is still preferred in terms of computation complexity. The latter can be explained by the use of ITD and ILD in this method, which inherently maps the lateral angle. In summary, the 1D search outperformed the 2D search for the DPD method, which is based on steering vectors, while for the JE method the 1D search only incorporated a more efficient search.
### Computational complexity
This section aims to study the computational efficiency of the two search methods. The total running time of the entire algorithm, as detailed under Algorithm 1, is used as a measure of computational complexity. The methods were implemented in MATLAB (2022 version), running on a MacBook with 16 GB RAM and a 2.2 GHz Intel Core i7 processor. The average running times for a single realization were measured to be 867 ms
Figure 5: RMSE of the lateral estimation with the DPD and JE methods, using 2D angle search. Figure (a) shows the DPD method for different SNR (with randomly taken lateral angles and an average \(T_{60}\) of 0.6 s). Figure (b) shows the DPD method for different \(T_{60}\) (with randomly taken lateral angles and an average SNR of 5 dB). Figure (c) shows the DPD method for different lateral angles (with an average SNR of 5 dB and an average \(T_{60}\) of 0.6 s). Figures (d-f) follow the same format as (a-c) respectively, but use the JE method instead.
for the 2D search and just 37 ms for the 1D search method, for a 5-second speech segment. This underlines the significantly lower computational demand of the 1D search.
## 6 Experimental study with BRIR data
This section presents an experimental study based on measured binaural room impulse responses (BRIRs), aiming to validate the theory and simulation results.
### Setup
The experiment is based on a dataset derived from a library of BRIR, captured in a controlled environment at the University of Salford (37). The experiment was conducted in a room of dimensions \(5.8\times 6.6\times 2.8\) m, with average reverberation time of 0.27s, and under signal-to-noise ratio of 90 dB. The recordings were performed using a sample rate of 48kHz.
In the dataset, BRIRs were measured with the sound source (loudspeaker) positioned at various directions relative to the KEMAR manikin that was used to capture the binaural signals. A directional resolution of \(2^{\circ}\) was used along the azimuth. The sound source was positioned in the horizontal plane, leading to an elevation of \(0^{\circ}\) between the source and the manikin. Among the 15 distinct manikin positions, the central room position was chosen for this study. In this specific configuration, the distance between the source and the manikin was consistently 2.1 m.
To recreate audio signals that would have been captured in the room, audio files from the TIMIT database (34) were used. For each realization, we randomly selected a speaker from a group of 2 male and 2 female speakers, consistent with the approach in the simulation setup (Section 5.1). The chosen audio signals were then convolved with the BRIRs to compute binaural signals. Then, Gaussian white noise was added to the binaural signals, with a signal-to-noise ratio of 5 dB, in a way similar to the simulation study, to produce more realistic noisy signals.
Figure 6: RMSE of the lateral estimation with the DPD and JE methods, using the AFB. Figure (a) shows the DPD method for different SNR (with randomly taken lateral angles and an average \(T_{60}\) of 0.6 s). Figure (b) shows the DPD method for different \(T_{60}\) (with randomly taken lateral angles and an average SNR of 5 dB). Figure (c) shows the DPD method for different lateral angles (with an average SNR of 5 dB and an average \(T_{60}\) of 0.6 s). Figures (d-f) follow the same format as (a-c) respectively, but use the JE method instead.
Figure 7: Effective rank of steering matrix \(\mathbf{H}(\theta^{\prime},\omega)\), defined in Eq. (24)
The following results represent an aggregation of error analyses across different realizations taken from varying source locations and azimuth directions within the room, in a way similar to the methodology described in the simulation study (Section 5.2).
### STFT vs AFB
In this section, the performance of the algorithm with the STFT and the AFB is compared using the experimental data. Note that the SNR and reverberation time are fixed in this case. Fig. 8 presents the RMSE of the lateral angle estimation for both the DPD and JE methods. The results show that the error with the AFB is lower, especially for lateral directions away from the front (\(90^{\circ}\)) (\(35\)), and especially for the DPD algorithm. The similarity between the simulated (Figs. 5(c) and 5(f)) and experimental (Fig. 8) results further validates the effectiveness of the AFB approach in DOA estimation.
### 2D vs 1D angle search
Fig. 9 illustrates the RMSE of the lateral angle estimation for both the 2D and 1D angle searches with the experimental data, for the DPD and JE methods. The results show that the trends align with the simulation findings in Figs. 6(c) and 6(f). The 1D search shows better results with the DPD method, in particular for the extreme lateral directions. However, for the JE method, the difference between the two searches seems less significant, with a slight advantage for the 2D search. Overall, this is consistent with the simulation results. With the 1D search providing similar or better performance compared to the 2D search, its lower computational cost becomes an advantage.
Figure 8: RMSE of the lateral estimation with the DPD and JE methods, using 2D angle search. The figures show the results for different lateral angles with a \(T_{60}\) of 0.27 s and SNR of 5 dB. Figure (a) presents results for the DPD method. Figure (b) presents results for the JE method.
Figure 9: RMSE of the lateral estimation with the DPD and JE methods, using the AFB. The figures show the results for different lateral angles with a \(T_{60}\) of 0.27 s and SNR of 5 dB. Figure (a) presents results for the DPD method. Figure (b) presents results for the JE method.
## 7 Conclusions
This paper proposed and investigated new alternatives for processing in binaural DOA estimation, which included the incorporation of an auditory filter bank and direct lateral angle estimation. These processing alternatives have been theoretically developed and incorporated into the DPD and the JE methods as examples. The proposed alternatives outperformed the original methods in most cases. The study suggests that improved performance is achieved with the proposed alternatives in terms of DOA estimation error and computational efficiency, and that they can also be generalized to other binaural localization methods.
## Acknowledgments
This research was supported by THE ISRAEL SCIENCE FOUNDATION under Grant 966/18.
|
2304.00077 | Decentralized Attack Search and the Design of Bug Bounty Schemes | Systems and blockchains often have security vulnerabilities and can be
attacked by adversaries, with potentially significant negative consequences.
Therefore, infrastructure providers increasingly rely on bug bounty programs,
where external individuals probe the system and report any vulnerabilities
(bugs) in exchange for rewards (bounty). We develop a simple contest model of
bug bounty. A group of individuals of arbitrary size is invited to undertake a
costly search for bugs. The individuals differ with regard to their abilities,
which we capture by different costs to achieve a certain probability to find
bugs if any exist. Costs are private information. We study equilibria of the
contest and characterize the optimal design of bug bounty schemes. In
particular, the designer can vary the size of the group of individuals invited
to search, add a paid expert, insert an artificial bug with some probability,
and pay multiple prizes. | Hans Gersbach, Akaki Mamageishvili, Fikri Pitsuwan | 2023-03-31T19:00:30Z | http://arxiv.org/abs/2304.00077v2 | # Decentralized Attack Search and the Design of Bug Bounty Schemes+
###### Abstract
Systems and blockchains often have security vulnerabilities and can be attacked by adversaries, with potentially significant negative consequences. Therefore, organizations and blockchain infrastructure providers increasingly rely on bug bounty programs, where external individuals probe the system and report any vulnerabilities (bugs) in exchange for monetary rewards (bounty). We develop a contest model for bug bounty programs with an arbitrary number of agents who decide whether to undertake a costly search for bugs or not. Search costs are private information. Besides characterizing the ensuing equilibria, we show that even inviting an unlimited crowd does not guarantee that bugs are found. Adding paid agents can increase the efficiency of the bug bounty scheme although the crowd that is attracted becomes smaller. Finally, adding (known) bugs increases the likelihood that unknown bugs are found, but to limit reward payments it may be optimal to add them only with some probability.
**Keywords:** Contest Design, Equilibrium, Bug Bounty
**JEL Classification:** D82, C72, H41
## 1 Introduction
Software often has security vulnerabilities and can be attacked by adversaries, with potentially significant negative social or economic consequences. This is particularly critical for blockchain infrastructure providers, since such projects do not have dedicated security teams testing software upgrades. Once the software is deployed, there is no turning back or any legal defense mechanism against system exploitation.1 Therefore, such projects rely on public intrusion tests where everyone is allowed to probe the software and report any vulnerabilities (bugs) in exchange for monetary rewards (bounty).2 This type of program, often called _bug bounty_ or _crowdsourced security_, has become a major tool for detecting software vulnerabilities on blockchains through publicly announced bug bounties (Breidenbach et al. (2018) studies the evolution and design of these programs). Bug bounty programs are also used by governments and tech companies.3
Footnote 1: At least until the next hard fork.
Footnote 2: Participants are often called _ethical hackers_, _white-hats_, or _security researchers_.
Footnote 3: The success in recent years has led the authority to systematically adopt bug bounty programs as a main measure in government cybersecurity. The Federal Council of Switzerland states in a recent press release that “standardised security tests are no longer sufficient to uncover hidden loopholes. Therefore, in the future, it is intended that ethical hackers will search through the Federal Administration’s productive IT systems and applications for vulnerabilities as part of so-called bug bounty programmes.” (Federal Department of Finance, 2022)
There have been comprehensive accounts on the rules of engagement of bug bounty programs (Laszka et al., 2018), on the effectiveness and best practices of such programs (Walshe and Simpson, 2020; Malladi and Subramanian, 2020), and on the incentives of researchers to participate in bug bounty schemes (Maillart et al., 2017). In this paper, we offer insights on some of the dimensions of bug bounty design, using a game-theoretic model of a simple contest building on the important work of Ghosh and Kleinberg (2016) and Sarne and Lepioshkin (2017), where agents with different abilities decide on whether or not to exert costly effort for finding bugs.
Several salient features of bug bounty differentiate our approach from that of the standard optimal contest literature. The design objective in traditional contests is to elicit the highest effort (or sum of efforts) from the contestants. This can be done by appropriately splitting up the prize (Moldovanu and Sela, 2001), choosing a suitable reserve effort (Chawla et al., 2019), setting an entry fee (Taylor, 1995) or by developing a revelation mechanism to select a subset of contestants from a pool of candidates (Mercier, 2018).
We focus on the simple problem of how to maximize the likelihood of finding bugs when a given amount of money is available for rewards. In particular, we will focus on three design variables for bug bounty systems. How large should the crowd of agents invited to find bugs be? Should paid experts be added to the crowd of invited bug finders? Should artificial bugs be added to the software to increase participation in bug finding and to increase the likelihood that the real bug is found?
To answer these questions and other, general questions about the nature of equilibria in bug bounty schemes, we develop a simple model of crowd-sourced security. A group of individuals of arbitrary size is invited to search for a bug. Whether a bug exists is uncertain. The individuals differ with regard to their abilities to find bugs, which we capture by different costs to achieve a certain probability to find the bug if it exists. Costs are private information. The designer of the bug bounty scheme offers a prize for the individual or the set of individuals who find the bug. The designer can vary the size of the group of individuals invited to find a bug, can add a paid expert to the crowd, and can insert an artificial bug with some probability.
We obtain the following results. First, we establish that any equilibrium strategy must be a threshold strategy, i.e. only agents with a cost of search below some (potentially individual) threshold participate in the bug bounty scheme. Second, we provide sufficient conditions for the equilibrium to be unique and symmetric. Third, we show that even inviting an unlimited crowd does not guarantee that bugs are found, unless there are agents who have zero costs, or equivalently have intrinsic gains from participating in the scheme. It may even happen that having more agents in the pool of potential participants lowers the probability of finding a bug. Fourth, adding paid agents can increase the efficiency of the bug bounty scheme, although the crowd that is attracted becomes smaller. Fifth, we illustrate how adding (known) bugs is another way to increase the likelihood that unknown bugs are found. When the additional costs of paying rewards are taken into account, it can be optimal to insert a known bug only with some probability. Finally, we illustrate the equilibria when costs are distributed uniformly and identify circumstances when asymmetric equilibria occur.
The paper is organized as follows: In the next section, we introduce the model. In Section 3, we characterize the equilibria and derive their properties for finding bugs. In Section 4, we provide extensions when experts or artificial bugs are added and when multiple prizes are awarded. We also discuss the existence and nature of asymmetric equilibria. In Section 5, we illustrate the results when the costs are uniformly distributed. Section 6 concludes. The proofs can be found in the Appendix.
## 2 Model
There are \(n\geq 2\) agents invited to search for a bug. Denote the set of agents by \(N=\{1,\ldots,n\}\) and let \(\mathbf{s}=(s_{1},\ldots,s_{n})\in\{0,1\}^{n}\) denote the action profile of the agents, where \(s_{i}=1\) if agent \(i\) searches, and otherwise \(s_{i}=0\). If agent \(i\) decides to search, \(i\) finds the bug with probability \(q\in(0,1]\) at a random time \(t_{i}\), uniformly distributed over the possible search time \([0,T]\), where \(T\) is the maximal time for a search; otherwise, \(i\) does not find the bug. These arrival times \(t_{i}\) are stochastically independent across agents. For simplicity, we assume that a bug exists, but the model can be reinterpreted as a model
in which a bug exists with some probability.
A search is costly. If \(s_{i}=1\), agent \(i\) incurs a cost \(c_{i}\) which is private information and drawn from a continuous distribution \(F\) with a corresponding probability density \(f\) and support \([\underline{c},\overline{c}]\), \(0\leq\underline{c}<\overline{c}\leq\infty\).4 We consider the case of a winner-takes-all contest where only the first agent to find the bug receives a prize \(V>0\).5 If two (or more) agents find the bug at the same time, they share the prize. Yet since the bug-finding arrival time is uniformly distributed and stochastically independent across a discrete number of agents, the probability that this happens is zero and thus this event can be neglected. The assumption also implies that agents who decide to search have the same probability to win the contest.
Footnote 4: The model can be extended to allow for \(\underline{c}<0\).
Footnote 5: We consider multiple prizes in an extension and show that the winner-takes-all contest induces the highest level of participation by the agents.
We write \(\boldsymbol{s}_{-i}=(s_{1},\ldots,s_{i-1},s_{i+1},\ldots,s_{n})\), and let \(S=\sum_{j}s_{j}\) and \(S_{-i}=\sum_{j\neq i}s_{j}\) denote the total number of agents who search and the total number of agents other than agent \(i\) who search, respectively. Given the set up, the payoff of agent \(i\) is given by
\[u_{i}(s_{i},\boldsymbol{s}_{-i},c_{i})=s_{i}\left(p_{i}(\boldsymbol{s}_{-i}) V-c_{i}\right), \tag{1}\]
where
\[p_{i}(\boldsymbol{s}_{-i})\equiv q\sum_{t=0}^{S_{-i}}{S_{-i}\choose t}q^{t}(1 -q)^{S_{-i}-t}\frac{1}{t+1} \tag{2}\]
is the probability that agent \(i\) is the first agent to find the bug conditional on searching. Given an action profile \(\boldsymbol{s}\), let \(B(\boldsymbol{s})\) be the event that the bug is found. An important quantity is the probability that the bug is found, \(\Pr(B(\boldsymbol{s}))=1-(1-q)^{S}\), which depends on the total number of agents participating in the search.
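As a sanity check of Eq. (2), the following short sketch (ours, not part of the paper) verifies numerically that the winning probability of a searching agent equals the probability that the bug is found by the \(S_{-i}+1\) searchers divided by their number, which is the interpretation used for \(\Phi\) below.

```python
# Minimal sketch (ours): evaluate p_i from Eq. (2) and check that it equals
# Pr(bug found by the S searchers) / S, with S = S_{-i} + 1.
from math import comb

def p_i(q: float, s_minus_i: int) -> float:
    return q * sum(comb(s_minus_i, t) * q**t * (1 - q)**(s_minus_i - t) / (t + 1)
                   for t in range(s_minus_i + 1))

for q in (0.2, 0.5, 1.0):
    for s_minus_i in (0, 1, 4, 9):
        s = s_minus_i + 1
        assert abs(p_i(q, s_minus_i) - (1 - (1 - q)**s) / s) < 1e-12
```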
A strategy profile is denoted \(\boldsymbol{\sigma}=(\sigma_{1},\ldots,\sigma_{n})\), where a strategy \(\sigma_{i}:[\underline{c},\overline{c}]\rightarrow\{0,1\}\) maps an agent's private information to an action. We write \(\sigma\) for the symmetric strategy profile \((\sigma,\ldots,\sigma)\) when there is no risk of confusion and adopt the usual notational convention for \(\boldsymbol{\sigma}(\boldsymbol{c})\), \(\boldsymbol{\sigma}_{-i}\), \(\boldsymbol{c}_{-i}\), and \(\boldsymbol{\sigma}_{-i}(\boldsymbol{c}_{-i})\). Given a strategy profile \(\boldsymbol{\sigma}\), the ex-ante probability that the bug is found is then \(\mathbb{E}[\Pr(B(\boldsymbol{\sigma}(\boldsymbol{c})))]\).
An important class of strategies is threshold strategies. A _threshold strategy_ with threshold \(\hat{c}\), denoted by \(\sigma_{\hat{c}}\), is characterized by
\[\sigma_{\hat{c}}(c_{i})=\left\{\begin{array}{ll}1&\mbox{if}\ \ \ c_{i}\leq\hat{c}\\ 0&\mbox{if}\ \ c_{i}>\hat{c}\end{array}\right..\]
A threshold strategy profile is denoted \(\boldsymbol{\sigma}_{\hat{c}}=(\sigma_{\hat{c}_{1}},\ldots,\sigma_{\hat{c}_{n }})\) for some threshold vector \(\hat{\boldsymbol{c}}=(\hat{c}_{1},\ldots,\hat{c}_{n})\). The ex-ante probability that the bug is found under a threshold strategy
profile is then
\[\mathbb{E}[\Pr(B(\mathbf{\sigma_{\hat{c}}}(\mathbf{c})))]=1-\prod_{i}(1-qF(\hat{c}_{i})).\]
If all agents use the same threshold strategy \(\sigma_{\hat{c}}\), the ex ante probability that the bug is found becomes
\[P(\hat{c},q,n)\equiv 1-(1-qF(\hat{c}))^{n},\]
which we shall call the _probability of success_.
A strategy profile \(\mathbf{\sigma}^{*}\) is a _Bayes Nash Equilibrium_ (BNE) if for all \(i\), \(c\), and \(s_{i}\),
\[\mathbb{E}[u_{i}(\sigma_{i}^{*}(c_{i}),\mathbf{\sigma}_{-i}^{*}(\mathbf{c}_{-i}),c_{i} )|c_{i}=c]\geq\mathbb{E}[u_{i}(s_{i},\mathbf{\sigma}_{-i}^{*}(\mathbf{c}_{-i}),c_{i})|c _{i}=c].\]
## 3 Equilibrium Analysis
This section analyzes the game. We offer a characterization of the equilibrium, discuss some important comparative statics, and examine the limit behaviors of the game as the number of agents grows large.
### Equilibrium Characterization
We proceed as follows. First, we establish that any equilibrium strategy must be a threshold strategy. Second, we show that if the threshold cost vector is interior, then they must satisfy a system of indifference conditions. Third, we propose a set of conditions for the equilibrium to be unique and symmetric. Lastly, we derive a simple and intuitive fixed-point condition for the unique equilibrium.
The first result states that the equilibrium strategies are threshold strategies.
**Proposition 1**.: \(\mathbf{\sigma}^{*}=\mathbf{\sigma}_{\mathbf{c}^{*}}\) _for some threshold vector \(\mathbf{c}^{*}=(c_{1}^{*},\ldots,c_{n}^{*})\)._
Consequently, we can analyze the game as if the strategies are the thresholds, and characterizing the equilibrium strategies then boils down to characterizing the _equilibrium threshold vector_, \(\mathbf{c}^{*}=(c_{1}^{*},\ldots,c_{n}^{*})\). Suppose further that the equilibrium threshold vector is interior, \(c_{i}^{*}\in(\underline{c},\overline{c})\) for all \(i\). Then, it must satisfy the following system of indifference conditions: for all \(i\),
\[c_{i}^{*}=V\Psi(\mathbf{c}_{-i}^{*}), \tag{3}\]
where the function \(\Psi:[\underline{c},\overline{c}]^{n-1}\rightarrow\mathbb{R}\) is given by
\[\Psi(\hat{\mathbf{c}}_{-i})\equiv q\sum_{K\subseteq N\setminus\{i\}}\left\{\prod_ {j\in K}F(\hat{c}_{j})\prod_{j\notin K}(1-F(\hat{c}_{j}))\left[\sum_{t=0}^{|K| }\binom{|K|}{t}q^{t}(1-q)^{|K|-t}\frac{1}{t+1}\right]\right\}. \tag{4}\]
Indeed, \(\Psi(\hat{\mathbf{c}}_{-i})\) denotes the probability that agent \(i\) will be the winner given that the other \(n-1\) agents deploy threshold strategies characterized by some threshold vector \(\hat{\mathbf{c}}_{-i}\).6 The condition in (3) then equates the cost and the expected benefits of search for each agent, characterizing the threshold cost such that the agent is indifferent between searching and not searching for the bug. The following proposition states some important properties of \(\Psi\).
Footnote 6: \(\Psi(\hat{\mathbf{c}}_{-i})\) is in fact the expectation over the cost distribution of \(p_{i}(\mathbf{s}_{-i})\) given that other agents follow threshold strategies.
**Proposition 2**.: _The following holds_
* \(\Psi(\hat{c}_{1},\ldots,\hat{c}_{i-1},\hat{c}_{i+1},\ldots,\hat{c}_{n})=\Psi( \hat{c}_{\pi(1)},\ldots,\hat{c}_{\pi(i-1)},\hat{c}_{\pi(i+1)},\ldots,\hat{c}_{ \pi(n)})\) _for any permutation_ \(\pi\)_,_
* \(\partial\Psi(\hat{\mathbf{c}}_{-i})/\partial c_{j}<0\) _for all_ \(j\) _and all_ \(\hat{\mathbf{c}}_{-i}\in[\underline{c},\overline{c}]^{n-1}\)_,_
* \(\Psi(\underline{c},\ldots,\underline{c})=q\) _and_ \(\Psi(\overline{c},\ldots,\overline{c})=\frac{1-(1-q)^{n}}{n}\)_._
The first property says that \(\Psi\) is symmetric. The identity of the agents does not matter because agents are ex ante symmetric. The second property is that \(\Psi\) is strictly decreasing in all its arguments. It holds because higher thresholds adopted by other agents increase their search probability and in turn lower agent \(i\)'s probability of winning the prize. To facilitate a sharper prediction, we now impose two assumptions on \(\Psi\).
**Assumption 1**.: \(\partial\Psi(\hat{\mathbf{c}}_{-i})/\partial c_{j}\neq-1/V\) _for all \(j\) and all \(\hat{\mathbf{c}}_{-i}\in[\underline{c},\overline{c}]^{n-1}\)._
**Assumption 2**.: \(\underline{c}<V\Psi(\underline{c},\ldots,\underline{c})=qV\) _and \(V\frac{1-(1-q)^{n}}{n}=V\Psi(\overline{c},\ldots,\overline{c})<\overline{c}\)._
The first assumption ensures that the equilibrium is unique. Note that since the choice of a threshold is effectively agent \(i\)'s strategy, the function \(V\Psi(\mathbf{c}_{-i})\) can be interpreted as agent \(i\)'s best-response function given the thresholds chosen by the other agents. Assumption 1 then demands that this best-response function has a slope that is never equal to \(-1\). This guarantees that best-response functions cross only once, resulting in a unique equilibrium. Assumption 2 restricts the parameter values to ensure that the solution to the system of indifference conditions in (3) is interior. With these two assumptions, we now characterize the unique equilibrium of the bug bounty game. To this end, define \(\Phi:[\underline{c},\overline{c}]\times(0,1]\times\mathbb{N}\to\mathbb{R}\) by
\[\Phi(\hat{c},q,n)\equiv\frac{P(\hat{c},q,n)}{nF(\hat{c})}=\frac{1-(1-qF(\hat{c }))^{n}}{nF(\hat{c})} \tag{5}\]
if \(\hat{c}>\underline{c}\) and \(\Phi(\underline{c},q,n)\equiv q\). Indeed, \(\Phi\) is the probability that agent \(i\) wins given that all other agents use the same threshold strategy. In other words, \(\Phi\) is the "slice" of \(\Psi\) along the "diagonal", i.e. when the arguments of \(\Psi\) are all the same. As defined in (5), \(\Phi\) has
an intuitive interpretation in that it is the probability that the bug is found, divided by the expected number of agents who search. The reason is that if the bug is found at all, then the agents participating in the search have the same chance to obtain the reward. We obtain
**Proposition 3**.: _Under Assumption 1 and Assumption 2, the unique equilibrium is \(\mathbf{\sigma}_{c^{*}}\). The equilibrium threshold \(c^{*}\equiv c^{*}(V,q,n)\in(\underline{c},\overline{c})\) is the solution to_
\[c^{*}=V\Phi(c^{*},q,n). \tag{6}\]
We henceforth refer to \(\mathbf{\sigma}_{c^{*}}\) simply as the equilibrium. To ease exposition, we suppress explicit dependence of \(c^{*}\) and \(\Phi\) on \(V\), \(q\), and \(n\) when appropriate. Condition (6) is a special case of (3). It is an indifference condition capturing the fact that in an equilibrium, an agent of type \(c^{*}\) must be indifferent between searching and not searching. The left-hand side is the cost of the search and the right-hand side is the expected reward: \(V\) times \(\Phi\).
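To make the fixed-point condition (6) concrete, here is a small numerical sketch (our own illustration, not the authors' code) that solves \(c^{*}=V\Phi(c^{*},q,n)\) by bisection: since \(V\Phi(c)-c\) is strictly decreasing in \(c\) and changes sign on \((\underline{c},\overline{c})\) under Assumption 2, bisection converges to the unique root. The uniform cost distribution on \([0,1]\) is used only as a demo choice.

```python
# Minimal sketch (ours): solve c* = V * Phi(c*, q, n) of Eq. (6) by bisection.
def phi(c, q, n, F):
    Fc = F(c)
    if Fc < 1e-12:          # boundary case Phi(c_underbar) = q
        return q
    return (1 - (1 - q * Fc)**n) / (n * Fc)

def equilibrium_threshold(V, q, n, F, lo=0.0, hi=1.0):
    for _ in range(100):    # g(c) = V*Phi(c) - c is strictly decreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if V * phi(mid, q, n, F) > mid else (lo, mid)
    return 0.5 * (lo + hi)

F_uniform = lambda c: c                  # demo choice: F ~ U[0, 1]
c_star = equilibrium_threshold(V=1.0, q=0.5, n=10, F=F_uniform)
print(c_star, 1 - (1 - 0.5 * F_uniform(c_star))**10)   # threshold and success probability P*
```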
### Comparative Statics
We now perform comparative statics of the equilibrium. For this purpose, we first state the properties of \(\Phi(c,q,n)\). The properties of \(c^{*}(V,q,n)\) then ensue since \(c^{*}\) is the unique fixed point of \(V\Phi(c,q,n)\). We obtain the following comparative statics results for \(c^{*}\).
**Proposition 4**.: \(\Phi(c,q,n)\) _is strictly decreasing in \(c\) and strictly increasing in \(q\). For \(c>\underline{c}\), \(\Phi(c,q,n)\) is strictly decreasing in \(n\). The equilibrium threshold \(c^{*}(V,q,n)\) is_
1. _increasing in_ \(V\)_,_
2. _increasing in_ \(q\)_, and_
3. _decreasing in_ \(n\)_._
The results are intuitive. If the prize \(V\) is increased, agents have more incentive to search. Agents with higher cost will now search when they otherwise would not. The same is true for when \(q\), the probability that the bug is found conditioning on search, increases. Lastly, more agents intensify competition for the bug search, which lowers the probability that an agent wins the prize. Figure 1 illustrates how \(V\Phi\) changes with \(V\), \(q\), and \(n\). Furthermore, Figure 1 demonstrates the comparative statics of the equilibrium threshold \(c^{*}(V,q,n)\), which is the fixed point of \(V\Phi(c,q,n)\). Panel (a) of Figure 1 shows that for \(V^{\prime}<V^{\prime\prime}\), \(V\Phi\) as a function of \(c\) shifts up with \(V\), keeping \(q\) and \(n\) constant. Consequently, we have that \(c^{*}(V^{\prime})<c^{*}(V^{\prime\prime})\). Panel (b) illustrates the case for \(q^{\prime}<q^{\prime\prime}\). Lastly, panel (c) illustrates that \(V\Phi\) shifts down with \(n\) and thus for \(n^{\prime}<n^{\prime\prime}\), we have \(c^{*}(n^{\prime\prime})<c^{*}(n^{\prime})\).
### Probability of Success
For the design of the bug bounty scheme, the quantity of interest is the probability of success in equilibrium, \(P(c^{*}(V,q,n),q,n)=1-(1-qF(c^{*}(V,q,n)))^{n}\), which we shall denote as \(P^{*}=P^{*}(V,q,n)\) for simplicity.7 How does the equilibrium probability of success vary with the parameters of the model? We have the following result.
Footnote 7: Again, to ease exposition we suppress the arguments of \(P^{*}\) that are kept fixed in the context of the analysis. For example, we write \(c^{*}(n)\) and \(P^{*}(n)\) for the equilibrium threshold and the probability of success in equilibrium, respectively, when there are \(n\) agents, recognizing that \(V\) and \(q\) are fixed.
**Proposition 5**.: \(P^{*}(V,q,n)\) _increases with \(V\) and \(q\), and may increase or decrease with \(n\)._
That \(P^{*}\) increases with \(V\) and \(q\) is straightforward. The comparative statics with respect to \(n\), however, is more interesting. It turns out, rather surprisingly, that the probability of success may decrease or increase with the number of agents \(n\). Intuition suggests that the probability of finding the bug should go up with the number of agents. However, as we have seen, more agents result in heightened competition, which lowers the participation threshold. That is, agents crowd out each other's individual incentives to search. Either force may dominate depending on the specifications of the cost distribution and the parameters of the model.
Since \(P^{*}(n)=1-(1-qF(c^{*}(n)))^{n}\), there are two possible channels in which the crowding-out effect can dominate when \(n\) increases. The first channel operates through the cost distribution \(F\) as it can amplify a decrease in \(c^{*}(n)\). The second channel is direct via a sharp decrease in \(c^{*}(n)\). This happens when \(c^{*}(n)\) starts high, perhaps due to high rewards, so that each subsequent \(c^{*}(n)\) drops sharply relative to the increase in \(n\). The following examples illustrate these two channels.
Figure 1: Comparative statics of \(c^{*}(V,q,n)\).
**Example 1**.: Consider \(F(c)=c^{20}\) for \(0\leq c\leq 1\), and let \(q=1\) and \(V=1\). Table 1(a) shows the numerical values of \(c^{*}(n)\) and \(P^{*}(n)\). The equilibrium thresholds \(c^{*}(n)\) are decreasing in \(n\) as expected. For \(P^{*}(n)\), we see it is decreasing for \(n=2\) to \(n=4\) and increasing for \(n\geq 5\) onward.
Intuitively, Example 1 demonstrates distribution functions for which most individuals are expected to have a cost close to \(1\), and only a few highly talented agents are expected in the pool. Then, enlarging the pool of agents may be detrimental because as the threshold declines, the expected crowd that participates shrinks considerably making it less likely to find the bug. Example 2 considers a uniform cost distribution with high rewards. Since \(V\) is high, the threshold starts near \(1\) and declines sharply relative to the direct effect of having more agents.
**Example 2**.: Consider \(F(c)=c\) for \(0\leq c\leq 1\), and let \(q=1\) and \(V=1.999\). Table 1(b) shows the numerical values of \(c^{*}(n)\) and \(P^{*}(n)\). The equilibrium thresholds \(c^{*}(n)\) are decreasing in \(n\) as expected. For \(P^{*}(n)\), we see it is decreasing for \(n=2\) to \(n=4\) and increasing for \(n\geq 5\) onward.
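The dip-and-rise pattern reported in the two examples can be reproduced with the same bisection idea as in the sketch above; the following self-contained snippet (ours) tabulates \(c^{*}(n)\) and \(P^{*}(n)\) for Example 1 so that the non-monotonicity in \(n\) can be inspected directly.

```python
# Minimal sketch (ours): c*(n) and P*(n) for Example 1 (F(c) = c^20, q = 1, V = 1).
def solve(n, q=1.0, V=1.0, alpha=20.0):
    F = lambda c: c**alpha
    lo, hi = 0.0, 1.0
    for _ in range(200):
        c = 0.5 * (lo + hi)
        fc = F(c)
        phi = q if fc < 1e-12 else (1 - (1 - q * fc)**n) / (n * fc)
        lo, hi = (c, hi) if V * phi > c else (lo, c)
    c = 0.5 * (lo + hi)
    return c, 1 - (1 - q * F(c))**n

for n in range(2, 11):
    c_star, p_star = solve(n)
    print(f"n={n:2d}  c*={c_star:.4f}  P*={p_star:.4f}")
```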
An implication of our analysis is that the designer of the bug bounty system should pay close attention to the number of invited agents to trade off the crowding-out effect of having many agents.
### Large Contests
In this section, we keep all parameters fixed, but vary the number of agents that are invited to the bug bounty system. Throughout the section, we denote by \(c_{n}\equiv c^{*}(n)\) and \(P_{n}\equiv P^{*}(n)\) the equilibrium threshold and the equilibrium success probability when \(n\) agents are invited to participate. We examine the limit behaviors as \(n\to\infty\) and obtain
**Proposition 6**.: _The following holds_
1. _For any_ \(\underline{c}\geq 0\)_, we have_ \(c_{n}\to\underline{c}\)_._
2. _If_ \(\underline{c}=0\)_, then_ \(nF(c_{n})\to\infty\)_. If_ \(\underline{c}>0\)_, assuming_ \(nF(c_{n})\) _converges, we obtain_ \[nF(c_{n})\to\kappa(\underline{c}),\]
_where the constant_ \(\kappa\equiv\kappa(\underline{c})\) _is the unique solution to_ \(\underline{c}=V\frac{1-e^{-q\kappa}}{\kappa}\)_, and_
3. _If_ \(\underline{c}=0\)_, then_ \(P_{n}\to 1\)_. If_ \(\underline{c}>0\)_, then assuming_ \(nF(c_{n})\) _converges, we obtain_ \[P_{n}\to 1-e^{-q\kappa(\underline{c})}.\]
Proposition 6 has important implications for the success of bug bounty schemes. Plausibly \(\underline{c}>0\) as even high-ability agents have to exert effort to find bugs. Then, even inviting an unlimited crowd to find bugs will not guarantee that bugs are found. The reason is that--given the expected intensive competition--only comparatively few agents will decide to participate and the bug is not found with some probability. Yet, if a large group of agents could be invited that are partly intrinsically motivated or driven by reputational concerns, cases with \(\underline{c}=0\) may become possible as well as the prospect that the bug is found with certainty.
We now investigate the tail behavior of \(P_{n}\). In the previous section, we have shown that the probability of success may increase or decrease with the number of agents. In both examples, however, we see that \(P_{n}\) eventually increases for large enough \(n\). This is a general property as we now state.
**Proposition 7**.: _Suppose the cost distribution is such that \(\liminf_{c\to\underline{c}^{+}}\frac{F(c)}{cf(c)}=\delta\), for some \(\delta>0\). Then there exists \(N\) such that for all \(n>N\), \(P_{n}\) is increasing. Suppose further that the cost distribution is such that \(\frac{F(c)}{cf(c)}\) is non-decreasing. Then there exists \(\hat{n}\) such that \(P_{n}\) is decreasing for all \(n<\hat{n}\) and increasing for all \(n>\hat{n}\)._
To see the forces at play,8 treat \(n\) as a continuous variable and calculate
Footnote 8: The full proof and derivation are in the appendix.
\[\frac{\mathrm{d}P_{n}}{\mathrm{d}n}=(1-qF(c_{n}))^{n}\left[\frac{nqf(c_{n})}{ 1-qF(c_{n})}\frac{\mathrm{d}c_{n}}{\mathrm{d}n}-\ln\left(1-qF(c_{n})\right) \right]. \tag{7}\]
From (7), we see that \(\mathrm{d}P_{n}/\mathrm{d}n\geq 0\) if and only if the magnitude of \(\mathrm{d}c_{n}/\mathrm{d}n\), which is negative by Proposition 4, is not too large. Using the equilibrium condition \(c_{n}nF(c_{n})=VP_{n}\), we can derive \(\mathrm{d}c_{n}/\mathrm{d}n\) and show that \(\mathrm{d}P_{n}/\mathrm{d}n\geq 0\) if and only if
\[\frac{(1-qF(c_{n}))\ln\left(1-qF(c_{n})\right)}{-qF(c_{n})}\geq\frac{1}{1+ \frac{F(c_{n})}{c_{n}f(c_{n})}}. \tag{8}\]
Now, \(c_{n}\to\underline{c}\) by Proposition 6 and \(F(c_{n})\to 0\) by continuity of \(F\). The left-hand side of (8) then goes to \(1\) by L'Hopital's rule, and this means that if the right-hand side is bounded away from \(1\), then (8) is eventually satisfied. This leads to the sufficient condition in Proposition 7 on the term \(F(c)/(cf(c))\), which holds for a large class of distributions. For example, for \(F(c)=c^{\alpha}\), \(\alpha>0\) with support on \([0,1]\), we have \(\frac{F(c)}{cf(c)}=\frac{1}{\alpha}>0\). It also holds for the Beta distribution and the exponential distribution.
## 4 Extensions
We provide further analysis of the bug bounty game in this section. First, we investigate how adding a non-strategic agent, interpreted as an expert, alters the equilibrium behavior. Second, we look at how adding a bug to the software can increase incentives for the agents. Third, we extend the analysis to the case of multiple prizes. Lastly, we show how asymmetric equilibria can exist without imposed assumptions.
### Adding Experts
We next examine whether adding an expert will improve bug finding of the enlarged group--crowd plus expert. The tradeoffs are obvious. The crowd will tend to search less, but this may be overcompensated by the expert's search. Thus, suppose there is a non-strategic agent, an expert, who searches regardless of the cost and finds the bug with probability \(q_{e}\in(0,1]\), which is common knowledge. This could arise if the bug bounty system designer outsources the search to an expert and pays for his cost. Note that we do not assume that \(q_{e}\), which we call _expertise_, is larger than \(q\). This allows us to capture the situation in which the internal security team, the "expert", is not necessarily more equipped to find the bugs than the crowd.9 We further suppose that the expert gets rewarded in the same manner as the strategic agents.10
Footnote 9: In fact, this situation is often the case in practice as Malladi and Subramanian (2020) reports: “Systems are becoming complex, and the nature of vulnerabilities is becoming unpredictable, thereby limiting a firm’s ability to trace critical weaknesses. Given this, firms are increasingly leveraging BBPs [bug bounty programs] to crowdsource both discovery and fixing of vulnerabilities.”
Footnote 10: That is, the expert and the strategic agents who found the bug get rewarded with equal probability. This arises if the expert finds the bug, if any, at a random time that is also distributed uniformly on \([0,T]\). An alternative reward scheme is to keep the prize if the expert finds the bug. With this scheme, however, the equilibrium simply solves \(c=V(1-q_{e})\Phi(c)\).
We now characterize the equilibrium of the game with an expert. For ease of exposition, we focus only on symmetric equilibria. Analogous to the original game (bug search without expert), the key quantity is the probability that an agent wins the prize in the game with an expert. To derive this quantity, denoted by \(\Phi^{e}\), we condition the winning probability on two cases: if the expert does not find the bug (with probability \(1-q_{e}\)) and if the expert finds the bug (with probability \(q_{e}\)). After some algebra, we get
\[\Phi^{e}(c,q,q_{e},n)\equiv\Phi(c,q,n)-q_{e}\frac{1-(1-qF(c))^{n}(1+nqF(c))}{n (n+1)qF(c)^{2}} \tag{9}\]
if \(c>\underline{c}\) and \(\Phi^{e}(\underline{c},q,q_{e},n)\equiv q(1-q_{e}/2)\). Now, we assume an analog of Assumption 2 for the game with an expert to ensure the interiority of the equilibrium and obtain
**Proposition 8**.: _Suppose \(\underline{c}<\Phi^{e}(\underline{c},q,q_{e},n)\) and \(\Phi^{e}(\overline{c},q,q_{e},n)<\overline{c}\). Then, the unique symmetric equilibrium of the game with expert \(q_{e}\in(0,1]\) is \(\boldsymbol{\sigma}_{c^{e}}\). The equilibrium threshold
\(c^{e}\equiv c^{e}(V,q,q_{e},n)\in(\underline{c},\overline{c})\) is the solution to_
\[c^{e}=V\Phi^{e}(c^{e},q,q_{e},n). \tag{10}\]
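The closed form \(\Phi^{e}\) in (9), which enters the equilibrium condition (10), can be checked against a brute-force simulation of the sharing rule. The following Monte-Carlo sketch is our own illustration; the parameter values and the uniform cost distribution are arbitrary demo choices rather than values from the paper.

```python
# Minimal sketch (ours): Monte-Carlo check of the closed form Phi^e in Eq. (9).
import random

def phi_e(c, q, q_e, n, F):
    fc = F(c)
    phi = (1 - (1 - q * fc)**n) / (n * fc)
    return phi - q_e * (1 - (1 - q * fc)**n * (1 + n * q * fc)) / (n * (n + 1) * q * fc**2)

def phi_e_mc(c, q, q_e, n, F, trials=200_000):
    total = 0.0
    for _ in range(trials):
        searchers = sum(random.random() < F(c) for _ in range(n - 1))  # other agents who search
        finders = sum(random.random() < q for _ in range(searchers))   # of those, who find the bug
        expert_finds = random.random() < q_e
        # agent i searches and finds with probability q; all finders are equally likely to be first
        total += q / (finders + 1 + expert_finds)
    return total / trials

F = lambda c: c                                   # demo choice: F ~ U[0, 1]
print(phi_e(0.6, 0.5, 0.4, 5, F), phi_e_mc(0.6, 0.5, 0.4, 5, F))
```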
Denote \(c^{e}(q_{e})\) as the equilibrium threshold of the game with expert \(q_{e}\) and \(c^{*}(n)\) as the equilibrium threshold of the original game with \(n\) agents. It follows that \(c^{e}(q_{e})<c^{*}(n)\) since the second term in (9) is positive and thus \(\Phi^{e}<\Phi\) as functions of \(c\). Intuitively, the expert crowds out the search effort of the agents as fewer of them decide to search since the return prospects decline. Moreover, we have that \(\lim_{q_{e}\to 0}c^{e}(q_{e})=c^{*}(n)\) since \(\Phi^{e}\) approaches \(\Phi\) as \(q_{e}\to 0\). This implies that for sufficiently small \(q_{e}\), we have
\[c^{*}(n+1)<c^{e}(q_{e})<c^{*}(n). \tag{11}\]
Furthermore, since the expert is a non-strategic agent who searches regardless of their cost, adding an expert with \(q_{e}=q\) crowds out individuals' search incentives more so than adding an extra agent would. That is, we have that
\[c^{e}(q)<c^{*}(n+1)<c^{*}(n). \tag{12}\]
Together, (11) and (12) imply that there exists a critical expertise \(\hat{q}_{e}\in(0,q)\) such that the equilibrium threshold in the game with an expert is equal to the equilibrium threshold in the game with an additional strategic agent, \(c^{e}(\hat{q}_{e})=c^{*}(n+1)\). By the indifference conditions (6) and (10), we have
\[\Phi^{e}(c^{e}(\hat{q}_{e}),q,n,\hat{q}_{e})=\Phi(c^{*}(n+1),n+1,q),\]
which after some algebra yields \(\hat{q}_{e}=qF(c^{*}(n+1))\). The next proposition summarizes the above analysis.
**Proposition 9**.: _Let \(\hat{q}_{e}\) be such that \(c^{e}(\hat{q}_{e})=c^{*}(n+1)\). Then \(\hat{q}_{e}=qF(c^{*}(n+1))\). Moreover, if \(q_{e}<\hat{q}_{e}\), then \(c^{*}(n+1)<c^{e}(q_{e})\), while if \(q_{e}>\hat{q}_{e}\), then \(c^{e}(q_{e})<c^{*}(n+1)\)._
We now look at the probability of success when the expert is present. This probability, given by
\[P^{e}(q_{e},n)\equiv(1-q_{e})P(c^{e}(q_{e}),q,n)+q_{e},\]
consists of two terms. If the expert does not find the bug then the crowd succeeds with probability \(P(c^{e}(q_{e}),q,n)\), while success is guaranteed if the expert succeeds. These two terms capture the two effects. First, there is the crowding-out effect, which decreases participation and therefore decreases the probability of finding the bug. Second, there is the direct benefit of expert search, which increases the probability of finding the bug. The natural question then is whether the first or the second effect dominates, that is, whether \(P^{e}(q_{e},n)\) is larger or smaller than \(P^{*}(n)\).
Let us first consider the extreme cases. If \(q_{e}=1\), then success is guaranteed as the direct benefit dominates. On the other extreme, \(P^{e}(q_{e},n)\to P^{*}(n)\) as \(q_{e}\to 0\) since both effects vanish. One would then conjecture that as the expertise increases, the probability of finding the bug would also increase. It turns out that this is not the case. To see this, consider the specification from either Example 1 or Example 2 and let \(q_{e}=\hat{q}_{e}\). By Proposition 9 and the definition of \(\hat{q}_{e}\), we have
\[P^{e}(\hat{q}_{e},n) =(1-\hat{q}_{e})P(c^{e}(\hat{q}_{e}),q,n)+\hat{q}_{e}\] \[=(1-\hat{q}_{e})(1-(1-qF(c^{e}(\hat{q}_{e})))^{n})+\hat{q}_{e}\] \[=1-(1-\hat{q}_{e})(1-qF(c^{e}(\hat{q}_{e})))^{n}\] \[=P(c^{*}(n+1)).\]
Therefore, the probability of success with an expert equals the probability of success with an additional strategic agent. The values from Table 1 then show that the probability of success may decrease or increase with the addition of an outside expert.
This shows that for intermediate values of \(q_{e}\), either the direct benefit or the crowding-out effect may dominate. In other words, there is non-monotonicity in the probability of success with respect to expertise. The implication is that when hiring an internal team one must make sure that their expertise is sufficiently high relative to that of the crowd.
### Adding Artificial Bug
In this section, we allow the designer to add a bug to the software. This bug is called an artificial bug (in contrast to the possible real bug) and is known to the designer, but not to the participants of the bug bounty scheme. The idea is to increase the incentives for the agents to engage in the costly search process. The downside is that the expenses of the designer increase as more rewards may have to be paid out.
We assume that the event of finding the artificial bug is stochastically independent of the event of finding the real bug. This is reasonable as the designer knows nothing about the real bug. The designer selects the probability that such an artificial bug is found by an agent, which is denoted by \(q_{a}\). The designer can select a high (low) value by making it easy (difficult) for the participants in the bug bounty scheme to find the artificial bug.
We thus assume that once an agent has decided to invest in the costly search for a bug, with probability \(q\) he finds the real bug and with probability \(q_{a}\) he finds the artificial bug. This assumption is reasonable as the search is viewed as an investment in finding the bug which is a zero/one decision in our model. Hence, the probability that at least one of the bugs is found is equal to \(Q\equiv q+q_{a}-qq_{a}\).
We observe from Proposition 4 that \(\Phi(c,Q,n)\) is increasing in \(Q\). Hence, it is optimal for the principal to set \(Q\) to 1, that is, set \(q_{a}=1\), if he wants to maximize the probability of finding the real bug. By setting \(q_{a}=1\), the number of participating agents in the bug bounty scheme is maximized, and thus the chance to find the real bug. Hence, the optimal choice of \(q_{a}\) corresponds to adding a very easy bug which is found by everyone with probability 1, as long as they exert the costs to find the bug.
Of course, adding a very easy bug will increase the expected rewards the designer has to pay to the participants. Therefore, we next look at the broader objective when the principal wants to maximize his utility taking into account the costs of having a real bug and the payments for rewards. Suppose the principal derives utility \(W\) from finding the real bug. Then, the problem of the designer can be written as:
\[\max_{q_{a}}\;W\Pr(\text{real bug is found})-V\Pr(\text{any bug is found}). \tag{13}\]
The corresponding probabilities of (13) are given by our previous expressions once calculated with only \(q\) and once calculated with \(Q\), namely:
\[\Pr(\text{real bug is found})=P(c^{*}(Q),q),\]
and
\[\Pr(\text{any bug is found})=P(c^{*}(Q),Q).\]
Let us illustrate the trade-offs with a simple example. Take \(W=1\) and \(V=1\) and \(F(c)=c\) when \(c\in[0,1]\), \(F(c)=1\) when \(c>1\), \(q=0.1\) and two different values of \(q_{a}\), for example, 1 and 0.8. In both cases, in the equilibrium, we have \(c^{*}(Q)=1\). That is, all agents engage in the search. In the first case, the principal pays more for finding a bug, while in the second case, the principal pays less. In both cases, the principal pays the same for finding the real bug. In other words, it is better for the principal to add an artificial bug only with some probability and this probability is less than 0.8.11
Footnote 11: By numerical approximations we find the optimal value for the probability that an artificial bug is inserted to be 0.72.
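The trade-off in (13) can also be explored numerically. The sketch below (our own illustration) solves the equilibrium threshold for the combined success probability \(Q\) and evaluates the designer's objective on a grid of \(q_{a}\); the number of agents \(n=10\) and the uniform cost distribution are assumptions made only for the demo and are not taken from the example above.

```python
# Minimal sketch (ours): designer's objective (13) on a grid of q_a,
# with assumed n = 10, F ~ U[0,1], W = V = 1, q = 0.1 (demo values only).
def c_star(V, Q, n, F=lambda c: c, lo=0.0, hi=1.0):
    for _ in range(200):
        c = 0.5 * (lo + hi)
        fc = F(c)
        phi = Q if fc < 1e-12 else (1 - (1 - Q * fc)**n) / (n * fc)
        lo, hi = (c, hi) if V * phi > c else (lo, c)
    return 0.5 * (lo + hi)

W, V, q, n = 1.0, 1.0, 0.1, 10
for q_a in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    Q = q + q_a - q * q_a
    c = c_star(V, Q, n)
    p_real = 1 - (1 - q * c)**n    # Pr(real bug found) = P(c*(Q), q)
    p_any = 1 - (1 - Q * c)**n     # Pr(any bug found)  = P(c*(Q), Q)
    print(f"q_a={q_a:.1f}  real={p_real:.3f}  any={p_any:.3f}  U={W*p_real - V*p_any:+.4f}")
```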
### Multiple Prizes
In this section, we extend our analysis to the case of multiple prizes. The set up is as before, but with the addition that if agent \(i\) finds the bug and is the \(m\)-th agent to do so, agent \(i\) receives a prize \(v^{m}\) (\(m=1,\dots,n\)). We denote \(\boldsymbol{v}=(v^{1},\dots,v^{n})\) as the prize vector and consider \(\boldsymbol{v}\in\mathcal{V}\equiv\{\boldsymbol{v}:v^{1}\geq\dots\geq v^{n}\geq 0\text { and }\sum_{j}v^{j}=V\}\). The payoff of agent \(i\) from (1) is now changed to
\[u_{i}(s_{i},\boldsymbol{s}_{-i},c_{i})=s_{i}\left(\sum_{m=1}^{n}p_{i}^{m}( \boldsymbol{s}_{-i})v^{m}-c_{i}\right), \tag{14}\]
where \(p_{i}^{m}(\mathbf{s}_{-i})\) is now the probability that agent \(i\) is the \(m\)-th agent to find the bug conditional on searching. The expression for \(p_{i}^{m}(\mathbf{s}_{-i})\) is given by
\[p_{i}^{m}(\mathbf{s}_{-i})=\left\{\begin{array}{ll}q\sum_{t=m-1}^{S_{-i}}{S_{-i} \choose t}q^{t}(1-q)^{S_{-i}-t}\frac{1}{t+1}&\text{if}\ \ m-1\leq S_{-i}\\ 0&\text{if}\ \ m-1>S_{-i}.\end{array}\right. \tag{15}\]
Note that the winner-takes-all contest is a special case with \(v^{1}=V\) and \(p_{i}^{1}(\mathbf{s}_{-i})=p_{i}(\mathbf{s}_{-i})\) as given in (2).
We now characterize the equilibrium of the game with the modified payoff given in (14). We begin by noting that Proposition 1 still holds with essentially no modification to its proof. The equilibrium threshold vector, \(\mathbf{c}^{*}\), if it is interior, must now satisfy the following system of indifference conditions: for all \(i\), \(c_{i}^{*}=\sum_{m=1}^{n}v^{m}\Psi^{m}(\mathbf{c}_{-i}^{*})\), where for \(m=1,\ldots,n\), \(\Psi^{m}:[\underline{c},\overline{c}]^{n-1}\to\mathbb{R}\) is given by
\[\Psi^{m}(\hat{\mathbf{c}}_{-i})\equiv q\sum_{\begin{subarray}{c}K\subset N\setminus \{i\}\\ |K|\geq m-1\end{subarray}}\left\{\prod_{j\in K}F(\hat{c}_{j})\prod_{j\notin K}( 1-F(\hat{c}_{j}))\left[\sum_{t=m-1}^{|K|}{|K|\choose t}q^{t}(1-q)^{|K|-t} \frac{1}{t+1}\right]\right\}.\]
\(\Psi^{m}\) is the probability that agent \(i\) will be the \(m\)-th agent to find the bug given that the other \(n-1\) agents deploy some threshold strategies and indeed \(\Psi^{1}=\Psi\). Some important properties of \(\Psi^{m}\)'s are as follows.
**Proposition 10**.: _The family of functions \(\Psi^{m}\) (\(m=1,\ldots,n\)) has the following properties:_
1. \(\sum_{m=1}^{n}\Psi^{m}=q\)_,_
2. \(\Psi^{m}>\Psi^{m+1}\)_,_
3. \(\Psi^{m}\) _is strictly decreasing in_ \(c_{j}\) _if and only if_ \(m=1\)_,_
4. \(\Psi^{1}(\underline{c},\ldots,\underline{c})=q\) _and_ \(\Psi^{1}(\overline{c},\ldots,\overline{c})=\frac{1-(1-q)^{n}}{n}\)_,_
5. _For_ \(m\neq 1\)_,_ \(\Psi^{m}(\underline{c},\ldots,\underline{c})=0\) _and_ \(\Psi^{m}(\overline{c},\ldots,\overline{c})=q\sum_{t=m-1}^{n-1}{n-1\choose t}q ^{t}(1-q)^{n-1-t}\frac{1}{t+1}\)_._
Some remarks are in order. First, because the agent wins some prize (not necessarily positive) with certainty if s/he finds the bug, \(\sum_{m=1}^{n}\Psi^{m}=q\). Second, there is a higher probability of winning the first prize than the second. The intuition is that for a fixed number of agents who find the bug, agent \(i\)'s ranking is uniformly random. Given this, the first prize is always available to agent \(i\) if s/he finds the bug regardless of how many other find it as well. The second prize, however, is only available if at least one other agent finds it. This reasoning leads to the fact that \(\Psi^{m}>\Psi^{m+1}\). Third, while the probability of winning the first prize goes down as more agents participate, the probability of winning other prizes may go up. That is, \(\Psi^{m}\) need not be strictly decreasing in \(c_{j}\) for \(m\neq 1\). To see this, consider \(\Psi^{2}\). Intuitively, if the thresholds used by the other agents are very low,
then there will be less participants and thus less agents finding the bug. In turn, this makes agent \(i\)'s probability of being second low as well since there is no one to be second to. Increasing the thresholds of others make them more likely to participate and find the bug, and thus increases agent \(i\)'s chance of being second.
As in the baseline case, we impose assumptions on \(\Psi^{m}\) to ensure uniqueness and interiority of the equilibrium threshold, and characterize the equilibrium of the game.
**Proposition 11**.: _Suppose \(\sum_{m}v^{m}\partial\Psi^{m}/\partial c_{j}\neq-1\), and \(\underline{c}<\sum_{m}v^{m}\Psi^{m}(\underline{c})\) and \(\sum_{m}v^{m}\Psi^{m}(\overline{c})<\overline{c}\). Then, the unique equilibrium of the game with prize vector \(\mathbf{v}\) is \(\mathbf{\sigma}_{c^{\mathbf{v}}}\). The equilibrium threshold \(c^{\mathbf{v}}\) is the solution to_
\[c^{\mathbf{v}}=\sum_{m=1}^{n}v^{m}\Phi^{m}(c^{\mathbf{v}}), \tag{16}\]
_where_
\[\Phi^{m}(\hat{c})\equiv q\sum_{k=m-1}^{n-1}\left\{\binom{n-1}{k}F(\hat{c})^{k }(1-F(\hat{c}))^{n-1-k}\left[\sum_{t=m-1}^{k}\binom{k}{t}q^{t}(1-q)^{k-t}\frac{ 1}{t+1}\right]\right\}.\]
We now focus on this unique equilibrium. Since \(\Phi^{m}\) is a "slice" of \(\Psi^{m}\) along the "diagonal", Proposition 10 implies that for all \(\hat{c}\), \(\Phi^{1}(\hat{c})>\Phi^{2}(\hat{c})>\cdots>\Phi^{n}(\hat{c})\) and that \(\sum_{m=1}^{n}\Phi^{m}(\hat{c})=q\). This observation leads to the following result.
**Proposition 12**.: _For any \(\mathbf{v}\in\mathcal{V}\),_
\[V\Phi^{1}(\hat{c})\geq\sum_{m=1}^{n}v^{m}\Phi^{m}(\hat{c})\geq\frac{V}{n}q\]
_for all \(\hat{c}\). It follows that_
* _The prize vector_ \(\mathbf{v}=(V,0,\ldots,0)\)_, i.e. the winner-takes-all contest, maximizes_ \(c^{\mathbf{v}}\) _and, consequently, maximizes the probability of success,_
* _The prize vector_ \(\mathbf{v}=(V/n,\ldots,V/n)\) _minimizes_ \(c^{\mathbf{v}}\) _and, consequently, minimizes the probability of success._
A similar result has been noted in Sarne and Lepioshkin (2017) in a different simple contest model. Consequently, since \(P^{*}\) is increasing in \(c^{*}\), setting the contest to be winner-takes-all maximizes the probability of success.
Note, however, that maximizing the probability of success is typically not the principal's objective when multiple prizes are allowed. Instead, suppose the principal derives utility \(W\) from finding the bug. Then, the principal's problem is to maximize
\[U(\mathbf{v})\equiv WP(c^{\mathbf{v}})-\sum_{m=1}^{n}v^{m}P^{m}(c^{\mathbf{v}}),\]
where \(P^{m}\) is the probability that at least \(m\) agents find the bug and is given by
\[P^{m}(c^{\mathbf{v}})=\sum_{k=m}^{n}\binom{n}{k}F(c^{\mathbf{v}})^{k}(1-F(c^{\mathbf{v}}))^{n-k}\sum_{t=m}^{k}\binom{k}{t}q^{t}(1-q)^{k-t}.\]
In fact, \(P^{1}(c^{\mathbf{v}})=P(c^{\mathbf{v}})\) and \(U(\mathbf{v})\) simplifies to \((W-v^{1})P(c^{\mathbf{v}})-\sum_{m=2}^{n}v^{m}P^{m}(c^{\mathbf{v}})\).
The optimal prize vector depends on the parameters of the model and is typically not the winner-takes-all structure. Characterizing the general optimal structure of the prizes is a direction of future research. Here we provide a simple example.
**Example 3**.: For \(n=2\), \(v^{1}+v^{2}=V\) and the prize vector is characterized by one variable \(v^{1}\). Suppose further that \(F\sim\mathcal{U}[0,1]\). The principal maximizes
\[U(v^{1})=2(W-v^{1})qc^{\mathbf{v}}-(W+V-2v^{1})q^{2}(c^{\mathbf{v}})^{2}.\]
Now, the equilibrium threshold \(c^{\mathbf{v}}\) solves
\[c^{\mathbf{v}}=v^{1}\underbrace{\left(q-\frac{q^{2}}{2}c^{\mathbf{v}}\right)}_{\Phi^{ 1}(c^{\mathbf{v}})}+(V-v^{1})\underbrace{\frac{q^{2}}{2}c^{\mathbf{v}}}_{\Phi^{2}(c^{ \mathbf{v}})}=v^{1}q-(2v^{1}-V)\frac{q^{2}}{2}c^{\mathbf{v}}.\]
Combining yields
\[U(v^{1})=2(W-v^{1})q\frac{v^{1}q}{1+(2v^{1}-V)\frac{q^{2}}{2}}-(W+V-2v^{1})q^{ 2}\left(\frac{v^{1}q}{1+(2v^{1}-V)\frac{q^{2}}{2}}\right)^{2}.\]
Consider \(W=2\), \(V=1\), and \(q=1\). We then have that \(U(1)=8/9<24/25=U(3/4)\).
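The two values can be verified with exact arithmetic; the snippet below (ours) plugs the closed-form threshold from the display above into \(U(v^{1})\).

```python
# Minimal sketch (ours): exact check of Example 3 with W = 2, V = 1, q = 1.
from fractions import Fraction as Fr

def U(v1, W=Fr(2), V=Fr(1), q=Fr(1)):
    c = v1 * q / (1 + (2 * v1 - V) * q**2 / 2)            # equilibrium threshold c^v
    return 2 * (W - v1) * q * c - (W + V - 2 * v1) * q**2 * c**2

print(U(Fr(1)), U(Fr(3, 4)))   # Fraction(8, 9) and Fraction(24, 25), as stated in the text
```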
### Asymmetric Equilibria
Proposition 1 asserts that any equilibrium of the bug bounty game is in threshold strategies. The main analysis focuses on a symmetric equilibrium, where all agents use the same threshold. Without imposing Assumption 1, however, the game may have multiple equilibria, symmetric as well as asymmetric. We illustrate this with the case of \(n=2\), where \(\Psi(c_{-i})=q[1-\frac{q}{2}F(c_{-i})]\). Assuming the equilibrium threshold vector \((c_{1}^{*},c_{2}^{*})\) is interior, it must solve the system of equations in (3). In this example, the system is
\[c_{1}=qV\left[1-\frac{q}{2}F(c_{2})\right]\qquad\text{and}\qquad c_{2}=qV\left[ 1-\frac{q}{2}F(c_{1})\right]. \tag{17}\]
Let \(q=1\) and \(V=5/7\), and let \(F\) be defined for \(c\in[0,1]\) as:
\[F(c)=\left\{\begin{array}{rl}\frac{14}{15}c&\mbox{if}\quad 0\leq c<\frac{3}{7} \\ \frac{14}{5}c-\frac{4}{5}&\mbox{if}\quad\frac{3}{7}\leq c\leq\frac{4}{7}\\ \frac{14}{30}c+\frac{8}{15}&\mbox{if}\quad\frac{4}{7}<c\leq 1.\end{array}\right. \tag{18}\]
The system in (17) is depicted in Figure 2, which shows that any
\[(c_{1}^{*},c_{2}^{*})\in\left\{(c_{1},c_{2}):c_{1}\in\left[\frac{3}{7},\frac{4 }{7}\right]\mbox{ and }c_{1}+c_{2}=1\right\}\]
constitutes an equilibrium threshold vector. Indeed, the symmetric equilibrium threshold \(c^{*}=1/2\) is one of the solutions.
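That every pair \((c_{1},1-c_{1})\) with \(c_{1}\in[3/7,4/7]\) solves (17) can be checked directly with exact arithmetic, as in the following short sketch (ours).

```python
# Minimal sketch (ours): verify that (c1, 1 - c1) with c1 in [3/7, 4/7]
# satisfies the indifference system (17) for q = 1, V = 5/7 and F as in (18).
from fractions import Fraction as Fr

def F(c):
    if c < Fr(3, 7):
        return Fr(14, 15) * c
    if c <= Fr(4, 7):
        return Fr(14, 5) * c - Fr(4, 5)
    return Fr(14, 30) * c + Fr(8, 15)

V, q = Fr(5, 7), Fr(1)
for c1 in (Fr(3, 7), Fr(13, 28), Fr(1, 2), Fr(15, 28), Fr(4, 7)):
    c2 = 1 - c1
    assert c1 == q * V * (1 - q * F(c2) / 2)   # first equation of (17)
    assert c2 == q * V * (1 - q * F(c1) / 2)   # second equation of (17)
```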
## 5 Uniform Cost Distribution
In this section, we consider the special case of uniform cost distribution. Given \(0\leq\underline{c}<\overline{c}<\infty\), the cumulative distribution function on the support is given by \(F(c)=\frac{c-\underline{c}}{\overline{c}-\underline{c}}\). We then have
\[\Phi(c,q,n)=\left\{\begin{array}{ll}\frac{\overline{c}-\underline{c}}{n(c- \underline{c})}\left[1-\left(1-\frac{q}{\overline{c}-\underline{c}}(c- \underline{c})\right)^{n}\right]&\mbox{for}\quad\underline{c}<c\leq\overline{c }\\ q&\mbox{for}\quad c=\underline{c}.\end{array}\right. \tag{19}\]
It is easy to see that \(\Phi\) is strictly decreasing in \(c\), strictly increasing in \(q\), and strictly decreasing in \(n\) on the appropriate domains.
Figure 2: Asymmetric Equilibria Example.
To illustrate our results on the limit behaviors, we now consider two numerical examples with uniform cost distribution. Let \(V=1\) and \(q=1/2\). First, consider \(F\sim\mathcal{U}[0,1]\). The equilibrium threshold \(c^{*}(n)\) solves
\[(c^{*}(n))^{2}n=1-(1-c^{*}(n)/2)^{n}.\]
From Proposition 6, \(c^{*}(n)\to 0\) and \(P^{*}(n)\to 1\) for this distribution since \(\underline{c}=0\) and \(\kappa(0)=\infty\).
Now, consider \(F\sim\mathcal{U}[1/4,5/4]\). The equilibrium threshold \(c^{*}(n)\) solves
\[nc^{*}(n)(c^{*}(n)-1/4)=1-(9/8-c^{*}(n)/2)^{n}.\]
For this distribution, \(c^{*}(n)\to\frac{1}{4}\) and \(P^{*}(n)\to 1-e^{-\frac{1}{2}\kappa}\approx 0.797\), since \(\kappa=3.188\). Table 2 shows the numerical values of \(c^{*}(n)\) and \(P^{*}(n)\) for the two specifications for \(n=10,100,1000,2000\).
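The convergence stated above can be checked numerically; the following sketch (our own illustration, not the authors' code) solves the two fixed-point equations for increasing \(n\) and computes \(\kappa\) for the support \([1/4,5/4]\) by bisection.

```python
# Minimal sketch (ours): limits for the two uniform examples (V = 1, q = 1/2).
from math import exp

def c_star(n, c_low, c_high, V=1.0, q=0.5):
    F = lambda c: (c - c_low) / (c_high - c_low)
    lo, hi = c_low, c_high
    for _ in range(200):
        c = 0.5 * (lo + hi)
        fc = F(c)
        phi = q if fc < 1e-12 else (1 - (1 - q * fc)**n) / (n * fc)
        lo, hi = (c, hi) if V * phi > c else (lo, c)
    return 0.5 * (lo + hi)

# kappa for c_underbar = 1/4: the nonzero root of 1/4 = (1 - exp(-kappa/2)) / kappa
lo, hi = 1e-6, 10.0
for _ in range(200):
    k = 0.5 * (lo + hi)
    lo, hi = (k, hi) if 1 - exp(-0.5 * k) - k / 4 > 0 else (lo, k)
kappa = 0.5 * (lo + hi)
print("kappa ~", round(kappa, 3), " limit P* ~", round(1 - exp(-0.5 * kappa), 3))  # ~3.188, ~0.797

for n in (10, 100, 1000, 2000):
    c1, c2 = c_star(n, 0.0, 1.0), c_star(n, 0.25, 1.25)
    print(n, round(c1, 4), round(1 - (1 - 0.5 * c1)**n, 4),            # U[0,1]: c* -> 0, P* -> 1
          round(c2, 4), round(1 - (1 - 0.5 * (c2 - 0.25))**n, 4))      # U[1/4,5/4]: c* -> 1/4, P* -> 0.797
```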
## 6 Conclusion
As the empirical literature suggests, bug bounty programs can make an important contribution to the security of businesses, public infrastructures, and private firms. We have provided a simple model to study important dimensions along which such programs can be designed. Of course, numerous further directions can be pursued. For instance, one might introduce entry checks regarding the reputation and past achievements of security researchers to build a favorable pool for finding bugs. Alternatively, would the opposite approach (only allowing greenhorns) be beneficial in a bug bounty scheme, as this would motivate many to participate? Also, one could consider a broader menu of rewards, as researchers may be motivated by monetary rewards as well as by reputation gains, which could be documented by success certificates and which would be valuable as an entry ticket for future bug bounty programs. Finally, one could develop further formulas for how prizes for successful bug finding should be determined. |
2309.09911 | Neural Parametric Surfaces for Shape Modeling | The recent surge of utilizing deep neural networks for geometric processing
and shape modeling has opened up exciting avenues. However, there is a
conspicuous lack of research efforts on using powerful neural representations
to extend the capabilities of parametric surfaces, which are the prevalent
surface representations in product design, CAD/CAM, and computer animation. We
present Neural Parametric Surfaces, the first piecewise neural surface
representation that allows coarse patch layouts of arbitrary $n$-sided surface
patches to model complex surface geometries with high precision, offering
greater flexibility over traditional parametric surfaces. By construction, this
new surface representation guarantees $G^0$ continuity between adjacent patches
and empirically achieves $G^1$ continuity, which cannot be attained by existing
neural patch-based methods. The key ingredient of our neural parametric surface
is a learnable feature complex $\mathcal{C}$ that is embedded in a
high-dimensional space $\mathbb{R}^D$ and topologically equivalent to the patch
layout of the surface; each face cell of the complex is defined by
interpolating feature vectors at its vertices. The learned feature complex is
mapped by an MLP-encoded function $f:\mathcal{C} \rightarrow \mathcal{S}$ to
produce the neural parametric surface $\mathcal{S}$. We present a surface
fitting algorithm that optimizes the feature complex $\mathcal{C}$ and trains
the neural mapping $f$ to reconstruct given target shapes with high accuracy.
We further show that the proposed representation along with a compact-size
neural net can learn a plausible shape space from a shape collection, which can
be used for shape interpolation or shape completion from noisy and incomplete
input data. Extensive experiments show that neural parametric surfaces offer
greater modeling capabilities than traditional parametric surfaces. | Lei Yang, Yongqing Liang, Xin Li, Congyi Zhang, Guying Lin, Alla Sheffer, Scott Schaefer, John Keyser, Wenping Wang | 2023-09-18T16:21:50Z | http://arxiv.org/abs/2309.09911v1 | # Neural Parametric Surfaces for Shape Modeling
###### Abstract
The recent surge of utilizing deep neural networks for geometric processing and shape modeling has opened up exciting avenues. However, there is a conspicuous lack of research efforts on using powerful neural representations to extend the capabilities of parametric surfaces, which are the prevalent surface representations in product design, CAD/CAM, and computer animation. We present _Neural Parametric Surfaces_, the _first_ piecewise neural surface representation that allows coarse patch layouts of arbitrary \(n\)-sided surface patches to model complex surface geometries with high precision, offering greater _flexibility_ over traditional parametric surfaces. By construction, this new surface representation guarantees \(G^{0}\)_continuity_ between adjacent patches and empirically achieves \(G^{1}\) continuity, which cannot be attained by existing neural patch-based methods. The key ingredient of our neural parametric surface is a _learnable_ feature complex C that is embedded in a high-dimensional space \(B^{D}\) and topologically equivalent to the patch layout of the surface; each face cell of the complex is defined by interpolating feature vectors at its vertices. The learned feature complex is mapped by an MLP-encoded function \(f:C\rightarrow\mathcal{S}\) to produce the neural parametric surface
\(\mathcal{S}\). We present a surface fitting algorithm that optimizes the feature complex \(\mathcal{C}\) and trains the neural mapping \(f\) to reconstruct given target shapes with high accuracy. We further show that the proposed representation along with a compact-size neural net can learn a plausible shape space from a shape collection, which can be used for shape interpolation or shape completion from noisy and incomplete input data. Extensive experiments show that neural parametric surfaces offer greater modeling capabilities than traditional parametric surfaces.
**Computing Methodologies \(\rightarrow\) Parametric curve and surface models.**
## 1. Introduction
In this work, we explore the power of neural networks to enhance the capabilities of parametric surfaces, which are the prevalent representation in shape modeling in various forms, such as spline surfaces, subdivision surfaces, Coons patches, etc. Notwithstanding a proliferation of studies in utilizing deep neural networks for geometric processing (Aigerman et al., 2022; Deprelle et al., 2022; Morreale et al., 2022; Yang et al., 2021) and shape modeling (Groueix et al., 2018; Guo et al., 2022; Park et al., 2019; Sitzmann et al., 2020; Yang et al., 2018), there is a conspicuous lack of research efforts on the neural representation of parametric surfaces. Most existing works in this direction are concerned with neural _implicit_ representations, which represent a shape as the level set of some function over a spatial domain (Martel et al., 2021; Park et al., 2019; Takikawa et al., 2021). Despite their advantages of smoothness and compactness over discrete representations (e.g. point clouds and meshes), the neural implicit representations have difficulty in representing open surfaces, non-manifold surface patches, and surfaces with sharp features commonly present in CAD models, which can be handled naturally by parametric surfaces.
There have been a few learning-based methods for parametric forms. (Bednarik et al., 2020; Groueix et al., 2018; Williams et al., 2019) adopt a _patch-based_ representation to model a given shape as _an atlas of surface patches_ overlapping each other. The methods in (Guo et al., 2022; Sharma et al., 2020) produce a segmentation from given point cloud data and convert the segmented patches into parametric surfaces. However, none of these works can ensure continuity across adjacent surface patches, which is crucial for applications in product design, reverse engineering, and CAD/CAM.
A major limitation of traditional parametric surfaces (e.g. spline surfaces and subdivision surfaces) is the common assumption of a _rectangular_ or _triangular_ parametric domain for each surface patch, which precludes the use of semantically meaningful multi-sided surface patches, such as pentagonal or hexagonal patches. Furthermore, since typically lower-degree polynomials (e.g. cubic polynomials) are used as basis functions, the representation power of traditional parametric surfaces is limited in the sense that a large number of refinement patches are necessitated to represent shapes with fine geometric details, especially in shape fitting applications. Therefore, it is desirable to develop more powerful methods to accommodate the modeling of parametric surfaces composed of arbitrary \(n\)-sided patches that are coarse, semantically meaningful, and smoothly joined.
We present _Neural Parametric Surfaces_, the _first_ piecewise neural surface representation composed of arbitrary \(n\)-sided surface patches with \(G^{0}\) continuity to model complicated surface geometry, offering greater modeling flexibility than traditional parametric surfaces; see Fig. 2 as an example. The adjacent patches are guaranteed to be continuously joined; furthermore, training with an effective loss function is able to achieve empirically smooth joining (\(G^{1}\) continuity) between the patches. We employ an MLP-encoded function to parameterize each patch so as to allow for coarse patches to model complex shapes due to the increased representation power of individual patches. We demonstrate the applications of this new representation in surface fitting and in learning the shape space of a class of objects for several downstream tasks. All these properties of this new surface representation are attributed to two key components: 1) a _learnable feature complex_; and 2) a _learned continuous mapping function_\(f\) that is modeled by deep neural networks.
**Feature complex**. We introduce a _learnable feature complex_\(\mathcal{C}\), a 2-complex embedded in higher dimensional space \(\mathbb{R}^{D}\). As a topological structure, \(\mathcal{C}\) is defined by a collection of face cells, together with their shared boundary arcs, and vertices. A vertex of \(\mathcal{C}\) is represented by a learnable \(D\)-dimensional vector, called a _vertex feature vector_, or _vertex feature_ for short.
We adopt the mean value interpolation (Floater, 2003) to define the points on each face of the feature complex from its vertex features. Then an MLP network \(f\) is used to map the 2-manifold complex \(\mathcal{C}\) from \(\mathbb{R}^{D}\) to \(\mathbb{R}^{3}\) to yield a parametric surface, with each face cell of \(\mathcal{C}\) mapped to a surface patch. This allows the _Neural Parametric Surface_ to achieve an accurate approximation of the given geometry without the need for refinement schemes used in traditional spline surfaces, thus affording the use of a coarse and semantically meaningful patch layout.
We note that, for each face of the complex, the mean value interpolation induces a linear interpolation on each boundary edge from its two end vertex features. Therefore, all the adjacent faces of the complex have a shared boundary curve, which is a straight line segment given by linear interpolation. Hence, any two adjacent faces of the complex are continuously joined, and so are their corresponding patches of the neural parametric surfaces.
Figure 2. To model a _Kettle_ surface, the T-spline method (left column) needs 32 quad patches, while our representation (right column) uses fewer (16 here) polygonal patches.
**Surface fitting**. We present a surface fitting method for converting other surface representations (e.g. point clouds or polygonal meshes) to the proposed _Neural Parametric Surface_. Given a pre-segmented surface shape,1 we first construct a feature complex \(\mathcal{C}\subset\mathbb{R}^{D}\) of a _Neural Parametric Surface_. This complex is topologically equivalent to the patch layout induced by the partition of the target shape. Then, the surface fitting task is to represent the target shape by a neural parametric surface \(\mathcal{S}\) composed of parametric surface patches. Each segment of the target shape is approximated by a patch of \(\mathcal{S}\).
Footnote 1: We assume the segmentation is either inherent to the shape (as in models using B-Rep representation) or manually prepared by users. The only requirement of this segmentation is that each segment needs to have a disc topology.
Our fitting method is a learning framework that optimizes the geometry of the feature complex \(\mathcal{C}\) and trains the neural mapping function \(f\) to produce the desired neural parametric surface. We show the proposed approach can reconstruct, at a high-fidelity level, a diverse set of surface shapes, ranging from CAD models and mechanical parts with smoothly changing surface geometry and sharp features, to garments and organic objects with detailed surface features such as wrinkles.
**Learning from a data collection.** We demonstrate the _Neural Parametric Surface_'s capability of learning from a collection of shapes and optimizing for a shape space of objects. With this capability, the _Neural Parametric Surface_ can be used for shape interpolation, a task useful in style design and editing, and can be applied to surface generation from a noisy and incomplete point cloud input.
Our technical contributions are as follows:
* We propose the first piecewise neural parametric surface representation, _Neural Parametric Surface_, that is capable of modeling surfaces with coarse \(n\)-sided polygonal patches and attaining smooth surface reconstruction of the target geometry, thus greatly extending traditional parametric surfaces.
* A learnable feature complex is introduced. The feature complex is defined in a high-dimensional embedding space and decoded with a neural mapping function into a parametric surface with _smooth joining_ between the resulting patches.
* An efficient surface fitting algorithm is presented for converting other surface representations to a neural parametric surface.
* We demonstrate neural parametric surfaces can be used to learn the shape space of a shape collection for performing shape interpolation and shape generation from noisy and/or incomplete point clouds.
## 2. Related Works
**Neural implicit representations.** Several studies (Chibane et al., 2020; Gropp et al., 2020; Sitzmann et al., 2020) have exploited the approximation capabilities of deep neural networks to enhance implicit representations that define a target surface geometry as the zero-level set of a scalar function. These works have enabled learning a shape space from a collection of data (Deng et al., 2021; Park et al., 2019) or using regular feature grids to enhance the efficacy of ordinary MLP networks (Martel et al., 2021; Takikawa et al., 2021). To model complex geometry, several recent works (Tretschk et al., 2020; Zhang et al., 2022) adopt a compositional approach, which represents each part of a given 3D shape as the zero-level set of a learned implicit field and blends multiple such fields corresponding to different parts to fit the given 3D shape.
Due to the continuous nature of a neural network, it is challenging to represent sharp features present in CAD models using neural implicit representations. A recent study (Guo et al., 2022) proposed to represent the surface patches as neural half-spaces and then compose them into a watertight model via Boolean operations as used in Constructive Solid Geometry. This can be cumbersome as the network must additionally take care of the extended parts of the zero-level sets that approximate the surface patches; otherwise, artifacts may be observed.
In contrast to neural _implicit surface_ representations, we propose a new neural _parametric surface_ representation and show that it inherits many merits of traditional parametric surface representations, such as modeling open surfaces, sharp features, and non-manifold surface patches, which neural implicit representations currently struggle to handle. Similar to traditional parametric representations, our neural parametric surfaces are easy to render compared to _implicit surface_ representations, which require a Marching Cubes-like algorithm to contour the zero-level set.
**Neural patch-based representations.** Some pioneering works have been proposed to represent a 3D shape either as a single patch (Yang et al., 2018) or multiple patches, as in AtlasNet (Groueix et al., 2018). Similar to traditional parametric representations, AtlasNet defines these patches on a 2D rectangular domain and lifts them to 3D using multiple learnable mapping functions to approximate a target shape. However, for complex 3D surface shapes (as shown in some examples in Fig. 1), using solely these rectangular patches can result in surface reconstructions with holes. AtlasNet, hence, circumvents this (visually) by wrapping the generated patches around the target shape to produce the surface reconstruction. However, these surface patches actually overlap with their surrounding patches rather than seamlessly stitching with each other along their boundaries, which is not desirable for downstream applications like product design and CAD/CAM.
A number of studies (Deprelle et al., 2019; Gadelha et al., 2021; Groueix et al., 2018; Low and Lee, 2022; Williams et al., 2019) have built upon AtlasNet's framework and extended it in various aspects, for example, further enhancing its ability in modeling shape collections (Deprelle et al., 2019; Groueix et al., 2018), mitigating the issue of overlapping patches (Gadelha et al., 2021), and enabling the capability of handling holes within an individual parametric patch (Low and Lee, 2022). However, like AtlasNet, these approaches also assume that each chart is _topologically rectangular_. They also use _distinct_ mapping functions for each chart. As a result, adjacent 3D surface patches still overlap with each other in the final reconstruction. To produce a seamless representation, (Williams et al.,
2019) requires an extra step to convert its output into an implicit field using Poisson reconstruction.
In contrast to AtlasNet and its variants, which separately map 2D parametric domains to individual 3D surface patches, our approach employs an optimized feature complex as an intermediate geometric entity to stitch multiple 2D domains in a learnable feature space. We then train a shared mapping function to project each point from this feature complex to 3D space. Consequently, our approach ensures that the resulting geometry is piecewise smooth and that adjacent surface patches connect continuously at their shared boundaries. While our formulation requires a given patch layout for the target shape as input, this layout can be easily generated using off-the-shelf tools like (Livesu et al., 2020). As shown in our experiment, it is difficult for AtlasNet-based approaches (e.g., (Deng et al., 2020)) to accommodate patch layouts containing \(n\)-sided polygonal patches due to their dependency on rectangular parametric domains.
In summary, the advantages of our proposed representation are as follows: 1) our _Neural Parametric Surface_ is a continuous representation, i.e. no gaps between two adjacent patches; 2) its surface patches can be arbitrary \(n\)-sided, rather than just rectangular as in traditional parametric representations or recent learning-based representations; and 3) meshing and rendering of the resulting shape are straightforward as will be described later, without the need for contouring the surface geometry required by neural implicit representations.
**Learning parameterization for 3D shapes.** Several works propose to learn a parameterization \(g\) that projects points from a given 3D shape onto a 2D parametric plane (Deprelle et al., 2022; Morreale et al., 2021) or multiple 2D charts (Pokhariya et al., 2022). However, since parameterizing topologically non-trivial surfaces on a rectangle or disk domain inevitably introduces cut seam(s), surfaces reconstructed using the inverse parameterization \(g^{-1}\) will carry the cut seams. Therefore, a simple planar parameterization such as (Deprelle et al., 2022) does not ensure seamless reconstruction results. In contrast, our approach can preserve the desired continuity between adjacent patches in the reconstructed results by leveraging the feature complex that captures the topological structure of the target shape.
**Constructing a patch layout from 3D shapes.** There is a line of work to predict the segmentation of a 3D geometry and reconstruct the shape using parametric primitives (Yu et al., 2022), Coons patches (Smirnov et al., 2020), B-spline surfaces (Sharma et al., 2020), or a neural parametric patch defined on a rectangle domain (Guo et al., 2022). Our work is complementary to these studies. For example, the patch layout predicted by (Smirnov et al., 2020) can be used as an input to our approach, while the representational power of our Neural Parametric Surface over Coons patches can be leveraged to represent more complex geometry (cf. Fig. 12).
Many previous methods are proposed to produce a (quad) patch layout for geometry processing (Born et al., 2021; Campen, 2017; Cohen-Steiner et al., 2004; Livesu et al., 2020; Tarini et al., 2004). We show that our Neural Parametric Surface can take as input the \(n\)-sided segmented patches produced by (Cohen-Steiner et al., 2004; Livesu et al., 2020) and convert a mesh-based representation to a neural piecewise parametric surface.
## 3. Neural Parametric Surfaces
### Constructing Feature Complex
Given a 3D surface geometry along with an associated patch layout (as shown in Fig. 3(d)), we introduce a 2D feature complex \(\mathcal{C}\) (see Fig. 3(b)) that is embedded in a high-dimensional feature space \(\mathbb{R}^{D}\) (\(D=128\) in this work). This feature complex maintains the same topological structure as the patch layout. The embedding of feature complex \(\mathcal{C}\) is determined by its vertices, each represented as a \(D\)-dimensional vector, referred to as _vertex features_. Each \(n\)-sided face cell of \(\mathcal{C}\) is obtained via mean value interpolation from the vertices on an \(n\)-sided planar polygon. Then, a _shared_ MLP-encoded function \(f:\mathbb{R}^{D}\rightarrow\mathbb{R}^{3}\) is employed to map the \(2\)-complex \(\mathcal{C}\) onto a continuous, piecewise smooth surface \(\mathcal{S}\subset\mathbb{R}^{3}\), in which each patch corresponds one-to-one with the face cells of \(\mathcal{C}\), as illustrated in Fig. 3(c). The vertex features are learned during a surface fitting procedure. This neural parametric surface representation can be flexibly defined to depict various shapes, based on either user-designed or automatically computed patch layouts.
Figure 3. **Design for Neural Parametric Surfaces.** For a given target surface \(\mathcal{M}\), we construct a feature complex \(\mathcal{C}\) based on a set of learnable features \(\{\mathbf{z}_{k}\}\) defined at complex cells' vertices (yellow nodes). Each complex face (\(\mathcal{C}_{i}\)) is parameterized by a planar \(n\)-sided polygonal domain \(\Omega_{i}\) using the mean value interpolation \(g\). We train the complex embedding \(\{\mathbf{z}_{k}\}\) and the mapping function \(f_{\theta}\) to produce a neural parametric surface \(\mathcal{S}\) that approximates \(\mathcal{M}\) with high fidelity and preserves the partitioning (each patch delineated by green boundary curves) it carries.
Consider a face cell \(\mathcal{C}_{i}\) within the complex \(\mathcal{C}\). On one hand, \(\mathcal{C}_{i}\) is mapped to a surface patch \(\mathcal{S}_{i}\subset\mathbb{R}^{3}\) via an MLP-encoded function \(f_{\theta}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{3}\). On the other, the face cell \(\mathcal{C}_{i}\) is defined by the mean-value interpolation of its vertex features, which parameterizes \(\mathcal{C}_{i}\) over a predefined 2D \(n\)-sided polygon \(\Omega_{i}\) through the function \(g:\mathbb{R}^{2}\rightarrow\mathbb{R}^{D}\). Consequently, the parameterization of the surface patch \(\mathcal{S}_{i}\) is defined by the composite function \(h\equiv f_{\theta}\circ g:\mathbb{R}^{2}\rightarrow\mathbb{R}^{3}\). Collectively, the union of these surface patches \(\mathcal{S}_{i}\) forms the neural parametric surface. The learnable parameters of this surface representation consist of the vertex features of complex \(\mathcal{C}\) and the weights of the shared MLP \(f_{\theta}\).
For an \(n\)-sided patch \(\mathcal{S}_{i}\), its _parametric domain_ \(\Omega_{i}\subset\mathbb{R}^{2}\) is constructed to be an \(n\)-sided convex polygon inscribed in a unit circle centered at the origin, with its vertices \(\mathbf{u}_{j}\) in correspondence with the corner vertices \(\mathbf{p}_{j}\) of the patch \(\mathcal{S}_{i}\). The 2D corners \(\mathbf{u}_{j}\), \(j=1,2,\ldots,n\) are placed on the circle so that the arc length between \(\mathbf{u}_{j}\) and \(\mathbf{u}_{j+1}\) over the circle circumference is proportional to the boundary length of \(\overline{\mathbf{p}_{j}\mathbf{p}_{j+1}}\). See Fig. 4 as an example.
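To make this construction concrete, the following is a minimal NumPy sketch (with illustrative names) of how the \(n\) corners \(\mathbf{u}_{j}\) of a domain \(\Omega_{i}\) could be placed on the unit circle so that arc lengths are proportional to the 3D boundary-curve lengths; the authors' exact placement may differ in details.
```python
import numpy as np

def polygon_domain(boundary_lengths):
    """Place the n corners u_j of the parametric domain on the unit circle so
    that the arc between consecutive corners is proportional to the length of
    the corresponding 3D boundary curve of the patch."""
    frac = np.asarray(boundary_lengths, dtype=float)
    frac = frac / frac.sum()                     # fraction of the circumference per edge
    angles = 2.0 * np.pi * np.concatenate([[0.0], np.cumsum(frac)[:-1]])
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (n, 2) corners
```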
Then, the parameterization, \(g\), of a face cell \(\mathcal{C}_{i}\subset\mathbb{R}^{D}\) over \(\Omega_{i}\subset\mathbb{R}^{2}\) is given by the following interpolation based on mean value coordinates:
\[\mathbf{z}(\mathbf{u})=g(\mathbf{u})=\sum_{j=1}^{n}\lambda_{j}(\mathbf{u}) \mathbf{z}_{j}, \tag{1}\]
where \(\mathbf{z}_{j}\) are the vertex feature vectors of the cell \(\mathcal{C}_{i}\), and \(\lambda_{j}(\mathbf{u})\) are the mean value coordinates of the parameter value \(\mathbf{u}\in\Omega_{i}\) associated with the corresponding vertices \(\mathbf{u}_{j}\) of the polygon \(\Omega_{i}\).
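For readers unfamiliar with mean value coordinates, the sketch below (NumPy, assuming a convex polygonal domain and a query point strictly inside it) shows one way Eq. (1) can be evaluated; it follows Floater's tangent-of-half-angle formula and is not taken from the authors' implementation.
```python
import numpy as np

def mean_value_coords(u, poly):
    """Mean value coordinates lambda_j(u) of a 2D point u inside the convex
    polygon `poly` (n x 2 array of corners in order); the weights sum to one."""
    d = poly - u                                 # vectors from u to each corner
    r = np.linalg.norm(d, axis=1)                # distances ||u_j - u||
    n = len(poly)
    tan_half = np.empty(n)
    for j in range(n):
        k = (j + 1) % n
        alpha = np.arctan2(d[j, 0] * d[k, 1] - d[j, 1] * d[k, 0], d[j] @ d[k])
        tan_half[j] = np.tan(0.5 * alpha)        # tan(alpha_j / 2)
    w = np.array([(tan_half[j - 1] + tan_half[j]) / r[j] for j in range(n)])
    return w / w.sum()

def interpolate_features(u, poly, Z):
    """Eq. (1): interpolate the D-dimensional vertex features Z (n x D)."""
    return mean_value_coords(u, poly) @ Z
```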
Now consider two adjacent faces \(\mathcal{C}_{i}\) and \(\mathcal{C}_{j}\). Because mean value interpolation reduces to linear interpolation along the boundary edges of a face, the edge \(\partial\mathcal{C}_{i,j}\) shared by \(\mathcal{C}_{i}\) and \(\mathcal{C}_{j}\) receives the same embedding in the feature space, based on vertex features defined at the edge's endpoints. Consequently, the constructed feature complex \(\mathcal{C}\) is _continuous everywhere_. Together with the continuous function \(f_{\theta}\), all feature vectors along \(\partial\mathcal{C}_{i,j}\) of two faces \(\mathcal{C}_{i}\) and \(\mathcal{C}_{j}\) are mapped to a 3D curve shared by the corresponding surface patches \(\mathcal{S}_{i}\) and \(\mathcal{S}_{j}\). This ensures the continuity of the resulting piecewise parametric surface representation. The assurance of continuity between adjacent patches distinguishes our Neural Parametric Surface representation from AtlasNet-like representations.
### Optimizing Neural Parametric Surfaces
To derive a _Neural Parametric Surface_ fitting a given surface shape, we jointly optimize (1) the feature vectors \(\mathcal{Z}=\{\mathbf{z}_{k}\}\) that define the geometry of \(\mathcal{C}\) and (2) the parameters \(\theta\) of \(f_{\theta}\). This training pipeline is illustrated in Fig. 5(a). Later, we show how this training pipeline can be extended to learn a latent morphable space of a shape category for a diverse set of downstream tasks; see Fig. 5(b).
**Surface fitting.** We first elaborate on the procedure for fitting our _Neural Parametric Surface_ to a single shape. The input data for our training pipeline consists of a set of points, each accompanied by its corresponding unit normal vector and a label that indicates the patch to which it belongs. We will discuss later how to generate a patch layout (Sec. 4.1).
Specifically, to reconstruct \(\mathcal{M}=\{\mathcal{M}_{k}\}\) using surface \(\mathcal{S}=\{\mathcal{S}_{i}\}\), we minimize the following loss objective:
\[\begin{split} L_{\text{recon}}&=L_{\text{anchor}}+\lambda_{\text{surface}}L_{\text{surface}}+\lambda_{\text{normal}}L_{\text{normal}}\\ &+\lambda_{\text{smooth}}L_{\text{smooth}}+\lambda_{\text{fair}}L_{\text{fair}}\\ &+\lambda_{\text{uniform}}L_{\text{uniform}}+\lambda_{\text{aspect}}L_{\text{aspect}}.\end{split} \tag{2}\]
The _anchor_ deviation term \(L_{\text{anchor}}\) measures the deviation from the corner vertices \(\mathbf{x}_{k}=f_{\theta}(\mathbf{z}_{k})\) of neural parametric patches \(\mathcal{S}_{i}\) to the corresponding corner vertices \(\mathbf{p}_{k}\in\mathcal{M}_{i}\),
\[L_{\text{anchor}}=\sum_{k}\|\mathbf{x}_{k}-\mathbf{p}_{k}\|_{2}. \tag{3}\]
The anchor term provides a one-to-one correspondence that is important at the early stage of the optimization when the _Neural Parametric Surface_ approximation is distant from the target geometry.
The _surface fitting_ term \(L_{\text{surface}}\) fits a neural parametric patch \(\mathcal{S}_{i}\) to points sampled from \(\mathcal{M}_{i}\). We denote \(\mathbf{x}_{j}=f_{\theta}(\mathbf{z}_{j})\) as the points sampled from a neural parametric patch \(\mathcal{S}_{i}\). Specifically, we compute for each sample point \(\mathbf{x}_{j}\) the closest point from the target patch \(\mathcal{M}_{i}\) using the Euclidean distance metric. Thus, a set of paired points is obtained. Likewise, we obtain another set of paired points by computing for each \(\mathbf{p}_{j}\in\mathcal{M}_{i}\) its closest point on \(\mathcal{S}_{i}\). We denote the two sets of paired points from all patches as \(\{(\mathbf{x}_{j},\mathbf{p}_{j})\}\). Therefore, we have
\[L_{\text{surface}}=\sum_{\{(\mathbf{x}_{j},\mathbf{p}_{j})\}}\|\mathbf{x}_{j}- \mathbf{p}_{j}\|_{2}+\beta\|\mathbf{n}_{j}{}^{T}(\mathbf{x}_{j}-\mathbf{p}_{j}) \|_{2}. \tag{4}\]
Here, \(\beta\) is a weighting coefficient and is set to \(0.1\) throughout the optimization procedure, and \(\mathbf{n}_{j}\) is the unit normal vector at sample point \(\mathbf{p}_{j}\).
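A minimal PyTorch sketch of the bidirectional closest-point pairing and the point-to-plane weighted distance of Eq. (4) is given below; the tensor shapes and the brute-force pairing via `torch.cdist` are our own simplifications for illustration, not the authors' implementation.
```python
import torch

def surface_fitting_loss(x, p, n_p, beta=0.1):
    """Eq. (4): bidirectional closest-point loss with a point-to-plane term.
    x: (M, 3) samples on the neural patches, p: (N, 3) target samples,
    n_p: (N, 3) unit normals of the target samples."""
    d = torch.cdist(x, p)                       # (M, N) pairwise distances
    idx_xp = d.argmin(dim=1)                    # closest target point for each x_j
    idx_px = d.argmin(dim=0)                    # closest surface sample for each p_j
    pair_x = torch.cat([x, x[idx_px]], dim=0)   # paired points from both directions
    pair_p = torch.cat([p[idx_xp], p], dim=0)
    pair_n = torch.cat([n_p[idx_xp], n_p], dim=0)
    diff = pair_x - pair_p
    point_term = diff.norm(dim=1)
    plane_term = (pair_n * diff).sum(dim=1).abs()   # |n_j^T (x_j - p_j)|
    return (point_term + beta * plane_term).sum()
```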
We use the term \(L_{\text{normal}}\) to enforce the normal consistency between each pair of the corresponding points \(\{(\mathbf{x}_{j},\mathbf{p}_{j})\}\):
\[L_{\text{normal}}=\sum_{\{(\mathbf{x}_{j},\mathbf{p}_{j})\}}(1-\mathbf{n}_{j}^{T} \mathbf{n}(\mathbf{x}_{j})), \tag{5}\]
where \(\mathbf{n}(\mathbf{x}_{j})\) is the unit normal vector at \(\mathbf{x}_{j}\), which can be derived analytically as follows. We first compute the partial derivatives \(\partial\mathbf{x}_{j}/\partial\mathbf{u}\), exploiting the differentiability of the MLP-encoded map \(f_{\theta}\) and the mean value interpolation \(g\). Next, we derive the normal vector \(\mathbf{n}(\mathbf{x}_{j})\) as the cross product of the two partial derivatives \(\partial\mathbf{x}_{j}/\partial\mathbf{u}=(\partial\mathbf{x}_{j}/\partial u,\partial\mathbf{x}_{j}/\partial v)\).
Fig. 4. The pink polygonal surface patch (right) is mapped to a pentagon on the 2D plane (left). The vertices of this pentagon sequentially correspond to those of the surface patch.
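The analytic normal used above can be obtained with automatic differentiation, as sketched below. `mean_value_interp` stands for a batched, differentiable PyTorch version of Eq. (1); it and `f_theta` are placeholder names, and the per-dimension autograd calls are our own illustration.
```python
import torch
import torch.nn.functional as F

def surface_point_and_normal(f_theta, Z, poly, uv):
    """Evaluate one patch at 2D parameters uv (B, 2) and return analytic unit
    normals via autograd."""
    uv = uv.clone().requires_grad_(True)
    z = mean_value_interp(uv, poly, Z)          # (B, D) points on the feature complex
    x = f_theta(z)                              # (B, 3) surface points
    grads = [torch.autograd.grad(x[:, k].sum(), uv, create_graph=True)[0]
             for k in range(3)]                 # each (B, 2): d x_k / d(u, v)
    J = torch.stack(grads, dim=1)               # (B, 3, 2) Jacobian [dx/du | dx/dv]
    n = torch.cross(J[..., 0], J[..., 1], dim=1)
    return x, F.normalize(n, dim=1)
```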
To enforce the smooth connection between two adjacent patches, we employ a _smoothness term_, \(L_{\text{smooth}}\), to encourage the normal vectors at this shared smooth connection to align with each other,
\[L_{\text{smooth}}=\sum_{\partial\mathcal{S}_{i,j}}\sum_{b}\|\mathbf{n}( \mathbf{x}_{i,b})-\mathbf{n}(\mathbf{x}_{j,b})\|_{2}, \tag{6}\]
where \(\partial\mathcal{S}_{i,j}\) denotes the shared boundary and \(b\) indexes a pair of collocated boundary samples within the smooth part of \(\partial\mathcal{S}_{i,j}\). As shown in our experiments (c.f. Fig. 14), this simple practice is sufficient to achieve smooth fitting results at high fidelity.
We define the _boundary fairness_ term \(L_{\text{fair}}\) based on the curve Laplacian to enforce the fairness along each boundary curve \(e\) of the resulting neural parametric patches \(\mathcal{S}_{i}\). We sampled a sequence of points, \(\mathbf{x}_{e,j}\), from the boundary curve \(e\). Then we minimize the discrete Laplacians along \(\mathbf{x}_{e,j}\),
\[L_{\text{fair}}=\sum_{e\in\{\partial\mathcal{S}_{i}\}}\sum_{j}\|\mathbf{x}_{e,j}-(\mathbf{x}_{e,j+1}+\mathbf{x}_{e,j-1})/2\|_{2}, \tag{7}\]
where \(\mathbf{x}_{e,j-1}\) and \(\mathbf{x}_{e,j+1}\) are the sample points immediately before and after \(\mathbf{x}_{e,j}\). This regularization tends to straighten the boundary curves, so we gradually decrease its influence as the optimization proceeds.
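As a small illustration, Eq. (7) for a single boundary curve can be written as follows (a sketch; the ordered boundary samples are assumed to form a tensor of shape (B, 3)).
```python
import torch

def boundary_fairness_loss(x_e):
    """Eq. (7) for one boundary curve: penalize the discrete curve Laplacian of
    the ordered boundary samples x_e (B, 3); the endpoints are left unconstrained."""
    lap = x_e[1:-1] - 0.5 * (x_e[2:] + x_e[:-2])
    return lap.norm(dim=1).sum()
```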
The _uniform parameterization_ term \(L_{\text{uniform}}\), introduced in Bednarik et al. (2020), is adopted to encourage \(E(\mathbf{x}_{j})=\partial\mathbf{x}_{j}/\partial u\) and \(G(\mathbf{x}_{j})=\partial\mathbf{x}_{j}/\partial v\) to be similar to their averaged values in the patch. We compute these partial derivatives by differentiating \(\mathbf{x}_{j}\) w.r.t. \(\mathbf{u}=(u,v)\) in the 2D parametric domain as before. The loss term is given as follows:
\[L_{\text{uniform}}=\sum_{\{\mathbf{x}_{j}\}}|E(\mathbf{x}_{j})-\overline{E}|+| G(\mathbf{x}_{j})-\overline{G}|, \tag{8}\]
where \(\overline{E}\) and \(\overline{G}\) are the averaged quantities of \(E(\mathbf{x}_{j})\) and \(G(\mathbf{x}_{j})\), respectively.
Finally, we aim to preserve length aspect ratios of the edges of each cell \(\mathcal{C}_{i}\) to make them close to those of \(\mathcal{M}_{i}\). For simplicity, we omit the subscript \(i\) in the following. We use \(|\partial\mathcal{C}^{j}|\) (or \(|\partial\mathcal{M}^{j}|\)) to denote the Euclidean length of the \(j\)-th boundary curve \(\partial\mathcal{C}^{j}\) of cell \(\mathcal{C}\) (or \(\partial\mathcal{M}^{j}\) of \(\mathcal{M}\)). Hence, we define the _aspect ratio_ loss as follows:
\[L_{\text{aspect}}=\sqrt{1-d(\mathcal{C})^{T}d(\mathcal{M})}, \tag{9}\]
where \(d(\mathcal{C})\) is a normalized vector with its \(j\)-th entry computed by \(d_{j}(\mathcal{C})=\frac{|\partial\mathcal{C}^{j}|}{\sum_{k}|\partial\mathcal{C}^{k}|}\), and \(d(\mathcal{M})\) is likewise defined.
Fig. 5: **Overview of the training pipeline for Neural Parametric Surfaces.** (a) Geometric decoder: The top row shows our shape fitting pipeline that jointly optimizes the feature complex \(\mathcal{C}\) and MLP-encoded mapping function \(f\) to model a single shape with the Neural Parametric Surface representation. (b) Broadcast decoder: To learn from a shape collection, we extend the geometric decoding pipeline with a broadcast decoder \(h\) which learns a shape space for synthesizing a shape-specific complex \(\mathcal{C}_{m}\). The input to a broadcast decoder is the broadcast concatenation of a shape code \(\mathbf{c}_{m}\) (orange) and different vertex positional tokens \(P_{k}\) (each \(P_{k}\) corresponding to one complex vertex). The entire pipeline of (a) and (b) can learn a latent shape space that encodes a shape collection for various tasks, such as latent shape interpolation or shape editing.
**Learning from a shape collection.** Our proposed pipeline can be applied to model a set of morphable shapes by learning a shape space with our Neural Parametric Surface representation.
To this end, we additionally incorporate a set of learnable latent codes \(\{\mathbf{c}_{m}\in\mathbb{R}^{C}\}\), each representing a different shape in the collection, and a compact-size MLP network \(h_{\phi}\) to project a given shape latent code \(\mathbf{c}_{m}\) to the feature vectors \(\mathcal{Z}_{m}\) that define the geometry of the feature complex \(C_{m}\). These added components are shown in Fig. 5(b). We implemented this MLP network \(h_{\phi}\) as a broadcast decoder (Watters et al., 2019):
\[\mathbf{z}_{k}^{m}=h_{\phi}(\text{cat}(\mathbf{c}_{m},P_{k}))\]
where \(\phi\) are the trainable parameters of \(h\), \(P_{k}=1/K\) denotes the positional token for vertex feature \(\mathbf{z}_{k}\), and \(\text{cat}(\mathbf{c}_{m},P_{k})\) denotes the broadcasting concatenation of \(\mathbf{c}_{m}\in\mathbb{R}^{C}\) and different \(P_{k}\).
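A possible PyTorch sketch of the broadcast decoder is shown below. The layer sizes, the latent code dimension, the softplus activations, and the exact form of the positional token (here \(k/K\)) are our assumptions for illustration, not the authors' settings.
```python
import torch
import torch.nn as nn

class BroadcastDecoder(nn.Module):
    """Map a shape code c_m, concatenated with a per-vertex positional token,
    to the K vertex features of a shape-specific feature complex."""
    def __init__(self, code_dim=256, num_vertices=44, feat_dim=128, hidden=256):
        super().__init__()
        self.num_vertices = num_vertices
        self.mlp = nn.Sequential(
            nn.Linear(code_dim + 1, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, feat_dim))

    def forward(self, c_m):                       # c_m: (code_dim,)
        k = torch.arange(1, self.num_vertices + 1, dtype=c_m.dtype, device=c_m.device)
        tokens = (k / self.num_vertices).unsqueeze(1)           # positional tokens P_k
        codes = c_m.unsqueeze(0).expand(self.num_vertices, -1)  # broadcast c_m to every vertex
        return self.mlp(torch.cat([codes, tokens], dim=1))      # (num_vertices, feat_dim)
```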
As for learning the shape space, our method takes as input a mini-batch of shapes \(\mathcal{B}=\{\mathcal{M}_{m}\}\) and optimizes the latent shape code \(\mathbf{c}_{m}\) for each \(\mathcal{M}_{m}\). We add to Eq. 2 a regularization term for the shape latent codes as in (Park et al., 2019). The total loss for learning a shape space is thus defined as
\[L_{\text{shape}}=\frac{1}{|\mathcal{B}|}\sum_{\mathcal{B}}L_{\text{recon}}+ \lambda_{\text{reg}}\sum_{\mathbf{c}_{m}}\|\mathbf{c}_{m}\|_{2}. \tag{10}\]
## 4. Experiments and Discussions
### Implementation details
**Network architecture and training.** In this work, we use a \(128\)-dimensional feature complex for each shape. The MLP network implementing the mapping function \(f\) has \(12\) layers with each hidden layer having \(256\) neurons. We employ _softplus_ activations (\(\beta=100\)) in between the layers to ensure differentiability (Gropp et al., 2020). The MLP network for the broadcast decoder is designed to be very compact, with \(3\) layers and using the same activation functions. We implement the networks in PyTorch (Paszke et al., 2019) and used its _autograd_ function to analytically compute partial derivatives \(\partial\mathbf{x}/\partial\mathbf{u}\) used in the loss.
For all the experiments on the _shape fitting_ task (Sec. 4.2) shown in this paper, results are obtained with \(2,000\) iterations to ensure optimization convergence for shapes with different levels of complexity. In each training iteration, a batch of \(10,000\) points is randomly sampled from the input surface. The number of points sampled from each patch is proportional to the area of the patch. In this task, we adopt a warm-up stage to initialize the network with the training objective \(L_{\text{recon}}=L_{\text{anchor}}+\lambda_{\text{uniform}}L_{\text{uniform}}\) for the first \(100\) training iterations. After this warm-up, we minimize the full loss in Eq. 2. After \(300\) iterations, we decay the influence of the boundary fairness term \(L_{\text{fair}}\) to avoid excessive straightening of the boundary curves of the neural parametric patches. The Adam optimizer (Kingma and Ba, 2014) is used with an initial learning rate of \(10^{-3}\) and a cosine annealing scheduler is employed to gradually decay the learning rate to \(10^{-3}\) during the course of optimization.
All results shown in the experiments concerning the _shape-space learning_ task (Sec. 4.3) are obtained with \(100\) epochs. In each iteration, a mini-batch of \(24\) shapes is provided. \(5,000\) points are sampled from each shape in a similar practice as described for the shape fitting task. The full loss in Eq. 2 is minimized during the entire training course. We use the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of \(10^{-3}\) which is decayed to \(5\times 10^{-4}\) for the final \(20\) epochs. More implementation details can be found in our open-source code for reproducibility.
We measure the reported time efficiency of the proposed method on a Linux OS desktop with an NVIDIA GeForce RTX 4090 (24G memory) graphics card and an Intel(R) Core(TM) i7-10870H CPU.
**Preparation of the patch layout for shapes.** Many existing methods can be used to generate a patch layout, enabling our approach to fit a _Neural Parametric Surface_ to a given surface shape. When the corner vertices of the patch layout are specified by users on the given surface shape, we can use the method in (Born et al., 2021) to automatically generate a quad layout, which is compatible with our method. For generating a patch layout that allows \(n\)-sided polygonal patches, we can use tools such as LoopyCuts (Livesu et al., 2020) or Variational Shape Approximation (VSA) (Cohen-Steiner et al., 2004).
The _only_ requirement for our patch layout is that each patch should be a \(2\)-manifold with a _disc_ topology, and each arc should be uniquely defined by its vertices. Namely, the graph constructed from the corner points and arcs of the patch layout should be a _simple_ undirected graph. In this graph, each arc should connect two distinct corner points, thereby avoiding _self-loops_; and there should be a maximum of one arc between any pair of corner points, thereby precluding _multiple edges_. This requirement ensures correct linear interpolation in the feature space. A semantic segmentation result may contain self-loops or multiple edges and thus violate this requirement; hence, it cannot be directly used here. To address this, we have developed an interactive segmentation tool to cut _self-loops_ into valid arcs or insert new corner points to remove _multiple edges_.
In our experiments, we show our method can robustly work with patch layouts obtained from different methods. For example, models such as _Car_, _Boat_, and _Toothpaste Tube_ use layouts from an existing dataset (Bae et al., 2008; Pan et al., 2015). Models like _Pants_, _Skirt_, and _Jeans_ use layouts manually prepared by garment designers similar to (Pietroni et al., 2022). For the _Bunny_ and _Cat_ models, their patch layouts are automatically generated using LoopyCuts (Livesu et al., 2020). Subsequent results will demonstrate that our method is not sensitive to different patch layouts, whether they are prepared by LoopyCuts or VSA (Cohen-Steiner et al., 2004).
**Meshing neural parametric patches for visualization.** Since our representation is inherently a _parametric_ one, visualizing the surface geometry requires no more than tessellating the neural parametric surface patches. To this end, we sample both inside the parametric domain \(\Omega_{i}\) and along its boundary \(\partial\Omega_{i}\) to obtain a set of planar coordinates \(\{\mathbf{u}_{j}\}\). We tessellate these \(2\)D samples with a constrained Delaunay triangulation. This triangulation in the parametric domain is then used to tessellate \(\mathbf{x}_{j}\) in the resulting neural parametric patch in \(3\)D space. A subsequent local curvature-based edge-flipping algorithm is applied to avoid aliasing. The runtime cost of the entire visualization pipeline depends on the density of the output mesh. For example, the pipeline runs at approximately \(10\) frames-per-second (FPS) when generating a mesh surface with \(12,000\) vertices. Note that this visualization pipeline is no different from traditional pipelines for rendering parametric surfaces. This stands in contrast to pipelines designed for rendering implicit representations based on ray casting or Marching Cubes algorithms (Lorensen and Cline, 1998) that contour the zero-level set.
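A rough sketch of this meshing step is given below. Since each parametric domain \(\Omega_{i}\) is convex, a plain Delaunay triangulation of the 2D samples (SciPy) stands in for the constrained triangulation, and the curvature-based edge flipping is omitted; `interpolate_features` is the helper sketched earlier and `f_theta` is assumed to map a batch of feature vectors to 3D points.
```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_patch(f_theta, Z, poly, n_per_edge=16, n_interior=400, seed=0):
    """Sample the convex polygon `poly`, triangulate the 2D samples, and lift
    them to 3D through the mean value interpolation and the shared MLP."""
    n = len(poly)
    # boundary samples: uniform along each polygon edge
    t = np.linspace(0.0, 1.0, n_per_edge, endpoint=False)[:, None]
    bnd = np.concatenate([(1 - t) * poly[j] + t * poly[(j + 1) % n] for j in range(n)])
    # interior samples: rejection-sample the polygon (it lies inside [-1, 1]^2)
    rng = np.random.default_rng(seed)
    cand = rng.uniform(-1.0, 1.0, size=(4 * n_interior, 2))
    inside = Delaunay(poly).find_simplex(cand) >= 0
    uv = np.concatenate([bnd, cand[inside][:n_interior]])
    tri = Delaunay(uv).simplices                 # plain Delaunay is fine on a convex domain
    feats = np.stack([interpolate_features(u, poly, Z) for u in uv])
    return f_theta(feats), tri                   # 3D vertices and triangle indices
```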
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c c c c c c c c} \hline \hline & Boat & Car & TP-Tube & High heel & Bumpy & Cat & Fandisk & Sculpt & Kettle & Catcher & Shoe & Skull & Vase & Spoon & Pants & Skirt & Jeans \\ \hline Partitioning & O & O & O & O & A & A & A+M & M & M & M & M & M & A+M & M & M & M & M \\ \(|\mathcal{S}|\) & 33 & 36 & 24 & 11 & 59 & 46 & 14 & 10 & 16 & 10 & 13 & 15 & 28 & 6 & 4 & 4 & 4 \\ \(|\mathcal{Z}|\) & 44 & 43 & 24 & 16 & 58 & 45 & 24 & 24 & 27 & 20 & 30 & 26 & 45 & 5 & 13 & 9 & 12 \\ \(n_{max}\) & 8 & 5 & 5 & 5 & 7 & 5 & 9 & 9 & 9 & 10 & 18 & 13 & 11 & 3 & 6 & 5 & 6 \\ \(|n\geq 5|\) & 10 & 10 & 4 & 4 & 12 & 4 & 6 & 10 & 8 & 3 & 10 & 6 & 17 & 0 & 4 & 2 & 4 \\ \hline P2S (\(1\times 10^{-5}\)) & 0.261 & 0.299 & 0.266 & 0.762 & 1.347 & 0.619 & 0.572 & 1.255 & 0.462 & 0.267 & 0.983 & 0.570 & 1.130 & 0.444 & 1.351 & 4.705 & 3.207 \\ HD (\(1\times 10^{-5}\)) & 3.502 & 5.910 & 2.074 & 11.995 & 15.971 & 6.450 & 8.743 & 8.707 & 4.743 & 3.840 & 14.520 & 6.502 & 18.316 & 3.977 & 18.269 & 89.859 & 36.936 \\ NAE (deg) & 1.34 & 1.50 & 1.34 & 2.99 & 5.39 & 4.18 & 1.79 & 2.53 & 2.22 & 1.87 & 3.37 & 2.91 & 5.96 & 3.333 & 7.80 & 22.46 & 22.65 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **The statistics of each shape presented in the paper (the upper block) and the quantitative performance of our method on each shape (the bottom block). \(|\mathcal{Z}|\) and \(|\mathcal{S}|\) denote the number of vertices and the number of patches in the given patch partitioning of the shape. \(|n\geq 5|\) refers to the number of patches having \(5\) or more sides and \(n_{max}\) refers to the maximal sides of a patch in the given partitioning. _Partitioning_ indicates how the patch layout of each shape is generated, with O referring to _layout from original data benchmark_, \(M\) manually prepared_, and \(A\) automatically generated. For all performance metrics, smaller values are better.**
Figure 6: **A gallery of surface shapes represented by the proposed neural parametric surfaces.** The target surfaces are shown in grey, and the resulting surfaces are shown with different neural surface patches in different colors. The results show that our method can handle open surface boundaries (in _Car_, _Boat_, and _High heel_), sharp features (in _Boat_ and _Sculpt_), and non-manifold patches (in _Boat_). The bottom row shows that our representation can faithfully represent organic shapes, and can work with patch layouts produced by existing tools to produce high-quality results.
### Surface fitting with neural parametric surfaces
#### 4.2.1. Data preparation and metrics
We evaluated our proposed Neural Parametric Surfaces on various shapes (Bae et al., 2008; Bhatnagar et al., 2019; Pan et al., 2015; Zhu et al., 2020). All shapes are normalized such that they are centered at the origin and the maximal extent of the shape is \(2\). Table 1 summarizes the statistics of each shape, including the number of patches \(|\mathcal{S}|\), the number of vertices \(|\mathcal{Z}|\) of the feature complex, the number of \(n\)-sided patches where \(n\geq 5\), and the maximal \(n\) in a patch layout.
_Fitting accuracy_ was evaluated using three metrics:
1. A two-sided point-to-surface (P2S) distance that measures the average distance between the reconstruction and the target;
2. The two-sided Hausdorff distance (HD) that measures the largest discrepancy between the reconstructed and target shapes; and
3. Averaged angular (degree) error (NAE) between normal vectors of corresponding points (i.e. the closest points, as described in Sec. 3.2).
All reported metrics are computed with \(30,000\) points randomly sampled from both the target and the reconstructed surfaces.
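The sketch below shows one plausible way to compute the three metrics from two point samplings with matching normals, using SciPy k-d trees; the exact averaging conventions of the paper may differ.
```python
import numpy as np
from scipy.spatial import cKDTree

def angular_error(n_a, n_b):
    cos = np.einsum('ij,ij->i', n_a, n_b).clip(-1.0, 1.0)
    return np.degrees(np.arccos(cos))

def evaluate(recon_pts, recon_nrm, gt_pts, gt_nrm):
    """Two-sided P2S, Hausdorff distance (HD), and normal angular error (NAE)."""
    d_rg, i_rg = cKDTree(gt_pts).query(recon_pts)     # recon -> GT closest points
    d_gr, i_gr = cKDTree(recon_pts).query(gt_pts)     # GT -> recon closest points
    p2s = 0.5 * (d_rg.mean() + d_gr.mean())
    hd = max(d_rg.max(), d_gr.max())
    nae = 0.5 * (angular_error(recon_nrm, gt_nrm[i_rg]).mean()
                 + angular_error(gt_nrm, recon_nrm[i_gr]).mean())
    return p2s, hd, nae
```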
#### 4.2.2. Experimental results
Fig. 6 shows a gallery of the shapes represented by the proposed Neural Parametric Surface. Our Neural Parametric Surface can convert a surface shape to a seamless parametric surface representation given a layout partitioning the shape. The neural parametric surfaces are shown in colors with each color indicating a surface patch and the target surfaces in grey are provided next to the results.
For example, the _Car_ model shows the proposed representation retains the **smoothness** of the target car surface (shown in grey). In addition, our representation can model **non-manifold surface patches** and **open surfaces** well (e.g. the fin keel, i.e. the turquoise patch at the bottom of the _Boat_ model). It can reproduce **sharp creases** (e.g. the enclosed region of the _Toothpaste Tube_ model and the sharp edges in the _Sculpt_ model). It is also able to represent **thin geometry**, such as the thin slender _Spoon_ model or the slender heel of the _High heel_ model.
Such geometric features are commonly seen in applications related to CAD/CAM, reverse engineering, or product design. Although traditional parametric representations can handle such cases, they require a high level of design expertise to ensure seamless results, due to the topological constraints of the parametric domain.
Figure 7. **Modeling free-form surfaces.** The proposed neural parametric surface representation can fit free-form surface geometries with coarse patch layouts (using just \(4\) patches for the _Skirt_ front panels (a) and back panels (b), _Jeans_ (c), and _Pants_ (b)) while retaining the geometric details including wrinkles. Among them, the _Skirt_ and _Jeans_ are represented as point clouds. In each subfigure, the left is the target geometry to fit; the middle is the reconstructed surface represented by the Neural Parametric Surface from the same view of the target geometry; the right is the fitting result viewed from another viewpoint.
Such features are also very challenging to model with neural implicit representations. Our method, however, naturally supports seamless handling of such features, extending the domain of neural representations, and easing the modeling burden compared to traditional parametric approaches.
More results in the bottom row of Fig. 6 show that our representation can approximate organic objects at high fidelity. Among them, the patch layouts of the _Cat_ and the _Bunny_ are prepared by LoopyCuts. From the results, we see that our proposed representation can work along with existing layout generation methods to produce high-quality results.
**Modeling shapes with coarse patch layouts.** Our proposed neural parametric surface representation can also model _free-form shapes with very coarse patch layouts_. Both the _Skirt_ and _Jeans_ of Fig. 7, and the _Pants_ of Fig. 1 are modeled with just four surface patches. Despite the coarse patch layout, geometric details such as the wrinkles at the bottom of both shapes are well captured with a single neural parametric patch. Further, note how wrinkles are smoothly reconstructed across adjacent patch boundaries. This demonstrates the advantage of our proposed representation brought by the deep neural networks, avoiding local refinement required by traditional parametric surface representations that result in dense control meshes.
**Reconstructing incomplete shapes.** We also note that our approach can utilize the given topological partitioning of the target geometry to approximate the target geometry with strong resilience to _imperfect data_. For example, in Fig. 7, both the _Skirt_ and _Jeans_ models are point clouds from multiple scans (Zhu et al., 2020), which suffer from missing data. Our result can produce plausible outcomes by smoothly filling in the region where ground-truth data is missing.
**Fitting accuracy and runtime statistics.** Tab. 1 reports the fitting errors in terms of the three metrics introduced earlier. Our method achieved very high fitting accuracy as compared to existing neural patch-based approaches; see Tab. 2. The **training time** needed for the listed shapes ranges from \(\sim\)10 minutes (e.g. for the _Pants_) to around half an hour (e.g. for the _Bunny_ with the largest number of patches), which is comparable to similar methods. The **storage** that our representation takes is around 5.5 MB for all listed shapes, whereas a meshed model containing 200\(k\) vertices requires around 20 MB for storage. Compared with other neural approaches (c.f. Tab. 2), our representation is more compact. The modest storage footprint is owing to the fact that increasing the patch complexity incurs only a small additional storage cost for the feature complex, while the parameters used to represent the mapping function \(f\) stay unchanged. This shows that our method is scalable to very complicated shapes and thus can be used as a compression tool for surface geometry.
In summary, these experiments demonstrated that the Neural Parametric Surface is a flexible representation that can handle complicated geometry with a variety of geometric features. It can model a given shape using coarse patch layouts with high fidelity and is compact in terms of storage.
### Neural Parametric Surface for Sets of Shapes
Our proposed Neural Parametric Surface representation can learn a morphable shape space from a set of 3D shapes. In the following, we show several applications that demonstrate the capability and potential of this proposed representation.
**Datasets.** We utilize two datasets for our experiments: the first consists of various types of T-shirts from a garment dataset (Wang et al., 2018), and the second includes \(10,000\) randomly sampled hands from a large-scale hand dataset (Gao et al., 2022). For the _garment_ dataset, we align all 3D meshes through their collars by translating the meshes so that the centers of collars coincide with the origin. Then, we scale these meshes by a constant factor to normalize the meshes. Similarly, for the _hand_ dataset, we align all meshes to their roots and scale them with a constant factor to normalize their size.
#### 4.3.1. Shape interpolation in learned latent space
We show a shape interpolation application using the latent space learned from the garment dataset. We employ an auto-decoding approach similar to that in (Park et al., 2019). Specifically, each garment is represented by a unique, learnable latent code \(\mathbf{c}_{m}\). A broadcast network \(h_{\phi}\) is employed to map this latent code \(\mathbf{c}_{m}\) to a feature complex \(C_{m}\). After this broadcast decoding step, we feed samples from the feature complex \(C_{m}\) to the MLP-encoded map \(f_{\theta}\) to produce a 3D reconstruction that fits the garment corresponding to \(\mathbf{c}_{m}\). This entire pipeline (shown in Fig. 5) is trained end-to-end by minimizing the loss defined in Eq. 10.
After training the networks \(h\) and \(f\), we obtain a codebook containing latent codes that represent various garment shapes. To perform interpolation between two specific garment shapes, we first randomly select two latent codes from the codebook, then generate a sequence of interpolated latent codes using linear interpolation between the two selected codes. Next, these interpolated latent codes are fed into the pipeline comprising \(h\) and \(f\) to generate a series of 3D garment geometries. Two sequences of interpolated garments are shown in Fig. 8(a) and (b). The interpolation smoothly morphs the two end shapes without introducing hollowing or undesirable artifacts (see the supplementary video for detailed results). This shows that the proposed method can learn a meaningful morphable shape space for design exploration.
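In code, the interpolation amounts to blending two latent codes and pushing each blend through the broadcast decoder and the geometric decoder; a minimal sketch follows, where `decode_surface` is a hypothetical meshing helper (e.g. along the lines of the patch-meshing sketch above).
```python
import torch

@torch.no_grad()
def interpolate_shapes(h_phi, f_theta, c_a, c_b, steps=8):
    """Linearly interpolate two learned shape codes and decode each blend."""
    shapes = []
    for t in torch.linspace(0.0, 1.0, steps):
        c = (1.0 - t) * c_a + t * c_b              # interpolated latent code
        Z = h_phi(c)                               # vertex features of the complex
        shapes.append(decode_surface(f_theta, Z))  # hypothetical meshing helper
    return shapes
```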
#### 4.3.2. Editing hand meshes with sparse handles
We showcase another application of our proposed representation by deforming a hand mesh. Specifically, based on a learned latent space of hand shapes, we allow interactive editing on one or more fingertips. The training pipeline is identical to the one described previously, and is trained by minimizing the loss function given in Eq. 10, but applied to a dataset of hand shapes.
At the runtime stage, we allow users to specify the target positions of the fingertips. Each fingertip is represented as the center of the fingertip region. To deform the hand mesh, we use the source and target positions of the fingertips as deformation handles, and solve the deformation problem as a test-time optimization problem:
\[L=\operatorname*{arg\,min}_{\mathbf{c}_{m}}\|\text{cos}(\{\mathbf{x}\}_{i})-\mathbf{p}_{i}\|+10^{-3}\|\mathbf{c}_{m}\|_{2}, \tag{11}\]
where \(\{\mathbf{x}\}_{i}\) denotes the sample points from the neural parametric patch corresponding to the \(i\)-th fingertip and \(\text{cos}(\{\mathbf{x}\}_{i})\) denotes the
center of this region; \(\mathbf{p}_{i}\) is the user-specified target position; and \(\mathbf{c}_{m}\) denotes the trainable latent code that defines the hand pose. The only variable is \(\mathbf{c}_{m}\), while all the networks \(h\) and \(f\) are fixed.
We adopt the ADAM optimizer (with default parameters) and a constant learning rate of 0.005 to solve the above problem (Eq. 11) at an interactive rate (15 Hz). Adding the regularization term (the second term in Eq. 11) can effectively enforce an intuitive deformation. We show three exemplary results of hand meshes that are edited by posing their fingertips in Fig. 9. In the first two examples Fig. 9(a) and (b), the little finger and the thumb are specified to bend towards the palm, and the deformation results of the hand shapes are intuitive. We also demonstrate the robustness of this approach by illustrating how it handles the bending and over-stretching of a finger, as shown in Fig. 9(c). Even when the user-specified target position extends beyond the finger's natural range of motion, our pipeline still produces a plausible deformation, guiding this (middle) finger toward the target position without introducing undesirable artifacts.
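A possible form of this test-time optimization is sketched below in PyTorch. The helper `fingertip_samples`, which gathers the patch samples \(\{\mathbf{x}\}_{i}\) of an edited fingertip from the decoded complex, is a placeholder; only the latent code receives gradients while the networks stay frozen.
```python
import torch

def edit_hand(h_phi, f_theta, c_init, handles, iters=200, lr=5e-3):
    """Optimize the latent code so fingertip centers reach their targets (Eq. 11).
    `handles` is a list of (fingertip_id, target_position) pairs."""
    c = c_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([c], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        Z = h_phi(c)                                     # shape-specific complex
        loss = 1e-3 * c.norm()                           # latent regularization
        for tip_id, p_target in handles:
            x = f_theta(fingertip_samples(Z, tip_id))    # samples {x}_i on that fingertip
            loss = loss + (x.mean(dim=0) - p_target).norm()  # center-of-region term
        loss.backward()
        opt.step()
    return c.detach()
```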
#### 4.3.3. Fitting imperfect point cloud data
Once trained on a large set of shapes, our Neural Parametric Surface can be used to generate a surface from imperfect point cloud data without needing the patch layout information. We showcase this application on the hand dataset. To prepare the imperfect point cloud data for a given hand mesh, we introduce two types of imperfections: one arising from noisy scans and the other from partial observations due to a single-view scan. To simulate the noisy point cloud, we perturb the surface sample points and their normals on a ground-truth (GT) hand mesh by adding Gaussian noise \(\mathcal{N}(0,0.01^{2})\). To replicate a single-view scan, we position a viewpoint one unit away from the origin, facing the center of the hand's palm, and then employ a simple strategy to filter out sample points whose normal directions form acute angles with the viewing direction.
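The back-face filtering used to simulate a single-view scan can be written in a few lines; the NumPy sketch below is our own illustration of that strategy.
```python
import numpy as np

def single_view_subset(points, normals, viewpoint):
    """Keep only points facing the scanner: drop samples whose unit normals form
    an acute angle with the per-point viewing direction (back-facing points)."""
    view_dir = points - viewpoint
    view_dir /= np.linalg.norm(view_dir, axis=1, keepdims=True)
    acute = np.einsum('ij,ij->i', normals, view_dir) > 0.0
    return points[~acute], normals[~acute]
```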
Figure 8. **Interpolated sequences between randomly selected pairs of shapes.** (a) and (b) show interpolated sequences between two pairs of garments (shown at the left and right ends in each individual row), which have different styles and sizes. The garments are aligned to their collar to better visualize the size change. (c) and (d) show the interpolated sequences between two pairs of hands with different pose codes. Please refer to the supplementary video for the entire sequence of the interpolated results.
We train a PointNet-based predictor (Qi et al., 2017) on the hand dataset to predict the latent shape code \(\mathbf{c}_{m}\) from a given imperfect point cloud data. During runtime, we first obtain an initial guess of the shape code \(\mathbf{c}\) of a given point cloud data. Then, we perform a test-time optimization to optimize the shape code \(\mathbf{c}\) so the generated neural parametric surface fits the given point cloud. The optimization problem is similar to Eq. 11 with slight modification as follows:
\[L=\operatorname*{arg\,min}_{\mathbf{c}}\sum_{(\mathbf{x}_{i},\mathbf{p}_{i})}L _{\text{surface}}+L_{\text{normal}}+10^{-3}\|\mathbf{c}\|_{2}, \tag{12}\]
where the pairs of \((\mathbf{x}_{i},\mathbf{p}_{i})\) are corresponding point pairs derived as described for Eq. 4 with an additional normal filtering scheme to filter out mismatches. Specifically, we filter out the point pairs whose cosine similarity values between the normal vectors at the respective paired points are larger than 0.7. The fitting results are shown in Fig. 10. Our proposed representation can robustly handle imperfect data from single-view scans (Fig. 10(a)) and noisy inputs (Fig. 10(b)). This validates our pipeline based on the proposed Neural Parametric Surfaces can learn a plausible shape space useful for the deformable template fitting task.
#### 4.3.4. Mapping hand poses to hand meshes
Lastly, we demonstrate that our Neural Parametric Surface representation can function as a _mapping mechanism_ to translate from abstract, physically meaningful concepts such as hand poses, to complex surface geometries like hand meshes. We adopt the same training strategy as used in the previous applications; the _only_ difference is to replace the learnable latent codes with the pose vectors of the hand meshes in the dataset. To showcase the capability of our approach, we randomly select a pair of hand pose vectors from the dataset and calculate interpolated pose vectors through linear interpolation.2 Our pipeline takes as input a valid pose vector and generates a hand mesh in the Neural Parametric Surface representation through the broadcasting network \(h\) and then the geometric decoder \(f\). Examples of the generated hand meshes are shown in Fig. 8(c) and (d). These results demonstrate the proposed pipeline can serve as a pose-to-shape mapping function for explicit shape editing.
Footnote 2: Since the pose parameters are represented as the rotational angles, we rule out the sequences that contain invalid poses from the interpolation.
### Comparison with existing works
#### 4.4.1. Comparison with neural patch-based methods
Most existing patch-based neural representations build upon a parametric atlas and model a given surface with a set of unorganized, overlapped patches to reconstruct the geometry. Therefore, they are not directly comparable to our representation that aims at not only reproducing the geometry with \(G^{0}\)-continuity but also maintaining the structural information inherent to the given semantics. We compare our patch-based neural representation with Deng et al. (2020), which is most relevant to this work. Building on AtlasNet (Groueix et al., 2018), Deng et al. (2020) improves the differential surface representation (DSP) (Bednarik et al., 2020) with a stitching loss term to minimize the gap between spatially close patches. We denote this method as _DSP-BetterStitch_ in the following.
To establish a fair comparison, we provide to _DSP-BetterStitch_ both the 3D geometry and the corresponding partitioning and require each of its parametric patches to fit one of the partitioned patches. We denote this modified version as _DSP-BetterStitch\({}^{*}\)_.
**Comparison results.** Fig. 11 compares our method with _DSP-BetterStitch_ and _DSP-BetterStitch\({}^{*}\)_ (the adapted version) on two shapes: the _Fandisk_ and _Toothpaste Tube_. Both shapes have multiple rectangular patches in the given partitioning (8 out of 14 for _Fandisk_ and 12 out of 24 for _Toothpaste Tube_). _Fandisk_ also has several 9-sided patches.
_DSP-BetterStitch\({}^{*}\)_ can produce semantically aligned results given the surface partitioning as supervision. However, the seams between neighboring patches are not well organized to stitch the patches. Instead, they often intersect with each other (as shown in _Fandisk_) or severely overlap (as shown in _Toothpaste Tube_). These unpleasing results can be observed even for the patches with rectangular partitioning (see _Toothpaste Tube_), and could lead to undesirable holes around the intersection of multiple patches in the target shape.
| Method | Shape | P2S (\(\times 10^{-3}\)) | HD (\(\times 10^{-3}\)) | NAE (deg) | NN Storage |
| --- | --- | --- | --- | --- | --- |
| DSP-BetterStitch\({}^{*}\) | Fandisk | 5.44 | 58.53 | 14.32 | 22.3 MB |
| DSP-BetterStitch\({}^{*}\) | T-Tube | 3.39 | 55.00 | 7.39 | 38.2 MB |
| DSP-BetterStitch | Fandisk | 5.34 | 67.99 | 14.29 | 22.3 MB |
| DSP-BetterStitch | T-Tube | 1.73 | 29.13 | 5.90 | 38.2 MB |
| DGP | Fandisk | 1.13 | 56.17 | 7.56 | 115 MB (67 patches) |
| DGP | T-Tube | 0.41 | 20.19 | 3.53 | 80.8 MB (47 patches) |
| Ours | Fandisk | **0.57** | **8.74** | **1.79** | 55.3 MB |
| Ours | T-Tube | **0.26** | **2.07** | **1.34** | 55.3 MB |

Table 2. **Fitting error comparison between our and other methods.** DSP-BetterStitch refers to (Deng et al., 2020), DGP refers to (Williams et al., 2019), and DSP-BetterStitch\({}^{*}\) refers to results derived with (Deng et al., 2020) supervised by the given patch decomposition (for a fair comparison with our method).
Figure 9. **Editing hand meshes with sparse handles.** The top row shows the rest pose of each example, and the bottom row shows the deformation results. The red dots represent the target positions of the fingertips. The root of the hand is also fixed during the test-time optimization. Example (c) shows an extreme case where the target position of the middle fingertip is set to be far away from the hand.
results can be observed even for the patches with rectangular partitioning (see _Toothpaste Tube_), and could lead to undesirable holes around the intersection of multiple patches in the target shape.
For reference, we also provide the results of the original implementation of _DSP-BetterStitch_ using the same number of patches (12 for _Fandisk_ and 24 for _Toothpaste Tube_). While the collection of parametric patches can capture the rough geometry of the given shapes, this method struggles to reproduce meaningful features present in the target shapes (see the zoom-in view of _Toothpaste Tube_, where the feature with sharp creases is missing).
Unlike _DSP-BetterStitch_ and _DSP-BetterStitch\({}^{*}\)_, our method can reconstruct the given shape following its original patch partitioning. Furthermore, it can nicely preserve geometry smoothness and sharp crease lines.
#### 4.4.2. Comparison with traditional parametric representations
We discuss the difference between our representation and the traditional parametric representations. In particular, we focus on the Hermite Coons patches that are defined on a _rectangle_ domain, and T-spline surfaces that support T-junctures and local refinement.
We show the results of the different methods in Fig. 12. The left part compares Hermite Coons patches with our result; our representation better models the wrinkles and the overall shape of the skirt. Hermite Coons patches construct the surface from the partial derivatives along the boundary curves, so the wrinkles are merely extrapolated from the boundary information and the patches cannot capture the details of the underlying geometry.
T-spline surfaces are flexible as they can accommodate T-junctures and thus the local refinement need not go through the entire parametric domain. We show additional comparisons in Fig. 13 where our method can achieve even more _flexible_ patch partitioning of the given surface, using fewer (10 and 13) patches to fit respective T-spline surfaces of _Catcher_ with 32 surface patches and _Shoe_ with 76 surface patches, and at the same time retain _faithful_ approximations of the surfaces modeled by the T-spline surfaces. Each color indicates a surface patch in our neural parametric surface.
### Ablation study
**Enforcing smoothness across adjacent patches**. We examine the effects of incorporation of the smoothness term (Eq. 6) on shared boundaries between adjacent patches, as shown in Fig. 14. To identify which portions of a shared boundary curve are smooth, we calculate the dihedral angles for each mesh edge along that boundary. When the dihedral angle exceeds a set threshold (in this work, \(\pi/4\)), the corresponding section is deemed smooth; otherwise, it is considered sharp. We aggregate smooth edges from all shared boundaries and apply the smoothness term. Our results in Fig. 14 demonstrate that this simple technique effectively enforces empirical \(G^{1}\) continuity between adjacent patches. Although our method can only guarantee \(G^{0}\)-continuity along the patch boundary, the addition of the smoothness term (Eq. 6) yields continuous normal transition when needed and hence, closely approximates \(G^{1}\) continuity.
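A minimal sketch of the edge classification described above is given below. It assumes we already have, for every mesh edge on a shared boundary, the normals of its two adjacent faces; the convention that the dihedral angle is measured between the two faces (so a flat join is close to \(\pi\)) is an assumption, with the \(\pi/4\) threshold taken from the text.

```python
import numpy as np

def dihedral_angle(n1, n2):
    """Interior angle between two adjacent faces, computed from their unit normals.

    A flat join gives an angle close to pi; a sharp crease gives a small angle.
    """
    cos = np.clip(np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2)), -1.0, 1.0)
    return np.pi - np.arccos(cos)

def smooth_edge_indices(edge_face_normals, threshold=np.pi / 4):
    """Indices of shared-boundary edges treated as smooth, i.e. where L_smooth is applied."""
    return [i for i, (n1, n2) in enumerate(edge_face_normals)
            if dihedral_angle(n1, n2) > threshold]
```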
**Robustness to different patch layouts.** We examine the impact of different patch layouts on the fitting performance of our Neural Parametric Surface representation. Specifically, on the _Cat_ model, we compare a patch layout generated by LoopyCuts (with 46
Figure 10. **The neural parametric surface augmented with a learned latent shape applied to reconstruct point cloud data without layout information.** In the 1st row, blue points are the input partial (a) or noisy (b) point clouds; the ground-truth (GT) models are provided in grey for reference. (Note: GT hands in grey are not used for reconstruction; they are only shown as a reference for better visualization.) The 2nd row shows the fitting results obtained by our method, with different colors indicating different surface patches. In most cases, our model reconstructs the ground-truth poses with high quality.
patches) to three patch layouts generated by VSA [Cohen-Steiner et al. 2004] (containing 30, 40, and 50 patches, respectively). This comparison, quantitatively summarized in Tab. 3 and qualitatively illustrated in Fig. 15, shows that our method is relatively insensitive to the patch layout and can produce fitting results with similar performance. We observed a slight performance improvement in the P2S metric when using layouts with more patches, specifically the one generated by LoopyCuts or the 50-patch layout by VSA. Conversely, the Hausdorff distance (HD) varies negligibly with the number of patches. In terms of Normal Angular Error (NAE), the layout generated by LoopyCuts slightly outperforms those created by VSA.
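For concreteness, the snippet below shows one plausible way to compute the three fitting metrics reported here (P2S, HD, NAE) from dense point and normal samples of the fitted and ground-truth surfaces. The sampling procedure and the use of unsigned normals for NAE are assumptions, not necessarily the exact protocol used for the tables.

```python
import numpy as np
from scipy.spatial import cKDTree

def p2s(pred_pts, gt_pts):
    """Mean distance from predicted surface samples to their nearest ground-truth samples."""
    d, _ = cKDTree(gt_pts).query(pred_pts)
    return d.mean()

def hausdorff(pred_pts, gt_pts):
    """Symmetric Hausdorff distance between the two sample sets."""
    d_pg, _ = cKDTree(gt_pts).query(pred_pts)
    d_gp, _ = cKDTree(pred_pts).query(gt_pts)
    return max(d_pg.max(), d_gp.max())

def nae(pred_pts, pred_normals, gt_pts, gt_normals):
    """Mean angle (degrees) between predicted normals and the normals of the closest GT samples."""
    _, idx = cKDTree(gt_pts).query(pred_pts)
    cos = np.clip(np.abs(np.sum(pred_normals * gt_normals[idx], axis=1)), 0.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```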
**Dimension of the embedding space for the feature complex.** Through experiments, we observed that networks using a low-dimensional feature space (\(D=3\)) cannot faithfully fit the given shape. We consider two settings: 1) the 3-dimensional embedding space is learnable, as is done throughout the paper, and 2) the anchor points \(\mathbf{p}_{k}\) in Eq. 3 are used as the vertices of the complex, thus embedding it in 3D. In the first scenario, the network model yields self-intersected
Fig. 11. **Qualitative comparison among Deng et al. [2020] (DSP-BetterStitch), its modified version (DSP-BetterStitch\({}^{*}\)), and our results. DSP-BetterStitch\({}^{*}\) (middle row) failed to generate connected surface geometry even for the rectangular patches (upper part of the _Toothpaste Tube_). The use of a rectangular domain also makes it difficult to handle \(n\)-sided patches (see the zoom-in view). Our neural complex map can faithfully reproduce the seamless surface geometry with the surface partitioning preserved.**
Fig. 12. Comparison between the Skirt models produced by Coons patches (a) and by our method (b).
patches, struggling to fit the target surface, as shown in Fig. 16. In the second scenario, since the anchor points are given as the vertices of the complex, the network model can produce a better approximation of the target shape than the previous setting. However, due to the lack of extra freedom, the resulting surface patches fail to reproduce highly curved wrinkles, as compared to our design choice (right). Incorporating the anchor points (\(\mathbf{p}_{k}\)) as part of the feature complex seems a feasible choice; however, note that this design will limit the proposed representation to _only_ the surface fitting application where anchor points are provided _a priori_. For applications like linearly interpolating the latent shape space or reconstructing imperfect data, the anchor points are _unknown_; interpolating the anchors linearly between two key-frame shapes can incur undesired artifacts. Therefore, it is more desirable that the feature complex can be learned through optimization.
**Network design.** We employ an MLP network with \(L=12\) layers, each having \(N=256\) neurons. The feature complex is embedded in a \(D\)-dimensional space, where \(D=128\). Here we evaluate these design choices (denoted as the _default setting_) via an ablation study. We
Figure 14: **Ablation on the loss term \(L_{\text{smooth}}\) (Eq. 6).** Partitionings of given shapes are given in (a) and (d). Without \(L_{\text{smooth}}\), discontinuous joins between two adjacent patches are observed (see the enclosed regions in (b, e)). Incorporating \(L_{\text{smooth}}\) improves the normal continuity across the patch boundary, and thus approximates \(G^{1}\)-continuity along the boundary (c, f).
Figure 13: **Comparisons between the results produced by T-splines (top row) and by our method (bottom row).** The T-spline surfaces of the _Catcher_ model (a) and the _Shoe_ model (b) contain 32 and 76 patches, respectively. Our neural parametric surfaces can represent the _Catcher_ (c) with 10 patches and the _Shoe_ (d) with 13 patches.
| Layout | \(\lvert\mathcal{S}\rvert\) | P2S (\(\times 10^{-3}\)) | HD (\(\times 10^{-3}\)) | NAE (deg) |
| --- | --- | --- | --- | --- |
| VSA | 30 | 0.793 | 6.981 | 4.93 |
| VSA | 40 | 0.703 | 5.982 | 4.93 |
| VSA | 50 | 0.667 | 5.992 | 4.80 |
| LoopyCuts | 46 | 0.619 | 6.450 | 4.18 |
Table 3: Our Neural Parametric Surface can fit the same shape robustly given different patch layouts. \(|\mathcal{S}|\) is the number of patches in the given layout.
Figure 15: **Robustness of our method to various input patch layouts.** The fitting results of the _Cat_ model are produced with different patch layouts. (a), (b), and (c) use the patch layouts automatically generated by VSA [12] with 30, 40, and 50 patches for the fitting. (d) adopts the patch layout generated by LoopyCuts [13]. Visually, different fitting results do not vary much, showing the robustness of our method to various input patch layouts regarding the same shape.
compare the default setting to a setting with fewer layers (\(L=6\)) and two settings with smaller feature space dimensions (\(D\in\{32,64\}\)). The results are presented in Table 4. Our ablation study involves two shapes (_Fandisk_, a CAD model, and _Pants_, a free-form model). We observed that different settings can obtain similar results (both in terms of the geometry and of the performance metrics) on _Fandisk_, which consists of smooth surface patches. On the other hand, we can see that a network with high capacity (with a larger \(D\) or \(L\)) can yield better results in terms of fitting errors on _Pants_. We also tested a wider network with \(N=512\). While the results (again both geometry-wise and metric-wise) are similar, it requires more than 20 MB to store the trained network. For a consistent experimental setting, we therefore chose to use the current setting to produce all results throughout the paper.
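The default network configuration above can be written down as a small factory function; this is only a sketch of the stated sizes (\(L=12\), \(N=256\), \(D=128\)), and the activation function and exact input layout are assumptions.

```python
import torch.nn as nn

def make_decoder(num_layers=12, width=256, feature_dim=128, uv_dim=2, out_dim=3):
    """MLP mapping an interpolated D-dimensional feature plus a parametric coordinate to a 3D point."""
    layers, in_dim = [], feature_dim + uv_dim
    for _ in range(num_layers - 1):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, out_dim))
    return nn.Sequential(*layers)
```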
### Limitations
First, our algorithm requires the input shape to come with a partitioning. Although our design can handle surfaces with various layouts and can flexibly handle \(n\)-sided topological patches, it is currently unable to handle patches with holes. We now process patches with holes by further partitioning them into simpler sub-patches. We will explore other strategies to overcome this limitation so that a wider range of automatic segmentation tools (e.g., semantic segmentation) can be directly used to generate a layout of a shape for our pipeline.
Second, in terms of model sizes, although like other neural representations, our neural parametric surface is more compact than point clouds and meshes, its storage requirements are higher than conventional parametric representations such as splines. The enhanced modeling flexibility comes at the expense of a larger number of (network) parameters. Therefore, an interesting direction to explore is the development of a more compact network representation to reduce the model size of the neural parametric surface. We will explore network pruning or other optimization techniques in the near future to simplify the network architecture or sparsify network parameters, without compromising model accuracy. It is also worth noting that the neural parametric surface is a compact representation of a shape space because we can use just one network to encode and decode all the shapes (10K hands or 4500 garments) within this collection.
Third, the training and optimization process for a neural parametric surface currently takes approximately 5-30 minutes. Improving the efficiency of shape fitting and reconstruction would be highly beneficial, and it could open up new possibilities for applications like interactive design and shape editing. We will explore possible schemes to accelerate the training process or pre-train the model, to support Neural Parametric Surfaces to be updated at an interactive rate.
Finally, for highly concave surface patches, parameterizing them on an \(n\)-sided convex polygon might occasionally produce foldover triangles, although this rarely happens in practice given the expressiveness of deep neural networks. We observed a few instances of self-intersecting triangles in the concave region of the top patch (highlighted in brown) of the _Fandisk_; see Fig. 11 for this patch. To mitigate this issue, we can segment the concave region into multiple convex ones, which helps reduce the occurrence of foldover triangles. However, it is worth exploring a more systematic approach, for example, allowing the corners of the polygonal region to be optimized during training, or employing more advanced interpolation schemes that handle concave regions effectively.
## 5. Conclusions
We presented _Neural Parametric Surface_, the first piecewise parametric surface representation equipped with deep neural networks. The key components are a learnable feature complex and a shared mapping function implemented as an MLP network. Neural Parametric Surfaces extend the range of shapes that can be effectively modeled as learned representations, and provide a simplified means of representing complex models with continuous and highly flexible patch layouts, compared to prior parametric approaches. This representation can also be used to learn a latent space from a collection of shapes for different downstream tasks. We have demonstrated its
| Method | Shape | P2S (\(\times 10^{-3}\)) | HD (\(\times 10^{-3}\)) | NAE (deg) | NN Storage |
| --- | --- | --- | --- | --- | --- |
| \(D=32\) | Fandisk | 0.585 | 7.990 | 1.98 | 5.4 MB |
| \(D=32\) | Pants | 1.520 | 27.38 | 7.56 | 5.4 MB |
| \(D=64\) | Fandisk | 0.832 | 11.711 | 2.14 | 5.4 MB |
| \(D=64\) | Pants | 1.470 | 20.433 | 7.68 | 5.4 MB |
| \(L=6\) | Fandisk | 0.523 | 9.586 | 2.21 | 2.2 MB |
| \(L=6\) | Pants | 1.360 | 25.231 | 7.57 | 2.2 MB |
| Default | Fandisk | 0.572 | 8.743 | 2.06 | 5.5 MB |
| Default | Pants | 1.351 | 18.269 | 7.36 | 5.5 MB |
Table 4. **Ablation study on network design.** Our default setting uses an MLP network with 12 layers (\(L\)), each having 256 neurons (\(N\)), and the dimension of the feature space is 128 (\(D\)). The performance metrics show that the differences on _Fandisk_, a CAD model, are marginal. However, better performance is attained with a more powerful setting on the _Pants_, a free-form shape with highly curved features.
Figure 16. **Ablation on the different dimensions of learnable space to embed the feature complex.** Compared with the setting \(D=3\) learnable or \(D=3\) fixed, our design choice (right) provides more degrees of freedom and thus enhances the overall expressiveness of the neural parametric surfaces, allowing them to effectively reconstruct the wrinkles.
usefulness for shape interpolation, editing, and reconstruction from limited or noisy data.
|
2309.16831 | Propagation and Attribution of Uncertainty in Medical Imaging Pipelines | Uncertainty estimation, which provides a means of building explainable neural
networks for medical imaging applications, has mostly been studied for single
deep learning models that focus on a specific task. In this paper, we propose a
method to propagate uncertainty through cascades of deep learning models in
medical imaging pipelines. This allows us to aggregate the uncertainty in later
stages of the pipeline and to obtain a joint uncertainty measure for the
predictions of later models. Additionally, we can separately report
contributions of the aleatoric, data-based, uncertainty of every component in
the pipeline. We demonstrate the utility of our method on a realistic imaging
pipeline that reconstructs undersampled brain and knee magnetic resonance (MR)
images and subsequently predicts quantitative information from the images, such
as the brain volume, or knee side or patient's sex. We quantitatively show that
the propagated uncertainty is correlated with input uncertainty and compare the
proportions of contributions of pipeline stages to the joint uncertainty
measure. | Leonhard F. Feiner, Martin J. Menten, Kerstin Hammernik, Paul Hager, Wenqi Huang, Daniel Rueckert, Rickmer F. Braren, Georgios Kaissis | 2023-09-28T20:23:25Z | http://arxiv.org/abs/2309.16831v1 | # Propagation and Attribution of Uncertainty in Medical Imaging Pipelines
###### Abstract
Uncertainty estimation, which provides a means of building explainable neural networks for medical imaging applications, has mostly been studied for single deep learning models that focus on a specific task. In this paper, we propose a method to propagate uncertainty through cascades of deep learning models in medical imaging pipelines. This allows us to aggregate the uncertainty in later stages of the pipeline and to obtain a joint uncertainty measure for the predictions of later models. Additionally, we can separately report contributions of the aleatoric, data-based, uncertainty of every component in the pipeline. We demonstrate the utility of our method on a realistic imaging pipeline that reconstructs undersampled brain and knee magnetic resonance (MR) images and subsequently predicts quantitative information from the images, such as the brain volume, knee side or patient's sex. We quantitatively show that the propagated uncertainty is correlated with input uncertainty and compare the proportions of contributions of pipeline stages to the joint uncertainty measure.
Keywords:uncertainty propagation uncertainty quantification Monte Carlo sampling
## 1 Introduction
Deep learning has become the state-of-the-art tool for the reconstruction, segmentation and interpretation of medical images. When applied to clinical practice, multiple deep learning models are commonly combined in a cascade of tasks across the imaging pipeline. Hereby, the output of an upstream model is subsequently used as input of a downstream model. For example, deep learning may be used to first reconstruct magnetic resonance (MR) images from raw k-space data before the reconstructed images are interpreted by another algorithm for
signs of disease. At the same time, the application of deep learning models in clinical practice requires an estimate of their uncertainty. Ideally, the algorithm informs its user about unsure predictions in order to prevent harmful medical decisions based on incorrect predictions [31]. Many solutions to estimate the uncertainty of a single deep learning model have been introduced [4, 7, 14].
Integrating uncertainty estimation with imaging pipelines consisting of cascading deep learning models comes with additional challenges. The uncertainty of upstream models directly affects the output of downstream models. The _propagation of uncertainty_ through the cascade has to be explicitly modeled in order to obtain a _joint_ uncertainty measure for the entire pipeline. This also allows for _attribution_ of the total uncertainty to the pipeline's individual components. To address these unmet needs, we make the following contributions in this work:
* We propose a novel method to propagate uncertainty through a pipeline with multiple deep learning components using Monte Carlo sampling.
* The proposed strategy allows the calculation of a joint uncertainty measure for the whole pipeline, and the attribution of the contributions to the pipeline's individual models for both classification and regression tasks.
* The utility of the proposed strategy is demonstrated on realistic medical image processing pipelines, in which the upstream models reconstruct magnetic resonance (MR) images from undersampled k-space data with varying, controllable amounts of aleatoric uncertainty. Different downstream models predict the brain volume, the knee side or the patient's sex. The code is available at github.com/LeonhardFeiner/uncertainty_propagation.
## 2 Related Work
In general, two sources of uncertainty can be distinguished: _epistemic_ (or model) uncertainty and _aleatoric_ uncertainty, i.e. noise and missing information in the data [14]. Recently, Bayesian methods have been developed to estimate epistemic uncertainty in machine learning models, such as Dropout during inference [7], learning weight distributions using backpropagation [4], and model ensembling [15]. To estimate the aleatoric uncertainty, previous works have suggested modeling the deterministic network output and intermediate activation functions by distributions [8], to perform test-time augmentation [32] or estimating the mean and variance of the target distribution [21]. Shaw et al. separate aleatoric uncertainty sources by removing components during training [28].
Uncertainty estimation has been applied to various tasks in medical image processing. In MR image reconstruction, pixel-wise epistemic uncertainty was estimated by drawing model parameters from a learned multivariate Gaussian distribution [20] or applying posterior sampling using Langevin Dynamics with a deep generative prior [11]. Another approach used Monte Carlo dropout and a heteroscedastic loss to model aleatoric and epistemic uncertainty [27]. Many works have evaluated uncertainty in medical image classification [10, 13] and regression [6, 16]. Uncertainty estimation has also been integrated with models
for MR image super-resolution [29] or medical image registration [5]. However, all these works are limited to the uncertainty estimation of a single model and do not consider a cascade of models across a typical imaging pipeline.
Techniques for uncertainty propagation include Monte Carlo sampling [1, 12, 32], unscented transforms [2, 22], linearizing the non-linearities of the network partially [3] or fully [25, 30], as well as performing assumed density filtering [8, 9]. They estimate uncertainty by assuming constant image noise as input uncertainty [3, 8, 12, 30], generating the uncertainty within the model layers [8, 9, 25, 30], interpreting augmentations as distribution [32], or using the output of classical language recognition models [1, 2, 22]. None of these works use a pipeline of deep learning models or combine the predicted uncertainty of multiple models into a joint uncertainty metric.
The following works have specifically investigated uncertainty estimation in medical imaging pipelines [17, 18, 23]. They use a cascade of models that each output uncertainty estimations in addition to their prediction. However, their methods cannot quantify the influence of the uncertainty of upstream tasks on the uncertainty of the downstream task. This is because their methods concatenate uncertainty maps and the prediction as input channels for downstream models. Consequently, it is impossible to attribute the output uncertainty to either the input uncertainty and individual components of the pipeline. Revealing this dependence would require the propagation of probability distributions through the network, which we propose in our work.
## 3 Methods
Our novel technique for propagation of aleatoric uncertainty can be applied to a model cascade of arbitrary length. For simplicity, we here limit the explicit presentation of our method to one upstream model \(g\) and one downstream model
Figure 1: An example of an imaging pipeline consisting of an upstream MR image reconstruction model \(g\), and a downstream regression model \(f\). Both models predict a measure of aleatoric uncertainty. Our method allows for the propagation of the mean and variance outputs of the upstream model through the downstream regression model.
\(f\). The latter uses the output of \(g\) as input (see Figure 1). In the following, \(\mathbf{z}\) denotes the input data of the pipeline, \(\mathbf{x}\) expresses the random variable of possible intermediate outputs of the upstream model, and \(y\) is a random variable of possible outputs of the entire pipeline. Without loss of generality, we assume that \(\mathbf{z}\) and \(\mathbf{x}\) are vectors (bold), whereas \(y\) is a single variable. Both \(\mathbf{x}\) and \(y\) are associated with a single data point \(\mathbf{z}\) of the dataset, whereas \(p(\mathbf{x}|\mathbf{z})\) and \(p(y|\mathbf{z})\) are the distributions of the random variables that are predicted by the pipeline up to a certain model stage. We assume the distribution of \(p(y|\mathbf{x})\) to be normal in the case of regression and categorical in the case of classification. We estimate the mean and variance of the target normal distributions using the technique by Nix et al. [21], whereas the parameters of categorical distributions are given by softmax outputs. We choose the variance or entropy of \(y\), respectively, as an uncertainty measure. In the following, we describe the composition of our pipeline in more detail.
### Upstream Model
The upstream model \(g\) outputs a prediction and its uncertainty in the form of the parameters of a distribution \(p(\mathbf{x}|\mathbf{z})\) from which we can sample. In our case, the upstream model produces images as outputs. We follow the common practice to model image uncertainty as pixel-wise variance [14], recognizing that this neglects potential higher order spatial correlations [19]. Spatial correlations in medical images can be associated with various factors, including similar tissue types across the image, local neighborhoods of voxels, or reconstruction artifacts. As the model \(g\) uses a heteroscedastic loss for training and outputs a tuple of arrays containing the mean image \(\mathbb{E}\left[x\right]\) and the variance image \(\mathrm{Var}\left[x\right]\), the image \(\mathbf{x}\) is distributed as a diagonal multivariate normal distribution over predictions.
### Downstream Model and Joint Pipeline
Next, the downstream model \(f\) processes \(\mathbb{E}\left[x\right]\) and \(\mathrm{Var}\left[x\right]\). As part of our contributions, we introduce a method to aggregate the uncertainties of the upstream model \(g\) and the downstream model \(f\) to a joint uncertainty measure. We propagate the uncertainty of the intermediate result, given by the distribution \(p(\mathbf{x}|\mathbf{z})\), and obtain its contribution to the uncertainty of the final prediction \(p(y|\mathbf{z})\). The joint uncertainty measure can be calculated by marginalizing the distribution of possible predictions \(\mathbf{x}\) of the upstream model \(p(y|\mathbf{z})=\int p(y|\mathbf{x})p(\mathbf{x}|\mathbf{z})\mathrm{d}\mathbf{x}\). We approximate this integral by Monte Carlo sampling. The form of the joint uncertainty is different for regression and classification downstream tasks. We describe both cases below.
**Regression Downstream Model:** In the regression case, we assume that the prediction is normally distributed. Its mean is given by the expectation over the output value \(\mathbb{E}[y]\). Its variance describing the joint aleatoric uncertainty can be denoted as the variance of the output value \(\mathrm{Var}[y]\). Hence, the plausible predictions \(y\) are distributed as \(y\sim\mathcal{N}(\mathbb{E}[y],\mathrm{Var}[y])\). However, the joint variance of the pipeline is the uncertainty of the upstream model propagated through
the downstream model \(\mathrm{Var}[x_{\mathrm{prop}}]\) and the uncertainty of the downstream model itself \(\mathrm{Var}[y_{\mathrm{ds}}]\). The uncertainty of the downstream model \(\mathrm{Var}[y_{\mathrm{ds}}]\) can not easily be computed in closed form, but is in general correlated with the propagated uncertainty of the first model \(\mathrm{Var}[x_{\mathrm{prop}}]\).
The downstream model \(f\) does not output \(\mathbb{E}[y]\) and \(\mathrm{Var}[y_{\mathrm{ds}}]\) directly, but rather returns a tuple of outputs \((\hat{y},\Delta)\). To perform variance propagation, we obtain \(T\) Monte Carlo samples \((\mathbf{x}_{(1)},\ldots,\mathbf{x}_{(T)})\) from the distribution \(p(\mathbf{x}|\mathbf{z})\). In Figure 1, the \(T\) copies of the downstream model \(f\) denote simultaneous forward passes using the same model but with different samples as inputs. We use these Monte Carlo samples to approximate the expectation of the predictive distribution using the empirical mean \(\mathbb{E}[y]\approx\mu_{\hat{y}}\) and the joint variance \(\mathrm{Var}[y]\) by the empirical variance \(\sigma_{\hat{y}}^{2}\) and the empirical mean \(\mu_{\Delta}\) in the following form:
\[\begin{aligned}\mathrm{Var}[y] &= \underbrace{\mathrm{Var}[x_{\mathrm{prop}}]}_{\approx\,\sigma_{\hat{y}}^{2}}+\underbrace{\mathrm{Var}[y_{\mathrm{ds}}]+2\,\mathrm{Cov}[x_{\mathrm{prop}},y_{\mathrm{ds}}]}_{\approx\,\mu_{\Delta}}\\ &\approx \frac{1}{T-1}\left(\sum_{t=1}^{T}\hat{y}_{(t)}^{2}-\frac{1}{T}\Big(\sum_{t=1}^{T}\hat{y}_{(t)}\Big)^{2}\right)+\frac{1}{T}\sum_{t=1}^{T}\Delta_{(t)}\\ &= \sigma_{\hat{y}}^{2}+\mu_{\Delta}.\end{aligned}\]
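The sketch below illustrates this Monte Carlo propagation for a regression downstream model. It assumes the upstream model has already produced a pixel-wise mean image and variance image, and that `downstream` is a placeholder returning the tuple \((\hat{y},\Delta)\) for a batch of sampled reconstructions.

```python
import torch

def propagate_regression(mean_img, var_img, downstream, num_samples=256):
    """Monte Carlo propagation of pixel-wise image uncertainty through a regression model.

    mean_img, var_img: upstream outputs E[x] and Var[x], shape (C, H, W).
    downstream(batch) is assumed to return (y_hat, delta) with shapes (T,) and (T,).
    Returns the predictive mean and the joint variance sigma_y^2 + mu_delta.
    """
    std_img = var_img.clamp_min(0.0).sqrt()
    samples = mean_img + std_img * torch.randn(num_samples, *mean_img.shape)  # T draws from p(x|z)
    y_hat, delta = downstream(samples)
    propagated_var = y_hat.var(unbiased=True)   # sigma_y^2: spread of predictions across samples
    downstream_var = delta.mean()               # mu_delta: average downstream aleatoric variance
    return y_hat.mean(), propagated_var + downstream_var

# Hypothetical stand-in downstream model, for illustration only.
def downstream(batch):
    y = batch.mean(dim=(1, 2, 3))
    return y, torch.full_like(y, 0.01)
```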
**Classification Downstream Model:** In the classification case, the prediction of the pipeline is given as a categorical distribution over classes \(c\) of a one-hot-encoded vector \(y\). The downstream model outputs a vector of class confidences \(\hat{y}\). To approximate the pipeline's expectation of the predictive distribution, we calculate the empirical mean of the model output \(\mathbb{E}[y]\approx\mu_{\hat{y}}\). The resulting categorical distribution specifies a certain class confidence by \(p(y=c|\mathbf{x})=\mathbb{E}\left[y\right]_{c}\). One measure to express the joint uncertainty of a categorical distribution is the entropy of the prediction. The entropy for this pipeline is calculated as follows:
\[\mathrm{H}\left[y|\mathbf{z}\right]=-\sum_{c=1}^{C}\mathbb{E}\left[p\left(y=c| \mathbf{x}\right)\right]\log\mathbb{E}\left[p\left(y=c|\mathbf{x}\right) \right].\]
As the joint entropy consists of the combined uncertainty from the upstream and downstream model, we want to separate the part of the uncertainty contributed by the propagation. Hence, the mutual information \(\mathrm{I}\) of the propagated aleatoric uncertainty and the aleatoric uncertainty of the downstream model is derived from the entropy \(\mathrm{H}\) as follows: \(\mathrm{I}\left[y,\mathbf{x}|\,\mathbf{z}\right]=\mathrm{H}\left[\mathbb{E} \left[y|\mathbf{z}\right]\right]-\mathbb{E}\left[\mathrm{H}\left[y|\mathbf{x},\mathbf{z}\right]\right]\). Further details can be found in the supplementary material.
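The classification case can be sketched analogously: given the softmax outputs for the \(T\) sampled reconstructions, the joint entropy and the mutual information follow directly from the expressions above.

```python
import torch

def propagate_classification(class_probs, eps=1e-12):
    """Joint entropy and mutual information from Monte Carlo class probabilities.

    class_probs: tensor of shape (T, C) with the softmax outputs for T sampled reconstructions.
    """
    mean_probs = class_probs.mean(dim=0)                                         # E[p(y|x,z)] over samples
    joint_entropy = -(mean_probs * (mean_probs + eps).log()).sum()               # H[E[y|z]]
    expected_entropy = -(class_probs * (class_probs + eps).log()).sum(1).mean()  # E[H[y|x,z]]
    return mean_probs, joint_entropy, joint_entropy - expected_entropy           # last term: I[y, x | z]
```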
### Loss and Parametrization
Instead of predicting the variance \(\sigma^{2}\) of the upstream model and the residual uncertainty \(\Delta\) of the downstream regression model directly, we reparametrize \(\sigma^{2}\) as \(\sigma^{2}=e^{s}\) and \(\Delta\) as \(\Delta=e^{\delta}\), where \(s\) and \(\delta\) are the respective outputs of the models. This ensures that \(\sigma^{2}\) and \(\Delta\) are positive. For both training and evaluation of the downstream model, we have to minimize the negative log likelihood
of the joint distribution \(p(y|\mathbf{z})\) for each input data point \(\mathbf{z}\). Since the input of the downstream task is a probability distribution over images, we empirically chose to take 8 Monte Carlo samples during training and 256 Monte Carlo samples during evaluation to approximate the expectation. While the lower number of samples during training sacrifices accuracy for efficiency, we feel that the increased level of noise is mitigated over the course of repeated forward passes.
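A sketch of this parameterisation and the resulting training objective is given below. The first function is the standard heteroscedastic Gaussian negative log-likelihood with the variance expressed as \(\exp(\cdot)\); the second approximates \(-\log p(y|\mathbf{z})\) by averaging the Gaussian likelihoods over the \(T\) Monte Carlo samples. Constant terms and implementation details are simplified.

```python
import math
import torch

def heteroscedastic_nll(y_true, y_pred, log_var):
    """Gaussian NLL (up to constants) with the variance parameterised as exp(log_var)."""
    return 0.5 * (log_var + (y_true - y_pred) ** 2 / torch.exp(log_var)).mean()

def marginal_nll(y_true, y_pred_samples, log_var_samples):
    """-log p(y|z), approximating the marginal with T Monte Carlo samples of the upstream output."""
    log_lik = -0.5 * (log_var_samples
                      + (y_true - y_pred_samples) ** 2 / torch.exp(log_var_samples)
                      + math.log(2.0 * math.pi))
    num_samples = log_lik.shape[0]
    return -(torch.logsumexp(log_lik, dim=0) - math.log(num_samples))
```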
## 4 Experiments and Results
We test the utility of our method for three different medical pipelines with up- and downstream tasks. In all cases, the upstream task is to reconstruct MR images from undersampled k-space data. Different undersampling rates represent a varying source of aleatoric uncertainty. The downstream task is either a classification or regression task based on the reconstructed images. We demonstrate how our method propagates the uncertainty stemming from different undersampling factors to the ultimate prediction. Additionally, we show how it facilitates the attribution of the joint uncertainty to the individual models of the pipeline.
**Reconstruction and Classification of Knee MR Images:** Our first experiments are based on the fastMRI single-coil knee MR dataset [33]. The dataset contains reference images and raw k-space data, which we undersample with varying acceleration rates (Accel.) of 4, 8, 16, 32 and 64 by randomly removing columns from the k-space data. A fraction of the most centered columns in k-space is always used during reconstruction (C. Frac.). We use a physics-based reconstruction model based on an unrolled neural network [26, 27]. In addition to the reconstructed 2D MR image, the model also outputs a map of the heteroscedastic aleatoric uncertainty. The upstream model's output is subsequently processed by a downstream model, a modified ResNet-50, that classifies the side of the knee (i.e. left or right knee). Based on the outputs of the downstream
Figure 2: Higher acceleration factors (color) lead to higher reconstruction uncertainty (x-axis) and, through propagation via our method, contribute to a higher uncertainty of the final pipeline output. The propagated portion of the output uncertainty (y-axis) is given as mutual information for knee side classification (left) and as standard deviations of brain volume predictions in ml (right). Large dots are aggregate values.
model, the pipeline calculates the parameters of a joint categorical distribution over possible predictions containing the uncertainty information. We use the fastMRI validation set containing 199 images as test set and split the original training set containing 973 images patient-wise into two parts to train the upstream and downstream model, respectively. Each model is trained with 80% of its data split and validated on the remaining 20%.
We observe that both the uncertainty in the reconstructed images and the propagated uncertainty in the final prediction increase with higher undersampling factors (see Figure 2 left). Figure 3 illustrates this effect for a representative, single sample. These observations are also reflected in the quantitative results (see Table 1). With increasing undersampling, the uncertainty in the data increases, which is reflected in a higher estimated reconstruction uncertainty and lower structural similarity (SSIM) compared to the ground truth image that is obtained using the entire k-space data. The estimated reconstruction uncertainty is given as the square root of the average over the dataset and pixels (\(\sqrt{\mathrm{Var}[\mathbf{x}]}\)). This increased uncertainty is propagated by the downstream classification model.
| Accel. | C. Frac. | SSIM | \(\sqrt{\mathrm{Var}[\mathbf{x}]}\) | ACC | \(\mathrm{I}[y,\mathbf{x}\mid\mathbf{z}]\) | \(\mathbb{E}\left[\mathrm{H}\left[y\mid\mathbf{x},\mathbf{z}\right]\right]\) | \(\mathrm{H}\left[\mathbb{E}\left[y\mid\mathbf{z}\right]\right]\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 4 | 0.16 | 0.823 | 0.065 | 0.992 | \(0.32\cdot 10^{-2}\) | \(1.63\cdot 10^{-2}\) | \(1.95\cdot 10^{-2}\) |
| 8 | 0.08 | 0.757 | 0.079 | 0.997 | \(0.32\cdot 10^{-2}\) | \(1.40\cdot 10^{-2}\) | \(1.71\cdot 10^{-2}\) |
| 16 | 0.04 | 0.674 | 0.114 | 0.992 | \(0.34\cdot 10^{-2}\) | \(1.37\cdot 10^{-2}\) | \(1.71\cdot 10^{-2}\) |
| 32 | 0.02 | 0.556 | 0.156 | 0.982 | \(0.76\cdot 10^{-2}\) | \(2.14\cdot 10^{-2}\) | \(2.90\cdot 10^{-2}\) |
| 64 | 0.01 | 0.445 | 0.192 | 0.967 | \(1.74\cdot 10^{-2}\) | \(5.20\cdot 10^{-2}\) | \(6.93\cdot 10^{-2}\) |

The first two columns (Accel., C. Frac.) describe the k-space undersampling, SSIM and \(\sqrt{\mathrm{Var}[\mathbf{x}]}\) the reconstructed image, and the remaining columns the classification output (knee side).
Table 1: Quantitative measures characterizing the input k-space data, reconstructed images from the upstream model and final output from the downstream model.
Figure 3: Representative example (left knee) of a reconstructed knee MR images from k-space data with varying undersampling factors. Shown are the mean (top) and standard deviation maps (middle) as well as the accompanying uncertainty measures of the downstream network’s knee side predictions (bottom).
Higher upstream uncertainty yields a higher joint entropy (\(\mathrm{H}\left[\mathbb{E}\left[y|\mathbf{z}\right]\right]\)), as well as a higher mutual information (\(\mathrm{I}\left[y,\mathbf{x}|\,\mathbf{z}\right]\)) between the joint entropy and propagated uncertainty. For all undersampling factors, the prediction accuracy (ACC) is very high, which is most likely due to the simple task at hand.
**Reconstruction of Brain MR Images:** For the second set of experiments, we use the Alzheimer's Disease Neuroimaging Initiative (ADNI) brain MR image dataset [24]. We calculate the complex-valued k-space data by applying a Fourier transform to the images and adding Gaussian noise to the synthetic k-space to mimic the MR imaging process. We again simulate undersampling with acceleration factors of 2, 4, 6, and 8. We perform two different tasks on this dataset: regression of the brain volume and classification of the patient's sex. For the classification task, we use the same pipeline as for the fastMRI dataset, whereas for regression, we change the downstream model's output and loss accordingly. This dataset of 818 MR images is split patient-wise. We use 20% as a test set and split the
| Accel. | C. Frac. | SSIM | \(\sqrt{\mathrm{Var}[\mathbf{x}]}\) | \(\mathrm{L}_{1}\) | \(\mathrm{L}_{2}\) | \(\sqrt{\sigma_{\hat{y}}^{2}}\) | \(\sqrt{\mu_{\Delta}}\) | \(\sqrt{\mathrm{Var}[y]}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2 | 0.16 | 0.714 | 0.0642 | 63.6 | 78.9 | 22.0 | 59.9 | 64.2 |
| 4 | 0.08 | 0.567 | 0.0880 | 64.3 | 80.9 | 24.6 | 60.7 | 65.9 |
| 6 | 0.06 | 0.502 | 0.0957 | 64.3 | 81.6 | 25.1 | 62.3 | 67.6 |
| 8 | 0.04 | 0.450 | 0.0973 | 66.1 | 83.7 | 24.6 | 65.6 | 70.4 |

The first two columns describe the k-space undersampling, SSIM and \(\sqrt{\mathrm{Var}[\mathbf{x}]}\) the reconstructed image, and the remaining columns the regression output (brain volume).
Table 2: Quantitative measures characterizing the input k-space data, reconstructed images from the upstream model and final output from the downstream model.
Figure 4: Representative example of a reconstructed brain MR images from synthetic k-space data with varying undersampling factors. Shown are the mean (top) and standard deviation maps (middle) as well as the accompanying uncertainty measures in scale of ml of the downstream network’s volume predictions (bottom).
remaining 80% into four subsets to train and validate the up- and downstream models separately, as described above.
The brain volume regression model also demonstrates that both the uncertainty in the image and the propagated uncertainty in the final prediction are positively correlated with the acceleration factor (see Figure 2 right and Figure 4). Higher acceleration rates lead to increasing uncertainty in the data, which results in a higher variance of the prediction and lower structural similarity compared to the ground truth image (see Table 2). This increased uncertainty is propagated by the downstream regression model. Higher upstream uncertainty yields a higher propagated variance \(\sigma_{\hat{y}}^{2}\) and a higher joint variance \(\text{Var}[y]\). Both of them are given as the square root of the average over the dataset (\(\sqrt{\sigma_{\hat{y}}^{2}}\) and \(\sqrt{\text{Var}[y]}\)). Beyond the uncertainty estimations, the sparser input data also leads to reduced model performance, as the average \(\text{L}_{1}\) (Manhattan) and \(\text{L}_{2}\) (Euclidean) distances between the prediction and the ground truth brain volumes rise with higher undersampling factors. We show the results for the patient sex classification pipeline in the supplementary material.
## 5 Conclusions
To the best of our knowledge, this is the first work to demonstrate how uncertainty can be propagated through medical imaging pipelines consisting of cascades of deep learning models. Our method quantifies the individual models' contributions and aggregates them into a joint uncertainty measure. In extensive experiments, we have shown that our method can be integrated into real-world clinical image processing pipelines.
At the moment, our method does not capture the spatial correlation between the uncertainty of pixels. Future work could extend our method to handle probability distributions beyond pixel-wise independent normal distributions. Additionally, it would be valuable to incorporate epistemic uncertainty into the method. One major challenge that remains unresolved is the calibration of the pipeline's uncertainty. Moreover, to ensure meaningful comparisons between uncertainty propagation techniques and effectively evaluate different pipelines, the establishment of a well-defined metric is imperative.
Ultimately, we envision that our method will allow clinicians to assess and apportion all sources of aleatoric uncertainty within the medical imaging pipeline, increasing their confidence when deploying deep learning in clinical practice.
## Acknowledgments
This research has been funded by the German Federal Ministry of Education and Research under project "NUM 2.0" (FKZ: 01KX2121). Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). |
2309.10298 | Learning Orbitally Stable Systems for Diagrammatically Teaching | Diagrammatic Teaching is a paradigm for robots to acquire novel skills,
whereby the user provides 2D sketches over images of the scene to shape the
robot's motion. In this work, we tackle the problem of teaching a robot to
approach a surface and then follow cyclic motion on it, where the cycle of the
motion can be arbitrarily specified by a single user-provided sketch over an
image from the robot's camera. Accordingly, we contribute the Stable
Diffeomorphic Diagrammatic Teaching (SDDT) framework. SDDT models the robot's
motion as an Orbitally Asymptotically Stable (O.A.S.) dynamical system that
learns to stablize based on a single diagrammatic sketch provided by the user.
This is achieved by applying a \emph{diffeomorphism}, i.e. a differentiable and
invertible function, to morph a known O.A.S. system. The parameterised
diffeomorphism is then optimised with respect to the Hausdorff distance between
the limit cycle of our modelled system and the sketch, to produce the desired
robot motion. We provide novel theoretical insight into the behaviour of the
optimised system and also empirically evaluate SDDT, both in simulation and on
a quadruped with a mounted 6-DOF manipulator. Results show that we can
diagrammatically teach complex cyclic motion patterns with a high degree of
accuracy. | Weiming Zhi, Tianyi Zhang, Matthew Johnson-Roberson | 2023-09-19T04:03:42Z | http://arxiv.org/abs/2309.10298v2 | # Learning Orbitally Stable Systems for Diagrammatically Teaching
###### Abstract
Diagrammatic Teaching is a paradigm for robots to acquire novel skills, whereby the user provides 2D sketches over images of the scene to shape the robot's motion. In this work, we tackle the problem of teaching a robot to approach a surface and then follow cyclic motion on it, where the cycle of the motion can be arbitrarily specified by a single user-provided sketch over an image from the robot's camera. Accordingly, we introduce the _Stable Diffeomorphic Diagrammatic Teaching_ (SDDT) framework. SDDT models the robot's motion as an _Orbitally Asymptotically Stable_ (O.A.S.) dynamical system that learns to follow the user-specified sketch. This is achieved by applying a _diffeomorphism_, i.e. a differentiable and invertible function, to morph a known O.A.S. system. The parameterised diffeomorphism is then optimised with respect to the Hausdorff distance between the limit cycle of our modelled system and the sketch, to produce the desired robot motion. We provide theoretical insight into the behaviour of the optimised system and also empirically evaluate SDDT, both in simulation and on a quadruped with a mounted 6-DOF manipulator. Results show that we can diagrammatically teach complex cyclic motion patterns with a high degree of accuracy.
## I Introduction
Specifying the desired behaviour for a robot has traditionally involved crafting a cost function and solving an optimisation problem. This process can often be complicated and require trial and error. Another way to generate desired robot motion has been Learning from Demonstration (LfD) [1], where an expert demonstrates the movement to the robot. The demonstrations are often provided by _Kinesthetic Teaching_, where the user physically handles the robot, or via teleoperation, which requires an additional remote controller. Both approaches face challenges when operating on robots with high degrees of freedom. Diagrammatic Teaching [2] is a recently introduced paradigm that circumvents physical contact and teleoperation, where the user specifies robot skills by sketching examples of the robot's end-effector motion on images of the scene. Correspondingly, in this paper, we use sketches as a medium for the user to shape the motion of the robot.
We focus on generating robot end-effector motion that approaches a flat surface and converges to a continuous periodic motion on it. This motion pattern arises in many tasks, such as painting, wiping or sanding a surface. We represent the robot's end-effector motion as a dynamical system where trajectories of this system will eventually converge to the limit cycle. Dynamical systems with this convergence property are known to be _Orbitally Asymptotically Stable_ (O.A.S.). This paper aims to develop methodologies to learn robot policies, represented by O.A.S. dynamical systems, that are shaped by diagrammatic sketches provided by the user.
We introduce _Stable Diffeomorphic Diagrammatic Teaching_ (SDDT), a novel framework for diagrammatic teaching to learn policies that produce periodic and surface-approaching motions. SDDT allows the user to provide input by sketching the shape of the desired limit cycle onto an image of the surface. We use the insight that when two dynamical systems map to one another via a _diffeomorphism_ (a differentiable and invertible function), these systems have the same stability properties [3, 4]. SDDT learns O.A.S. systems by optimising a parameterised diffeomorphism to "morph" a system that is known to be O.A.S., such that the limit cycle matches the desired shape. We develop a loss for the corresponding optimisation. Then, we provide both theoretical guarantees on which classes of shapes the limit cycle can be "morphed" into and empirical evidence of the efficacy of our proposed framework, including real-world experiments on a quadruped with a mounted manipulator. SDDT is particularly appealing for mobile manipulators (like quadrupeds with mounted arms, fig. 1), where an egocentric view, readily available from the onboard vision system, is used to prompt diagrammatic sketches from the user.
The remainder of the paper is organised as follows: we begin by reviewing related work in section II, and then provide some background knowledge to understand SDDT in section III. We introduce the methodology of SDDT and provide theoretical results in section IV, followed by empirical results in section V. We end the paper with conclusions and future directions in section VI.
Fig. 1: Diagrammatic teaching is a paradigm to interface with robots by drawing sketches over camera images. We contribute SDDT to diagrammatically teach robots policies that approach a surface in view and stabilise at cyclic motions of the provided shape on the surface. (Left) A sketch of the desired pentagon-shaped cycle (in red) is provided by the user from the egocentric view of the robot. (Right) The resulting policy forces the end-effector to quickly approach the surface, and then stabilise to continuously trace out the shape of the provided sketch.
## II Related Work
### _Robot Motion Generation and Diagrammatic Teaching_
Generating robot motion is a central problem in robotics. This can be done in a motion planning fashion [5], where an optimisation problem needs to be constructed and a motion planner [6, 7, 8] is then used to find a solution to the problem. Motion planning approaches are beneficial in that once the constraints and costs for the motion have been specified, the task of motion generation is primarily "off-loaded" to the planner, and the solution inherits theoretical guarantees, such as probabilistic completeness [9]. However, many natural motion patterns cannot be easily distilled into a simple cost function, and additionally, the construction of the optimisation problem requires technical expertise. Another approach for specifying robot motion has been Learning from Demonstration (LfD) [1], where a human expert physically handles the robot to trace out the desired motion (i.e. kinesthetic teaching) [10, 11] or teleoperation via specialised remote controllers [12]. To allow demonstrations to be more easily and intuitively collected, _Diagrammatic Teaching_[2] has been introduced as an alternative interface for the user to specify movement patterns to robots: the user is provided with images and prompted to provide sketches, which are subsequently used to construct a model of robot motions. This work falls within the Diagrammatic Teaching paradigm, where the robot's motion is shaped by a sketch from the user.
### _Stable Dynamical Systems as Robot Policies_
In many LfD and motion generation problem formulations, robot policies are modelled by state-dependent dynamical systems [13, 14]. Enforcing the convergence properties of dynamical systems has been shown to increase the robustness of the robot policy and enables prior knowledge to be imbued into the system [15]. However, previous methods exclusively focus on systems that converge to a single fixed point, instead of an orbit. Additionally, these systems are learned in an LfD setup where the training data is a set of collected expert trajectories, in the form of sequences of end-effector or joint positions. Examples of such methods include [16, 13, 17, 18, 3]. In particular, methods [3, 18] also take a diffeomorphic learning approach, but only consider stability with respect to convergence to a fixed point. A similar approach was provided in [19], which extends to learning stochastic systems with stability properties from multiple kinesthetic demonstrations and examines stable orbits. Our work differs from these previous approaches in that we study learning stable systems within the diagrammatic teaching paradigm. Our approach does not require multiple kinesthetic demonstrations, and instead simply requires a single sketch provided by the user.
## III Preliminaries
Here, we introduce the necessary background concepts for the presentation of SDDT in section IV.
### _Robot Motion Generation via Dynamical Systems_
In this work, we shall directly model the robot's end-effector position \(\mathbf{x}\in\mathbb{R}^{3}\). We represent the robot's policy as a first-order time-invariant dynamical system.
\[\dot{\mathbf{x}}(t)=f(\mathbf{x}(t)),\qquad\qquad\mathbf{x}(0)=\mathbf{x}_{0}, \tag{1}\]
where \(f:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\) is a non-linear mapping between \(\mathbf{x}\) and its time derivative \(\dot{\mathbf{x}}\), and \(\mathbf{x}_{0}\) is the initial condition. Individual motion trajectories \(\xi\) of time duration \(t\in\mathbb{R}\) can be obtained via integration,
\[\xi(t,\mathbf{x}_{0})=\mathbf{x}_{0}+\int_{0}^{t}\dot{\mathbf{x}}(s)\,\mathrm{d}s, \tag{2}\]
where the integral can be evaluated using a numerical ODE integrator, such as Euler's method. Modelling the robot's policy, rather than individual trajectories, has the benefit of being robust to perturbations. That is, at any end-effector state after perturbation, the robot can follow the dynamical system and does not track a pre-determined trajectory. In our problem setup, we may wish to additionally constrain the end-effector to be orthogonal to the surface, fixing its rotation.
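As a small illustration of eq. (2), the following sketch integrates an arbitrary policy \(f\) with Euler's method to roll out an end-effector trajectory; the step size and the toy attractor used in the example are arbitrary.

```python
import numpy as np

def rollout(f, x0, dt=0.01, steps=1000):
    """Euler integration of x_dot = f(x), returning the resulting end-effector trajectory."""
    traj = np.empty((steps + 1, len(x0)))
    traj[0] = x0
    for t in range(steps):
        traj[t + 1] = traj[t] + dt * f(traj[t])
    return traj

# Toy example: a linear attractor pulling the end-effector towards the origin.
trajectory = rollout(lambda x: -x, x0=np.array([0.5, 0.3, 0.4]))
```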
### _O.A.S. Systems_
We are interested in understanding the long-term behaviour of the dynamical system, namely, what happens to the trajectories after a long integration duration. Will the solution eventually converge to fixed points, a limit cycle, or diverge and blow up? We are interested in robot motion which approaches a surface and converges onto a cyclic motion on that surface. This requires the dynamical system to be _Orbitally Asymptotically Stable (O.A.S.)_ with a limit cycle. Here, we give a definition for O.A.S.
**Definition III.1** (O.A.S. Stability): _A dynamical system \(\dot{\mathbf{x}}=f(\mathbf{x})\) is Orbitally Asymptotically Stable (O.A.S.) if for any initial condition \(\mathbf{x}_{0}\) within a region of state-space \(\mathcal{X}\), we have_
\[\lim_{t\rightarrow\infty}\min_{\tau\in[0,T]}||\mathbf{x}(t)-\bar{\mathbf{x}}(\tau)||=0, \tag{3}\]
_where \(\mathbf{\bar{x}}(\tau)\) is a solution of the system. Furthermore, \(\mathbf{\bar{x}}(\tau)\) is periodic, i.e. \(\mathbf{\bar{x}}(\tau)=\mathbf{\bar{x}}(\tau+T)\), where \(T\in\mathbb{R}^{+}\) is the period of the cycle. Here, \(\mathbf{\bar{x}}(\tau)\) is known as a "limit cycle" and \(\mathcal{X}\) is known as the "basin of attraction"._
### _Invertible Neural Networks_
_Diffeomorphisms_, which are differentiable and invertible functions, are crucial building blocks of our proposed framework. Diffeomorphisms can be parameterised by Invertible Neural Networks (INNs). INNs are function approximators that are invertible by definition and have easily computable Jacobians [20]. We use _Coupling-based INNs_ [21], which contain reversible blocks that split the input \(\mathbf{u}\) into halves, \(\mathbf{u}=[\mathbf{u}_{1},\mathbf{u}_{2}]\), and the output \(\mathbf{v}\) into halves, \(\mathbf{v}=[\mathbf{v}_{1},\mathbf{v}_{2}]\). We learn four fully-connected neural networks, \(p_{1},p_{2},q_{1},q_{2}\), such that
\[\mathbf{v}_{1} =\mathbf{u}_{1}\odot\exp(p_{2}(\mathbf{u}_{2}))+q_{2}(\mathbf{u}_{2}), \tag{4}\] \[\mathbf{v}_{2} =\mathbf{u}_{2}\odot\exp(p_{1}(\mathbf{v}_{1}))+q_{1}(\mathbf{v}_{1}), \tag{5}\]
where \(\odot\) denotes the Hadamard product. By construction, the inverse is given as follows:
\[\mathbf{u}_{2} =(\mathbf{v}_{2}-q_{1}(\mathbf{v}_{1}))\odot\exp(-p_{1}(\mathbf{v}_{1})), \tag{6}\] \[\mathbf{u}_{1} =(\mathbf{v}_{1}-q_{2}(\mathbf{u}_{2}))\odot\exp(-p_{2}(\mathbf{u}_{2})). \tag{7}\]
As such, the INN is able to enforce invertibility without the functions \(p_{1},p_{2},q_{1},q_{2}\) being invertible.
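A minimal sketch of one such coupling block, following eqs. (4)-(7), is shown below; the hidden width and activation of the four sub-networks are assumptions, and a practical INN would stack several of these blocks with permutations in between.

```python
import torch
import torch.nn as nn

class CouplingBlock(nn.Module):
    """Affine coupling block: invertible by construction even though p_i, q_i are arbitrary MLPs."""

    def __init__(self, dim_half, hidden=64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(dim_half, hidden), nn.Tanh(), nn.Linear(hidden, dim_half))
        self.p1, self.p2, self.q1, self.q2 = mlp(), mlp(), mlp(), mlp()

    def forward(self, u1, u2):          # eqs. (4)-(5)
        v1 = u1 * torch.exp(self.p2(u2)) + self.q2(u2)
        v2 = u2 * torch.exp(self.p1(v1)) + self.q1(v1)
        return v1, v2

    def inverse(self, v1, v2):          # eqs. (6)-(7)
        u2 = (v2 - self.q1(v1)) * torch.exp(-self.p1(v1))
        u1 = (v1 - self.q2(u2)) * torch.exp(-self.p2(u2))
        return u1, u2
```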
## IV Stable Diffeomorphic Diagrammatic Teaching
Stable Diffeomorphic Diagrammatic Teaching (SDDT) first presents the user with an image of the contact surface and prompts the user to sketch a closed shape on the image. The corresponding points in the robot's task space are found via ray-tracing the sketch onto the surface. We then minimise the distance between the limit cycle of a parameterised O.A.S. system and the set of corresponding points.
This section is organised as follows: We shall first elaborate on how to construct a parameterised 3D dynamical system which is O.A.S. with a stable orbit on a surface (section IV-A). We describe how to shape the system to match a sketch provided by the user (section IV-B). Then, we provide theoretical guarantees that our model is sufficiently flexible to model any limit cycle that is smooth and closed (section IV-E).
### _Parameterising O.A.S. Systems via Diffeomorphisms_
We can learn a desired O.A.S. system \(\dot{\mathbf{x}}=f(\mathbf{x})\) by starting with a hand-designed _base_ O.A.S. system \(\dot{\mathbf{y}}=g(\mathbf{y})\). We then learn a diffeomorphism \(F\) such that \(\mathbf{x}=F(\mathbf{y})\). Intuitively, we can think of the diffeomorphism as "morphing" the base system into the desired system. A simple illustration of this is provided in fig. 2. Throughout this section, we denote the state variables of the base system as \(\mathbf{y}\), and those of the desired system as \(\mathbf{x}\).
In this paper, we will by convention define the flat surface as the \(x,z\)-plane at \(y=0\). We begin by constructing a simple base system to have a stable circular orbit on the surface. Consider a system where a polar coordinate system (with polar variables \(r\) and \(\omega\)) is defined in the \(x,z\)-plane, with an additional attractor in the \(y\)-axis:
\[\dot{r}=\mu(1-\frac{r^{2}}{R^{2}})r,\hskip 28.452756pt\dot{\omega}=1, \hskip 28.452756pt\dot{y}=-\alpha y, \tag{8}\]
where \(\mu>0\) and \(\alpha>0\) are parameters which control how fast the system converges. Trajectories of this system converge to the set \(r=R\), \(y=0\) (for any \(\omega\)), for all initial conditions with \(r>0\). This system is O.A.S., with a basin of attraction \(\mathcal{X}=\{(r,\omega,y)|r>0,\omega,y\in\mathbb{R}\}\). Example trajectories of this system are shown in fig. 3. We can transform the coordinates into Cartesian coordinates to obtain the system:
\[\dot{\mathbf{y}}=g(\mathbf{y})=\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{z}\end{bmatrix}=\begin{bmatrix}-z+\mu\left(1-\frac{x^{2}+z^{2}}{R^{2}} \right)x\\ -\alpha y\\ x+\mu\left(1-\frac{x^{2}+z^{2}}{R^{2}}\right)z\end{bmatrix}, \tag{9}\]
which has a limit cycle \(L:=\{(x,y,z)\in\mathbb{R}^{3}|x^{2}+z^{2}=R^{2},y=0\}\).
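As a sanity check, the base system of eq. (9) can be integrated numerically. The sketch below uses forward Euler with illustrative parameter values and verifies that an arbitrary initial condition approaches the circle \(x^{2}+z^{2}=R^{2}\) on the \(y=0\) plane.

```python
import numpy as np

def g(state, mu=1.0, alpha=1.0, R=1.0):
    # base O.A.S. system of eq. (9); state = [x, y, z]
    x, y, z = state
    s = 1.0 - (x**2 + z**2) / R**2
    return np.array([-z + mu * s * x, -alpha * y, x + mu * s * z])

state = np.array([0.2, 0.5, -0.1])   # arbitrary initial condition with r > 0
dt = 1e-3
for _ in range(20000):               # simple forward-Euler integration
    state = state + dt * g(state)

x, y, z = state
print(np.hypot(x, z), y)             # radius tends to R = 1, height tends to 0
```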
Dynamical systems with state variables satisfying \(\mathbf{x}=F(\mathbf{y})\), where \(F\) is a diffeomorphism, are topologically conjugate to one another. They can be thought of as the same system under a change of coordinate systems, and their stability characteristics are not altered (proof in [4, 18, 22]). We seek a mapping \(F\) such that no change is made on the \(y\)-axis while shaping the circle \(x^{2}+z^{2}=R^{2}\) on the \(x,z\)-plane to match the provided data. Therefore, we decompose \(F\) into separate functions, using an INN to learn a diffeomorphism on the \(x,z\) axes and leaving the \(y\)-axis with the identity function. Specifically,
\[\mathbf{x}\!=\!F(\mathbf{y}),\hskip 5.690551pt\text{where}\hskip 5.690551pt[x_{x},x_{z}]\! =\!\operatorname{INN}_{\boldsymbol{\theta}}([y_{x},y_{z}]),\hskip 5.690551ptx_{y}\!=\!y_{y}, \tag{10}\]
where \(\mathbf{x}=[x_{x},x_{y},x_{z}]\), \(\mathbf{y}=[y_{x},y_{y},y_{z}]\) and \(INN_{\boldsymbol{\theta}}\) is an invertible neural network with parameters \(\boldsymbol{\theta}\).
The desired O.A.S. system dynamics \(\dot{\mathbf{x}}=f(\mathbf{x})\) is now related to that of the base O.A.S. system \(\dot{\mathbf{y}}=g(\mathbf{y})\) via the chain rule:
\[\dot{\mathbf{x}}=f(\mathbf{x})=J_{F}(F^{-1}(\mathbf{x}))g(F^{-1}(\mathbf{x})), \tag{11}\]
where \(J_{F}(F^{-1}(\mathbf{x}))\) is the Jacobian of \(F\) at \(F^{-1}(\mathbf{x})\).
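Eq. (11) can be evaluated with automatic differentiation. The sketch below uses a hypothetical fixed affine map as a stand-in for the learned diffeomorphism \(F\) of eq. (10) (identity on the \(y\)-axis, invertible on the \(x,z\)-plane); in practice \(F\) is the INN and its inverse is available analytically.

```python
import torch

def base_g(y, mu=1.0, alpha=1.0, R=1.0):
    # base system of eq. (9) for a torch tensor y = [x, y, z]
    s = 1.0 - (y[0]**2 + y[2]**2) / R**2
    return torch.stack([-y[2] + mu * s * y[0], -alpha * y[1], y[0] + mu * s * y[2]])

def f(x, F, F_inv):
    # eq. (11): x_dot = J_F(F^{-1}(x)) g(F^{-1}(x))
    y = F_inv(x)
    J = torch.autograd.functional.jacobian(F, y)
    return J @ base_g(y)

# hypothetical stand-in diffeomorphism: linear on (x, z), identity on y
A = torch.tensor([[1.5, 0.0, 0.3],
                  [0.0, 1.0, 0.0],
                  [0.1, 0.0, 0.8]])
F = lambda y: A @ y
F_inv = lambda x: torch.linalg.solve(A, x)

print(f(torch.tensor([0.4, 0.2, -0.3]), F, F_inv))  # velocity of the morphed system
```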
### _Learning the Parameterised System via the Hausdorff Distance_
This section elaborates on how to train \(F\), defined in eq. (10), such that the limit cycle of \(\dot{\mathbf{x}}=f(\mathbf{x})\) matches the user's sketch. This involves projecting the user's sketch onto the surface via ray-tracing, then defining and minimising a loss between the set of projected points on the surface and the limit cycle of our dynamical system model.
### _Ray-tracing onto Surface_
After prompting the user to sketch the desired limit cycle on the surface on the camera image, we are assumed to have a set of \(n\) 2D coordinates, \(\mathbf{p}\), and corresponding depths, \(d\), to
Fig. 3: Trajectories converge to a limit cycle at \(y=0\).
Fig. 2: Diffeomorphisms can be thought of as “morphing” a dynamical system into one another. (Left) Five trajectories (red) of overlaid on grid points (blue); (Right) Morphed trajectories and the corresponding grid.
the surface, i.e. \(\{\mathbf{p}_{i},d_{i}\}_{i=1}^{n}\). We call the set of 2D coordinates the _view-space shape_. We follow [2] and assume a pinhole camera. We construct a ray in 3D which passes through each 2D coordinate, \(\mathbf{s}_{i}=\mathbf{o}+r(\mathbf{p}_{i})d_{i}\), where \(\mathbf{o}\) and \(r\) are the camera origin and projection direction respectively. These are obtainable from the camera parameters and camera position. We collect the set of projected points, \(\mathcal{S}=\{\mathbf{s}_{i}\}_{i=1}^{n}\), and use this as our training data. Note that as we align the flat surface to be the \(x,z\)-plane at \(y=0\), by convention, we drop the \(y\)-axis of our projected points and simply consider the 2D coordinates of the sketch on the surface. Figure 3(a) shows an example of projecting a pentagon shape from view-space to a surface, with the traced rays in blue and the pentagon on the surface in red.
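A minimal sketch of this back-projection step, assuming a pinhole camera with known intrinsics \(K\) and a camera-to-world rotation (both hypothetical values here): each sketched pixel \(\mathbf{p}_{i}\) is turned into a unit ray direction \(r(\mathbf{p}_{i})\) from the camera origin \(\mathbf{o}\) and scaled by its depth \(d_{i}\) along the ray.

```python
import numpy as np

# hypothetical camera intrinsics and pose
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R_wc = np.eye(3)                   # camera-to-world rotation
o = np.zeros(3)                    # camera origin in the world frame

def project_sketch(pixels, depths):
    """Back-project sketched 2D points with depths to 3D points s = o + r(p) d."""
    S = []
    for p, d in zip(pixels, depths):
        ray_cam = np.linalg.inv(K) @ np.array([p[0], p[1], 1.0])  # ray in camera frame
        r = R_wc @ (ray_cam / np.linalg.norm(ray_cam))            # unit direction r(p)
        S.append(o + r * d)
    return np.array(S)

points = project_sketch(pixels=[(300, 200), (340, 220)], depths=[1.2, 1.25])
print(points.shape)  # (2, 3)
```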
### _Hausdorff Distance Loss_
The main component of the loss function is a measure of similarity between the shape specified by the user and the limit cycle. This requires us to define a distance between the set of sketched points projected onto the surface, \(\mathcal{S}\), and the limit cycle, \(L\). Here, we compute a discretised Hausdorff distance [23], which provides a distance between two point sets. Intuitively, the Hausdorff distance takes the larger of the maximum distances from one set to the other, and its reverse. A visualisation of this intuition is given in fig. 3(b). This is defined as:
\[H(\mathcal{S},L)=\max\left\{\max_{\mathbf{s}\in\mathcal{S}}\min_{\mathbf{l}\in L}||\mathbf{s}-\mathbf{l}||,\;\max_{\mathbf{l}\in L}\min_{\mathbf{s}\in\mathcal{S}}||\mathbf{s}-\mathbf{l}||\right\}. \tag{12}\]
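A minimal sketch of this discretised Hausdorff distance between two finite point sets, e.g. the projected sketch points \(\mathcal{S}\) and points sampled along the limit cycle \(L\):

```python
import numpy as np

def hausdorff(S, L):
    """Discretised Hausdorff distance between point sets S and L."""
    D = np.linalg.norm(S[:, None, :] - L[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(),   # max over s of the distance to its nearest l
               D.min(axis=0).max())   # max over l of the distance to its nearest s

# toy example: a slightly enlarged circle versus points sampled on the unit circle
theta = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
L = np.stack([np.cos(theta), np.sin(theta)], axis=1)
S = 1.1 * L
print(hausdorff(S, L))  # approximately 0.1
```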
## V Experimental Evaluation
We evaluate SDDT qualitatively on challenging outline shapes, compare it against baselines in simulation, and demonstrate SDDT in the real world by deploying the framework on a quadruped-mounted manipulator.
### _A Qualitative Analysis: Learning Challenging Cycles_
We seek to explore the capabilities of our proposed method for learning systems with limit cycles that are intricate and vary greatly from the circular limit cycle of the base dynamical system. Here, we extract silhouette outlines and investigate how well SDDT is able to "morph" the cycle into the outlines.
We use outlines of a **whale**, a **dog**, a **flower**, and an **eagle** and learn an INN to morph the base system's limit cycle to match the outline. Throughout this paper, we use the INN models in the _FrEIA_ library [20], which is built on _PyTorch_[27]. In fig. 6, we provide qualitative results of both the limit cycle and trajectories integrated from the resulting dynamical system. We observe that the limit cycle is generally able to be accurately shaped into each of these outlines, and the integrated 3D trajectories are able to approach the surface and smoothly converge onto the limit cycle. We also visualise and compare the original base system and the target 2D shape, on the \(x,z\)-plane in fig. 5.
To gain insight into how the diffeomorphism "morphs" the \(x,z\)-plane at \(y=0\), in fig. 8, we visualise additional results on the **whale** outline. On the left, we show a transport map with examples of points on the orbit of the base system mapped to the learned system, with the correspondence between points on the two systems indicated by grey lines. On the right, we visualise the result of the diffeomorphism operating on concentric circles (in red), with radii ranging from \(0.1\) to \(2\). This gives us an intuition of how the
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & O.A.S. & Star & Knight & Arrow \\ \hline SDDT (ours) & ✓ & **0.011** & **0.011** & **0.017** \\ Neural ODEs [26] & ✗ & 0.040 & 0.035 & 0.063 \\ Base System & ✓ & 0.133 & 0.201 & 0.209 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Performance of SDDT and baselines, on each task, as measured by Hausdorff distance (lower is better).
Fig. 8: We visualise how the ambient space is morphed by the learnt diffeomorphism: (Left) Transport map with points on the base system (blue) mapped (shown by grey line) onto those of the learned system (red); (Right) Concentric circles passed through the diffeomorphism to match the desired shape.
Fig. 10: The learnt O.A.S. vector field on the \(x,z\)-plane. The vector field (red arrows) pushes points off the cycle onto the stable cycle (in black).
Fig. 6: Qualitative results of learning diffeomorphisms to shape the circular limit cycle into outlines of the **whale**, **dog**, **flower**, and **eagle**. We also show, at different viewing angles, of example trajectories integrated from multiple initial 3D positions (in green). We observe that each trajectory is able to converge onto the shaped limit cycle on the surface.
Fig. 7: We evaluate SDDT in the PyBullet Simulator. (Left) The user is provided a view from a camera in the scene and is prompted to sketch a star, a knight chess piece, and an arrow on the image respectively. The user sketches (in red) are overlaid over the camera image. (Right) For each different sketch, we generate motion trajectories from SDDT from three different robot configurations. We observe that each of these trajectories is able to consistently and accurately converge onto the desired shape.
ambient space is stretched and compressed to match the target shape (in black). In fig. 10, we show the vector field of the resulting learned system, with the limit cycle shown in black. We observe that points not on the cycle are attracted towards the stable cycle: the field pushes points inside the cycle outwards and points outside the cycle inwards.
### _SDDT outperforms Free-form Neural ODEs_
We evaluate, in the PyBullet Simulator [28], the performance of the SDDT framework. We simulate a Franka on a mobile base facing a wall, with an RGB-D camera positioned behind the robot. We seek to generate robot motion that approaches the wall and converges onto the shape diagrammatically provided by the user on the camera image. We compare against the following baselines:
1. **Neural ODEs**[26]: Neural ODEs learn dynamical systems by parameterising the dynamics as a neural network and then train on data. We use a Neural ODE to learn the dynamics on the \(x,z\)-plane, and retain the attractor towards the surface in the \(y\)-axis direction. The dynamics of Neural ODEs are completely free-form, with no assumption made on stability.
2. **Base System** (defined in eq. (9)): We evaluate how well our stable base system performs. We allow the radius and the origin of the base system to be tuned, such that its limit cycle minimises the distance to the data. The base system is highly structured and contrasts with the entirely learning-based Neural ODE.
After collecting sketches of a **star**, a **knight**, and an **arrow**, we train each of these models and integrate trajectories at three different initial robot configurations. Here, we use the implementation of Neural ODEs provided in the _torchdiffeq_ library [26]. To measure how well the traced motion matches the diagrammatic sketch provided by the user, we take points that have \(y\)-values under \(10^{-4}\) to be in contact with the surface, and compute the Hausdorff distance between the trajectory and the sketch ray-traced onto the surface. A qualitative evaluation of our generated trajectories, as well as the user-provided sketches, can be seen in fig. 7, with results provided in table I. We observe that SDDT imbues knowledge of stability into the system by enforcing O.A.S., and outperforms Neural ODEs, which treat the dynamics as a black box. SDDT is also sufficiently flexible to learn complex patterns, greatly outperforming the inflexible stable base system.
### _SDDT on Real Robots_
We demonstrate the applicability of SDDT on real-world robots by deploying it on a _Unitree Aliengo_ quadruped with an attached 6-DOF Z1 manipulator. The user is shown egocentric views of the environment via the RGB-D camera on board the quadruped and is asked to sketch a **pentagon** and a **star**. SDDT is then used to learn stable systems shaped by the projection of each drawing. We then integrate a trajectory from the current end-effector position and track the trajectory with a marker pen to trace out the corresponding motion patterns. We observe that in both instances the quadruped-mounted manipulator was able to approach the surface and stabilise on a cycle that matched the diagrammatically specified shapes, despite minor inaccuracies introduced by contact forces. In fig. 9, we overlay the provided sketches onto the egocentric view images from the quadruped and show the manipulator converging onto the surface and tracing the desired shapes.
## VI Conclusions and Future Work
In this work, we tackle the problem of learning robot policies that approach a surface and trace on it periodically. Robot motions of this kind are applicable to painting, wiping, and sanding tasks. We take a _diagrammatic learning_ approach, where the desired periodic pattern is specified by the user as a 2D sketch on an image of the scene. We contribute the novel _Stable Diffeomorphic Diagrammatic Teaching_ (SDDT) method, where ray-tracing is used to project the user's sketch onto the surface, and an Orbitally Asymptotically Stable (O.A.S.) dynamical system, which converges to a cyclic orbit, is learned as a policy for the robot's motion. SDDT learns O.A.S. systems by learning a diffeomorphism that morphs a known stable base system into the desired system. We provide theoretical insight into the classes of 2D shapes the stable limit cycle can be shaped into, and provide extensive empirical evaluations of SDDT, both in simulation and on a real-world quadruped with a mounted manipulator. Future avenues of research include: (1) extending SDDT beyond flat surfaces, to ensure stable motion patterns on curved surfaces; (2) extending SDDT to allow the specification of forces applied onto the surface.
Fig. 9: SDDT is particularly useful when egocentric images, from onboard cameras, are available. We run our real-world experiments on a quadruped with a mounted arm. (Left) We sketch the shapes of a pentagon and a star (in red) on an egocentric view from the onboard camera. (Right) The robot converges to the surface, stabilises at, and traces out the diagrammatically provided shapes. |
2309.07082 | Landau model to illustrate the process of learning and unlearning of
nociplastic pain | Recent advances in the comprehension of the consolidation of nociplastic pain
point to a complex nonconscious learnt process of threat perception.
Neurobiological education is emerging as a promising approach to unlearn
nociplastic pain supported by biopsychosocial tools (exposition to movement,
mindfulness, sharing group format...). However this approach is still poorly
known by clinicians and society in general, forming a communication problem
that, unfortunately, perpetuates the suffering of the patients. We propose a
Landau model to describe the process of learning and unlearning nociplastic
pain to help to clarify this complex situation and facilitate communication
between different sectors of society. Nociplastic pain corresponds to a first
order transition with attention more likely in the alert-protection state than
in the trust-explore state. Two appealing results of the model are that the
perception of the critical context depends on the personal history about the
symptom and that biopsychosocial loops are formed when there are alarming
learnt historic information about the symptom together with confused and
contradictory expert information as in nocebo messages. Learning and unlearning
in the model correspond to a change in control parameters able to weight more
alert-protected state, trust-explore state or neutral state. This description
makes clear why neurobiological education is the ground therapy from which
others must be built to embody the pertinent, clear and trustful information. | Belén Valenzuela | 2023-09-11T11:08:13Z | http://arxiv.org/abs/2309.07082v1 | # Landau model to illustrate the process of learning and unlearning of neoplastic pain
###### Abstract
Recent advances in the comprehension of the consolidation of nociplastic pain point to a complex nonconscious learnt process of threat perception. Neurobiological education is emerging as a promising approach to unlearn nociplastic pain supported by biopsychosocial tools (exposition to movement, mindfulness, sharing group format...). However, this approach is still poorly known by clinicians and society in general, forming a communication problem that, unfortunately, perpetuates the suffering of the patients. We propose a Landau model to describe the process of learning and unlearning nociplastic pain to help to clarify this complex situation and facilitate communication between different sectors of society. Nociplastic pain corresponds to a first order transition with attention more likely in the alert-protection state than in the trust-explore state. Two appealing results of the model are that the perception of the critical context depends on the personal history about the symptom and that biopsychosocial loops are formed when there is alarming learnt historic information about the symptom together with confused and contradictory expert information, as in nocebo messages. Learning and unlearning in the model correspond to a change in control parameters able to weight more the alert-protected state, the trust-explore state or the neutral state. This description makes clear why neurobiological education is the ground therapy from which others must be built to embody the pertinent, clear and trustful information.
Chronic pain is increasing at an alarming rate in recent years, as exemplified by low back pain [1; 2; 3]. Musculoskeletal chronic pain has been identified as one of the leading causes of disability worldwide [4]. In addition, musculoskeletal conditions may increase the risk of other chronic diseases such as cardiovascular disease, cancer and diabetes [5]. This disturbing situation has increased the research interest in a better understanding of chronic pain. Recently, nociplastic pain has been defined as a large component of chronic pain not associated with tissue damage [6]. This term was necessary because most chronic pain is non-specific and does not correspond to an underlying pathology [1; 2]. From advances in cognitive and phenomenological science, there is strong evidence that the consolidation of nociplastic pain is a complex nonconscious learnt process of threat perception which can be formed by expectations and/or learnt habits, alarming information from clinicians or experts and interpretation of the context, giving rise to maladaptive loops [7; 8; 9; 10]. This insight opens the possibility to overcome or attenuate nociplastic pain if it were possible to unlearn these beliefs and habits via the plasticity of the nervous system, reducing the perception of threat. Biopsychosocial models [11; 12; 13; 14; 15; 16] rooted in neurobiological education of pain (NBE) have emerged as the ground approach to help with this problem [17; 18; 19; 20].
Unlearning nociplastic pain via learning neurobiological education is not an intellectual learning but an embodied learning, meaning that it is a process where, in a safe and caring environment, the information gets interiorized from the conscious patient to the nonconscious organism until it becomes the automatic perception. Thus, the patient makes sense of his/her own experience and develops an internal compass to discern what is a threat, what is not and what is uncertain. When the perception of threat is reduced the intensity of symptoms decreases, the person becomes more functional and symptoms eventually might disappear. This is not an easy task since the patient with nociplastic pain presents a nonconscious learnt suffering pattern with intricate cognitive, emotional, attentional, motivational, motor, behavioral and social loops [21]. Physiologically, the entire nervous system including the brain, the endocrine system, the immune system and even the microbiota take part in the perception of threat [22; 23], and both innate and adaptive immune responses modulate pain perception and behavior [24]. Therefore the process of interiorizing the information of neurobiological education might be different for each patient and might take a different time. Since there is a threat perception by the organism, this learning might require building a safe and caring social environment for the patient simultaneously with the active coping of the patient with their own recovery. That's why it is usually complemented with other techniques that adapt to the needs and preferences of the patient to embody the information [3], which we will call biopsychosocial tools. Examples of these tools are mindfulness [25], exposition to movement, sharing group format, playing, imaginative analgesia, psychological assistance, etc. Very positive effects of this therapeutic approach have been reported in migraine [26; 27], in musculoskeletal pain [28] and in fibromyalgia [29; 30; 31]. The advantages of this approach to pain are enormous since the patient is less exposed to secondary effects of pills. Most prescription pain killers cause significant side effects such as addiction, and pharmacotherapy remains suboptimal [32], especially
in the face of high placebo effects [10; 33]. Finally, this embodied learning helps to understand that hypervigilance, anxiety, depression, anger, fear and catastrophising in the pain experience are part of the process and that other annoying symptoms the patient might suffer, such as insomnia, brain fog, ruminating thoughts, tense jaw, restless legs, tense muscles, digestive disorders, etc., might also belong to the threat perception [16].
Despite the advantages of the biopsychosocial framework, this model has not yet permeated broadly either the health system or society. The reason is complex and we will mention some aspects. First, pain is in the process of being understood, with the definition of pain still changing [34; 35]. Pain is also addressed at several levels, from biochemistry to physiology, psychology, sociology and philosophy, each level with its own complexity and terminology. In addition, the enormous amount, complexity and rate of generation of research information about pain makes it difficult for the essential information to permeate to different sectors of the scientific community, to the clinicians and finally to society. This creates a communication problem at different levels, reflected in a lack of updating of the advances in the understanding of pain both in graduate studies [36] and in the expert community. At the end of the chain, this uncertainty is translated to the patient, increasing his/her threat perception, which fuels the pain. In addition, these patients visit several experts, clinicians or not, trying to understand their different symptoms, which increases the already confused state. We will name all these experts and clinicians the expert culture, a name borrowed from Arturo Goicoechea [20].
In this situation it is not surprising that the neurobiological education proposal is poorly known. Unfortunately, this fact makes it easier for patients to absorb erroneous beliefs, many of them adopted from the expert community [9; 37; 38]. Common misconceptions translated to the patients are "pain is related to tissue damage", or "the sensation of pain is proportional to tissue damage". Another issue is fragility messages such as "you have pain because your muscles are weak" [39]. These erroneous messages, the so-called nocebo effect, precipitate the consolidation of persistent pain [40]. This is even more important due to the bias of the mind towards nocebo messages [41]. The uncertainty exposed above together with these misconceptions form a larger social loop in which the patient is embedded. One could think that in these cases pain is not formed by maladaptive loops of the patient, since the loops of the patient are adapted to his/her misinformed social milieu. This is in line with new definitions of health that emphasize that the organism adapts to the interiorized biopsychosocial information [42]. In this sense, these biopsychosocial loops are adapted to society but maladaptive to life, since there is an unnecessary suffering pattern.
Implementing the biopsychosocial model is challenging. It seems that a new curriculum in pain is not enough to prepare medical students, and that both competence and compassion toward their patients are essential [43]. Since pain is related to a threat perception, conscious or not, being in a trustful environment where the patient does not feel judged but listened to, believed and understood is the starting point to initiate the embodied learning of NBE. The biopsychosocial model is also vaguely defined and there is a tendency to separate the patient into three domains (biological, psychological and social) without taking into account the experience of the patient [39; 44]. As pointed out by Peter Stillwell and Katherine Harman [39], a reductionist approach is sometimes used to explain pain, where in patient education clinicians might use problematic pain explanations such as "pain is in the brain", which is confusing to some patients, who may think they have something wrong in their head or that their pain is not real but psychological [39]. In [39] it is proposed instead to understand the subjective experience of the patient from the Enactive approach [45; 46]. The enactive perspective is a branch of the embodied cognitive sciences based on dynamical systems, phenomenology and organizational approaches to biology. It aims to build a bridge between life and mind, investigating organisms embedded in their physical and social context. In this approach cognition is defined as 'sense-making', the capacity of an organism to evaluate different possible options and act in an adaptive manner to maintain and expand life.
On the other hand, the expert community and patients are skeptical about the proposal that pain can be learnt unconsciously and can be unlearnt by learning about the neurobiology of pain. In fact, it is indeed remarkable that embodied education in neurobiology can be of such enormous help for the well-being of the person. For this, the consensus of the messages given by the expert community is key to building trust.
In Ref. [47] it was proposed that approaches from the adjacent field of Statistical Physics, which allow modelling phase transitions, were the appropriate framework to understand the chronification of pain and could be used as a communication tool. The idea put forward was to build an Ising model for positive and negative biopsychosocial factors relevant to pain, although the model was not formally formulated. We also think that the analogy to phase transitions is useful to illustrate the essential understanding of the chronification of pain, although instead of focusing on positive and negative biopsychosocial factors from an external perspective we propose to start from the subjective experience of the person. This will also allow us to point out how it is possible to unlearn the perception of threat in nociplastic pain. We prefer the phenomenological Landau approach [48] to phase transitions to start with because it helps to discern the essential variables and parameters. It is also simpler, which makes it more attractive as a communication tool in diverse disciplines and to different sectors of society. Moreover, it is possible to connect Landau models with Ising models, where the Ising models are the microscopic version of Landau models [49].
In this article, we model the automatic perception, which can be either in an alert-protected state, a trust-explore state or a neutral state. This is determined by the following parameters of embodied information: information from the senses about the context, the nonconscious or conscious historical experience related to the symptom, and the information from the expert community that polarizes the patient's opinion. The equivalent of the free energy [48] is the patient's sense-making. This is a term borrowed from the Enactive approach [50]. Automatic attention is located in the most likely state in the sense-making landscape. Several sense-making landscapes corresponding to different subjective experiences arise depending on the parameters: Zen, uncertain, hypervigilance, catastrophizing, curiosity, communicative... As a result it is seen that: 1. the critical context from which the alert-protected or trust-explore states arise depends on the personal history related to the symptom, which agrees with recent knowledge in neuroscience [51]; and 2. a hysteresis loop is formed with the personal history and contradictory or misguided expert information. This hysteresis loop corresponds to the biopsychosocial loops found in patients [17; 18; 19; 20]. The model is used to illustrate the nonconscious learning process of nociplastic pain with nocebo messages and the embodied learning of neurobiological education to dissolve the biopsychosocial loop. The model might help to communicate the synthesized information with a common thread and guide practitioners and health policies. It might also help the patient make sense of his/her own experience. It can also be a tool to disseminate the benefits of a meaningful and updated biopsychosocial integrated framework to society.
In the following we present the derivation of the Landau model for the automatic perception. Next, we survey the different sense-making landscapes. We also show the formation of hysteresis loops with expert information and historic information. We illustrate the process of learning/unlearning nociplastic pain using the model and we end up with a discussion and conclusions.
## I Derivation of Landau model of the automatic perception
Landau models [48] were originally proposed to describe phenomenologically phase transitions common in nature where a control parameter varies: for example how iron is magnetized when lowering the temperature below a critical temperature or when increasing a magnetic field. Magnetization would be the order parameter which is zero above the transition temperature and different from zero below the transition. The temperature and the magnetic field are control parameters that when varied can make a transition from one state to the other state. The free energy is a functional of the control parameters and the order parameter whose minima determine the most stable states. The representation for a given set of parameters is a free-energy landscape with minimum points that correspond to the most likely states and will determine the state of the system. In the case of magnetization there would be three possible states: downwards magnetization, upwards magnetization and neutral. An influential and inspiring article by Phil Anderson [52] in the context of condensed matter physics proposed that the concept of phase transition could help to understand emergent phenomena from interacting components at each hierarchical level of science, including life and mind.
The most common phase transitions are of first or second order in the Landau classification [53]. In first order transitions there is a mixed state at the transition. For example, in the case of magnetization there would be a mix between upward and downward magnetization. In second order phase transitions there is, however, criticality at the transition, a very important concept related to large-scale cooperative phenomena. In the case of magnetization, at the critical transition all the magnetic moments cooperatively align in either upward or downward magnetization and the magnetic susceptibility diverges. Extensions of the concept of criticality are widely used to describe living systems [54; 55; 56] and neural activity [57; 58]. Psychodynamic processes also have a long tradition in dynamical systems [59; 60; 61] and Landau theory is also used to model other subjective experiences [62].
Now we proceed to use the Landau framework to build up a model to delineate the process of learning/unlearning nociplastic pain. For that, we need to address the threat perception of the symptom. We will use inputs from the phenomenological and cognitive sciences about the perception of pain or other symptoms related to an alert-protected state. It is not within the scope of this work to achieve a comprehension of the complex process of perception of a sensation; we just borrow some intuitive concepts from the scientific literature to present the phenomenological model.
Let's start with the sensation. We understand pain and symptoms as persistent sensations. Sensations are experienced throughout the day, reporting about demands or needs from homeostasis and allostasis [7; 42]. Thus, they are a nonconscious evaluation of the needs of the organism. Physiologically, the information needed for the evaluation is circulating through the neuro-immune-endocrine system plus the microbiota and includes cognitive-emotional information from one's own history, context and culture. This forms a pattern of intricate rules aiming to survive and expand, i.e. the process of homeostasis and allostasis. The sensation is expressed in our consciousness and urges us to interact with the external world to satisfy the need as an automatic response. For example, the hunger sensation urges us to look for food. We perceive the sensation, the evaluation that the sensation is hunger and the motivation to go for food. Consciously we can decide if we go for food or not. Valence and arousal are needed to describe a sensation. Valence is related to how the organism validates the sensation, as pleasant (positive) or unpleasant (negative). Arousal measures the intensity of the sensation: if it is low, the sensation is felt as quiet and, if it is large, it is felt as agitated. Zero arousal corresponds to a neutral sensation.
The automatic perception of the symptom, labeled by \(\phi\) is the order parameter of the Landau model since from all the information available, it collects just the most relevant. We will understand this automatic perception as a semiconscious cognitive-emotional evaluation of the symptom where the historical, sensorial and expert information is integrated to discern the evaluation-motivational state: either alert-protection or trust explore [7; 51; 63]. By semiconscious we mean that it is possible to become aware of this semiconscious evaluation by self-observation as in metacognition. The perception of the symptom \(\phi\) equals zero means the symptom is evaluated as neutral and there is no need. If \(\phi\neq 0\) there is uncertainty either because there is a novelty or an inconsistency or a contradiction in the information perceived. Information reduces uncertainty so a cognitive-emotional causal query looking for sense arises in the default mode of the mind to remind intrinsic (from own history) or look for extrinsic information related to the symptom (from the context and social milieu). The evaluation can result in \(\phi\) negative meaning the sensation is possibly dangerous for survival and protection is needed; or \(\phi\) positive, corresponding to liveliness perception since there is trust that it is safe to explore.
Naively one would think that when the valence of a sensation is negative (unpleasant sensation) then \(\phi<0\), especially if arousal is large, but it is also possible that an unpleasant sensation will eventually fade away to a neutral state, as, for example, in the case where a person has done exercise and feels stiff muscles. Internally, previous experiences and other people's experiences with stiff muscles after sport are recalled. The automatic perception resolves that the sensation is known and will disappear, no alarm is sent to the individual, not much attention is given to the stiff muscles and, at some point, \(\phi\) might turn neutral and eventually the arousal of the sensation turns to zero.
Thus, there are several layers of evaluations. The nonconscious evaluation from the organism expressed in the consciousness by the sensation. The automatic perception of the sensation \(\phi\), and the agent perception which, in principle, is a reevaluation of the perception and sensation in the present social and physical context to discern whether to follow the automatism or not. These layers of evaluations might be wrong or contradictory. For example, in nociplastic pain, when the agent wishes to do his/her daily task, the organism evaluates pain and an alert-protection perception and the agent cannot perform the task. That is, it is not possible for the agent's intention to become an action; agent and organism are not aligned. Alignment can come back by consciously embodying NBE information. We will not model this feedback between the agent and the organism, just the automatic perception of the symptom from which the learning/unlearning process can be understood.
Having defined the automatic perception as the order parameter, \(\phi\), we are ready to build up the Landau model. In Statistical Physics, \(F\) is the free energy and a potential minimum is the most likely state of the phase space of a system, i.e. states with lower potential corresponds to higher probability. Analogously, a hill denotes an unstable state. The potential landscape will change shape at the transition. In the present case the analogous to the energy is the sense-making \(S\), a term borrowed from the enactive approach [45; 46]. As we have already mentioned, the perception of a sensation leads to a search for sense reminding intrinsic information (from own history) or extrinsic information (from physical or social context) related to the symptom. What makes more sense to survive or to expand is what determines the more likely state of the perception \(\phi\) among the possibilities. The minus sign comes because the higher the sense-making, maxima in the landscape, the higher the probability. To make analogy to Landau theory we prefer to add a minus sign in such a way that minima corresponds to likely states \(-S=F\). Having this in mind we will call the different landscapes the sense-making landscapes. For that, we express \(F\) expanded in powers of the order parameter \(\phi\):
\[-S=F=-h_{ext}\phi+\frac{a}{2}\phi^{2}+\frac{h_{int}}{3}\phi^{3}+\frac{b}{4} \phi^{4} \tag{1}\]
In this expression all control parameters, \(h_{ext}\), \(h_{int}\), \(a\) and \(b\), are nonconscious embodied information of causal relations to infer the perception of the symptom, i.e., meaningful information for survival or living concerning the symptom. \(h_{ext}\) denotes an external bias provided by the information from the expert culture about the symptom. To model the present situation of threat perception in nociplastic pain exposed in the introduction, \(h_{ext}>0\) corresponds to precise and updated neurobiological information and \(h_{ext}<0\) corresponds to confused and nocebo messages, when there is unjustified alarming information. As we have already mentioned, the expert advice has strong relevance since it is the one that can resolve the uncertainty about the patient's health. Next, \(h_{int}\) is the symptom historical information enclosing previous learnt rules such as beliefs and expectations from past experiences and learnt habits related to the particular symptom. Positive/negative \(h_{int}\) corresponds to alarming/pleasant rules related to the symptom. Then, we define the parameter \(a\) as is common in Landau theory as \(a=a_{0}(T-T_{0})\), where \(T\) is the information registered by the exteroceptive and proprioceptive senses, i.e. information about the actual context and the presence of the person in this context. \(T_{0}\) is the critical value of the organism with innate stored rules about when the context becomes uncertain. High \(T\) means collecting abundant information from senses, low \(T\) means collecting less information from senses and zero \(T\) means no information from senses. \(T\) is the only information not related to the symptom. Finally, \(a_{0}\) and \(b\) are innate positive parameters. By innate we mean the genetic tendencies of the
person. We will comment on these two last parameters in the discussion section.
In Fig. 1 we represent a typical sense-making landscape \(F(\phi)\) with two minima, which allows us to define useful vocabulary to include all previous concepts. Similarly to the sensation, which has a valence and an arousal, the perception of the sensation can also be positive, neutral or negative. Its intensity corresponds to the absolute value of the perception \(|\phi|\). \(\phi\) negative represents a survival perception and \(\phi\) positive, a liveliness perception. The minima denote the most likely perception. The minimum in the survival perception is called the alert-protected state (\(\phi_{a-p},F(\phi_{a-p})\)) and the minimum in the liveliness region, the trust-explore state (\(\phi_{t-e},F(\phi_{t-e})\)). \(\phi_{a-p}\) is the survival perception at the alert-protected state and \(\phi_{t-e}\) is the liveliness perception at the trust-explore state. The sense-making value at the alert-protected state is called hypervigilance \(F(\phi_{a-p})\). Since there is an alert-protected state, what makes sense is to look for information regarding the danger. On the other hand, the sense-making value at the trust-explore state is named curiosity \(F(\phi_{t-e})\). When there is trust, i.e. perception of the sensation in a safe environment, a natural curiosity to know about what is around makes sense. We also define two biases: the perception bias, \(\Delta\phi=|\phi_{t-e}|-|\phi_{a-p}|\), as the difference between the intensity in the trust-explore state with respect to the alert-protection state, and the sense-making bias, defined as the distance between the two minima, the trust-explore state with respect to the alert-protected state, \(\Delta F=|F(\phi_{t-e})|-|F(\phi_{a-p})|\). A positive bias in perception \(\Delta\phi>0\) is optimistic and a negative bias \(\Delta\phi<0\) pessimistic. A positive bias in sense-making \(\Delta F>0\) represents curiosity bias and a negative bias in sense-making \(\Delta F<0\), hypervigilance bias. Attention is represented as a black point. If it is automatic it is more likely in the global minimum. Conscious attention can be in any extreme of the sense-making landscape depending on the person's will, although it might require more effort depending on the bias size. We finally define \(\phi_{dm,a-p}(T=0)\) and \(\phi_{dm,t-e}(T=0)\), not shown in the figure, which correspond to a saturated perception where the mind is in complete default-mode in either the alert-protection state or the trust-explore state. The saturated perception appears when there is no information from senses \(T=0\), or there is some \(T\neq 0\) but there is enough \(h_{ext}\) such that \(\phi_{dm,t-e}(T=0,h_{ext}=0)=\phi(T,h_{ext})\). This saturated perception will appear in the hysteresis loops representing biopsychosocial loops.
## II Sense-making landscapes
Figure 1: Typical sense-making landscape \(F(\phi)\) versus the perception of the symptom \(\phi\) showing deepest minima in the alert-protection state than in the trust-explore state. The black point is attention which is located in the deepest minima. Hypervigilance, curiosity, hypervigilance bias sense-making and pleasant/unpleasant intensity in perception are depicted. Negative perception defines survival and positive perception liveliness.
Figure 2: Sense-making landscapes for \(h_{int}=0\) and \(h_{ext}=0\). \(T_{0}\) is the critical context. For \(T>T_{0}\) the most likely possibility is the minimum at the neutral state corresponding to the Zen landscape. At \(T=T_{0}\) information from senses is equal to the critical context and corresponds to uncertainty landscape. When the information from senses is below the critical context, \(T<T_{0}\), alert-protection and trust-explore states have equal sense-making value. This is the baby landscape. Attention is depicted as the black point in the trust-explore state.
In the following we analyze different sense-making landscapes available in the model depending on different possible perceptions and sense-making values and bias. We identify the landscapes with mindsets in chronic pain such as hypervigilance and catastrophizing and with expansive states such as curiosity and communicative. Notice that, different states also give rise to a particular social behavior that would be alert-protection-isolation and trust-explore-play.
Let's first analyze the simplest case with no expert information \(h_{ext}=0\) and no historical information \(h_{int}=0\). Fig. 2 shows this scenario with three different landscapes. This is the typical free energy of a second order phase transition [53]. Since there is no previous information about the sensation, what is perceived is what is felt, and the sensation from the organism and the perception from the agent have the same valence and intensity. In this case \(T_{0}\) is the critical context from a neutral state to an uncertain state. When the information from senses is larger than the critical value, \(T>T_{0}\), there is a minimum at the neutral state \(\phi=0\). The sensation is perceived as neutral. We will call this landscape the Zen landscape. Then, \(T=T_{0}\) (blue line) is the critical value where uncertainty about the sensation sets in: the uncertainty landscape. At this value there are as many minima on the left as on the right, interpreted as not knowing if the sensation is a threat or is safe. Then, below the transition, \(T<T_{0}\) in Fig. 2 (red line), there is less information from senses to focus perception on the sensation. The landscape corresponds to a balance between the alert-protected state (left minimum) and the trust-explore state (right minimum). The state is balanced in the sense that the sense-making values at the minima are equal, \(F(\phi_{a-p})=F(\phi_{t-e})\). There is no perception bias, \(\Delta\phi=0\), and no sense-making bias, \(\Delta F=0\). We will call this landscape the baby landscape. Attention is represented as a black point. Thus, if the baby feels afraid the attention is on the left minimum \(\phi_{a-p}\) and if the baby feels safe and willing to explore, the attention goes to the right minimum \(\phi_{t-e}\).
Next, let's take \(h_{int}\neq 0\), i.e. there are previous learnt rules about the sensation. To illustrate nociplastic pain we set \(h_{int}>0\). We recall that positive \(h_{int}\) comes from alarming rules related to the sensation by the organism. The possible sense-making landscapes are represented in Fig. 3 and correspond to the typical free energy of a first order transition [53]. In this case \(T_{0}\) does not correspond to the critical information from senses representing uncertainty, but \(T_{*}=T_{0}+\frac{2h_{int}^{2}}{9a_{0}b}\). From this expression it is seen that if there are many rules related to the symptom, i.e. \(h_{int}\) large, there are more contexts that are evaluated as a possible threat, i.e. \(T_{*}\) large. This result agrees with studies in cognitive sciences [51] where it is observed that alarming beliefs (\(h_{int}>0\)) distort the perception of how dangerous the context is. We call this blue landscape in Fig. 3 the uncertainty with pessimistic bias landscape, with more minima on the left than on the right, meaning attention can wander between all these minima. At \(T>T_{*}\) (black line) we just have one minimum and this state corresponds to the Zen landscape, as we have explained above. Attention can just be in the neutral state. For \(T_{0}<T<T_{*}\) the organism is just in an alert-protected state, which we have assigned to the catastrophizing landscape. Attention is in the alert-protected state. Here there is pessimistic bias, \(\Delta\phi<0\), hypervigilance bias \(\Delta F<0\) and no curiosity \(F(\phi_{t-e})=0\). At \(T<T_{0}\) (red line) there is a mixed state again with pessimistic bias \(\Delta\phi<0\) and a hypervigilance bias \(\Delta F<0\) but with some curiosity, in such a way that attention is more likely to be in the alert-protection state than in the trust-explore state. In this example, hypervigilance bias means that there is a tendency to absorb alarming messages about the symptom. Notice that to focus on just information related to the symptom means lower information from senses (\(T\) lower). Therefore, this mixed state is called the hypervigilance bias landscape.
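A minimal numerical sketch of this case, with illustrative parameter values (\(h_{int}>0\), \(h_{ext}=0\), \(T<T_{0}\)): evaluating eq. (1) on a grid and locating its local minima reproduces the hypervigilance bias landscape, with the alert-protection minimum deeper than the trust-explore one.

```python
import numpy as np

def F(phi, h_ext, h_int, a0, b, T, T0):
    """Sense-making landscape of eq. (1), with a = a0 (T - T0)."""
    a = a0 * (T - T0)
    return -h_ext * phi + a / 2 * phi**2 + h_int / 3 * phi**3 + b / 4 * phi**4

phi = np.linspace(-2.0, 2.0, 4001)
land = F(phi, h_ext=0.0, h_int=0.5, a0=1.0, b=1.0, T=0.5, T0=1.0)

# local minima of F = most likely perception states
is_min = (land[1:-1] < land[:-2]) & (land[1:-1] < land[2:])
minima = phi[1:-1][is_min]
print(minima, land[1:-1][is_min])  # the negative (alert-protection) minimum is deeper
```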
Let's consider now the case with \(h_{ext}\neq 0\). If \(h_{int}=0\), the landscape represented in Fig. 2 (red line) will have a lower minimum in alert-protection or trust-explore depending on the sign of \(h_{ext}\). If this case represents a baby, \(h_{ext}\) would typically represent the parents that polarize the baby's uncertainty. If \(h_{int}\neq 0\) and focusing on illustrating the case of nociplastic pain, \(h_{ext}\) is the information from the expert culture, with a strong impact in reducing uncertainty. In this case \(h_{ext}\) can polarize the perception of the patient. We remind that \(h_{ext}<0\) denotes misinformed information by the expert culture and \(h_{ext}>0\) corresponds to updated expert information in relation to the knowledge of pain. Of course, in a general case \(h_{ext}>0\) might also be misinformed information representing the placebo effect, but we stick to the first situation to describe the nocebo problem in nociplastic pain. A negative value of \(h_{ext}\) favors the alert-protected state as in the case of \(h_{int}>0\) shown in Fig. 3. The explanation of the different states would be similar, where in addition to historical alarming beliefs and maladaptive habits there is misin
Figure 4: Sense making landscapes for \(h_{int}<0\). The critical context \(T^{*}\) depends now on comforting previous rules in \(h_{int}\). Zen landscape for \(T>T_{*}\), \(T=T_{*}\) corresponds to the uncertainty with liveliness bias, \(T_{0}<T<T_{*}\) to the communicative sense-making landscape and \(T<T_{0}\) corresponds to curiosity bias.
formed messages from expert culture and proposition of rigid habits.
In Fig. 4 we represent the case when \(h_{ext}>0\), corresponding to updated expert information, and/or \(h_{int}<0\), corresponding to safe and comforting learnt rules about trust in the organism. Again \(T_{0}\) becomes \(T_{*}=T_{0}+\frac{2h_{int}^{2}}{9a_{0}b}\), meaning that there is an optimistic bias in perceiving the surroundings at the critical context. In this case the landscape at \(T_{0}<T<T_{*}\) represents the communicative sense-making landscape, where the person is willing to share her/his discoveries about how to recover from the symptoms. There is then optimistic bias, \(\Delta\phi>0\), curiosity bias, \(\Delta F>0\), and no probability for threat \(F(\phi_{a-p})=0\). \(T<T_{0}\) (red line) corresponds to a mixed state but now the global minimum is in the trust-explore state and there are again both an optimistic bias \(\Delta\phi>0\) and a curiosity bias \(\Delta F>0\), with attention more likely in the trust-explore state than in the alert-protection state.
In summary, if \(h_{int}=h_{ext}=0\) we have a second order phase transition with three landscapes forming as the information from senses decreases: the Zen, uncertainty and baby landscapes. In this case the sensations and the automatic perception of the individual have the same valence and intensity and there is no bias. When \(h_{int}\) is different from zero there are first order transitions. The critical information from senses to arrive at an uncertain state is \(T_{*}=T_{0}+\frac{2h_{int}^{2}}{9a_{0}b}\), meaning the critical context depends on the historical rules with respect to the symptom \(h_{int}\). This case presents mixed states with bias in the perception and in what makes sense in that situation. If the bias in perception is pessimistic, the landscapes found when decreasing the information from senses \(T\) are first a catastrophizing landscape with zero probability for the trust-explore state and then a hypervigilance bias landscape focused on information to protect oneself. On the other hand, if the bias in the perception is optimistic, when decreasing the information from senses there are the communicative landscape, with zero probability for the alert-protected state, and the curiosity bias landscape.
Which landscape is the most appropriate? The organism evaluates with the information that it contains [42]. If the information is wrong there might be an error in the evaluation. Opportunities for potential well-being thus need to be investigated. In the following we will delve into an error in evaluation due to confused or erroneous messages from the expert culture.
## III Hysteresis loop from expert information as a biopsychosocial loop
Imagine there are different criteria from the expert culture, i.e. \(h_{ext}\) varies, for a given history of the person \(h_{int}\) related to the symptom and a given context \(T\). In the case of first order transitions with mixed states, corresponding to the hypervigilance bias and to the curiosity bias sense-making landscapes, hysteresis loops of the perception \(\phi\) with respect to the information absorbed from the expert culture \(h_{ext}\) are formed. In the hypervigilance loop there is a bias for pessimistic information and in the curiosity loop there is a bias for optimistic information. In the present case for nociplastic pain we will relate negative \(h_{ext}\) with the nocebo effect and positive \(h_{ext}\) with neurobiological education.
Mathematically the hysteresis loops are obtained from the minimum of \(F\) with respect to perception to find the most likely states:
\[\frac{\partial F}{\partial\phi}=0\to h_{ext}=a\phi+h_{int}\phi^{2}+b\phi^{3} \tag{2}\]
The hysteresis loop \(\phi\) versus \(h_{ext}\) in hypervigilance bias is represented in Fig. 5. In the figure is displayed the survival perception at the alert-protected state \(\phi_{a-p}(h_{ext}=0)\), the liveliness perception at the trust-explore state \(\phi_{t-e}(h=0)\) and the saturated perception corresponding to the default modes at the alert-protected state \(\phi_{dm,a-p}\) and trust-explore state \(\phi_{dm,t-e}\). \(h_{ext\uparrow}\) is the absorbed expert information needed to go from alert-protection state to trust-explore state and \(h_{ext}\downarrow\) is the absorbed expert information to change from the trust-explore state to the alert-protection state. In the case of nociplastic pain \(h_{ext}\downarrow\) will correspond to the absorbed noecbo messages needed to change states from trust-explore to alert-protection and \(h_{ext\uparrow}\) to the absorbed neurobiological education needed to change from alert-protection to trust-explore.
The loop starts at \(\phi_{t-e}(h=0)\). For negative \(h_{ext}\) there are alarming messages and \(\phi\) is decreasing towards \(h_{ext}\downarrow\). The dashed line has a negative slope meaning that the
Figure 5: Hysteresis loop between the perception of the symptom and the expert embodied information \(h_{ext}\) related to the symptom. The dashed line does not belong to the loop because it corresponds to the case where the polarization of the perception is opposite to the expert information. In the figure it is observed that the loop is mostly in alert-protection since \(h_{int}>0\), and the embodied neurobiological education \(h_{ext}>0\) must have a large value to counteract the previous bias \(h_{int}\). The default mode in the alert-protection state \(\phi_{dm,a-p}\) and in the trust-explore state \(\phi_{dm,t-e}\) are depicted, as well as the most likely perceptions \(\phi_{a-p}\), \(\phi_{t-e}\) at \(h_{ext}=0\). \(h_{ext\uparrow}/h_{ext\downarrow}\) represents the value of the expert information in the model at which there is a transition from survival to liveliness / from liveliness to survival perception.
trust/threat perception is opposed to the information given by the expert culture \(h_{ext}\) and thus does not polarize the opinion of the patient: on the contrary, the patient's opinion is opposite to the expert information. Since we have started with the assumption that the expert information polarizes the patient's opinion, we do not consider this case. Then, at \(h_{ext}\downarrow\) the perception turns to the default-mode of alert-protection \(\phi_{dm,a-p}\). If now neurobiological education is absorbed, the threat perception decreases until it finally reaches \(h_{ext\uparrow}\), where there is a change of state to \(\phi_{dm,t-e}\). Then, there are nocebo messages and the loop starts again.
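The loop just described can be traced numerically. The sketch below, with illustrative parameter values, sweeps the expert information \(h_{ext}\) down and then up while following the local minimum of eq. (1) nearest to the current perception, so the jumps between the two branches occur at different values of \(h_{ext}\) in each direction.

```python
import numpy as np

a0, b, h_int, T, T0 = 1.0, 1.0, 0.5, 0.5, 1.0
a = a0 * (T - T0)

def nearest_minimum(h_ext, phi_current):
    # stationary points of eq. (1) solve eq. (2): h_ext = a*phi + h_int*phi^2 + b*phi^3
    roots = np.roots([b, h_int, a, -h_ext])
    real = roots[np.abs(roots.imag) < 1e-8].real
    # keep only minima (F'' > 0) and follow the branch closest to the current perception
    minima = [r for r in real if a + 2 * h_int * r + 3 * b * r**2 > 0]
    return min(minima, key=lambda r: abs(r - phi_current))

phi, loop = 0.5, []
sweep = np.concatenate([np.linspace(0.3, -0.3, 200),   # nocebo direction
                        np.linspace(-0.3, 0.3, 200)])  # neurobiological-education direction
for h in sweep:
    phi = nearest_minimum(h, phi)
    loop.append((h, phi))

# perception jumps to alert-protection on the way down and only returns to
# trust-explore at a larger positive h_ext on the way up: a hysteresis loop
print(loop[199], loop[-1])
```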
The meaning of this hysteresis loop is that the patient is confused by the contradictory information between nocebo messages and neurobiological education. We associate it with a biopsychosocial loop, meaning bio in the symptom, psycho from the perception and social from the information from the expert culture \(h_{ext}\). This loop gives rise to the cognitive, emotional, attentional, motor, motivational and behavioral loops that help to consolidate the persistent symptom. This fact illustrates the necessity of coherent information in the expert community, in the media, at university and at school.
## IV Learning and Unlearning Persistent Pain Illustrated by the Model
In the model, unlearning nociplastic pain is just a change of parameters: learnt nocebo messages \(h_{ext}\) are replaced by trustworthy and updated neurobiological information \(h^{\prime}_{ext}\); alarming past learnt rules \(h_{int}\) are replaced by new rules \(h^{\prime}_{int}\) obtained by revising the meaning of the previous ones; and the senses can even be trained to collect more sensorial information \(T^{\prime}\), or conscious attention trained to realize the non-permanent character of the perception states. Clearly the patient is the main character, taking an active coping role throughout the whole process of embodying the new information. Let us illustrate this process of learning/unlearning chronic pain with a hypothetical example.
Let us consider a perception \(\phi\) of a sensation that is an ache in the neck. If the neck-ache remains and there is uncertainty because the pain is new or disturbing, the organism goes from a neutral state to an uncertainty landscape \(T=T^{*}\), the blue line depicted in Fig. 3. The default mode of the mind will be wandering with ruminating thoughts about correlational-causal possibilities concerning the symptom. For example: do I have to worry about the neck-ache? I have been told that screens force a bad neck posture. Should I go to the doctor? Should I buy another screen/mouse? (attention is in the alert-protection state). Let's move a little bit or let's go for a walk (attention is in the trust-explore state). I think it is nothing to be worried about (attention is in the neutral state), etc. There might also be a bad memory about neck ache because there was some accident some years ago. In this case \(h_{int}>0\) and \(T^{*}=T_{0}+(2h_{int})^{2}/(9a_{0}b)\). Thus, the critical context is uncertainty with a pessimistic bias in perception. The interest in the neck-ache increases and the patient, consciously or not, focuses on information about it, paying less attention to the information from the senses, i.e. \(T\) decreases. In principle, health experts have privileged information about pain, and when the patient goes to the health expert he/she expects to make sense of the pain and especially to get relief from the symptom. Imaging experts infer from an X-ray a cervical deviation \(h_{ext}<0\) and tell the patient that the pain arises because she/he acquires a bad posture when working with computers. Nowadays it is well known that this information and recommendation can even worsen the pain, since a new fear about not having a correct posture seeps into the organism, which feels the threat.[64] The survival organism is in the hypervigilance landscape shown in Fig. 3 (red line), with already alarming information about the neck and nocebo messages. Pain might consolidate, becoming persistent and sensitive to sitting on a chair, i.e. context information included in \(T^{*}\). This experience disrupts the person's life, since the organism finds danger at his/her workplace, and other symptoms such as brain fog and intrusive thoughts might appear when trying to concentrate at work, giving rise to frustration. This will fuel the evaluation of threat, and other symptoms corresponding to the alert-protection state might arise, such as a tense jaw, insomnia, digestive disorders... Each symptom will have its own sense-making landscape. The patient goes from one expert to another, but no tissue damage is found. At this point the person is suffering and might distrust both the expert community and his/her own organism.[65]
Imagine now that the person decides to visit a neurobiological education clinic. The information provided by the clinicians is different, \(h^{\prime}_{ext}>0\). At the beginning there will be the biopsychosocial loop due to the contradictory information, as shown in Fig. 5. How can the person trust NBE if the most likely state is the alert-protection state, with trust neither in the expert community nor in his/her own organism? Therefore, the first challenge is to build and maintain trust. The time needed to build trust will depend on the information embodied from one's own history \(h_{int}\), from the experts \(h_{ext}\) and from the context through the senses \(T\). That is why building a safe and caring environment and providing clear, accessible and honest information about pain is so relevant. Then, when trust is built between patient and clinician, active coping by the patient becomes possible, together with a commitment to go through the practice. This trust building is necessary not only at the start but also throughout the whole process of unlearning the threat perception while embodying NBE.
Concurrently, the patient, helped by the clinicians, explores the fact that the threat perception is not permanent by playing with conscious attention, with his/her sense-making landscape and with how to infer the nonconscious rules in \(h_{int}\). How can nonconscious rules be identified if they are precisely not conscious? When learning about NBE there might be a contradiction between the new information and the person's misbeliefs. The contradiction might be disturbing and lead to the uncertainty state and the biopsychosocial loop. These contradictions can also be identified when listening to the narrative of the patient and observing body language and behavior. It is necessary to approach these contradictions gently and with empathy, since otherwise alert-protection will emerge. Empathy might arise when the expert community realizes its own personal biopsychosocial loops maladaptive to life and understands how difficult it is to dissolve them. The patient can also become aware, by observing and exploring with curiosity instead of hypervigilance, of the default mode of the mind and of his/her own maladaptive cognitive, emotional, attentional, motivational, motor and behavioral loops. From this exploration it might be possible to infer misbeliefs and maladaptive habits in \(h_{int}\). In the present example, the patient will learn in NBE that many people with a strong cervical deviation do not have any pain (correlation is not causality), will learn that there is no correct position but rather a position for each occasion, and will also learn that all the symptoms arise from the threat perception, which points to the root of the problem, etc. [17; 18; 19; 20]. All this new learning contrasts with the previous expert information. The updated information needs to be embodied by exploring with curiosity, for example by playing when the patient feels safe, with any biopsychosocial tool available. Playing safely will also change how much information is extracted from the context, \(T^{\prime}>T\). Notice that playing might meet some resistance, because what the person wants is to get rid of the pain. It should be remembered that to embody the new information it is necessary to explore without any objective, like a baby. A challenging issue is that in the process the person might arrive at the catastrophizing state, where the patient feels that there is no hope. However, notice that catastrophizing is closer to the neutral state than the hypervigilance state. We interpret this as the Phoenix effect, where from the total suffering a new perception emerges when information from the senses is allowed. Becoming aware of this state might be part of the process. In addition, resistance, pain and symptoms will reappear from time to time, and the whole unlearning process starts again, but from a learnt base. Patience is necessary, together with trust in one's own organism. If the belief is finally dissolved, \(h_{int}\to h^{\prime}_{int}\), the sense-making landscape will change accordingly, with more probability in the trust-explore state than before and less probability in the alert-protection state. The sense-making landscape can also become communicative, where the patient is willing to tell his/her recovery experience. The clinician might notice this through a different narrative and a different body language.
It is clear, then, that the patient becomes the main character in his/her own recovery, guided by experts in NBE and complemented with biopsychosocial tools adapted to the patient. The recovery time will be particular to each person. It might also happen that some remnant pain and relapses remain, but the patient increases her/his functionality and thus her/his quality of life.
## V Discussion and Conclusions
The present Landau model describes phenomenologically and qualitatively key aspects of the perception of a symptom \(\phi\): 1. The contribution of personal history, physical context and expert culture to building the perception. 2. The optimization of the sense-making to discern whether the perception should be in an alert-protection state, in a trust-explore state or in a neutral state. 3. The automatic attention located in the deepest minima of the sense-making landscape and the conscious attention that can be located in any extremum of the sense-making landscape. 4. Second order transitions are derived if there are no past learnt rules, \(h_{int}=0\), and first order transitions if there are past learnt rules, \(h_{int}\neq 0\). 5. There are possible sense-making landscapes where different stages of the subjective perspective can be identified. For second order transitions these sense-making landscapes are Zen, uncertainty and baby, and for first order transitions: uncertainty bias, hypervigilance bias, catastrophizing, curiosity bias and communicative. 6. Unlearning corresponds to a change of the parameters: from nocebo messages to NBE, \(h_{ext}\to h^{\prime}_{ext}\), changing the meaning of learnt rules, \(h_{int}\to h^{\prime}_{int}\), and training the senses, \(T\to T^{\prime}\). As a result, in first order transitions the critical context depends on the personal history of the person, \(h_{int}\). This agrees with neuroscience studies where it is found that the personal context is inferred from the beliefs of the person [51]. Interestingly, from a different perspective, there have been proposals using neural networks to explain some mental illnesses as disruptions of criticality [57; 58], which agrees with the view of pathology as a first order transition in this simplified model. In first order transitions we also find the formation of hysteresis loops, interpreted in this work as a biopsychosocial loop. Hysteresis loops have also been proposed to explain perception in the context of neural representations [66]. This model is applied to address the learning/unlearning process of the threat perception given by the nocebo effect in persistent pain as a biopsychosocial loop maladaptive to life.
The result \(T^{*}=T_{0}+(2h_{int})^{2}/(9a_{0}b)\) also points out that the extra term, \((2h_{int})^{2}/(9a_{0}b)\), besides the historic information of learnt rules, \(h_{int}\), depends on the innate parameters \(a_{0}\) and \(b\). In statistical physics \(a_{0}\) is related to the susceptibility to the magnetic field, \(\chi\sim 1/a_{0}\), and in the present model it would be the susceptibility of the perception to expert information. Thus, for bigger \(a_{0}\) there will be lower susceptibility to expert information and the extra term in the critical context \(T^{*}\) will decrease, which makes sense. There might be people more sensitive to expert information (small \(a_{0}\)) than others (big \(a_{0}\)). On the other hand, \(b\) is related to self-interaction [49], self-perception in the present case. If self-perception is interpreted as the perception of the perception, it also makes sense that the bigger the self-perception, the smaller \(T^{*}\). Thus, for a given \(h_{int}\), if there is not much sensitivity to expert information and there is a strong capacity for self-perception, the critical context is closer to the one in a second order transition, \(T^{*}\to T_{0}\). In the model \(b\) cannot be very large, because this would mean that other powers of the perception would be necessary, such as \(\phi^{6}\). For a deep understanding of the consequences of these parameters and for precise cognitive definitions, a thorough study connecting the Landau model with Statistical Physics is necessary [49]. This will be left for future studies.
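As a simple numerical illustration of this dependence (all values below are arbitrary and only meant to show the monotonic trends discussed above):

```python
def critical_context(T0, h_int, a0, b):
    """Critical context T* = T0 + (2*h_int)**2 / (9*a0*b) of the model."""
    return T0 + (2 * h_int) ** 2 / (9 * a0 * b)

T0, h_int = 1.0, 0.3  # arbitrary illustrative values
print(critical_context(T0, h_int, a0=0.5, b=1.0))  # 1.08: high susceptibility (small a0)
print(critical_context(T0, h_int, a0=2.0, b=1.0))  # 1.02: low susceptibility (big a0)
print(critical_context(T0, h_int, a0=0.5, b=4.0))  # 1.02: strong self-perception (big b)
```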
The model is simple and intuitive and clarifies why embodied neurobiological education goes to the core of the problem instead of just improving symptoms. It also points out the way to recover with active coping by the patient. Neurobiological education helps to point out nocebo messages and other misconceptions and to make sense of the patient's experience. There are other approaches that aim to get rid of the symptoms, but the pain comes back, since the misconceptions, and hence the hypervigilant evaluation, remain. That is, improving symptoms relieves the patient but does not change the landscape: attention merely wanders from the alert-protection state to the metastable trust-explore state or to the neutral state. Becoming aware of misconceptions and embodying the information with appropriate biopsychosocial tools, in contrast, does change the landscape. The model also makes clear why it is important that all clinicians share the same knowledge about pain. Finally, the model illustrates how NBE is extremely useful to prevent persistent pain and other symptoms.
A clear limitation of the model is that the present Landau formalism is static, and we introduce an effective dynamics by changing the control parameters that report information from the context, the patient's history and the expert culture concerning the symptom. This dynamics does not correspond to time, since at each moment there are different sensations and a persistent sensation is just more likely over time. The dynamics corresponds to a variation in the embodied information, \(\delta h_{int}\), \(\delta h_{ext}\), \(\delta T\), which will be reflected in a variation of the perception. Another limitation is that the model does not include the negative/positive feedback loop that will arise in the hypervigilance/curiosity bias landscape. A non-equilibrium model will be necessary to address this effect. Such a study will also properly account for the probability of the trust-explore or alert-protection metastable states in the mixed state [53], which again requires the development of the model from first principles in Statistical Physics [49]. However, for the purpose of this work, that is, proposing a minimal model to facilitate communication, the model is enough to illustrate the problem of learning nociplastic pain and how to unlearn it.
The model can be used to address other mental syndromes such as anxiety, depression, myoclonus symptoms, addictions, etc., which seem to have a common underlying mechanism [67; 7]. It is interesting to notice that the biopsychosocial loops are present in both the hypervigilance bias and the curiosity bias landscapes, as might happen with screen addictions, where the curiosity bias is looking for the sensation of surprise. Here the patient, instead of avoiding the sensation as in pain, is looking for it. The model could also be adapted to pathologies where the rules learnt by the organism oppose the expert rules, as for example in a manic state. Anosognosia is common in mental syndromes, and it is not that surprising that, if it were possible to become aware of misbeliefs and mishabits, this might be of extreme relevance for the recovery of the patient.
Recall that the patient's biopsychosocial loop is adapted to his/her embodied learnt rules. This is a perspective different from seeing mental pathologies, including nociplastic pain, as dysregulated processes. This perspective motivated the definition of allostasis as stability through change to adapt to the different needs of the organism [42]. This requires predicting the needs in order to satisfy them before they arise. Health is then defined as the capacity for adaptive variation and disease as a compression of this capacity, in contrast to the traditional definition of health as a list of "appropriate" lab values and disease as "inappropriate" values based on the control of homeostasis. The term allostatic load is used to refer to disease as a maladaptive loop behavior of the organism, which is not dysregulated but coherent with its own innate and learnt rules. Allostasis thus enlarges the scope of health, allowing one to deal with cognitive and emotional symptoms. In this context, chronic pain has been described in terms of allostatic load [23; 68].
In long-term processes, however, the allostasis perspective of "stability through change" might not be enough, since in the historical process of life there is no stability but a continuous transformation in which a process of individuation might emerge. This is in line with the proposal of extended criticality and symmetry breaking, where the living state of matter is interpreted as an ongoing extended or critical transition, always transient towards a renewed organism [56]. We conceive the learning process along the lines of the proposal given in the Enactive plus Simondonian approach [67], which emphasizes that "growth and transformation processes can arguably be seen as fundamental for self-individuation for humans, not only subsistence". This becoming seems in line with the process of individuation proposed by Simondon as the generation of metastable states by transforming tensions with the environment or with society [69].
## VI Conclusions
We have built a Landau model to address the subjective perspective of a patient. The order parameter is the perception of a symptom, and the control parameters are the context from the senses, the embodied history and the embodied information from the expert culture about the symptom. The model allows us to show different perception scenarios corresponding to different sense-making landscapes, where automatic attention is placed in the most likely state. For second order transitions there are the Zen, uncertainty and baby landscapes. First order transitions present a bias either for the alert-protection state or for the trust-explore state, giving rise to other possible landscapes: uncertainty bias, hypervigilance bias, catastrophizing, curiosity bias and communicative. Two interesting results well known in cognitive science are derived from the model: 1. the critical context where uncertainty appears depends on nonconscious misconceptions and mishabits about the symptom, and 2. a hysteresis loop, named the biopsychosocial loop, arises in perception when there is confusing expert information together with nonconscious alarming historical information. We apply this model to illustrate the threat perception given in nociplastic pain and the unlearning process via embodied neurobiological education. Learning and unlearning correspond to changing the control parameters, namely, a revision of nonconscious misconceptions and mishabits, updated and trustworthy expert information, and training of the senses and attention.
From this model it is clearly seen that the alarmingly increasing rate of chronic pain could be partly explained by nocebo and confusing expert information that creates a threat perception in the patient and precipitates the organism into an alert-protection state. Within the embodied learning of NBE the patient might identify these nocebo messages, investigate his/her own sense-making landscape and infer his/her own alarming beliefs and mishabits. Embodied learning of neurobiological education emerges as a valuable tool to reduce the perception of threat, prevent the chronic pain burden and antifragilize citizens, who develop their own internal compass to be in the world. The strongest policy effort should be to promote this embodied neurobiological education, beyond clinicians, to the whole of society, from schools to universities and the media. This will avoid loops arising from the nocebo effect and will value the importance of the trust-explore state and of making sense of one's own experience.
###### Acknowledgements.
B.V. is deeply grateful to Arturo Goicoechea and Inigo Arandia.
|
2306.00168 | Measuring the Robustness of NLP Models to Domain Shifts | Existing research on Domain Robustness (DR) suffers from disparate setups,
limited task variety, and scarce research on recent capabilities such as
in-context learning. Furthermore, the common practice of measuring DR might not
be fully accurate. Current research focuses on challenge sets and relies solely
on the Source Drop (SD): Using the source in-domain performance as a reference
point for degradation. However, we argue that the Target Drop (TD), which
measures degradation from the target in-domain performance, should be used as a
complementary point of view. To address these issues, we first curated a DR
benchmark comprised of 7 diverse NLP tasks, which enabled us to measure both
the SD and the TD. We then conducted a comprehensive large-scale DR study
involving over 14,000 domain shifts across 21 fine-tuned models and few-shot
LLMs. We found that both model types suffer from drops upon domain shifts.
While fine-tuned models excel in-domain, few-shot LLMs often surpass them
cross-domain, showing better robustness. In addition, we found that a large SD
can often be explained by shifting to a harder domain rather than by a genuine
DR challenge, and this highlights the importance of TD as a complementary
metric. We hope our study will shed light on the current DR state of NLP models
and promote improved evaluation practices toward more robust models. | Nitay Calderon, Naveh Porat, Eyal Ben-David, Alexander Chapanin, Zorik Gekhman, Nadav Oved, Vitaly Shalumov, Roi Reichart | 2023-05-31T20:25:08Z | http://arxiv.org/abs/2306.00168v5 | # Measuring the Robustness of Natural Language Processing Models to Domain Shifts
###### Abstract
Existing research on Domain Robustness (DR) suffers from disparate setups, a lack of evaluation task variety, and reliance on challenge sets. In this paper, we pose a fundamental question: What is the state of affairs of the DR challenge in the era of Large Language Models (LLMs)? To this end, we construct a DR benchmark comprising diverse NLP tasks, including sentence- and token-level classification, QA, and generation; each task consists of several domains. We explore the DR challenge of fine-tuned and few-shot learning models in natural domain shift settings and devise two diagnostic metrics of Out-of-Distribution (OOD) performance degradation: the commonly used Source Drop (SD) and the overlooked Target Drop (TD). Our findings reveal important insights: First, despite their capabilities, zero-to-few-shot LLMs and fine-tuning approaches still fail to reach satisfactory performance in the OOD context; Second, TD approximates the average OOD degradation better than SD; Third, in a significant proportion of domain shifts, either SD or TD is positive, but not both, and therefore disregarding one can lead to incorrect DR conclusions.
## 1 Introduction
_Large Language Models_ (LLMs) have demonstrated improving performance on various tasks and evaluation setups, including fine-tuning Devlin et al. (2018); Raffel et al. (2020), as well as few-shot and zero-shot learning Brown et al. (2020); Chowdhery et al. (2022). Following that, there has been an improvement in the models' ability to perform tasks on data from domains with no labeled data available Hendrycks et al. (2020); Ben-David et al. (2022); Wang et al. (2022). And yet, while performance has improved, it is still inferior to the model's performance on data from domains where labeled data is available for model training Ramponi and Plank (2020); Wang et al. (2022). In this paper we refer to this problem as the _Domain Robustness_ (DR) challenge.
Research on DR is quite disparate: A wide variety of setups, models, training procedures, and dataset sizes are used. There is also a severe lack of variety in evaluation tasks for DR: Most papers use classification tasks, omitting important types of tasks such as sequence tagging, question answering, and text generation Koh et al. (2021); Hendrycks et al. (2020). Moreover, many past works use challenge sets to measure the DR challenge. These sets are highly curated datasets that select synthetic (Rychalska et al., 2019; Belinkov and Bisk, 2018) or particularly hard samples for models to process under domain shifts (McCoy et al., 2019). All this makes it hard to compare different works and map out the extent of the DR challenge in a _natural domain shift setting_.

Figure 1: A domain shift example from the source domain A (notated as \(S\)) to the target domain B (notated as \(T\)). The black square, \(\mathrm{SS}\) (=80%), is the source in-domain performance (testing on \(S\)). The green diamond, \(\mathrm{ST}\) (=70%), is the cross-domain performance (a.k.a. OOD performance, testing on \(T\)). We may say there is a Domain Robustness (DR) challenge since we observe a 10% performance drop. However, had we hypothetically trained and tested the model on data from \(T\), its target in-domain performance (\(\mathrm{TT}\), grey circle) would be 65%, meaning the model gains 5% when trained on \(S\), in comparison to \(T\). Most DR works consider only the _observed_ Source Drop (\(\mathrm{SD}=\mathrm{SS}-\mathrm{ST}\)) and ignore the _unobserved_ Target Drop (\(\mathrm{TD}=\mathrm{TT}-\mathrm{ST}\)), resulting in a partial depiction of the DR challenge.
Moreover, prior works focused exclusively on fine-tuning setups, neglecting the few-shot and zero-shot setups that have become prominent in NLP. In those setups the DR challenge manifests itself more moderately: There is no training data that can potentially anchor the model to the source distribution, but only a few demonstration examples from the source domain that can be used in the prompt. In this work, we explore how this difference moderates the severity of the DR challenge.
Adding to the above, we suggest that there is a more fundamental problem with the way we approach the examination of the DR challenge. Let us conduct a short thought experiment: We train a model on data from domain A, and test it on data from domains A, B and C, achieving accuracy scores of 80%, 70% and 85%, respectively. Observing a 10% drop when transferring to B and a 5% gain when transferring to C, can we say that in one shift (A to B) we have a DR challenge and in the other (A to C), we do not? In some sense, we can: This is what we expect to see in DR papers.
But at this point, we wish to push the thought experiment a bit further and raise the following question: If we were told that "had the model been trained and tested on data from B it would have achieved a score of 65%", do we still believe it has a DR challenge when shifting from A to B? This scenario is illustrated in Figure 1, and accordingly, although we _observe_ a 10% drop, the model outperforms the in-domain score of the target domain (B to B). And if, when training and testing on data from domain C, we achieve a score of 90%, is there still no DR challenge when shifting from A to C, even though the model could have performed better when trained on C?
We suggest that this thought experiment calls for two different views of the DR challenge, one of the _Source Drop_ (\(\mathrm{SD}\)) and one of the _Target Drop_ (\(\mathrm{TD}\)), alternating between the source and target in-domain performance as a reference point (see Figure 1). Notice that the \(\mathrm{SD}\) is the drop practitioners observe in practice, while the \(\mathrm{TD}\) is unobserved since training data from the target domain is unavailable. We formally define the two random variables in §3, showing that they share the same expectation but might differ in variance.
To address the DR research gaps, in §4 we introduce a diverse benchmark comprising various NLP tasks with distribution shifts, including classification, question answering, and text generation. We point to four properties that make our DR benchmark unique: (1) It focuses on topic shift, which can naturally occur in real-life scenarios; (2) It covers a wide variety of NLP tasks; (3) Each task consists of several domains (4-6); and (4) Each domain has sufficient labeled data, allowing us to measure both the \(\mathrm{SD}\) and the \(\mathrm{TD}\). Following that, we extensively experiment with varying sizes of fine-tuned and few-shot learning models, as described in §5.
Our results (§6) provide valuable insights for approaching and measuring the DR challenge. First, the cross-domain performance (\(\mathrm{ST}\) in Figure 1) correlates better with the target in-domain performance (\(\mathrm{TT}\) in Figure 1) than with the source in-domain performance (\(\mathrm{SS}\) in Figure 1), meaning that without training data from the target domain, _predicting the cross-domain performance is exceptionally challenging_. Second, the variance and magnitude of \(\mathrm{TD}\) are smaller than those of \(\mathrm{SD}\), suggesting that \(\mathrm{TD}\) _better approximates the Average Drop_. Third, in notable proportions of domain shift setups across different tasks, _either the \(\mathrm{SD}\) or the \(\mathrm{TD}\) is positive, but not both_. Therefore, both \(\mathrm{SD}\) and \(\mathrm{TD}\) serve as diagnostic measures to characterize the DR; _disregarding one can lead to incorrect conclusions_.
This work also sheds light on the current status of the DR of LLMs. Accordingly, the most common scenario in both fine-tuning and few-shot learning is characterized by positive values for both the \(\mathrm{SD}\) and the \(\mathrm{TD}\), _indicating the persistent existence of the DR challenge_. However, in few-shot learning, the impact of the domain shift is weak. Furthermore, increasing the fine-tuned model size improves in-domain and cross-domain performance; however, _larger models improve robustness only in fine-tuning_. Finally, our results indicate that _fine-tuned models outperform few-shot learning models_, achieving higher in-domain and cross-domain performance. However, _few-shot learning models exhibit lower drops_.
## 2 Related Work
**Large Pretrained Language Models (LLMs).** LLMs are based on the Multi-layer Transformer Vaswani et al. (2017) and leverage three primary architectures: (1) _Encoder-only (EO)_, such as BERT, RoBERTa and DeBERTa (Devlin et al., 2018; Liu et al., 2019; He et al., 2021), which excel at classification tasks; (2) _Encoder-decoder (ED)_, such as T5 and BART (Raffel et al., 2020; Lewis et al., 2020), which excel at conditional generation tasks (e.g., Neural Machine Translation and Summarization, Calderon et al. (2023)); and (3) _Decoder-only (DO)_, such as GPT-3 and PaLM (Brown et al., 2020; Chowdhery et al., 2022), which excel at open-text generation and zero/few-shot setups (Tay et al., 2022; Wang et al., 2022). All of the above models (except for PaLM) participate in our experiments.
LLMs are pretrained on large text corpora and can be used for downstream tasks through (1) _Fine-tuning_ them on task-specific labeled data; (2) _Zero-shot_ by generating a solution based on an instruction; or (3) _Few-shot_ where additional demonstrations are added to the input. The current trend is to increase LLM size and training tokens by using diverse datasets (Hoffmann et al., 2022). This study investigates the robustness and stability of fine-tuned and few-shot LLMs to domain shifts, challenging the assumption that they can transfer freely to new domains or tasks (Bommasani et al., 2021).
**Domain Robustness (DR).** The term DR generally refers to the extent to which the performance of an NLP model does not degrade when applied to newly collected samples coming from other domains. Sometimes, robustness may also refer to the consistency (low variance) (Yu et al., 2022). Literature on robustness in NLP can be categorized by the type of distribution shift examined: Synthetic and Natural (Wang et al., 2022).
_Synthetic shift_ works include adversarial attacks (Jin et al., 2020), input perturbations (Belinkov and Bisk, 2018), counterfactual (Kaushik et al., 2020), diagnostic (Wang et al., 2019) and challenge (or contrast) sets (McCoy et al., 2019). These works share a common approach of assessing a model's robustness by using a dataset specifically designed to challenge NLP models, rather than to represent a natural language distribution. While the synthetic shift may be helpful as a diagnostic tool for probing model behavior, it might not be an apt indicator for the real state of DR "in the wild". For that reason, we focus on natural domain shifts.
_Natural shift_ works focus on organic scenarios where a shift occurs between the training set and the data encountered during deployment. These shifts have been explored in various setups, e.g., medium shift (Miller et al., 2020), temporal shift (Cvejoski et al., 2022) and domain shift: medical (Miller et al., 2021) and legal (Chalkidis et al., 2020). The degradation in performance was demonstrated to be significant. This degradation is not limited to fine-tuned models but also affects few-shot models, which are sensitive to the demonstrations provided in their prompts (Min et al., 2022).
Our study extends previous work by addressing natural topic shifts on a broad range of NLP tasks (sequence- and token-level classification, QA and generation) and models (EO, ED and DO; fine-tuned and zero/few-shot). Moreover, unlike previous robustness work, which focused on the source gap, we also consider the target gap and demonstrate the importance of this more nuanced analysis.
Alongside DR, a related area of research is Domain Adaptation (DA), which addresses additional setups with various assumptions on the availability of unlabeled and even labeled data from the target domain at model training time (Blitzer et al., 2007; Plank and van Noord, 2011; Rotman and Reichart, 2019; Ben-David et al., 2020; Ramponi and Plank, 2020; He et al., 2021; Ben-David et al., 2022; Calderon et al., 2022; Volk et al., 2022). In this work, we limit ourselves to DR as a diagnostic problem.
**DR and DA Benchmarks.** Various benchmarks were proposed for evaluating the robustness of NLP models and the quality of DA solutions. Rychalska et al. (2019) proposed a DR benchmark that includes various synthetic shifts resulting from text corruption, such as article removal and typos. In contrast, natural shift DA benchmarks include the work of Reid et al. (2022) and Chronopoulou et al. (2022), which evaluate the pretraining language modeling task. Complementarily, multiple DA datasets focus on a specific downstream NLP task, for example, QA (Budzianowski et al., 2018; Miller et al., 2020; Gekhman et al., 2022) or summarization (Zhong et al., 2021; Yu et al., 2021). Our DR benchmark focuses on natural topic shift and covers various downstream tasks.
Notably, Koh et al. (2021) propose a DR benchmark that consists of ten general-ML tasks, including two NLP classification tasks. Koh et al. (2021) also discuss the importance of measuring degradation relative to the target performance (Target Drop). However, their benchmark does not support measuring it since the target domains do not have sufficient labeled data to train an NLP model. In contrast, we intentionally design our study and benchmark to support measuring the Target Drop.
The closest work to ours is Hendrycks et al. (2020), which constructed a benchmark of four classification tasks. The authors compared different pre-transformer NLP approaches with transformers and found that the latter improve DR. We extend their work by including up-to-date fine-tuned and few-shot models and a wider range of tasks.
## 3 Domain Robustness
### Domain Shift
The term domain is widely used in NLP but lacks a clear and consistent definition (Ramponi and Plank, 2020). This term typically refers to a cohesive corpus or dataset, which may be characterized by various factors such as topic, style, genre, syntax, linguistic register, and medium. We formally characterize a _domain_\(\mathcal{D}\) by a _data generating process (DGP)_, which is given by a joint distribution \(P_{\mathcal{D}}(X,Y)\) over \(\mathcal{X}\) (the input, features, covariates space) and \(\mathcal{Y}\) (the target, labels, outcome space).
In a _domain shift_, the source domain \(\mathcal{S}\), and the target domain \(\mathcal{T}\) differ in their underlying joint distribution \(P_{\mathcal{S}}(X,Y)\neq P_{\mathcal{T}}(X,Y)\). This shift can be caused by changes in the marginal distributions \(P_{\mathcal{S}}(X)\neq P_{\mathcal{T}}(X)\)_(covariate shift)_, \(P_{\mathcal{S}}(Y)\neq P_{\mathcal{T}}(Y)\)_(prior shift)_, or the conditional distribution \(P_{\mathcal{S}}(Y|X)\neq P_{\mathcal{T}}(Y|X)\)_(concept shift)_, see Moreno-Torres et al. (2012).
Given a training set of examples from the source domain \(S\sim\mathcal{S}\), the goal of the NLP model is to learn the joint distribution \(P_{\mathcal{S}}(X,Y)\) (or the conditional distribution \(P_{\mathcal{S}}(Y|X)\)), and also to generalize to the distribution \(P_{\mathcal{T}}(X,Y)\) (or \(P_{\mathcal{T}}(Y|X)\)) of the target domain in which it is deployed. To evaluate the performance of the NLP model on the target domain, we use a test set \(T\sim\mathcal{T}\), which we do not have access to during training. We use the term _Domain Robustness (DR)_ to describe the inherent ability (or inability) of an NLP model to perform the said generalization between the source and target domains.
In the following subsections, we propose metrics for characterizing the DR challenge. For fine-tuned models, the DR challenge arises when the test data comes from a different domain than the labeled training data. Meanwhile, few-shot learning models face the DR challenge when the domain of the demonstrations used in the prompt differs from that of the target data.
### Domain Robustness Metrics
In this subsection, we define the concepts and metrics we use for characterizing the DR of an NLP model, summarized in Table 1. Given a source domain \(\mathcal{S}\) and a target domain \(\mathcal{T}\), we use \(\mathrm{ST}\) to denote the _cross-domain performance_, which is the score (e.g., F1) achieved when training a model on data \(S\) from the source domain and evaluating it on test data \(T\) from the target domain. When training and evaluating the model \(f\) with data from the source domain, we use \(\mathrm{SS}=\mathrm{SS}(f)\) to denote the _source in-domain performance_. Likewise, \(\mathrm{TT}=\mathrm{TT}(f)\) denotes the _target in-domain performance_.
Finally, we define the _in-domain difference (\(\mathrm{IDD}\))_ to be \(\mathrm{SS}-\mathrm{TT}\). A positive \(\mathrm{IDD}\) may indicate a shift to a harder target domain: it is more difficult to train a model in the target domain, or the target domain is more challenging to perform on; for example, \(\mathrm{IDD}=26\) when shifting from A to C in Figure 2. Our basic premise in this paper is that in order to characterize the DR of a model properly, _we have to consider \(\mathrm{SS}\), \(\mathrm{TT}\), and \(\mathrm{ST}\)_.
**Performance Degradation Metrics.** We define the _Source Drop (\(\mathrm{SD}\))_ and _Target Drop (TD)_ to be the degradation in the performance between the cross-domain and the source or target in-domain performances of a model \(f\):
\[\mathrm{SD}=\mathrm{SD}(f,S,T)=\mathrm{SS}-\mathrm{ST} \tag{1}\] \[\mathrm{TD}=\mathrm{TD}(f,S,T)=\mathrm{TT}-\mathrm{ST} \tag{2}\]
Notice that the training data from the target domain may not be available in a real-life scenario, and in this case, the \(\mathrm{TT}\) can not be computed. The performance degradation we observe in practice is the \(\mathrm{SD}\). The \(\mathrm{TD}\) is a more theoretical measure: _"what would have been the drop compared to the case where the model could be trained on data from the target domain?"_.
\begin{table}
\begin{tabular}{l|l} \hline \hline \(\mathcal{S}\) & The source domain. \\ \(\mathcal{T}\) & The target domain. \\ \((S,T)\) & Data sampled from \(\mathcal{S}\) and \(\mathcal{T}\). \\ \hline \(\mathrm{SS}\) & Source in-domain performance. \\ \(\mathrm{TT}\) & Target in-domain performance. \\ \(\mathrm{ST}\) & Cross-domain performance. \\ \hline \(\mathrm{SD}\) & Source Drop (Observed Drop): \(\mathrm{SS}-\mathrm{ST}\). \\ \(\mathrm{TD}\) & Target Drop (Unobserved Drop): \(\mathrm{TT}-\mathrm{ST}\). \\ \(\mathrm{IDD}\) & In-domain difference: \(\mathrm{SS}-\mathrm{TT}\). \\ \hline \hline \end{tabular}
\end{table}
Table 1: The notations of Domain Robustness concepts and metrics we use in this study.
In this study, we use the pair \((\mathrm{SD},\mathrm{TD})\) to characterize the extent of the DR challenge of a model, a source domain, and a target domain. From the above definitions, it follows that \(\mathrm{SD}=\mathrm{TD}+\mathrm{IDD}\). Accordingly, \(\mathrm{SD}\) and \(\mathrm{TD}\) are connected via \(\mathrm{IDD}\). This is a solid justification for using both \(\mathrm{SD}\) and \(\mathrm{TD}\) when quantifying the DR challenge, as it shows that each of \(\mathrm{SD}\) and \(\mathrm{TD}\) can be very large when the other quantity is small, depending on the magnitude of \(\mathrm{IDD}\), which is not a by-product of the domain shift. For instance, in studies involving challenge sets that report a large \(\mathrm{SD}\), the drop may be primarily influenced by a large \(\mathrm{IDD}\) rather than both \(\mathrm{SD}\) and \(\mathrm{TD}\) being large (e.g., shift AB in Figure 2).
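To make these definitions concrete, the following sketch (ours, with made-up numbers except for the values taken from the thought experiment in the introduction) computes \(\mathrm{SD}\), \(\mathrm{TD}\) and \(\mathrm{IDD}\) from a matrix of scores:

```python
def drops(scores, source, target):
    """Compute (SD, TD, IDD) for a shift from `source` to `target`.

    `scores[s][t]` is the performance of a model trained on domain s and
    evaluated on domain t, so the diagonal holds the in-domain scores."""
    ss = scores[source][source]  # source in-domain performance
    tt = scores[target][target]  # target in-domain performance
    st = scores[source][target]  # cross-domain (OOD) performance
    return ss - st, tt - st, ss - tt  # SD, TD, IDD; note SD = TD + IDD

# Illustrative score matrix; the A row and the B/C in-domain values follow
# the thought experiment of the introduction, the rest are made up.
scores = {
    "A": {"A": 80.0, "B": 70.0, "C": 85.0},
    "B": {"A": 75.0, "B": 65.0, "C": 78.0},
    "C": {"A": 82.0, "B": 72.0, "C": 90.0},
}
print(drops(scores, "A", "B"))  # (10.0, -5.0, 15.0): only the observed SD is positive
print(drops(scores, "A", "C"))  # (-5.0, 5.0, -10.0): only the unobserved TD is positive
```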
### Aggregated DR Metrics
We would next characterize the DR challenge of an NLP model \(f\) over the domain space (or a set of domains), which is the focus of this study. Specifically, we can use aggregation metrics of the two random variables \(\mathrm{SD}\) and \(\mathrm{TD}\):
**1. The Average Drop** - Notice that the Average Drop is equal to \(\mathbb{E}[\mathrm{SD}]=\mathbb{E}[\mathrm{TD}]\), that is, when the source domains are the same as the target domains (i.e., every domain is used for training and for testing the NLP model), then the average \(\mathrm{SD}\) is equal to the average \(\mathrm{TD}\). This results from the fact that \(\mathbb{E}[\mathrm{SS}]=\mathbb{E}[\mathrm{TT}]\), and from the linearity of the expectation. For example, in Figure 2, the average in-domain performance is 82, and the average cross-domain performance is 77. The Average Drop is then 5 (same as the average \(\mathrm{SD}\) or \(\mathrm{TD}\)).
**2. The Standard Deviation of \(\mathrm{SD}\) (or \(\mathrm{TD}\))** - We would like to emphasize that although their expectations are equal, \(\mathrm{SD}\) and \(\mathrm{TD}\) are different random variables with different variances.
**3. Worst \(\mathrm{SD}\) (or \(\mathrm{TD}\))** - which is the largest \(\mathrm{SD}\) (or \(\mathrm{TD}\)) within all the domain shifts. In Figure 2 the Worst \(\mathrm{SD}\) is 20 and the Worst \(\mathrm{TD}\) is 18.
**4. Average Worst \(\mathrm{SD}\) (or \(\mathrm{TD}\))** - for each source domain we first calculate the largest \(\mathrm{SD}\) (or \(\mathrm{TD}\)), and then average these values. In Figure 2 the Average Worst \(\mathrm{SD}\) is 9.67 (=\(\frac{20+17-8}{3}\)), and the Average Worst \(\mathrm{TD}\) is 11 (=\(\frac{4+11+18}{3}\)).
**5. Average Worst \(\mathrm{SD}\) (or \(\mathrm{TD}\)) Performance** - equals the average in-domain performance minus the average Worst \(\mathrm{SD}\) or \(\mathrm{TD}\). We use this metric for visualizing the Average Worst drops in the scale of the absolute performance. In Figure 2, Average Worst \(\mathrm{SD}\) performance is 72.33, and for \(\mathrm{TD}\) is 71.
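A sketch of how these aggregates can be computed from the same kind of score matrix used above (our illustration, not the authors' code):

```python
import numpy as np

def aggregate_sd(scores):
    """SD aggregates over all shifts s != t; replacing scores[s][s] by
    scores[t][t] in `per_source` yields the corresponding TD aggregates."""
    domains = list(scores)
    per_source = {
        s: [scores[s][s] - scores[s][t] for t in domains if t != s]
        for s in domains
    }
    all_drops = [d for ds in per_source.values() for d in ds]
    avg_in_domain = np.mean([scores[s][s] for s in domains])
    avg_worst = np.mean([max(ds) for ds in per_source.values()])
    return {
        "average_drop": np.mean(all_drops),  # identical for SD and TD over a full matrix
        "std": np.std(all_drops),
        "worst": max(all_drops),
        "average_worst": avg_worst,
        "average_worst_performance": avg_in_domain - avg_worst,
    }
```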
### Domain Shift Scenarios
In this subsection, we describe four possible domain shift scenarios, which are defined by the order of the performance scores, \((\mathrm{SS},\mathrm{TT},\mathrm{ST})\), (or the sign of the performance drops). We wish to outline these scenarios to characterize better the existence (or non-existence) of a DR challenge. It is important to note that we do not discuss in this subsection the magnitude of the DR challenge, which can be determined by the drop values, although there is a difference between a minor performance drop (which may be caused by the randomness of the data sample) and a major one. Below in parentheses, we provide examples of shifts from Figure 2.
**The Classic DR Challenge.** (AB, BC shifts) This scenario is defined when \(\mathrm{ST}\) is smaller than both \(\mathrm{SS}\) and \(\mathrm{TT}\): \(\mathrm{ST}<\mathrm{SS}<\mathrm{TT}\) or \(\mathrm{ST}<\mathrm{TT}<\mathrm{SS}\). In this scenario, both the \(\mathrm{SD}\) and the \(\mathrm{TD}\) are positive. We term this scenario "classic" since there is no doubt regarding the existence of a DR challenge: the cross-domain performance is lower than the in-domain performance at both the source and the target domains, and the cause of the degradation must hence be the inability of the model to generalize from the source domain to the target.
**The Observed DR Challenge.** (AC shift) This scenario is defined when the shift is to a harder domain, hence \(\mathrm{TT}<\mathrm{ST}<\mathrm{SS}\). In this scenario, only the observed drop, \(\mathrm{SD}\) is positive. Although we observe a performance drop, it might be explained by moving to a harder domain and not due to a genuine DR challenge since the model achieves generalization to the target domain and even exhibits higher performance than \(\mathrm{TT}\).
Figure 2: Our running example of domain shifts involving three domains: A, B and C. Black dashed horizontal lines represent in-domain performance. Short green lines represent cross-domain performance. Orange arrows represent \(\mathrm{SD}\)s, and blue arrows represent \(\mathrm{TD}\)s.

**The Unobserved DR Challenge.** (BA, CA shifts) This scenario is defined when the shift is from a harder domain to an easier one, hence \(\mathrm{SS}<\mathrm{ST}<\mathrm{TT}\). According to this scenario, only \(\mathrm{SD}\) is negative, and we do not observe a performance drop. However, since \(\mathrm{TD}\) is positive, we know that the NLP model has the potential to generalize better to the target domain, and therefore it might suffer from a DR challenge.
**The No DR Challenge.** (CB shift) This scenario is defined when \(\mathrm{ST}\) is larger than both \(\mathrm{SS}\) and \(\mathrm{TT}\): \(\mathrm{SS}<\mathrm{TT}<\mathrm{ST}\) or \(\mathrm{TT}<\mathrm{SS}<\mathrm{ST}\). In this scenario, both \(\mathrm{SD}\) and \(\mathrm{TD}\) are negative and presumably, no DR challenge exists.
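The four scenarios can be read directly off the signs of the two drops; a small sketch of the classification rule (ours, reusing the introduction's numbers as examples):

```python
def dr_scenario(ss, tt, st):
    """Map a single shift to one of the four scenarios of this subsection."""
    sd, td = ss - st, tt - st
    if sd > 0 and td > 0:
        return "Classic DR challenge"      # ST is below both in-domain scores
    if sd > 0:
        return "Observed DR challenge"     # drop explained by a harder target (TT < ST)
    if td > 0:
        return "Unobserved DR challenge"   # no observed drop, yet TT is not reached
    return "No DR challenge"               # ST is above both in-domain scores

print(dr_scenario(80, 65, 70))  # Observed DR challenge   (the A-to-B shift of the introduction)
print(dr_scenario(80, 90, 85))  # Unobserved DR challenge (the A-to-C shift of the introduction)
```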
## 4 The Domain Robustness Benchmark
In this section, we present the suite of NLP tasks that compose our benchmark and describe the collection process of the datasets. The DR benchmark focuses on natural topic shift and covers various downstream tasks, including sequence-level and token-level classification and QA tasks (§4.1), and generation tasks (§4.2). Additionally, each task consists of several (4-6) domains. Table 2 details the number of examples in each task domain. Finally, in §4.3 we discuss our technical assumptions and their justifications.
### Classification Tasks
**Sentiment Analysis (SA)** Following Ziser and Reichart (2018) and Calderon et al. (2022), we combine five domains of the Amazon product review dataset Blitzer et al. (2007) with the airline review dataset Nguyen (2015) into a single dataset with six domains: _Appliances, Beauty, Books, Games, Software, and Airline_. We additionally remove links from texts.
**Natural Language Inference (NLI)** In our study, we use five domains from the MNLI dataset Williams et al. (2018): _Fiction, Government, Slate, Telephone, and Travel_.
**Aspect Extraction (AE)** Following Lekhtman et al. (2021), we combine the SemEval 2014, 2015, and 2016 Pontiki et al. (2014, 2015, 2016) AE datasets, together with the MAMs dataset Jiang et al. (2019), into a single dataset with five domains: _Device, Laptops, Restaurants, Service, and MAMs_.
**Question Answering (QA)** We rely on the SQuAD v2 dataset Rajpurkar et al. (2016, 2018), one of the most common QA datasets. We asked human annotators to categorize the documents according to Wikipedia's taxonomy1, and created six domains: _Geography, History, Philosophy, Science, Society, and Technology_. We then split the documents of each category (and their corresponding questions) into train, development, and test sets.
Footnote 1: We merged the vital articles categories: [https://en.wikipedia.org/wiki/Wikipedia:Vital_articles](https://en.wikipedia.org/wiki/Wikipedia:Vital_articles), into eight categories and used six of them as domains.
### Generation Tasks
**Abstractive Summarization (AS)** For this task, we rely on the Webis-TLDR-17 dataset Volske et al. (2017), which consists of Reddit posts and their "TL;DR" summary. We asked human annotators to categorize subreddits (Reddit forums dedicated to a specific topic) into categories that constitute our five domains: _Drugs, Fitness, LoL (video game), Politics, and Relationships_. Since the summaries of the Webis-TLDR-17 dataset were automatically extracted and not verified, they may be of low quality. After manually examining dozens of them, we decided to use only summaries that have 15-60 words, at least 75% of which appear in the grounding document.
**Title Generation (TG)** In our study, we focus on generating titles for Amazon product reviews. Our TG dataset contains six domains: _Beauty, Books, DVD, Kitchen, Sports, and Wireless_. After manually examining dozens of reviews and their titles, we found that many reviewers misused the title option: they started writing a long review in the title and continued it in the body box. Therefore, we decided to use only titles that have 5-20 words, at least 75% of which appear in the grounding review.
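The word-count and overlap constraints used for AS and TG above can be expressed as a simple filter. The sketch below is our own illustration of the idea (tokenization is simplified to lowercased whitespace splitting, which is an assumption, not necessarily the authors' exact procedure):

```python
def keep_example(target_text, grounding_text, min_words, max_words, min_overlap=0.75):
    """Keep a summary/title only if its length is within bounds and at least
    `min_overlap` of its words appear in the grounding document/review."""
    target_words = target_text.lower().split()
    if not (min_words <= len(target_words) <= max_words):
        return False
    grounding_words = set(grounding_text.lower().split())
    overlap = sum(w in grounding_words for w in target_words) / len(target_words)
    return overlap >= min_overlap

# AS keeps summaries with keep_example(summary, post, min_words=15, max_words=60);
# TG keeps titles with keep_example(title, review, min_words=5, max_words=20).
```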
**Question Generation (QG)** We rely on our domain partition of the SQuAD dataset and only use examples with an answer. Accordingly, given a Wikipedia document and an answer to the question, the task of the NLP model is to generate the question. The input is a concatenation of the document and the answer, separated by the "answer:" token.

\begin{table}
\begin{tabular}{l l r r r r} \hline \hline
**Task** & & **\#D** & **Train** & **Dev** & **Test** \\ \hline SA & Sentiment Analysis & 6 & 10K & 2.5K & 2.5K \\ NLI & Natural Language Inference & 5 & 25K & 2.5K & 2K \\ AE & Aspect Extraction & 5 & 2.2K & 560 & 1.4K \\ QA & Question Answering & 6 & 9K & 1K & 2.5K \\ QG & Question Generation & 6 & 7.5K & 900 & 1K \\ AS & Abstractive Summarization & 5 & 10K & 1K & 500 \\ TG & Title Generation & 6 & 17.5K & 1K & 1K \\ \hline \hline \end{tabular}
\end{table}
Table 2: Details about the tasks in The Domain Robustness Benchmark. “#D” is the number of domains. The “Train”, “Dev”, and “Test” columns present the sizes of the splits of each domain. Note that we present the average size for the test split since it differs between domains. More details can be found in the project repository.
### Technical Domain Shift Assumptions
As discussed in §3.1, a domain can be characterized by various attributes such as topic, style, syntax, and medium. When one of these attributes changes, the DGP and joint distribution \(P(X,Y)\) change, and a domain shift occurs. We intentionally create a DR benchmark with simple and well-defined domain shift assumptions, making the analysis and characterization of the DR challenge precise and clear. Specifically: (1) Our benchmark focuses mainly on natural topic shift, e.g., training an NLP model on book reviews and applying it to kitchen product reviews. Notice that this means the medium of the text stays the same. Alternatively, we could have utilized distinct datasets as different domains, as many other works have done; however, this approach lacks a clear definition of the domain shift; (2) For each task, all the domains have the same number of training examples; and (3) We try to reduce the effect of the prior shift, i.e., changes in \(P(Y)\): For classification tasks, we create balanced datasets, while for generation tasks, we sample examples with similar distributions with respect to the target length.
In the future, we plan to include more challenging domain shifts caused by multiple attributes. Researchers that wish to focus on a specific type of prior shift (e.g., unbalanced domains) can easily use our publicly available benchmark to construct more challenging setups.
## 5 Experimental Setup
Our experiments are conducted in the PyTorch and HuggingFace frameworks, and we optimize the fine-tuned models with the AdamW optimizer. An exception is OpenAI's models, which were run via their paid service; their results are correct as of March 2023. Table 3 presents details about the participating LMs in this study.
**Task Metrics:** For classification and QA tasks we report the F1 score, and for generation tasks the BertScore (Zhang et al., 2020) (which uses a pre-trained SBERT model (Reimers and Gurevych, 2019) as the backbone transformer). While we report a single metric (BertScore) in the paper, we conducted a comprehensive analysis using multiple metrics and observed similar trends that support our reported results.
### Finetuning
As discussed in §2, for classification and QA tasks (SA, NLI, AE, QA and MC), we use Encoder-only families: RoBERTa (Liu et al., 2019) and DeBERTa-v3 (He et al., 2021). In addition, we use the small DistilBERT (Sanh et al., 2019). For conditional generation tasks (QG, AS, TG), we use two common Encoder-decoder families: T5 (Raffel et al., 2020) and BART (Lewis et al., 2020).
We select these LMs since they are among the top-performing encoder and encoder-decoder LMs on NLP leaderboards (Wang et al., 2019; Gehrmann et al., 2021) and provide a range of sizes. In addition, these LMs are open-sourced, and their training data and code are publicly available.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Name** & **Arch.** & **Params** & **Layers** & **Open** & **Tasks** \\ \hline DistilBERT & EO & 66m & 6 & Yes & SA, NLI, AE, QA \\ RoBERTa-B & EO & 125m & 12 & Yes & SA, NLI, AE, QA \\ RoBERTa-L & EO & 355m & 24 & Yes & \\ \hline DeBERTa-XS & EO & 70m & 12 & Yes & \\ DeBERTa-S & EO & 142m & 6 & Yes & SA, NLI, AE, QA \\ DeBERTa-B & EO & 184m & 12 & Yes & \\ DeBERTa-L & EO & 435m & 24 & Yes & \\ \hline T5-S & ED & 60m & 12 & Yes & \\ T5-B & ED & 220m & 24 & Yes & QG, AS, TG \\ T5-L & ED & 737m & 48 & Yes & \\ \hline BART-B & ED & 139m & 12 & Yes & \\ BART-L & ED & 406m & 24 & Yes & \\ \hline OPT-350m & DO & 350m & 24 & Yes & \\ OPT-1.3b & DO & 1.3b & 24 & Yes & SA, NLI, AS, TG \\ OPT-2.7b & DO & 2.7b & 32 & Yes & \\ OPT-6.7b & DO & 6.7b & 32 & Yes & SA, NLI, AS, TG \\ \hline GPT-JT & DO & 6.7b & 32 & Yes & SA, NLI, AS, TG \\ \hline GPT-3-ada & DO & 350m & 24 & No & SA, NLI, AS, TG \\ ChatGPT & DO & ? & ? & No & SA, NLI, AS, TG \\ GPT-4 & DO & ? & ? & No & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Details about the participating Language Models in this study. “Arch.” states the architecture type of the LLM: EO for Encoder-only, ED for Encoder-decoder, and DO for Decoder-only. Notice that EO and ED models are fine-tuned, while DO models are used in few-shot learning setups. “Params” is the number of parameters in millions (m) or billions (b). “Layers” is the number of layers of the LLM. For ED models, this includes both encoder and decoder layers. “Open” states whether the LLM is open-sourced or not. “Tasks” states the tasks in which the LLM is tested.

**Hyperparameter Tuning.** For each model and source domain, we first perform hyperparameter tuning and select the best hyperparameters according to the validation set of the source domain. Then, we test the model on every target domain. For example, there are six domains in the SA task; thus, we select six sets of hyperparameters and test each of them on six domains, resulting in 36 domain shifts. For hyperparameter tuning of classification models, we try the following learning rate values: [1e-5, 5e-5, 1e-4] and the following batch sizes: [4, 8, 16, 64]. For generation models we try the following learning rate values: [1e-3, 5e-4, 1e-4, 5e-5, 1e-5] and use a batch size of 64.
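The resulting protocol can be summarized as the following sketch (ours; `train` and `evaluate` are hypothetical placeholders for the actual fine-tuning and scoring routines, and the grid mirrors the classification values listed above):

```python
CLASSIFICATION_GRID = [
    {"learning_rate": lr, "batch_size": bs}
    for lr in (1e-5, 5e-5, 1e-4)
    for bs in (4, 8, 16, 64)
]

def run_task(domains, grid, train, evaluate):
    """For each source domain, select hyperparameters on its dev set, then
    test the selected model on the test set of every domain (source included)."""
    results = {}
    for source in domains:
        best_model = max(
            (train(domains[source]["train"], hp) for hp in grid),
            key=lambda model: evaluate(model, domains[source]["dev"]),
        )
        for target in domains:
            results[(source, target)] = evaluate(best_model, domains[target]["test"])
    return results  # results[(s, t)] is the SS/ST/TT score matrix used in Section 3
```

With the six SA domains, for example, this produces the 36 (source, target) pairs mentioned above.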
### Zero-shot and Few-shot Learning
For zero-shot and few-shot learning, we use Decoder-only LMs: the open-sourced OPT family (Zhang et al., 2022), which supports multiple model sizes and replicates the GPT-3-family sizes (Brown et al., 2020). In addition, we use GPT-JT (Together, 2022), a stronger version of the open-sourced GPT-J (Wang and Komatsuzaki, 2021).
We also compare these open-source Decoder-only LLMs to OpenAI's LLMs: GPT-3-ada, the smallest GPT-3 model trained with instruction tuning, ChatGPT, and GPT-4. We use OpenAI's paid service since their LLMs are extremely popular and widely used. For computational and financial reasons, we tested the Decoder-only LLMs on two classification tasks: SA and NLI, and on two generation tasks: AS and TG.
**Instructions and Demonstrations.** For each test example, the input of the LLM contains an instruction (a template). For the SA and NLI tasks, we use templates similar to the ones used by Gao et al. (2021) and Wang et al. (2022). For AS and TG, the instructions are _"Write a summary"_ and _"Write a title for the review"_. For few-shot learning, we follow Gao et al. (2021) and select the (n=4) source-domain demonstrations that are most similar to the target test example. The similarity is computed by a pre-trained SBERT model (Reimers and Gurevych, 2019).
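A minimal sketch of this demonstration-retrieval step (our illustration; the SBERT checkpoint name and the prompt layout below are assumptions, not the paper's exact choices):

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any pre-trained SBERT model

def build_prompt(instruction, source_pool, test_input, n=4):
    """Prepend the n source-domain demonstrations most similar to the test input."""
    pool_texts = [ex["text"] for ex in source_pool]
    # In practice the pool embeddings would be pre-computed once per source domain.
    sims = util.cos_sim(encoder.encode([test_input]), encoder.encode(pool_texts))[0]
    top = sims.argsort(descending=True)[:n].tolist()
    demos = "\n\n".join(f"{source_pool[i]['text']}\n{source_pool[i]['label']}" for i in top)
    return f"{instruction}\n\n{demos}\n\n{test_input}\n"
```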
## 6 Results
**R.1 Question:** What DR scenarios are common in each task (see §3.4)?
**Visualizations:** Figure 3 presents the distribution of each scenario for each task.
**Results:** (1) The Classic DR challenge, which occurs when both \(\mathrm{SD}\) and \(\mathrm{TD}\) are positive, is the most common scenario. For fine-tuning classification tasks (and the AS task), the Classic scenario dominates, with more than 70%; (2) Notice that all four scenarios are prevalent in all of the few-shot learning tasks, and in that case, we may say that the effect of the domain shift is weak. This is also the case in the fine-tuning QA and QG tasks, which share the same partition of examples into domains and therefore might behave similarly; (3) The Unobserved DR challenge scenario (positive \(\mathrm{TD}\) but negative \(\mathrm{SD}\)) is common, occurring in 7 out of 8 FT tasks (ranging from 11% in SA to 40% in TG), and in noticeable proportions in all few-shot learning tasks. This observation is important since many of the past works looked only at \(\mathrm{SD}\), and our study suggests that a DR challenge may exist (in terms of positive \(\mathrm{TD}\)) even if practitioners do not observe a performance degradation; (4) The Observed DR challenge (positive \(\mathrm{SD}\) but negative \(\mathrm{TD}\)), while not as common as the Unobserved scenario, still occurs in 4 out of the 8 fine-tuning tasks and in all few-shot learning tasks. Our study highlights the importance of providing an additional perspective (i.e., the \(\mathrm{TD}\)) on performance degradation.
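The scenario taxonomy above can be made concrete with a small sketch. It assumes \(\mathrm{SD}=\mathrm{SS}-\mathrm{ST}\) and \(\mathrm{TD}=\mathrm{TT}-\mathrm{ST}\), which is consistent with the drops reported in Table 5; the function name is ours.

```python
def classify_shift(ss, tt, st):
    """Label one domain shift from the in-domain scores (SS, TT) and the cross-domain score (ST)."""
    sd, td = ss - st, tt - st          # assumed definitions of the source and target drops
    if sd > 0 and td > 0:
        return "Classic DR challenge"
    if td > 0:
        return "Unobserved DR challenge"   # drop is only visible relative to the target
    if sd > 0:
        return "Observed DR challenge"     # drop is only visible relative to the source
    return "No DR challenge"

# Example: the SA shift Airline -> Books from Table 5 (SS=91.7, TT=97.7, ST=87.5)
print(classify_shift(91.7, 97.7, 87.5))    # -> "Classic DR challenge"
```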
**Limitations:** This analysis includes all the participating models, which may lead to a broader and more general picture of the challenge. In addition, it discusses neither the magnitude of the performance drops nor the scores. In what comes next, we present scores and drops of specific models.
**R.2 Question:** What is the relationship between the fine-tuned model size and the DR challenge?
**Visualizations:** Figures 4 and 5 which present the performance and the drops of fine-tuned models, for classification and generation tasks, respectively. We present the average in-domain and cross-domain performance, and the average worst \(\mathrm{SD}\) and \(\mathrm{TD}\).
**Results:** (1) For all tasks and models, the average in-domain performance is higher than the average
Figure 3: The proportion of each domain shift scenario, (which is determined by order of \((\mathrm{SS},\mathrm{TT},\mathrm{ST})\), see §3.4) for fine-tuned (left) and few-shot learning models (right). For each task, the proportion is measured over all the participating models and domain shifts.
cross-domain performance, except for the QA and QG tasks, where the difference between the two is minor (notice that the QA and QG tasks share the same partition of examples into domains); (2) The magnitude of the average drop differs between the tasks; for example, the Average Drop in the AE task, which is around 50 F1 points, is much larger than in the QA task, which is around one F1 point; (3) Larger models of the same architecture family improve the average in-domain and the average cross-domain performance scores; (4) Regarding the performance gap, the general trend is that larger models reduce the performance drops; (5) For all tasks and models, the Average Worst \(\mathrm{SD}\) is larger than the Average Worst \(\mathrm{TD}\). We will discuss this finding later when answering R.5.
**Limitations:** We did not fine-tune models with more than one billion parameters due to computational reasons.
**R.3 Question:** What is the relationship between the few-shot model size and the DR challenge?
**Visualizations:** Figure 6 presents the average performance scores of several few-shot learning models and the average drops as a function of the size.
**Results:** (1) For all tasks and models, the average in-domain performance is higher than the cross-domain performance; (2) Similar to fine-tuning, increasing the model size improves both the average in-domain and cross-domain performance of few-shot learning models, as illustrated by the OPT-family model curves; (3) Larger models do not improve the performance drop and the effect of the size on the DR challenge is not clear.
**Limitations:** Few-shot learning models typically require several to dozens of demonstrations as input, especially when the texts are short. However, due to the system's maximum token length of 2048 and the long nature of our texts, we used a maximum of 4 demonstrations in our analysis. Surprisingly, when we attempted the NLI task involving short texts, we found that increasing the number of demonstrations up to 12 led to a continued
Figure 4: Fine-tuning performance for classification and QA tasks. The top row presents the F1 scores for different Encoder-only models: it includes the average performance for in-domain (black line), cross-domain (green line), Worst \(\mathrm{SD}\) (blue line) and Worst \(\mathrm{TD}\) (orange line). The Average Worst \(\mathrm{SD}\) or \(\mathrm{TD}\) performance is calculated by taking the performance of the hardest shift for each source domain, as determined by the largest \(\mathrm{SD}\) or \(\mathrm{TD}\). The bottom row presents the performance drops **only for the DeBERTa-family models**: it includes Average Drop (green bars), Worst \(\mathrm{SD}\) (orange bars), and Worst \(\mathrm{TD}\) (blue bars). The lines on the bars present the Average Worst \(\mathrm{SD}\) and \(\mathrm{TD}\).
Figure 5: Fine-tuning performance for generation tasks. The bottom row presents the performance drops **only for the T5-family models**. See the caption of Figure 4.
decrease in performance. Moreover, it is important to note that few-shot learning models can be sensitive to the demonstrations or instructions provided [14, 15], and different prompts may yield different results. Additionally, for generation tasks, the performance of GPT-family models may be difficult to evaluate automatically. Upon manual inspection of the outputs generated by these models, we found them comparable or even superior to the reference texts.
**R.4 Question:** How do fine-tuned and few-shot learning models differ in the DR challenge?
**Visualizations:** Figure 7 compares the best-performing fine-tuned model to several few-shot learning models.
**Results:** (1) For all tasks, fine-tuned models outperform few-shot learning models, achieving higher average in-domain and average cross-domain performance; (2) Few-shot learning models seem more robust to domain shift, as their drops are consistently lower than those of fine-tuned models; (3) The average drop in the SA task of the few-shot learning models with an instruction-tuning phase (GPT-JT, ChatGPT and GPT-4) is positive. However, the worst drops are not, and they are of a magnitude similar to those of the other models; (4) In the NLI task, in-domain demonstrations seem to have a negative effect on the performance of GPT-4, since the average cross-domain performance is higher than the average in-domain performance.
**Limitations:** See R.2 and R.3 Limitations.
**R.5 Question:** Given the DR has two aspects, SD and TD, what is their relationship with ST?
**Visualizations:** Table 4 presents statistics on SD and TD, including their std deviations, the Worst drops, and the correlations of ST with SS and TT.
**Results:** (1) For every task and for both fine-tuning and few-shot learning models, the variance of SD is larger than the variance of TD (with the exception of the NLI task for few-shot learning). (2) For almost every task, the Worst and the Average Worst SD are larger than their TD counterparts. Sometimes, the Average Worst SD is two to three times bigger (e.g., QG and TG). An exception is the SA task, where the worst TD is bigger. This result indicates that the observed
\begin{table}
\begin{tabular}{c c|c c|c c|c c} \hline
 & **Task** & \(\sigma_{\mathrm{SD}}\) & \(\sigma_{\mathrm{TD}}\) & \(\rho_{\mathrm{SS}}\) & \(\rho_{\mathrm{TT}}\) & \(W_{\mathrm{SD}}\) **(Avg)** & \(W_{\mathrm{TD}}\) **(Avg)** \\ \hline
\multirow{7}{*}{FT} & SA & 3.62 & 3.33 & 0.29 & 0.43 & 13.2 (9.2) & 17.1 (8.0) \\
 & NLI & 3.06 & 1.29 & -0.30 & 0.83 & 7.1 (4.8) & 5.1 (3.7) \\
 & AE & 7.09 & 6.53 & -0.15 & 0.12 & 36.0 (28.2) & 36.6 (28.8) \\
 & QA & 3.71 & 2.07 & -0.06 & 0.68 & 6.8 (3.2) & 4.5 (2.4) \\
 & QG & 2.29 & 0.46 & -0.28 & 0.95 & 4.5 (2.0) & 1.2 (0.6) \\
 & AS & 1.91 & 0.65 & 0.06 & 0.77 & 4.8 (2.1) & 2.5 (1.4) \\
 & TG & 2.70 & 1.47 & 0.31 & 0.70 & 6.9 (4.4) & 4.9 (3.0) \\ \hline
\multirow{4}{*}{FS} & SA & 6.65 & 4.70 & 0.09 & 0.39 & 17.8 (10.3) & 14.7 (8.0) \\
 & NLI & 4.16 & 4.52 & 0.04 & -0.09 & 7.9 (3.9) & 8.1 (3.3) \\
 & AS & 1.40 & 0.67 & -0.14 & 0.64 & 3.0 (1.3) & 1.8 (0.9) \\
 & TG & 1.14 & 0.71 & -0.06 & 0.63 & 2.6 (1.5) & 2.0 (1.0) \\ \hline
\end{tabular}
\end{table}
Table 4: Statistics for characterizing the SD and the TD of fine-tuning (FT) and few-shot learning (FS). We first calculate the statistic for each participating model and then present the mean statistic for the given task. The statistics include: (1) The standard deviation of the SD (\(\sigma_{\mathrm{SD}}\)) and the TD (\(\sigma_{\mathrm{TD}}\)); (2) Spearman’s correlation between the ST and SS (\(\rho_{\mathrm{SS}}\)) or TT (\(\rho_{\mathrm{TT}}\)); (3) The Worst SD (\(W_{\mathrm{SD}}\)) and the Worst TD (\(W_{\mathrm{TD}}\)); in parentheses we present the Average Worst SD or TD (see §3.2).
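As an illustration, the Table 4 statistics for one model and one task can be computed as in the sketch below. It assumes per-domain in-domain scores (SS, TT) and per-pair cross-domain scores (ST), with SD = SS − ST and TD = TT − ST as above; the Average Worst variants (per-source maxima averaged over sources) are omitted for brevity.

```python
import numpy as np
from scipy.stats import spearmanr

def shift_statistics(ss, tt, st):
    """ss/tt: {domain: in-domain score}; st: {(source, target): cross-domain score}."""
    pairs = list(st.keys())
    st_vals = np.array([st[p] for p in pairs])
    ss_vals = np.array([ss[s] for s, _ in pairs])
    tt_vals = np.array([tt[t] for _, t in pairs])
    sd, td = ss_vals - st_vals, tt_vals - st_vals
    return {
        "sigma_SD": sd.std(), "sigma_TD": td.std(),
        "rho_SS": spearmanr(st_vals, ss_vals).correlation,   # correlation of ST with SS
        "rho_TT": spearmanr(st_vals, tt_vals).correlation,   # correlation of ST with TT
        "worst_SD": sd.max(), "worst_TD": td.max(),
    }
```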
Figure 6: Few-shot learning performance for several models with 4 demonstrations. The top row plots the average performance for two models. The bottom row plots the performance drops as a function of the model size **only for the OPT-family models model**. See the caption of Figure 4 for more details about the metrics.
degradation attributed to domain shift in the majority of DR works might not tell the whole story. In §7, we discuss the impact of this finding on DR research; (3) The Worst \(\mathrm{SD}\) and Worst \(\mathrm{TD}\) suggest that in every task and model, there is a substantial performance drop. Accordingly, the challenging shifts indicate that the DR challenge is still relevant. Moreover, the large differences between the Average Drop and the Worst \(\mathrm{SD}\) and \(\mathrm{TD}\) drops suggest that synthetic domain shifts, like challenge sets that typically focus on hard shifts, do not represent the average robustness of NLP models truthfully; (4) The correlation between \(\mathrm{ST}\) and \(\mathrm{TT}\) is much stronger than the correlation with \(\mathrm{SS}\), and for most fine-tuning tasks, it is above 0.8. This result can explain why the variance and the Worst drops of the \(\mathrm{SD}\) are larger than those of the \(\mathrm{TD}\). Accordingly, we find the \(\mathrm{TT}\) to approximate the \(\mathrm{ST}\) better, and consequently, the \(\mathrm{TD}\) approximates the Average Drop better than the \(\mathrm{SD}\) does; (5) Nevertheless, when the DR problem is severe, the \(\mathrm{TT}\) is not informative for the \(\mathrm{ST}\). For example, the AE task, which has the highest degradation in performance (more than 20 F1 points), has the weakest correlations. In contrast, the strongest correlations are observed for the QA and the QG tasks, which demonstrate the lowest performance drops.
## 7 Discussion
**On predicting performance degradation.** Estimating performance has an important impact on the deployment and maintenance of NLP models and on financial decisions (like the need for annotation) [22, 23, 24, 25, 26]. In our study, we find that \(\mathrm{TT}\) is a better predictor of the cross-domain performance (\(\mathrm{ST}\)) than the \(\mathrm{SS}\). Therefore, knowledge about the target domain is essential, and without it, estimators may struggle to predict cross-domain performance.
**On Domain Robustness Research.** As discussed in the paper, most of the past DR works focus solely on the observed performance degradation (\(\mathrm{SD}\)) as a measure of the DR challenge. However, as proposed in this paper, the DR challenge should be characterized by two random variables: the \(\mathrm{SD}\) and the \(\mathrm{TD}\). Our results highlight that it is important to discuss both variables when characterizing the DR challenge of NLP models, since in some tasks, they commonly disagree (i.e., the Observed and the Unobserved scenarios). Moreover, the \(\mathrm{TD}\) is actually a better predictor of the Average Drop since empirically we find its variance to be lower.
In addition, we suggest that current research may paint an inaccurate picture of the state of domain robustness. This stems from two of our findings. First, performance degradation is larger when measured in relation to the source in-domain performance (\(\mathrm{SD}\)) than in relation to the target's (\(\mathrm{TD}\)). Secondly, as we've seen, every task has its harder shifts in which models suffer from severe performance drops, even when the vast majority of shifts are not remotely as bad. This means that works that propose challenge sets and other highly
Figure 7: Comparing Fine-tuned models and Few-shot learning models. For every plot, the leftmost model is the best-performing fine-tuned model, while the rest are few-shot learning. The top row plots the average in-domain and average cross-domain performance. The bottom row plots the average drops.
curated datasets for measuring DR can be useful as diagnostic tools, but they present a distorted image of the actual state of DR, which is actually much milder. In our analysis, we found that even under such methods of measurement there exists a DR challenge for NLP models in most tasks.
**On the relevance of fine-tuning.** Zero-shot and few-shot learning models can perform various tasks without the additional development cost of annotating data or training a model. However, their usage can be very costly, as they require massive computational resources, and their latency can be extremely high. Moreover, when the data cannot be sent to external servers because of privacy constraints, or when the domain is unique or specific (e.g., in national security settings or human conversations), huge LLMs that cannot be fine-tuned may be less effective. In addition, with enough task-specific labeled data, which few-shot LLMs can cheaply acquire, it is possible to develop a small high-performing fine-tuned LLM (Calderon et al., 2023). For these reasons, fine-tuning a smaller model that doesn't have few-shot capabilities is still the de facto standard (Levine et al., 2022). We believe that fine-tuning specific and small models will remain the standard, and their DR research is essential.
Despite the advantages of few-shot LLMs, several works have demonstrated that fine-tuning LLMs on specific downstream tasks outperform few-shot LLMs (Soltan et al., 2022; Tay et al., 2022), even when the latter are much larger. Moreover, there is strong evidence that few-shot language models underperform fine-tuned models when the domain is specific and requires expertise, such as biomedical (Gutierrez et al., 2022). This study also shows that task-specific fine-tuned models outperform few-shot models, although this gap may be closed soon.
**On the relevance of DA research.** Recent LLM advances provide an indication that NLP models are more robust than pre-transformer models (Hendrycks et al., 2020). LLMs are probably robust due to the pretraining process, where the models have seen a vast amount of diverse data from various domains. Furthermore, pretraining the model on data from a downstream task has been shown to improve the performance on it (Radford et al., 2019; Gururangan et al., 2020; Han and Eisenstein, 2019). Nevertheless, as demonstrated in this study, the DR challenge still exists. We show there is a performance drop due to domain shift in every task or model, and moreover, some shifts are remarkably challenging. We believe that DA research remains essential and relevant, particularly for NLP. To facilitate further research, we provide an NLP DA benchmark for natural topic shifts, with challenging setups for a variety of NLP tasks. In Table 5 we present these domain shifts, including the performance of the best-performing fine-tuned model. We hope this benchmark will be used to evaluate and improve DA methods for NLP.
\begin{table}
\begin{tabular}{l|l l|c c c c c} \hline
**Task** & **Source** & **Target** & SS & ST & TT & SD & TD \\ \hline
SA & Airline & Books & 91.7 & 87.5 & 97.7 & 4.3 & 10.2 \\
NLI & Travel & Telephone & 90.1 & 82.0 & 87.7 & 8.1 & 5.7 \\
AE & Device & mams & 74.6 & 37.6 & 73.8 & 37.0 & 36.2 \\
QA & Geography & Science & 80.4 & 75.9 & 79.0 & 4.4 & 3.1 \\ \hline
QG & Tech & History & 50.7 & 44.0 & 45.2 & 6.7 & 1.2 \\
AS & Relationships & Lal. & 26.0 & 19.0 & 20.7 & 7.1 & 1.7 \\
TG & Beauty & Books & 39.6 & 27.6 & 29.8 & 11.9 & 2.2 \\ \hline
\end{tabular}
\end{table}
Table 5: Performance and drops for the DA benchmark, consisting of a single challenge domain shift for each task. The table presents the F1 scores of DeBERTa-L for the classification and the QA tasks, and the ROUGE-L F1 scores of T5-L for the generation tasks. |
2307.16423 | Cellular automata in the light of COVID-19 | Currently, the world has been facing the brunt of a pandemic due to a disease
called COVID-19 for the last 2 years. To study the spread of such infectious
diseases it is important to not only understand their temporal evolution but
also the spatial evolution. In this work, the spread of this disease has been
studied with a cellular automata (CA) model to find the temporal and the
spatial behavior of it. Here, we have proposed a neighborhood criteria which
will help us to measure the social confinement at the time of the disease
spread. The two main parameters of our model are (i) disease transmission
probability (q) which helps us to measure the infectivity of a disease and (ii)
exponent (n) which helps us to measure the degree of the social confinement.
Here, we have studied various spatial growths of the disease by simulating this
CA model. Finally we have tried to fit our model with the COVID-19 data of
India for various waves and have attempted to match our model predictions with
regards to each wave to see how the different parameters vary with respect to
infectivity and restrictions in social interaction. | Sourav Chowdhury, Suparna Roychowdhury, Indranath Chaudhuri | 2023-07-31T06:14:38Z | http://arxiv.org/abs/2307.16423v1 | # Cellular automata in the light of COVID-19
###### Abstract
Currently, the world has been facing the brunt of a pandemic due to a disease called COVID-19 for the last two years. To study the spread of such infectious diseases it is important to not only understand their temporal evolution but also the spatial evolution. In this work, the spread of this disease has been studied with a cellular automata (CA) model to find the temporal and the spatial behavior of it. Here, we have proposed a neighborhood criteria which will help us to measure the social confinement at the time of the disease spread. The two main parameters of our model are (i) disease transmission probability (\(q\)) which helps us to measure the infectivity of a disease and (ii) exponent (\(n\)) which helps us to measure the degree of the social confinement. Here, we have studied various spatial growths of the disease by simulating this CA model. Finally we have tried to fit our model with the COVID-19 data of India for various waves and have attempted to match our model predictions with regards to each wave to see how the different parameters vary with respect to infectivity and restrictions in social interaction.
## 1 Introduction
Epidemics and pandemics have occurred throughout human history. Recently, human civilization has faced another pandemic named COVID-19. This pandemic has affected many countries through multiple waves. A total of 504,451,689 people have been infected worldwide and 6,222,430 people have died due to COVID-19 till 17 April 2022. In India, 43,042,097 people have been infected and 521,781 have died due to this disease as of 17/04/2022 [1]. COVID-19 is caused by the virus named SARS-CoV-2. Multiple variants of this virus, like delta, omicron and many others, make it harder to control and predict its behavior. Recently another variant of COVID-19 named XE has been found [2]. Mathematical modeling helps us to understand the behavior of disease spread such that prevention and control strategies can be built. Also, mathematical models can help us to find some inherent properties of the disease and the nature of its spread.
There are many different types of models that have been used in the past to study various diseases. These models are mainly modified versions of the Kermack-McKendrick SIR model, which is based on a system of coupled ordinary differential equations [3]. Currently, ODE-based models and statistical models are widely used in the literature to model the temporal behavior of the spread of COVID-19 from different aspects. Most of these models have tried to analyze the spread of this disease and to predict its future behavior [4, 5, 6, 7]. There are models which have proposed various intervention and vaccination strategies to prevent and control the spread of the disease [8, 9, 10, 11, 12]. Also, some authors have tried to predict different inherent properties of this pandemic like herd immunity and its chaotic nature [13, 14, 15, 16, 17]. These temporal models can give us much valuable information; however, most of these models assume that a population is homogeneously mixed and cannot describe any spatial behavior. To incorporate this spatial behavior, deterministic and probabilistic spatio-temporal models have been used in recent studies. Cellular automata (CA) is one such kind of spatio-temporal model.
Cellular automata (CA) has been used in many studies to model different aspects of epidemics. It has been widely used to model the disease spread of influenza and various vector-borne diseases like dengue [18, 19, 20, 21, 22, 23, 24, 25, 26]. A neighborhood condition is an important aspect of a CA. The most used neighborhood conditions are (i) Neumann's neighborhood condition, (ii) Moore's neighborhood condition, (iii) Extended neighborhood condition, and (iv) Random interactions. Coupled with these neighborhood conditions, various models like SEIR, SEIRS, SEIRD, and SEIRQD have been studied with the help of CA to model the spatial growth of epidemics [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 32]. Currently, CA has gained a lot of momentum in the studies of COVID-19. Various advanced studies with Genetic algorithms and network models have been done for COVID-19 data [33, 34, 35, 36, 37, 38]. Also, there are models where COVID-19 has been studied from different aspects.
In this paper, we have tried to model COVID-19 using cellular automata (CA) to find its spatio-temporal behavior. We have also made some analysis to understand its behavior in different waves of the disease. A cellular automata (CA) model
is represented on a square lattice and defined by some neighborhood and boundary conditions, the details of which are given in the following sections.
This paper is arranged as follows: Section 2 consists of a detailed discussion of the model, neighborhood conditions, probability of infection, and the algorithm of the model. The result of the simulation has been shown in section 3 and the data analysis is shown in section 4. Finally, Section 5 consists of the conclusions of our model.
## 2 Mathematical Model
In this article, we have illustrated a cellular automata (CA) model for epidemics and assumed the SEIR model as the base model. The SEIR model stands for Susceptible-Exposed-Infectious-Removed. Here we have considered an \(N\times N\) square lattice, where each cell of the lattice is assumed to be a person. Each cell of the lattice can have the set of states, \(\mathcal{S}=\{S,E,I,R\}\) and these states are represented by the values \(\mathcal{V}=\{0,1,2,3\}\). The update of a cell's state depends upon various conditions: (i) the current state of the cell, (ii) the amount of time spent in the current state, and (iii) the current states of the neighbors. The main assumptions of our model are given below:
* Every cell represents a person.
* Only susceptible persons can interact with the other cells.
* A removed person cannot be infected again.
* For this CA model, we have assumed a periodic boundary condition. If a cell of \(i\)th row and \(j\)th column of a \(N\times N\) lattice is denoted by \((i,j)\) then, \[\begin{split}(N+1,j)\equiv(1,j)&\quad j=0,1,...N.\\ (i,N+1)\equiv(i,1)&\quad i=0,1,...N.\end{split}\] (1)
* One susceptible person can interact with a single person in each time step.
### Neighborhood condition
Nearest neighborhood condition is a widely used concept in the literature. Here, it has been assumed that a particular cell can only interact with its nearest neighborhood cells. Two such famous neighborhood conditions are: (i) Neumann's neighborhood condition and (ii) Moore's neighborhood condition.
Fig. 1 shows the two neighborhood conditions. Fig. 1(a) shows Neumann's neighborhood condition, where the nearest neighborhoods of any chosen \((i,j)\) cell are the first neighborhood cells with respect to the chosen cell. Similarly, Fig. 1(b) shows Moore's neighborhood condition. In this case, all first and second neighborhoods are treated as the nearest neighborhoods of the chosen \((i,j)\) cell.
In this work, we have assumed that a cell can interact with any other cells depending on the probability of interaction (\(p_{int}\)) between them. Here we have assumed that the probability of interaction (\(p_{int}\)) of a cell \((i,j)\) to any other cell varies inversely as a function of \(d\) (distance between two cells) in the form of a power law. Hence,
\[p_{int}(d)\propto\frac{1}{d^{n}} \tag{2}\]
where \(n\) is the degree exponent and can have a value greater than zero. Here, we have assumed that the distance between two cells is not just the geometrical distance between two. It depends on the layer number (\(l\)). In Fig. 2, we have shown how
Figure 1: Different neighborhood conditions. Blue cells represent the nearest neighborhoods of \((i,j)\) cell.
layers are defined. Also, it shows that a layer \(l\) contains the \(8l\) number of cells. If we choose a lattice of size \(N\times N\) then the total number of layers in this lattice is \(L=\frac{N-1}{2}\), when \(N\) is odd.
Hence, we can write,
\[d\propto l. \tag{3}\]
For mathematical simplicity, we can assume \(d=l\). Hence, from Eq. 2 we can write,
\[p_{int}(d)=p_{int}(l)\propto\frac{1}{l^{n}} \tag{4}\]
If there are \(L\) number of layers then the above equation can be written as,
\[p_{int}(l)=\frac{\frac{1}{l^{n}}}{\sum_{l=1}^{L}\frac{1}{l^{n}}}=\frac{1}{A_{n }l^{n}} \tag{5}\]
where, \(A_{n}=\sum_{l=1}^{L}\frac{1}{l^{n}}\). Hence, a person at the \((i,j)\) cell can interact with any other cell of layer \(l\) with a probability \(p_{int}(l)\). Thus, average interaction distance (\(\langle d\rangle\)) can be defined as,
\[\langle d\rangle=\sum_{l=1}^{L}lp_{int}(l)=\frac{1}{A_{n}}\sum_{l=1}^{L}l\frac {1}{l^{n}}=\frac{1}{A_{n}}\sum_{l=1}^{L}\frac{1}{l^{n-1}} \tag{6}\]
Figure 2: Different layers of a lattice with respect to \((i,j)\) cell.
Fig. 3(a) shows the variation of average interaction distance (\(\langle d\rangle\)) with the degree exponent (\(n\)). From this figure, it can be found that the average interaction distance (\(\langle d\rangle\)) decreases quickly and saturates to unity as \(n\) increases. Also, from Fig. 3(b) it can be seen that the average interaction distance (\(\langle d\rangle\)) is approximately \(\sim 1\) for exponents \(n>3\). Hence for \(n\gg 3\) the neighborhood condition is approximately similar to Moore's neighborhood condition as discussed earlier and does not give any significantly different results.
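The quantities in Eqs. (5) and (6) are straightforward to compute numerically; a minimal sketch is given below, and with \(L=50\) layers it reproduces the values quoted later in the simulations (\(\langle d\rangle\approx 11.11\), 2.77 and 1.35 for \(n=1,2,3\)).

```python
import numpy as np

def interaction_probabilities(n, L):
    """p_int(l) = (1 / l^n) / A_n over the layers l = 1, ..., L (Eq. 5)."""
    layers = np.arange(1, L + 1)
    weights = 1.0 / layers**n
    return weights / weights.sum()

def mean_interaction_distance(n, L):
    """Average interaction distance <d> = sum_l l * p_int(l) (Eq. 6)."""
    layers = np.arange(1, L + 1)
    return float(np.sum(layers * interaction_probabilities(n, L)))

# <d> drops towards 1 as the exponent n grows (L = 50 layers for a 101 x 101 lattice)
for n in (1, 2, 3, 4):
    print(n, round(mean_interaction_distance(n, 50), 2))
```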
### Probability of infection (\(Q_{I}\))
Let, \(q\) denote the disease transmission probability when a susceptible and an infectious person interact. The probability that a susceptible person will interact with any person at the layer \(l\) is \(p_{int}(l)\). If the probability of finding an infectious person in that layer is \(p_{I}(l)\) then the probability that the susceptible person will be infected is \(qp_{int}(l)p_{I}(l)\). Hence, the probability of infection (\(Q_{I}\)) of a susceptible person is,
\[Q_{I}=q\sum_{l=1}^{L}p_{int}(l)p_{I}(l). \tag{7}\]
As, \(p_{int}(l)=\frac{1}{A_{n}l^{n}}\), from the above equation (Eq. 7) we can write,
\[Q_{I}=\frac{q}{A_{n}}\sum_{l=1}^{L}\frac{p_{I}(l)}{l^{n}} \tag{8}\]
From the above equation (Eq. 8) we can say that the terms with small layer number (\(l\)) dominate the summation. Hence, the infection possibility of a susceptible person mainly depends on the infection situation around the person.
Thus in our model, instead of choosing a traditional neighborhood condition where the degree of the interaction is fixed, we have assumed a model where we can vary the degree of the social confinement by changing \(n\) (degree exponent). Also we have calculated the probability of infection (\(Q_{I}\)) for this modified model.
## 3 Algorithm and Simulations
### Algorithm
Here, we have discussed the state update algorithm of the SEIR model. As we have mentioned earlier, every cell's state is denoted by a value (\(v\)) from the set \(\{0,1,2,3\}\). The algorithm is given below, followed by a short Python sketch of the update rule:
* Let, at time \(t\) there is a susceptible person at \((i,j)\) cell. So, the value of the \((i,j)\) cell is \(v(i,j,t)=0\) and the probability of infection is \(Q_{I}(i,j,t)\). To find the infection possibility of the susceptible person, we will generate a uniform random number \(u\) between \(0\) and \(1\).
Figure 3: Plots of average interaction distance (\(\langle d\rangle\)) with \(n\) and the total number of layers (\(L\)). (a) Plot of average interaction distance (\(\langle d\rangle\)) with \(n\) by considering the total number of layers (\(L\)) = 50. (b) Plot of average interaction distance (\(\langle d\rangle\)) with the total number of layers (\(L\)) for \(n\)=3,4, and 5.
If \(u\leq Q_{I}(i,j,t)\), then the susceptible person is exposed and at time \(t+1\) the state of the \((i,j)\) cell will be changed from \(v=0\) to \(v=1\); else, at time \(t+1\) the state of the \((i,j)\) cell will be unchanged.
* An exposed person (\(v=1\)) will remain exposed for \(\tau_{I}\) number of days. After that, the person will be infectious and the state of the corresponding cell will be changed from \(v=1\) to \(v=2\).
* An infectious person will remain in this state for \(\tau_{R}\) number of days. After that, the person will be removed (recovered or dead) and the state of the cell will be changed from \(v=2\) to \(v=3\).
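As announced above, a minimal Python/NumPy sketch of one synchronous time step of this update rule follows. It computes \(Q_I\) directly from Eq. (8), taking \(p_I(l)\) as the fraction of infectious cells in layer \(l\), and uses periodic boundaries via the modulo operator (Eq. 1); function and variable names are ours, and the loop-based layer scan is written for clarity rather than speed.

```python
import numpy as np

S, E, I, R = 0, 1, 2, 3            # cell states

def infection_probability(grid, i, j, q, p_int):
    """Q_I for the susceptible cell (i, j): q * sum_l p_int(l) * p_I(l)  (Eq. 8)."""
    N, L = grid.shape[0], len(p_int)
    Q = 0.0
    for l in range(1, L + 1):
        # the 8l cells of layer l (Chebyshev distance l), with periodic boundaries (Eq. 1)
        layer = [grid[(i + di) % N, (j + dj) % N]
                 for di in range(-l, l + 1) for dj in range(-l, l + 1)
                 if max(abs(di), abs(dj)) == l]
        p_I = np.mean(np.array(layer) == I)        # fraction of infectious cells in the layer
        Q += q * p_int[l - 1] * p_I
    return Q

def step(grid, clock, q, p_int, tau_I=8, tau_R=18, rng=None):
    """One synchronous update; `clock` holds the days spent in the current state."""
    rng = rng or np.random.default_rng()
    new = grid.copy()
    N = grid.shape[0]
    for i in range(N):
        for j in range(N):
            if grid[i, j] == S and rng.random() <= infection_probability(grid, i, j, q, p_int):
                new[i, j], clock[i, j] = E, 0      # susceptible -> exposed
            elif grid[i, j] == E and clock[i, j] >= tau_I:
                new[i, j], clock[i, j] = I, 0      # exposed -> infectious after tau_I days
            elif grid[i, j] == I and clock[i, j] >= tau_R:
                new[i, j], clock[i, j] = R, 0      # infectious -> removed after tau_R days
            else:
                clock[i, j] += 1
    return new, clock
```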
### Simulation
In this part, we have done simulation of our model with \(n=1,2,3\). The values of the parameters and initial conditions that are used in the simulations are listed in the tables (Table 1 and Table 2) below:
\begin{table}
\begin{tabular}{|l|c|c|} \hline Description of the parameters & Parameters & Values of the parameters \\ \hline Lattice size & \(N\times N\) & \(101\times 101\) \\ Disease transmission probability & \(q\) & \(0.3\) \\ Latency period of the disease & \(\tau_{I}\) & \(8\ days\) \\ Removal period & \(\tau_{R}\) & \(18\ days\) \\ \hline \end{tabular}
\end{table}
Table 1: Table for the parameter values that are used in the simulations.
\begin{table}
\begin{tabular}{|c|c|} \hline States & Initial values \\ \hline \(S(0)\) & \(10200\) \\ \(I(0)\) & \(1\) \\ \(E(0)\) & \(0\) \\ \(R(0)\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 2: Table for the initial conditions of the simulations.
In Fig. 4, the first plot shows the temporal behavior of the exposed cases (\(E(t)\)) and the infectious cases (\(I(t)\)) of the epidemic. These temporal plots are averaged over 50 simulation samples. The remaining plots of Fig. 4 are CA plots that represent the spatial evolution of the disease spread. From these CA plots, we can hardly detect any clustering of the infected cases. This happens because the average interaction distance is \(\langle d\rangle\approx 11.11\). Thus a susceptible person can be infected by an infectious person who is far away from the susceptible one.
Figure 4: Plots of the temporal and spatial behavior of the disease spread for \(n=1\).
In Fig. 5, the first plot again shows the temporal growth of the epidemic. It can be seen from the CA plots that for \(n=2\), clusters are formed. The reason behind this is the short average interaction distance. For \(n=2\), the average interaction distance is \(\langle d\rangle\approx 2.77\). Also from the temporal plot, we can see that the infection spread time is longer than in the \(n=1\) case. This is also because of the short average interaction distance. For a short average interaction distance, only a few susceptible persons can interact with the infectious person. So, if most of those susceptible persons become infected then the infectious person cannot spread the disease further. Whereas, for the \(n=1\) case, an infectious person can interact with many susceptible persons. Hence, an infectious person can infect more people during the infectious period.
Figure 5: Plots of the temporal and spatial behavior of the disease spread for \(n=2\).
Fig. 6 shows the evolution of the disease for \(n=3\). Here we can also clearly find the clusters. These clusters are more prominent than in the \(n=2\) case because of the smaller average interaction distance (\(\langle d\rangle\)). The value of the average interaction distance for \(n=3\) is \(\langle d\rangle\approx 1.35\). Here we can see that the disease takes a longer time to fall for \(n=3\) than for \(n=1\) and \(n=2\). The reason behind this is the lower value of the average interaction distance, which is discussed earlier.
Hence, from the above discussions, we can conclude that the clustering behavior of the disease spread depends on the average interaction distance (\(\langle d\rangle\)) as well as on degree exponent \(n\). Also, the average interaction distance (\(\langle d\rangle\)), gives an average estimation of the number of susceptible persons who can interact with an infectious person which is represented by \(8\langle d\rangle\). So, for a large \(\langle d\rangle\) (or small \(n\)) an infectious person can spread the disease to distanced region. Thus the infection period depends on the average interaction distance (\(\langle d\rangle\)) and also on \(n\).
Figure 6: Plots of the temporal and spatial behavior of the disease spread for \(n=3\).
Comparison with data
In this section, we have tried to fit our model with current COVID-19 data. Our model has four free parameters, which are: (i) \(q\) : disease transmission probability, (ii) \(n\) : degree exponent, (iii) \(\tau_{I}\) : mean latency period, and (iv) \(\tau_{R}\) : mean infectious period. We have optimized these free parameters for different waves of the COVID-19 pandemic in India. Here, we have considered each wave separately and normalized the active cases of each wave with the total number of infected cases in the respective wave. The data is taken from covid19india.org [39]. The date ranges of the different waves that we have considered here are given below:
To fit the model with the data we have optimized the sum of squared errors (\(SSE\))
\[SSE=\sum_{k}\left(i_{k}^{d}-i_{k}\right)^{2} \tag{9}\]
\(i_{k}^{d}\) : fraction of the active cases from the data. \(i_{k}\) : fraction of the infectious cases from the model.
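A brute-force parameter search against Eq. (9) can be sketched as follows; `run_model` is a hypothetical wrapper that runs the CA simulation (e.g., averaged over several samples) and returns the simulated infectious fraction on the same days as the data.

```python
import numpy as np
from itertools import product

def sse(data_fraction, model_fraction):
    """Sum of squared errors between data and model active-case fractions (Eq. 9)."""
    d, m = np.asarray(data_fraction), np.asarray(model_fraction)
    return float(np.sum((d - m) ** 2))

def fit_parameters(data, run_model, q_grid, n_grid, tauI_grid, tauR_grid):
    """Grid search over (q, n, tau_I, tau_R) minimizing the SSE."""
    best, best_err = None, np.inf
    for q, n, tI, tR in product(q_grid, n_grid, tauI_grid, tauR_grid):
        err = sse(data, run_model(q, n, tI, tR))
        if err < best_err:
            best, best_err = (q, n, tI, tR), err
    return best, best_err
```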
The results of the best fit parameter values are given below.
We have optimized the COVID-19 data with a \(101\times 101\) lattice space. Figs. 7(a) and 7(b) show the fitted model along with the data for the first and the second wave.
Here we can see that the model fits reasonably well with the data. Also, from Table 4, we see that the disease transmission probability (\(q\)) increases and the degree exponent (\(n\)) decreases in the second wave as compared to the first wave. The increase of \(q\) indicates that the disease was possibly more infectious and spread faster in the second wave than in the first wave. Also, the decrease of \(n\) indicates that the interaction between the infectious and the susceptible population spread out to larger distances in the second wave as compared to the first wave. This is possibly because the COVID protocols were relaxed much more in the initial phase of the second wave as compared to the first wave.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Waves & q & n & \(\tau_{I}\) (\(days\)) & \(\tau_{R}\) (\(days\)) \\ \hline First wave & 0.1950 & 1.9310 & 5 & 11 \\ Second wave & 0.2406 & 1.3449 & 8 & 10 \\ \hline \end{tabular}
\end{table}
Table 4: Fitting parameter values for different waves.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Waves & Start date & End date \\ \hline First wave & 30-Jan-2020 & 16-Feb-2021 \\ Second wave & 17-Feb-2021 & 31-Oct-2021 \\ \hline \end{tabular}
\end{table}
Table 3: Date ranges for different waves.
Figure 7: Model fitting of the COVID-19 data of India (a) Model fitting of the first wave. (b) Model fitting of the second wave.
Conclusions
In this section we have summarized the main features and results of our model. The cellular automata (CA) is a very common tool to model disease spread and has been used extensively in the literature for studying different systems. In this paper, we have modeled the CA by proposing a new neighborhood criteria. Usually in earlier studies, the neighborhood condition is such that the neighborhood of a lattice cell is always fixed. Whereas in our model, rather than choosing a specific neighborhood condition, we assume that a lattice cell can interact with any other cell at distance \(d\) with a certain probability, which is called the interaction probability (\(p_{int}(d)\)) (Eq. 5). We have assumed that the interaction probability (\(p_{int}(d)\)) is a function of the distance (\(d\)) and has the form of an inverse power law with degree exponent \(n\). Here, the exponent \(n\) is a very important parameter as it enables us to tune the social confinement of our model. With this newly defined neighborhood criteria, we have calculated various relations like the average interaction distance (\(\langle d\rangle\)) and the probability of infection (\(Q_{I}\)) to understand and represent our model properly.
From Fig. 3a, we can see that the average interaction distance (\(\langle d\rangle\)) decreases and saturates to \(\sim 1\) as \(n\) increases. So, for a higher \(n\), a person can mainly interact with nearest neighbors. However, for a smaller value of \(n\), a person can interact with the distant neighbors. Hence higher values of \(n\) represent higher social confinement and vice-versa. Also, we want to mention that for exponents \(n>3\), the average interaction distance is \(\langle d\rangle\approx 1\). Thus the values \(n\gg 3\) do not give us any new results.
In the simulation section, we have studied the temporal and spatial behavior of our model for different degree exponents (\(n\)). As \(n\) increases, the disease spread becomes slower and more clustered. This happens because of the decrease in the average interaction distance (\(\langle d\rangle\)) or, in other words, the increase in the social confinement with increasing values of \(n\). Thus the disease transmission probability (\(q\)) and the degree exponent \(n\) regulate the speed of the disease spread.
Also, we have compared our model with the COVID-19 data of India for different waves. We have first normalized the active cases of a wave with the total number of infected cases in that wave. Then we have optimized the sum of squared errors of the infectious cases (Eq. 9) to get the best fit result with the data. Here all simulations are done on a \(101\times 101\) lattice. We have found that the disease transmission probability (\(q\)) is higher in the second wave than in the first wave. This means that the disease was more infectious in the second wave than in the first wave. Also, the degree exponent (\(n\)) decreases in the second wave. This implies that the decrease in the COVID-19 restrictions (or decrease in the degree of social confinement) at the initial time of the second wave played a significant role in the faster spread of the disease. Our model fits the peak of the waves well; however, the fall in the data at the end of both waves and the plateauing at the end of the second wave match our fitted model only partially. This possibly indicates that our model needs to be modified to incorporate these aspects, which we will look at in our future works.
Modeling the spread of a disease is a very complicated process since several factors have to be considered. The non-uniform distribution of the population and the economic situation of the regions are two major factors which affect the disease spread. In the future, we want to look at these complex aspects by refining this model to describe the behavior of the disease spread more accurately. We would also like to study these possibilities in the context of COVID-19.
## Acknowledgment
The authors would like to thank the Department of Physics, St. Xavier's College, Kolkata for providing support during this work. One of the authors (S. C.) acknowledges the financial support provided from the University Grant Commission (UGC) of the Government of India, in the form of CSIR-UGC NET-JRF. Finally, the authors would also like to express their gratitude to the anonymous referee for his/her valuable comments and suggestions.
|
2309.15486 | Transferability of Representations Learned using Supervised Contrastive
Learning Trained on a Multi-Domain Dataset | Contrastive learning has shown to learn better quality representations than
models trained using cross-entropy loss. They also transfer better to
downstream datasets from different domains. However, little work has been done
to explore the transferability of representations learned using contrastive
learning when trained on a multi-domain dataset. In this paper, a study has
been conducted using the Supervised Contrastive Learning framework to learn
representations from the multi-domain DomainNet dataset and then evaluate the
transferability of the representations learned on other downstream datasets.
The fixed feature linear evaluation protocol will be used to evaluate the
transferability on 7 downstream datasets that were chosen across different
domains. The results obtained are compared to a baseline model that was trained
using the widely used cross-entropy loss. Empirical results from the
experiments showed that on average, the Supervised Contrastive Learning model
performed 6.05% better than the baseline model on the 7 downstream datasets.
The findings suggest that Supervised Contrastive Learning models can
potentially learn more robust representations that transfer better across
domains than cross-entropy models when trained on a multi-domain dataset. | Alvin De Jun Tan, Clement Tan, Chai Kiat Yeo | 2023-09-27T08:34:36Z | http://arxiv.org/abs/2309.15486v1 | Transferability of Representations Learned using Supervised Contrastive Learning Trained on a Multi-Domain Dataset
###### Abstract
Contrastive learning has shown to learn better quality representations than models trained using cross-entropy loss. They also transfer better to downstream datasets from different domains. However, little work has been done to explore the transferability of representations learned using contrastive learning when trained on a multi-domain dataset. In this paper, a study has been conducted using the Supervised Contrastive Learning framework to learn representations from the multi-domain DomainNet dataset and then evaluate the transferability of the representations learned on other downstream datasets. The fixed feature linear evaluation protocol will be used to evaluate the transferability on 7 downstream datasets that were chosen across different domains. The results obtained are compared to a baseline model that was trained using the widely used cross-entropy loss. Empirical results from the experiments showed that on average, the Supervised Contrastive Learning model performed 6.05% better than the baseline model on the 7 downstream datasets. The findings suggest that Supervised Contrastive Learning models can potentially learn more robust representations that transfer better across domains than cross-entropy models when trained on a multi-domain dataset.
## 1 Introduction
In recent years, there has been renewed interest in contrastive learning due to its success in self-supervised learning for computer vision tasks. This has led to a resurgence of research related to contrastive learning, and this research produced results comparable to or even outperforming the state-of-the-art results obtained by its supervised counterpart on the ImageNet benchmark [1; 2; 3; 4; 5; 6]. Contrastive learning is a technique in representation learning that is used to learn an embedding space to represent the features from the dataset of interest. By contrasting samples, the aim is for sample pairs that are similar to stay close to each other and sample pairs that are dissimilar to stay far apart from each other in the embedding space. Motivated by the promising performance of self-supervised contrastive learning, Khosla _et al_[4] proposed the Supervised Contrastive Learning framework, which leveraged the label information and incorporated the use of labels into their contrastive training objective, known as the SupCon loss. Supervised Contrastive Learning achieved better ImageNet accuracy using the proposed SupCon loss than the standard cross-entropy loss [4].
Models trained using contrastive learning have been shown to provide comparable or even better quality representations than models trained using supervised cross-entropy loss, especially in the image classification task [2; 1; 4; 7; 8; 9; 6; 5]. Islam _et al_[9] showed that the representations learned from using contrastive objectives contained more low-level and/or mid-level semantics than cross-entropy models, allowing them to more effectively transfer the representations learned into a new task quickly. Previous studies have also shown that the representations learned can be easily transferred to a different downstream dataset [9; 1; 4; 5; 6]. However, the representations were often learned using a single source dataset that contains images from a single domain (e.g. ImageNet [10]). There has been little work done that explored the transfer capability of learned representations from a multi-domain dataset using contrastive learning. Taking a step back from our discussion of contrastive learning, we note that Convolutional Neural Networks (CNN) trained on a single dataset from a single domain often end up learning biased and less robust representations, and will likely perform well only on the specific domain they were trained for [11].
Inspired by the superior capability of contrastive learning in providing better quality representations, and to learn more robust representations that are able to generalize to downstream datasets across different domains, we applied Supervised Contrastive Learning on a multi-domain dataset. We compared the transfer performance of the learned representations to a model trained using supervised cross-entropy loss on 7 downstream datasets across different domains. The cross-entropy model forms the baseline in our study. To evaluate transfer performance, a fixed feature linear evaluation protocol was used. The multi-domain dataset used was DomainNet [12]. To the best of our knowledge, this is the first research to apply Supervised Contrastive Learning on DomainNet and evaluate the transferability of the learned representations on downstream datasets across different domains. More specifically, we seek to answer the following question in this study: does supervised contrastive learning give better transfer performance than the commonly used cross-entropy loss when trained on a multi-domain dataset?
Figure 1 illustrates the DomainNet dataset consisting of images from multiple related domains. Each domain contains the same classes as the others. When training using the SupCon loss, the set of positives would contain images from the same class, but these images could be from different domains. These positives would be contrasted against the negatives, which are images of a different class. Using a multi-domain dataset can increase the data variety, potentially enriching the robustness of the representations learned and allowing knowledge transfer across domains. The main contributions of this paper can be summarized as follows:
* We applied Supervised Contrastive Learning on a multi-domain dataset by combining all images from the different domains in DomainNet.
* We compared the transfer performance of the representations learned from Supervised Contrastive Learning to the baseline: a model trained using the widely used cross-entropy loss.
* We evaluated the learned representations using the fixed feature linear evaluation protocol on 7 downstream datasets selected across different domains. Our results showed that the transfer performance of the Supervised Contrastive Learning model outperformed the baseline model on all the downstream datasets selected when trained on the combined DomainNet dataset.
## 2 Related Work
**Transfer Learning.** In general, for deep neural networks to perform well on a task, huge amount of data and compute power is required. As a result, using deep learning to train a different model for each task from scratch is expensive, in terms of the cost of obtaining a large amount of good quality task-specific data and the computing resources required. Transfer learning provides a solution to this, by training a model using a large-scale dataset (e.g. ImageNet), and then transferring the features learned to many downstream tasks. In this way, the downstream tasks can be trained with
Figure 1: A Dataset Consisting of Images from Multiple Domains (Domains and Images were selected from DomainNet)
significantly less data. Earlier research showed that using features from ImageNet-trained models to train Support Vector Machines (SVM) and logistic regression classifiers outperformed manual hand-engineered features [13; 14; 15]. Kornblith _et al_[16] showed that better ImageNet-trained models (in terms of accuracy) tend to provide better features for transfer learning and fine-tuning. Mensink _et al_[17] found that the success of transfer depends on the source and target task types, and that the source dataset should include the domain of the target dataset for better transfer performance. Most of the previous research studied transfer learning under the premise of a model trained using a dataset comprising only a single domain (e.g. ImageNet). Moreover, the models studied in previous work were trained using cross-entropy loss.
A similar study to our research was done by Islam _et al[9]_. They studied the transferability of representations learned using contrastive learning on downstream datasets across different domains. They combined supervised and self-supervised contrastive training objectives with cross entropy loss to see if it can improve transfer performance. However, the models were also pre-trained using ImageNet.
Different from those previous works, in this paper, we studied the transfer performance of a model trained with Supervised Contrastive Learning and compared it to a model trained with cross-entropy loss using a multi-domain dataset. The transfer performance was evaluated using linear evaluation with fixed feature extractor on 7 downstream datasets that were selected from different domains.
## 3 Methodology
### Analysis Setup
Given \(S\) number of source domains, \(D_{i}=\{(\textbf{x}_{i1},y_{i1}),(\textbf{x}_{i2},y_{i2}),...,(\textbf{x}_{iN},y_{iN})\}\), where \(i\in\{1,2,...,S\}\), each source domain has the same number of classes, and a marginal distribution of \(P_{i}\). We have a target domain \(D_{T}=\{(\textbf{x}_{T1},y_{T1}),(\textbf{x}_{T2},y_{T2}),...,(\textbf{x}_{TN},y_{TN})\}\), with marginal distribution of \(P_{T}\). It is not necessary that \(P_{i}=P_{T}\). The objective is to learn a target prediction function \(f(\cdot)\) using all the knowledge from the source domains \(D_{1},D_{2},...,D_{S}\). We used the linear evaluation over fixed feature extractor as the target prediction function in this study.
### Cross-Entropy Loss
Given an input image \(i\), \(K\) number of classes, and \(s_{ij}\) being the logit for class \(j\), where \(j\in\{1,...,K\}\), the multi-class cross-entropy loss is given by:
\[\mathcal{L}_{i}^{Cross-Entropy}=-\sum_{k=1}^{K}\mathds{1}_{t_{i}=k}\cdot log(\frac{e^{s_{ik}}}{\sum_{j^{\prime}=1}^{K}e^{s_{ij^{\prime}}}}) \tag{1}\]
where \(t_{i}\) is the ground truth label for input \(i\), and the indicator \(\mathds{1}_{t_{i}=k}\) evaluates to 1 if \(t_{i}=k\), and 0 otherwise. This can be easily extended to a mini-batch setting, and the loss for the mini-batch is given by:
\[\mathcal{L}^{Cross-Entropy}=\sum_{i=1}^{N}\mathcal{L}_{i}^{Cross-Entropy} \tag{2}\]
Figure 2: Overview of Training Process for: (a) SupCon Model; (b) Cross-Entropy Model.
where \(N\) is the number of samples in the mini-batch.
### Supervised Contrastive Loss
Given a mini-batch of \(N\) samples, \(\{\mathbf{x}_{l},y_{l}\}_{l=1}^{N}\), and the corresponding mini-batch of \(2N\) samples \(\{\tilde{\mathbf{x}}_{l},\tilde{y}_{l}\}_{l=1}^{2N}\), where \(\tilde{\mathbf{x}}_{2k}\) and \(\tilde{\mathbf{x}}_{2k-1}\) are the two randomly augmented images of sample \(\mathbf{x}_{k}\), and \(\tilde{y}_{2k}=\tilde{y}_{2k-1}=y_{k}\), where \(k\in\{1,...,N\}\). Denote the image encoder (e.g ResNet50) as \(g(\cdot)\) and MLP projection head as \(h(\cdot)\). \(\mathbf{z}=h(g(\mathbf{x}))\) represents the projected vector representation of an image. The supervised contrastive loss (SupCon) is defined as follows:
\[\mathcal{L}_{i}^{SupCon}=\frac{-1}{2N_{\tilde{y}_{i}}-1}\sum_{j=1}^{2N}\mathds{1}_{i\neq j}\cdot\mathds{1}_{\tilde{y}_{i}=\tilde{y}_{j}}\cdot log\frac{exp(sim(\mathbf{z}_{i},\mathbf{z}_{j})/\tau)}{\sum_{k=1}^{2N}\mathds{1}_{i\neq k}\cdot exp(sim(\mathbf{z}_{i},\mathbf{z}_{k})/\tau)} \tag{3}\]
\[\mathcal{L}^{SupCon}=\sum_{i=1}^{2N}\mathcal{L}_{i}^{SupCon} \tag{4}\]
where,
* \(sim(\cdot,\cdot)\) is the dot product between the two normalised vectors
* subscript \(i\) denotes the index of an augmented image from the augmented mini-batch
* subscript \(j\) denotes the index of all the other augmented images from the augmented mini-batch that is not \(i\) and has the same class as \(i\)
* subscript \(k\) denotes the index of an augmented image from the augmented mini-batch that is not \(i\)
* \(N_{\tilde{y}_{i}}\) is the total number of images in the mini-batch (before augmentation) that has the same class as \(i\), hence \(2N_{\tilde{y}_{i}}-1\) is the number of augmented images in the augmented mini-batch that has the same class as \(i\) (note \(2N_{\tilde{y}_{i}}-1\) is also the number of images in the set of positives, consisting of all images with same class)
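A minimal PyTorch sketch of the loss in Eqs. (3)-(4) is given below. It assumes the \((2N, d)\) batch of projected features \(\mathbf{z}\) and the matching label vector have already been built from two augmented views per image; it is an illustration of the loss, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.13):
    """z: (2N, d) projected features; labels: (2N,) class labels aligned with z."""
    z = F.normalize(z, dim=1)                               # dot product of normalized vectors = sim(.,.)
    logits = torch.matmul(z, z.T) / temperature             # (2N, 2N) pairwise similarities / tau

    n = z.shape[0]
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # the denominator in Eq. (3) runs over all k != i, so mask out the anchor itself
    logits = logits.masked_fill(self_mask, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    n_pos = pos_mask.sum(dim=1).clamp(min=1)                # 2 * N_{y_i} - 1 positives per anchor
    loss_per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / n_pos
    return loss_per_anchor.sum()                            # Eq. (4): sum over the 2N anchors
```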
### Datasets
**DomainNet.** DomainNet was used as the pre-training dataset. DomainNet [31] is a dataset consisting of common objects in six different domains: sketch, real, quickdraw, painting, infograph, clipart. Every domain includes 345 classes of objects, such as airplane, clock, flower and bus. For the purpose of this research, the images from all the different domains were combined into one huge dataset containing 409,832 images to form a multi-domain dataset. Note that only the training set split of DomainNet and the cleaned version was used1.
Footnote 1: The dataset is available from [http://ai.bu.edu/M3SDA/](http://ai.bu.edu/M3SDA/)
**Downstream datasets.** 7 downstream datasets were selected across different domains to evaluate the transferability of the representations learned. They were broadly categorized into natural, symbolic, illustrative and texture. The datasets were also selected with a mix of finer/coarser-grained labels, for e.g. CIFAR100, Aircraft and Flowers102 were
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline Category & Dataset & Train & Test & No. of & Evaluation \\ & & Size & Size & Classes & Metric \\ \hline \multirow{6}{*}{Natural} & CIFAR10 [18] & 50,000 & 10,000 & 10 & Top-1 Accuracy \\ \cline{2-6} & CIFAR100 [18] & 50,000 & 10,000 & 100 & Top-1 Accuracy \\ \cline{2-6} & Flowers102 [19] & 2,040 & 6,149 & 102 & Mean-Per-Class \\ \cline{2-6} & Aircraft [20] & 6,667 & 3,333 & 100 & Mean-Per-Class \\ \cline{2-6} & SVHN [21] & 73,257 & 26,032 & 10 & Top-1 Accuracy \\ \hline Illustrative & Kaokore [22] & 6542 & 813 & 8 & Top-1 Accuracy \\ \hline Texture & DTD [23] & 3760 & 1880 & 47 & Top-1 Accuracy \\ \hline \end{tabular}
\end{table}
Table 1: Summary of Datasets Used for Downstream Linear Evaluation
finer-grained while CIFAR10 is coarser-grained. Table 1 summarizes the downstream datasets. More details about the downstream datasets can be found in the supplementary materials.
### Experimental Setup
**SupCon Model.** The SupCon training process was split into two stages (the pre-training stage and linear evaluation stage). Figure 2(a) shows an overview of the training process. In the pre-training stage, ResNet50 was used as the base encoder and the MLP projection head contains two linear layers (first linear layer was 2048-d with ReLU activation and the second linear layer was 128-d). Temperature \(\tau=0.13\) was used in the SupCon loss. The SGD optimizer with momentum of 0.9 and weight decay of 1e-4 was used to train the model, with a learning rate of 0.1. A batch size of 1024 was used and the model was trained for 400 epochs. The learning rate was warmed up linearly for the first 10 epochs, and we applied a step decay with decay rate of 0.1 at epochs 250 and 350 respectively. Further details on the augmentations used can be found in the supplementary materials.
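For reference, the pre-training recipe above can be summarised in code. The sketch below is an illustrative paraphrase of the setup described in this section (not the released implementation); it assumes `torchvision` for the backbone, reuses the `supcon_loss` sketch from the loss section, and the helper `lr_at_epoch` and the commented training loop are our own.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

encoder = resnet50()                               # ResNet50 backbone, trained from scratch
encoder.fc = nn.Identity()                         # expose the 2048-d pooled features
projection = nn.Sequential(                        # MLP head: 2048 -> 2048 (ReLU) -> 128
    nn.Linear(2048, 2048), nn.ReLU(inplace=True), nn.Linear(2048, 128))

params = list(encoder.parameters()) + list(projection.parameters())
optimizer = torch.optim.SGD(params, lr=0.1, momentum=0.9, weight_decay=1e-4)

def lr_at_epoch(epoch, base_lr=0.1, warmup=10, milestones=(250, 350), gamma=0.1):
    """Linear warm-up for the first 10 epochs, then step decay (x0.1) at epochs 250 and 350."""
    if epoch < warmup:
        return base_lr * (epoch + 1) / warmup
    factor = 1.0
    for m in milestones:
        if epoch >= m:
            factor *= gamma
    return base_lr * factor

for epoch in range(400):
    for g in optimizer.param_groups:
        g["lr"] = lr_at_epoch(epoch)
    # for x1, x2, y in train_loader:              # two augmented views per image, batch size 1024
    #     z = projection(encoder(torch.cat([x1, x2])))
    #     loss = supcon_loss(z, torch.cat([y, y]), tau=0.13)
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()
```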
In the linear evaluation stage, to evaluate the learned representations, the MLP projection head was discarded and a new linear layer was attached on top of the frozen ResNet50. The linear layer essentially acts as a linear classifier, with an input size of 2048 and an output size equal to the number of classes in the downstream dataset to be trained on. The usual cross-entropy loss was used. No augmentation was applied to the training images other than resizing them to 32x32 pixels. The SGD optimizer with momentum of 0.9 and no weight decay was used to optimize the cross-entropy loss. The linear layer was trained for a total of 50 epochs with no learning rate decay. We swept the hyperparameter space (learning rate and batch size) for each downstream dataset as follows:
* Learning rate: 0.1, 0.01, 0.001
* Batch size: 32, 64, 128
The official training and test split was used for all the downstream datasets. If a training and validation split was available, they were used for the hyperparameter tuning, with the optimal learning rate and batch size selected from the model that gave the highest validation accuracy. If the dataset consists of multiple training and validation splits (e.g. DTD), we used the first split for hyperparameter tuning. If there was no training and validation split, we randomly selected 70% of the training set for training and the remaining for validation. After hyperparameter tuning, the training and validation set was combined and used for training with the optimal learning rate and batch size, and the test accuracy was used to evaluate the transfer performance.
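The linear-evaluation protocol and the grid search can likewise be written down compactly. The sketch below is an illustrative outline under the settings above (frozen encoder, 2048-d features, SGD with momentum 0.9 and no weight decay, 50 epochs, constant learning rate); the dataset objects and function names are placeholders for the actual data pipeline rather than the original code.

```python
import itertools
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

@torch.no_grad()
def accuracy(encoder, clf, dataset, device="cpu", batch_size=256):
    correct = total = 0
    for x, y in DataLoader(dataset, batch_size=batch_size):
        pred = clf(encoder(x.to(device))).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

def linear_eval(encoder, train_set, val_set, num_classes, lr, batch_size,
                epochs=50, device="cpu"):
    """Train a linear classifier on frozen features; return validation accuracy."""
    encoder.eval().to(device)
    for p in encoder.parameters():
        p.requires_grad_(False)
    clf = nn.Linear(2048, num_classes).to(device)
    opt = torch.optim.SGD(clf.parameters(), lr=lr, momentum=0.9, weight_decay=0.0)
    for _ in range(epochs):
        for x, y in DataLoader(train_set, batch_size=batch_size, shuffle=True):
            x, y = x.to(device), y.to(device)
            loss = nn.functional.cross_entropy(clf(encoder(x)), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return accuracy(encoder, clf, val_set, device)

def sweep(encoder, train_set, val_set, num_classes):
    """Grid search over the learning rates and batch sizes listed above."""
    grid = list(itertools.product([0.1, 0.01, 0.001], [32, 64, 128]))
    scores = {cfg: linear_eval(encoder, train_set, val_set, num_classes, *cfg) for cfg in grid}
    return max(scores, key=scores.get)   # (best_lr, best_batch_size) by validation accuracy
```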
**Cross-Entropy (Baseline) Model.** The training process for the Cross-Entropy model is illustrated in Figure 2(b). No MLP projection head was used and the training objective to be optimized was the cross-entropy loss. A linear layer was attached on top of the base encoder, with an input size of 2048 and an output size of 345, which corresponds to the number of classes in the combined DomainNet dataset. The same hyperparameters as for the SupCon model were used, except for a batch size of 512, and the learning rate was decayed at epochs 150, 250 and 350. In the linear evaluation stage, the same steps (including the hyperparameter tuning) as for the SupCon model were performed.
## 4 Results and Discussions
### Linear Evaluation over Fixed Feature Extractor
The linear evaluation was run over 5 runs using the optimal set of learning rate and batch size found for each dataset, and Table 2 reports the average test accuracy along with its standard deviation. It can be observed that the SupCon model outperformed the cross entropy model on all the downstream datasets for linear evaluation with a fixed feature extractor. The SupCon model obtained an average accuracy of 68.23% across the downstream datasets, while the cross entropy model obtained an average accuracy of 62.18%. The SupCon model performed, on average, 6.05% better than the cross entropy model on the 7 downstream datasets when trained with a multi-domain dataset.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Model & CIFAR10 & CIFAR100 & Aircraft & Flowers102 & SVHN & Kaokore & DTD & Mean \\ \hline SupCon & 92.31 \(\pm\) 0.04 & 75.74 \(\pm\) 0.05 & 36.53 \(\pm\) 0.27 & 75.09 \(\pm\) 0.09 & 70.03 \(\pm\) 0.05 & 76.63 \(\pm\) 0.2 & 51.26 \(\pm\) 0.13 & **68.23** \\ \hline Cross Entropy & 90.16 \(\pm\) 0.06 & 70.99 \(\pm\) 0.07 & 26.95 \(\pm\) 0.35 & 65.92 \(\pm\) 0.11 & 64.6 \(\pm\) 0.07 & 71.22 \(\pm\) 0.25 & 45.43 \(\pm\) 0.3 & 62.18 \\ \hline \end{tabular}
\end{table}
Table 2: Accuracy (%) for SupCon and Cross Entropy Model on the Downstream Datasets for Linear Evaluation. (Note: Mean-per-class accuracy is provided for Aircraft and Flowers102 while the rest are top-1 accuracy. Mean and standard deviation over 5-runs are provided.)
### Ablation Studies
**Effect of Temperature \(\tau\).** The temperature (\(\tau\)) parameter used in the SupCon loss is adjustable and, as noted in [4], smaller \(\tau\) values can benefit training more, but a very small value of \(\tau\) can lead to numerical instability. A model trained with the optimal \(\tau\) can improve its performance. As such, we studied the effect of \(\tau\) on the transfer accuracy across the 7 downstream datasets for the SupCon model. Keeping the other hyperparameters constant, we performed the experiments with different \(\tau\) values. Figure 3 shows the plot of the mean accuracy against temperature. Table 3 shows the average accuracy and standard deviation (over 5 runs) obtained for each dataset. From Figure 3, a similar trend to [4] was observed, where at the lower temperature values of 0.04 and 0.07, the mean accuracy was lower than that of higher temperature values like 0.10 and 0.13. However, it is also observed that when the temperature is increased further to 0.17, there is a drop in the mean accuracy. This implies that it is important to select an optimal temperature value that can benefit the training process so that the representations learned give better transfer performance.
**Effect of Augmentations.** As augmentations generally play an important role in contrastive learning [1; 4; 6], we studied the effect of augmentations on the transfer accuracy across the 7 downstream datasets for the SupCon model. In particular, we performed further experiments with AutoAugment (ImageNet policy), RandAugment and Stacked RandAugment. All the other hyperparameters were kept constant. Table 4 shows the average accuracy and standard deviation (over 5 runs) obtained for each dataset. It can be observed that SimAugment and Stacked RandAugment, which are stronger augmentation strategies, performed better than AutoAugment (ImageNet policy) and RandAugment in terms of the mean accuracy obtained across all the downstream datasets, except for SVHN. We conjecture that in the SVHN dataset, the house numbers are often sheared or skewed, and the transformations in AutoAugment (ImageNet policy) and RandAugment include shearing, translation and rotation, which could potentially boost the transfer performance for SVHN.
**Effect of Base Encoder.** The base encoder acts as a feature extractor to extract useful representations of the underlying data. As such, the performance of the model is highly dependent on whether the base encoder can learn meaningful representations that can be transferred to downstream datasets. Hence, we decided to study the effect of using a deeper base encoder, ResNet101, on the transfer performance, with the premise that a deeper network has a larger capacity and can generalize better. Again, all the other hyperparameters were kept constant. Table 5 shows the results obtained. However, we do not notice an improvement in the mean accuracy across the downstream datasets when ResNet101 was used. We conjecture that the combined DomainNet dataset that was used for pre-training was not large enough, hence
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Temperature (\(\tau\)) & CIFAR10 & CIFAR100 & Aircraft & Flowers102 & SVHN & Kaokore & DTD & Mean \\ \hline
0.04 & \(90.66\pm 0.06\) & \(74.07\pm 0.1\) & \(28.51\pm 0.31\) & \(70.91\pm 0.09\) & \(65.41\pm 0.04\) & \(74.15\pm 0.18\) & \(48.25\pm 0.25\) & 64.56 \\ \hline
0.07 & \(90.2\pm 0.01\) & \(72.48\pm 0.04\) & \(26.53\pm 0.31\) & \(67.06\pm 0.1\) & \(64.53\pm 0.04\) & \(74.07\pm 0.13\) & \(47.47\pm 0.05\) & 63.19 \\ \hline
0.10 & \(91.9\pm 0.02\) & \(76.11\pm 0.04\) & \(35.27\pm 0.1\) & \(75.2\pm 0.09\) & \(70.01\pm 0.12\) & \(77.37\pm 0.13\) & \(49.83\pm 0.08\) & 67.95 \\ \hline
0.13 & \(92.31\pm 0.04\) & \(75.74\pm 0.05\) & \(36.53\pm 0.27\) & \(75.09\pm 0.09\) & \(70.03\pm 0.05\) & \(76.63\pm 0.2\) & \(51.26\pm 0.13\) & **68.23** \\ \hline
0.17 & \(91.85\pm 0.03\) & \(74.49\pm 0.07\) & \(31.98\pm 0.19\) & \(70.89\pm 0.12\) & \(67.55\pm 0.01\) & \(73.8\pm 0.08\) & \(48.81\pm 0.08\) & 65.63 \\ \hline \end{tabular}
\end{table}
Table 3: Accuracy (%) for SupCon Models Trained with Different \(\tau\) Values. (Note: Mean-per-class accuracy is provided for Aircraft and Flowers102 while the rest are top-1 accuracy. Mean and standard deviation over 5-runs are provided. \(\tau=0.13\) corresponds to the original model.)
Figure 3: Plot of Mean Accuracy over all the Downstream Datasets against Temperature
even with a deeper network that has larger capacity, it was not able to learn better representations. Cross-referencing a study done by Kolesnikov _et al._[24] on the effect of model capacity and dataset size on transfer performance, it was found that the benefit from a larger model diminishes given a constant number of pre-training images, and that solely increasing model capacity may degrade performance. Another plausible explanation could be the small resolution (32x32) that the images were resized to during pre-training. A lower resolution may cause a loss of semantic information, which a deeper network would not be able to take advantage of. Further studies on the effect of larger models on transfer performance may be required.
### Limitations
Due to GPU memory constraints, and because a large batch size was required for good performance with the Supervised Contrastive Learning framework, the images in the combined DomainNet dataset were resized to a small resolution of 32x32 during pre-training. We acknowledge that this could affect the results, as the lower image resolution would remove some semantic information that the model could potentially leverage to learn better representations and boost transfer performance. It remains unknown whether a larger image size such as 224x224 or 256x256 used in pre-training on the DomainNet images would provide better transfer performance.
## 5 Conclusion
In conclusion, to answer the research question posed in the introduction, we empirically showed that supervised contrastive learning can give better transfer performance than the cross entropy loss when trained on the multi-domain DomainNet dataset. Across the 7 downstream datasets that were selected, the SupCon model outperformed the cross entropy model in linear evaluation with a fixed feature extractor by 6.05% on average. The results from this study suggest that the representations learned from Supervised Contrastive Learning could perhaps be more robust and capture more domain-invariant features that are transferable to downstream datasets across different domains.
We would like to highlight an implication of this research for deep learning in the real world, where data distribution shift is a common problem. Data distribution shift refers to the phenomenon where the joint distribution of the inputs and outputs differs between the training and test stages. To give an example, an autonomous car that is trained on a dataset containing images of the icy roads in Norway would likely not perform well when put to the test on the roads of Singapore. In safety-critical systems such as autonomous driving, this could potentially result in catastrophic consequences. As such, it is important that we perform more research into building robust representations that can perform well across domains in the real world. Although the results of this research are far from that ultimate goal, we hope that this will inspire further research into contrastive training objectives and frameworks that are able to learn robust representations to be applied to downstream tasks across multiple domains.
|
2309.14773 | Relativistic hydrodynamics with phase transition | Assessing the applicability of hydrodynamic expansions close to phase
transition points is crucial from either theoretical or phenomenological points
of view. We explore this within the gauge/gravity duality, using the
Einstein-Klein-Gordon model, a bottom-up string theory construction. This model
incorporates a parameter, $B_4$, that simulates different types of phase
transitions in the strongly coupled field theory existing at the boundary. We
thoroughly examine the thermodynamics and dynamics of time-dependent,
linearized perturbations in the spin-2, spin-1, and spin-0 sectors. Our
findings suggest that "hydrodynamic series breakdown near transition points" is
valid exclusively for second-order phase transitions, not for crossovers or
first-order phase transitions. Additionally, we observe that the
high-temperature and low-temperature limits of the radius of convergence for
the hydrodynamic series ($q^2_c$) are equal. We also discover that the
relationship $(\text{Max}\vert q^2_c \vert)_{\text{spin-2}} < (\text{Max}\vert
q^2_c\vert)_{\text{spin-0}} < (\text{Max}\vert q^2_c \vert)_{\text{spin-1}}$ is
consistent for different spin sectors, regardless of the phase transition type.
At the chaos point, we observe the emergence of pole-skipping behavior for both
gravity and scalar perturbations at $\omega_n = - 2\pi T n i$. Lastly,
comparing the chaos momentum with $q^2_c$, we find that $q^2_{ps} < q^2_c$,
except for extremely high temperatures. | F. Taghinavaz | 2023-09-26T09:16:39Z | http://arxiv.org/abs/2309.14773v2 | # Relativistic hydrodynamics with phase transition
###### Abstract
Examining the validity of hydrodynamics series near the phase transition points is of great importance either in the theoretical or phenomenological point of view. We investigate this in the framework of gauge/gravity duality by using the Einstein-Klein-Gordon model which is a bottom-up string theory construction and a parameter \(B_{4}\) in this model is responsible for mimicking the various kinds of phase transition of the strongly coupled field theory that lives on the boundary. We study both the thermodynamics and dynamics of time-dependent linearized perturbations on top of this background for the spin-2, spin-1 and spin-0 sectors of perturbations extensively. We observe that the paradigm "_breakdown of the hydrodynamic series near the transition points_" seems to be (not true, true, not true) for the (crossover, second-order, first-order) phase transitions. We also observe that the high-temperature and low-temperature limits of the \(q_{c}^{2}\) (the radius of convergence of the hydrodynamic series) are equal which can be a sign of the same equality for using the low-momentum expansions. Moreover, we observe that the relation \((\mathrm{Max}|q^{2}|_{c})_{\text{spin-2}}<(\mathrm{Max}|q^{2}|_{c})_{\text{ spin-1}}\) does hold between the different spin sectors irrespective of the kinds of phase transition. We find that at the chaos point, the phenomenon of pole-skipping emerges for gravity perturbations as well as for scalar perturbations at \(\omega_{n}=-2\pi Tni\). Also, we compare the chaos momentum with \(q_{c}^{2}\) and find \(q_{ps}^{2}<q_{c}^{2}\) except at very high temperatures.
###### Contents
* 1 Introduction
* 2 Summary of main results and method
* 3 Holographic model
* 3.1 Action
* 3.2 Thermal states
* 3.3 Thermodynamics
* 4 Linearized equations
* 5 Some remarks
* 6 Crossover phase transition
* 6.1 Spin-2 sector
* 6.2 Spin-1 sector
* 6.3 Spin-0 sector
* 7 Second-order phase transition
* 7.1 Spin-2 sector
* 7.2 Spin-1 sector
* 7.3 Spin-0 sector
* 8 First-order phase transition
* 8.1 Spin-2 sector
* 8.2 Spin-1 sector
* 8.3 Spin-0 sector
* 9 Pole-skipping
* 10 Conclusion
* A Holographic Renormalization
* B Near-horizon expansions
C Low temperature black holes D Radius of convergence in high temperatures D.1 Large momenta
## 1 Introduction
Our knowledge of strongly interacting matter in extreme conditions has been growing in the last decades. Heavy-ion experiments at relativistic energies can make a hot and dense quark matter that the composed particles can weakly interact with each other. This is the well-known Quark-Gluon Plasma (QGP) phase that is reachable in the early universe, inside compact stars or at terrestrial experiments [1; 2; 3]. In the limit of nowadays energies, it has been confirmed that the QGP shows fluid-like or collective behaviors and relativistic hydrodynamics (RH) as an effective theory is a very powerful tool to describe such collective motions of heavy-ion particles [4].
The RH has earned great achievements in experimental events. For instance, the flow behaviors of emitted particles [5; 6] or the particle yields seen at the detectors can be completely described by using the RH hybrid codes [7; 8]. Furthermore, in the collision of light particles such as in proton-proton or even in the electron-electron, it is believed that a kind of collective behavior can be inferred [9; 10; 11]. All of this evidence has marked the "unreasonable effectiveness" of the RH [12].
On the other hand, experimental research has shown that the QGP is a strongly interacting matter that the perturbative approaches are not valid anymore [13]. Therefore, non-perturbative methods such as the gauge/gravity duality are helpful to look at various properties of the QGP. One famous example of this duality is the AdS/CFT correspondence in which the type IIB supergravity on the AdS\({}_{5}\times\)S\({}^{5}\) is confirmed to be dual to the \(4-\)dimensional supersymmetric Yang-Mills (SYM) theory living on the boundary of the AdS\({}_{5}\)[14; 15; 16]. Actually, this duality is a strong-weak duality that maps a strongly-coupled quantum gauge field theory to a weakly-coupled classical gravity in one higher dimension. One of the remarkable outcomes of the AdS/CFT is the prediction for the universal ratio of shear viscosity to entropy density, \(\eta/s=1/(4\pi)\)[17; 18], which agrees very well with the experimental data [5].
Theoretically, the RH approach is an effective field theory that works in the long wave-length regime of momenta in which each conserved current can be expressed in terms of a gradient expansion [19; 20]. Transport coefficients are the coefficients of this series that contain valuable information about the underlying microscopic theory. One
can ask about the convergence properties of this series and how a possible divergence could potentially alter the physical results. A piece of work has been done to shed more light on this topic. For example, the divergences of the RH series in the real space [21] can be handled by using the Borel-Pade techniques [22; 23; 24; 25; 26]. This can say more about the origin of these divergent points and even the region in which the RH series might be convergent. On the other hand, the convergence of the gradient expansion in real space can be related to the radius of convergence of RH in momentum space [27; 28].
One of the important guidelines for today's strong interaction research is to probe the near-critical points of the QCD phase diagram [29; 30]. This is a very challenging regime since so many degrees of freedom have entered the dynamics. Fluctuations play an important role near the transition points and the standard RH has to be modified near the critical points [31; 32]. Apart from the dynamical properties, thermodynamics is not very well-known there because the lattice QCD simulations are forbidden due to the sign problem and one has to resort to the effective models [33; 34]. Much effort has been focused on this topic so that the BEST (Beam Energy Scan Theory) collaboration is organized at the RHIC laboratory and one of their primary goals is to examine the QCD phase diagram extensively and meticulously [35].
Theoretically, it is very well-motivated and valuable to investigate the RH series near the critical points and see whether its validity is affected on the critical points or during the phase transition. In the current paper, we investigate the thermodynamics as well as the dynamics of time-dependent and linearized perturbations on top of a holographic model, the so-called Einstein-Klein-Gordon model [36]. This is a phenomenological string theory model and by the AdS/CFT dictionary translates to a strongly coupled scalar field theory living on the boundary of spacetime. There is a family of superpotentials parameterized by \(B_{4}\) in which evaluating the thermal variables such as pressure or entropy of the dual theory and taking the boundary limit, reveals that the spectra \(B_{4}<0\) can give the desired phase transition. For example, \(B_{4}=0\) mimics the crossover phase transition of the boundary theory, \(B_{4}=-0.0098\) imitates the second-order phase transition and any \(B_{4}<-0.0098\) can provide us with the first-order phase transition. This is very remarkable since according to the AdS/CFT dictionary turning on the small perturbations on top of this background is equivalent to studying the RH for systems having a kind of phase transition. This holographic model does not have any known analytical solution, so numerical solutions are used. We fix the parametrization freedom by choosing the Gubser gauge \(u:=\phi(u)\) that leads to a free parameter \(\phi_{H}\), the horizon value of the scalar field [36]. This free parameter plays the role of an external source thereby all quantities depend on it and label the many-body properties of the theory.
We completely review the main results in the next section and to avoid being repetitive, we here don't mention them again. The organization of this paper is as
follows. In section 2 we review the results coming from the theoretical or numerical computations. After that in section 3 we provide the ingredients of the Einstein-Klein-Gordon action including the solutions for thermal states and thermodynamics derived from this background. To complete this section and to avoid the mathematical complexities, we provide details in separate Appendices A, B. Also, in Appendices C and D we investigate the low and high-temperature limits of the solutions and perturbations. Then in section 4 we give the linearized equations of perturbations in the master formula framework. Next, we elaborate on some important points in section 5. Having said this, in section 6 we discuss the QNM spectra and radius of convergence for the crossover phase transition. Distinct subsections are devoted to describing the results in the spin-2, spin-1 and spin-0 sectors. In what follows, in section 7 we describe the numerical outputs for the second-order phase transition in which different subsections are responsible for giving the details of each spin sector. Similar work is done for the first-order phase transition in section 8. In section 9 the pole-skipping feature of this model in detail is studied and we compare the radius of convergence with the chaos momentum. Eventually, we conclude with a summary and an outlook to further directions in section 10.
## 2 Summary of main results and method
The great advantage of the present paper is to explore the hydrodynamic series in various kinds of phase transition. Due to the diversity and intricacy of the outcomes, we review the upcoming results in this section. For more convenience, we itemize them below.
* **Action:** In the gauge/gravity duality, we use the Einstein-Klein-Gordon(EKG) model which is a bottom-up string theory construction to mimic the crossover, second-order and first-order phase transition of the strongly coupled boundary scalar field theory [37]. This model explicitly violates the conformal symmetry due to a non-vanishing expectation value of the scalar field. Via the AdS/CFT dictionary, we obtain the energy-momentum tensor and one-point function of the scalar field on the boundary [38; 39; 40] and after that the kinds of phase transition can be judged. There is a parameter \(B_{4}\) in the EKG action that enables us to get various phase transitions, in such a way that \(B_{4}=0\) gives the crossover, \(B_{4}=-0.0098\) gives the second-order, and \(B_{4}=-0.02\) gives the first-order phase transition. Transition points of the model are given in Tab. 1. We are working in the Gubser gauge to fix the residual reparametrization freedom. Solutions to the equations of motion are solved numerically and then thermodynamic variables such as pressure "\(p\)", speed of sound "\(c_{s}^{2}\)" and
entropy "\(s\)" as well as the transport coefficients \((\eta,\xi)\) are derived. We always find \(\eta/s=1/(4\pi)\) and \(\xi/\eta\) has peaks around the transition point according to the kinds of phase transition. The first-order phase transition is notable since around the transition point there is an instability that reflects the possibility of tunneling among different thermodynamic states. This instability can be seen as negative values for \(c_{s}^{2}\) or twisting in \(\xi/\eta\) plots. Likewise, the near-horizon expansion and low-temperature solutions are provided in separate Appendices B and C.
* **Linearized Equations:** Linearized equations for gravity and scalar perturbations in different spin sectors are derived according to the master formula approach [41; 42] in the Eddington-Finkelstein coordinates. This eases the numerical process of the QNMs.
* **Numerical method:** To derive the QNMs, we benefit from the spectral method with Chebyshev polynomials as basis functions [43]. Fast convergence of the solutions and rapid decay of errors are some benefits of this method. To obtain accurate outputs, we perform the numerical tasks by two different discretization points and choose the ones with less than \(0.1\%\) discrepancy. The results include the hydro modes (the modes at low momenta) and non-hydro modes (the modes besides the hydro modes), the latter are very important near the collision points.
* **Radius of convergence:** The main motivation of this work is to obtain the radius of convergence of the hydro series for systems with phase transitions. To do this, we utilize a method that momenta are analytically continued to the complex plane (\(q^{2}=|q^{2}|e^{i\theta}\)) in the QNMs spectra and find the smallest points where the modes at a certain \(\theta\) and \(|q^{2}|\) start to collide to each other [44; 45; 46].
* **Crossover results:*
\(\bullet\) **Spin-2:** We find that for real \(q^{2}\) the modes don't mix, i.e. the high-temperature (\(T\)) and low-temperature limits of the modes are the same. The collisions that determine the radius of convergence occur at negative values of \(q^{2}\), i.e. at \(\theta=\pi\). The spin-2 sector doesn't have the hydro mode and the collision occurs between the two lowest non-hydro modes. Near the transition point of the crossover phase, \(q^{2}_{c}\) (the critical value, i.e. the smallest one that determines the radius of convergence) increases. We observe a high-\(T\)/low-\(T\) duality in the \(q^{2}_{c}\) figures, which states the same equality for using the hydro series in these two regimes. The Figs. 5-8 belong to this part.
\(\bullet\) **Spin-1:** For real \(q^{2}\) the high-\(T\) and low-\(T\) modes at different levels don't mix. However, there is a possibility to collide in real and positive \(q^{2}\)s. In the spin-1 sector, the collisions are always between the hydro modes and the lowest gravity non-hydro modes because the perturbations include only the gravity fluctuations. The \(q_{c}^{2}\) increases in the vicinity of the transition point and high-\(T\)/low-\(T\) limits in \(q_{c}^{2}\) figures are equal. The Figs. 9-13 belong to this part. \(\bullet\) **Spin-0:** Here, the modes appear in pairs because the gravity and scalar field perturbations are coupled to each other. Apart from that at \(q^{2}=0\), the modes' numbers are even since equations are decoupled according to the Eqs. (4.11) and (4.12). However, at \(q^{2}\neq 0\) the modes' numbers are odd due to the addition of the hydro mode. Again, the high-\(T\) and low-\(T\) limits of modes are the same. Determining collisions for \(q_{c}^{2}\) at low-\(T\) and high-\(T\) occur between the hydro mode and the closest scalar non-hydro mode, while in the middle points, it happens between the hydro and nearest gravity non-hydro mode. We observe the high \(T\)/low \(T\) duality in \(q_{c}^{2}\) figures and the increase of the \(q_{c}^{2}\) near the transition point. We believe in Eq. (6.1) to hold in the crossover part. The Figs. 14-16 belong to this part. \(\bullet\) **Summary of crossover results: We believe that the paradigm "breakdown of the hydro series near the transition points" seems to be not true in the crossover phase transition. Eq. (6.1) is a new finding. The high-\(T\)/low-\(T\) duality in \(q_{c}^{2}\) curves is observed. The high-\(T\) and low-\(T\) limits of the modes at real \(q^{2}\) are the same.**
\(\bullet\) **Second-order results:**
\(\bullet\) **Spin-2:** For real \(q^{2}\) the second lowest mode goes to infinity at low-\(T\), while the rest approaches the 5D-AdS black hole results. This happens such that \(\Omega_{i+1}(T\gg T_{c})\rightarrow\Omega_{i}(T\ll T_{c})\) for \(i>2\). No collision is happening between the modes at positive \(q^{2}\) and the first collision is between the non-hydro modes at \(\theta=\pi\). Near the transition point \(q_{c}^{2}\) decreases. The Figs. 17-20 are devoted to this part. \(\bullet\) **Spin-1:** The property of escaping mode discussed above is observed in this subsection. Collisions happen for positive \(q^{2}\) among the hydro and the nearest gravity non-hydro modes. We detect a decrease of \(q_{c}^{2}\) near the transition point of the second-order phase transition and high-\(T\)/low-\(T\) values of \(q_{c}^{2}\) are the same. The Figs. 21-25 belong to this part.
\(\bullet\) **Spin-0:** The escaping modes are doubled due to the coupled nature of the equations in the spin-0 sector. At real and \(q^{2}\neq 0\) the high-\(T\)/low-\(T\) limit of the hydro modes remains the same. Determining collisions for the radius of the convergence is similar to the crossover spin-0 case. At very low temperatures and except for the escaping modes, the gravity and scalar non-hydro modes come in pairs with equal frequencies. We see a decrease of \(q_{c}^{2}\) for the spin-0 sector near the transition point. We observe Eq. (6.1) to hold among the second-order results. The Figs. 26-28 belong to this part. **Summary of the second-order results: We believe that the paradigm "break-down of the hydro series near the transition points" seems to be true in the second-order phase transition. Eq. (6.1) does hold. The high-\(T\)/low-\(T\) duality in \(q_{c}^{2}\) curves is observed. The high-\(T\) and low-\(T\) limits of the hydro modes at real \(q^{2}\) are the same. For real and positive \(q^{2}\) some modes escape to infinity at low temperatures.**
\(\bullet\) **First-order results:**
* **Spin-2:** The number of escaping modes is greater than the second-order ones. Similar to the former spin-2 results, the collision is happening for negative \(q^{2}\) between the two non-hydro modes. The first-order transition is marked with three \(\phi_{H}\)s (the horizon value of the scalar field which is the only free parameter of the theory). Near the largest one \(q_{c}^{2}\) increases, while at rest no clear words can be said. However, the high-\(T\)/low-\(T\) equality in \(q_{c}^{2}\) curves is seen. The Figs. 29-32 belong to this part. \(\bullet\) **Spin-1:** The hydro modes given at \(q^{2}\neq 0\) are the same at low and high temperatures. The collision is happening for positive \(q^{2}\) between the gravity hydro and the smallest gravity non-hydro mode. Near the largest \(\phi_{H}^{c}\), \(q_{c}^{2}\) increases and at rest no clear behavior is seen. The Figs. 33-37 belong to this part. \(\bullet\) **Spin-0:** The escaping mode feature is very complex for real \(q^{2}\). However, around \(3\lesssim\phi_{H}\lesssim 8\) the real part of the hydro modes is omitted and the imaginary part is split. This is a very well-known property of mode collision in hydrodynamics. This lets that \(q_{c}^{2}\) to be happened for positive values. Similar to the previous first-order results, we see an increase near the largest \(\phi_{H}^{c}\) in \(q_{c}^{2}\). The Figs. 38-40 belong to this part. **Summary of the first-order results: We believe that the paradigm "break-down of the hydro series near the transition points" seems to be not true
in the first-order phase transition. Eq. (6.1) does hold. The high-\(T\)/low-\(T\) duality in \(q_{c}^{2}\) figures is confirmed again. The escaping mode feature for real \(q^{2}\) at low temperatures is very bold.
* **Results for pole-skipping:** We observe that at the chaos point, namely \(\omega=i\lambda_{L}=2\pi Ti\) and \(k_{\star}=ik_{0}=i\lambda_{L}/v_{B}\) the "\(vv\)" component of Einstein's equation in the spin-0 sector becomes identically zero which is a sign of multivaluedness of \(G_{T_{0}T_{0}}^{R}(\omega,k)\). Having multiple results around \(\omega_{n}=-2\pi Tni\) is seen for scalar field perturbations. For the sake of clarifying, we compare the \(q_{ps}^{2}=k_{\star}^{2}/(2\pi T)^{2}\) with \(q_{c}^{2}\) and find that at high temperatures there is room for \(q_{ps}^{2}=q_{c}^{2}\). Besides that, always \(q_{ps}^{2}<q_{c}^{2}\) which marks hydro validity even on the chaos point. We find that in the region \(1\lesssim\phi_{H}\lesssim 6\), the inequality \((q_{ps}^{2})_{\rm FO}<(q_{ps}^{2})_{\rm SO}<(q_{ps}^{2})_{\rm CO}\) does hold. \(q_{ps}^{2}\) is able to find the location of the transition point.
* **Further results:** We find analytic expressions for temperature and entropy at very low temperatures and show how \(\phi_{H}\) can violate the conformal symmetry. Moreover, we investigate the \(q_{c}^{2}\) in the spin-2 sectors at very high temperatures and find \(q_{c}^{2}=1.486\,e^{i\theta_{c}}\) with \(\theta_{c}=0.98\pi\) regardless of the kinds of phase transition. We observe that for very large \(|q^{2}|\) the lowest non-hydro scalar modes approach the sound mode of the 5D-AdS-Schwarzschild black hole.
## 3 Holographic model
In this work, we consider deformed holographic Conformal Field Theories(CFTs) that are dual to five-dimensional Einstein gravity with a minimally coupled massive self-interacting scalar field. In this section, we summarize the gravity side of the holographic model that we study in the rest of this work.
In section 3.1 we present the Einstein-Klein-Gordon (EKG) action and the explicit form of the scalar potential. In section 3.2 we discuss solutions corresponding to thermal states in the dual field theory. For completeness, in Appendix A we derive the explicit form of the counter-term that renders the n-point functions of local boundary operators well-defined and explicitly derive the one-point functions of the boundary theory. Also, in Appendix B we provide the near-horizon solutions of bulk fields. Moreover, in Appendix C we derive thermal gas solutions, corresponding to vacuum solutions of the boundary theory. Then we present a method to compute the low-temperature solutions perturbatively.
### Action
The total action of the EKG model is given as follows [36; 37]
\[S_{\rm tot}=\frac{1}{16\pi G_{5}}\int_{\cal M}\mathrm{d}^{5}x\,\sqrt{-g}\left(R-\frac{1}{2}\,\partial_{M}\phi\,\partial^{M}\phi-V(\phi)\right)+S_{\rm GH}+S_{\rm ct}, \tag{3.1}\]
where \(S_{\rm GH}\) is the Gibbons-Hawking boundary term, \(S_{\rm ct}\) is the counter-term constructed in Appendix A, and \(V(\phi)\) is the self-interacting scalar potential whose quartic term contains the parameter \(B_{4}\).
In what follows, we will see that different kinds of phase transitions are reachable including the first-order, second-order, and crossover by varying \(B_{4}\). Linearized perturbations are studied on top of these backgrounds and hence the hydrodynamics for various thermodynamics can be done.
### Thermal states
The equations of motion for the metric and scalar fields derived from the action (3.1) can be written as
\[R_{MN}-\frac{1}{2}\partial_{M}\phi\,\partial_{N}\phi-\frac{1}{3} V(\phi)g_{MN} =0,\] \[\frac{1}{\sqrt{-g}}\partial_{M}\left(\sqrt{-g}g^{MN}\partial_{N} \phi\right)-\frac{\partial V(\phi)}{\partial\phi} =0. \tag{12}\]
To describe a more general (thermal) state, we make the following ansatz for the metric in the Eddington-Finkelstein coordinates [36]
\[\mathrm{d}s^{2}=e^{2A(u)}\left(-H(u)\,\mathrm{d}t^{2}+\mathrm{d}x^{2}+\mathrm{ d}y^{2}+\mathrm{d}z^{2}\right)+2e^{A(u)+B(u)}\,\mathrm{d}u\,\mathrm{d}t. \tag{13}\]
\(A,B\) and \(H\) are functions of the radial coordinate \(u\) only. This ansatz possesses solutions that are \(SO(3)\) invariant (invariant in the \(x,y,z\) directions). The black-brane geometry corresponds to a simple zero of \(H(u)\) at some \(u=u_{h}\), with a regular event- and Killing horizon at \(u=u_{h}\). This leads to finite temperature and entropy density of the dual field theory states. The boundary is located at \(u=0\).
It is noteworthy that the ansatz (13) has residual gauge freedom, namely reparametrizations of the radial coordinate. We fix this freedom by using the Gubser gauge [36], where the radial coordinate is identified with the corresponding value of the scalar field
\[u:=\phi(u). \tag{14}\]
Hereafter, we provide the solutions in the Gubser gauge. According to the ansatz (13), the equations of motion for \(A,B\) and \(H\) are
\[H\left(B^{\prime}-4A^{\prime}\right)-H^{\prime}+e^{2B}V^{\prime} =0, \tag{15}\] \[6\left(A^{\prime}B^{\prime}-A^{\prime\prime}\right)-1 =0,\] (16) \[H^{\prime\prime}+(4A^{\prime}-B^{\prime})H^{\prime} =0,\] (17) \[6A^{\prime}H^{\prime}+H\left(24A^{\prime 2}-1\right)+2e^{2B}V =0, \tag{18}\]
where the prime denotes differentiation with respect to \(u\). We can recast these equations into a single master equation for \(G(\phi)\equiv A^{\prime}(\phi)\)
\[18G^{2}G^{\prime}V^{\prime\prime}+9GV^{\prime}\left(G^{\prime} \left(6G^{\prime}+8G^{2}+1\right)-2GG^{\prime\prime}\right)+V\left(G^{\prime} \left(6G^{\prime}+24G^{2}+1\right)-6GG^{\prime\prime}\right)=0. \tag{19}\]
This is very beneficial because, for a given potential \(V(\phi)\), a solution of (3.13) allows us to express \(A,B\), and \(H\) in terms of the function \(G\). According to Eqs. (3.9), (3.10), and (3.12), the solutions for \(A,B\) and \(H\) can be obtained as
\[A(\phi) =A(\phi_{h})+\int\limits_{\phi_{h}}^{\phi}\mathrm{d}\phi^{\prime} G(\phi^{\prime})\,,\] \[B(\phi) =B(\phi_{h})+\log\frac{G(\phi)}{G(\phi_{h})}+\int\limits_{\phi_ {h}}^{\phi}\frac{\mathrm{d}\phi^{\prime}}{6G(\phi^{\prime})}\,,\] \[H(\phi) =-\frac{e^{2B(\phi)}(V(\phi)+3G(\phi)V^{\prime}(\phi))}{3G^{ \prime}(\phi)}\,, \tag{3.14}\]
where \(\phi_{h}\) stands for the horizon value of the scalar field.
For certain simple choices of \(V\) it is possible to solve the second order ordinary differential equation (3.13) in the closed form [36], but in general the master equation (3.13) needs to be solved numerically. In that case, it is useful to extract the divergent asymptotic behavior of the master field \(G\) inherited from the asymptotic behavior of \(A\)
\[A(\phi)=\frac{\log(\phi)}{\Delta-4}+\tilde{A}(\phi),\quad B(\phi)=\log\left( \frac{1}{\phi(4-\Delta)}\right)+\tilde{B}(\phi),\quad G(\phi)=\frac{1}{(\Delta -4)\phi}+\tilde{G}(\phi), \tag{3.15}\]
where \(\tilde{A}(\phi),\tilde{B}(\phi)\) and \(\tilde{G}(\phi)\) remain finite at the boundary \(\phi\to 0\). As discussed in [36] such a near boundary behavior of the fields corresponds to a relevant deformation of the CFT\({}_{4}\), namely
\[\mathcal{L}=\mathcal{L}_{\text{CFT}_{4}}+j^{4-\Delta}\mathcal{O}_{\phi}. \tag{3.16}\]
To find the equation of state (EOS), we set the source \(j\) of the dual operator to one in units of AdS radius, \(j=1\). This leaves the horizon value of the scalar field \(\phi_{h}\) (which is equal to the horizon radius in the Gubser gauge) as the only free parameter.
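In practice, the master equation (3.13) is integrated numerically from the horizon towards the boundary. The following is a minimal Python/SciPy sketch of this step (illustrative only, not the code used for the results of this paper): the two initial conditions follow from regularity at the horizon, namely \(G(\phi_{h})=-V(\phi_{h})/3V^{\prime}(\phi_{h})\) from \(H(\phi_{h})=0\) in Eq. (3.14) and \(G^{\prime}(\phi_{h})=\left[V(\phi_{h})V^{\prime\prime}(\phi_{h})/V^{\prime}(\phi_{h})^{2}-1\right]/6\) from the near-horizon expansion of Appendix B, while the potential and its derivatives must be supplied by the user in the \(B_{4}\)-dependent form fixed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def master_rhs(phi, y, V, dV, d2V):
    """Master equation (3.13) solved for G''(phi); y = (G, G')."""
    G, dG = y
    num = (18.0 * G**2 * dG * d2V(phi)
           + 9.0 * G * dV(phi) * dG * (6.0 * dG + 8.0 * G**2 + 1.0)
           + V(phi) * dG * (6.0 * dG + 24.0 * G**2 + 1.0))
    den = 6.0 * G * (3.0 * G * dV(phi) + V(phi))   # vanishes exactly at phi = phi_h
    return [dG, num / den]

def solve_G(V, dV, d2V, phi_h, eps=1e-6, phi_min=1e-4):
    """Integrate G = A' from just outside the horizon towards the boundary phi -> 0."""
    # regularity at the horizon fixes both initial conditions:
    G_h = -V(phi_h) / (3.0 * dV(phi_h))
    dG_h = (V(phi_h) * d2V(phi_h) / dV(phi_h)**2 - 1.0) / 6.0
    y0 = [G_h - eps * dG_h, dG_h]                  # start a small step below phi_h
    return solve_ivp(master_rhs, (phi_h - eps, phi_min), y0, args=(V, dV, d2V),
                     rtol=1e-10, atol=1e-12, dense_output=True)

# Once G(phi) is known, A, B and H follow from the quadratures in Eq. (3.14),
# and the temperature and entropy of the dual thermal state from the horizon data, Eq. (3.17).
```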
### Thermodynamics
In this section, we examine the thermodynamics of the system for different choices of the \(B_{4}\) resulting in different types of phase transitions. We do this for the deformation of an operator with fixed conformal weight \(\Delta=3\) and modify the quartic term of the potential (3.4) by choosing different values for \(B_{4}\). We provide the near-horizon solution for \(A,B\), and \(H\) in terms of \(V\) in Appendix B.
The entropy density and the temperature of the boundary system can be expressed in terms of horizon data as follows 1
Footnote 1: Hereafter, we work in units \(8\pi G_{5}=1\).
\[s=2\pi\,e^{3A(\phi_{h})},\qquad T=\frac{1}{4\pi}\,e^{A(\phi_{h})-B(\phi_{h})}H^{\prime}(\phi_{h})=\frac{1}{4\pi}\,e^{A(\phi_{h})+B(\phi_{h})}\,|V^{\prime}(\phi_{h})|. \tag{3.17}\]
Also, the speed of the sound of the boundary theory can be computed from the horizon information
\[c_{s}^{2}=\frac{\mathrm{d}\,\ln T}{\mathrm{d}\,\ln s}=\frac{\frac{\mathrm{d}\, \ln T}{\mathrm{d}\,\phi_{H}}}{\frac{\mathrm{d}\,\ln s}{\mathrm{d}\,\phi_{H}}}. \tag{3.18}\]
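As a simple illustration, the horizon formulas (3.17)-(3.18) translate directly into code. The snippet below is our own schematic version: it assumes the normalised horizon values \(A(\phi_{h})\), \(B(\phi_{h})\) have already been obtained from the background solution, and it evaluates \(c_{s}^{2}\) by finite differences along a family of solutions labelled by \(\phi_{H}\).

```python
import numpy as np

def horizon_thermo(A_h, B_h, dV_h):
    """Entropy density and temperature from horizon data, Eq. (3.17) (units 8*pi*G5 = 1)."""
    s = 2.0 * np.pi * np.exp(3.0 * A_h)
    T = np.exp(A_h + B_h) * np.abs(dV_h) / (4.0 * np.pi)
    return s, T

def speed_of_sound(phi_H, T, s):
    """c_s^2 = d ln T / d ln s along a family of solutions labelled by phi_H, Eq. (3.18)."""
    phi_H, T, s = map(np.asarray, (phi_H, T, s))
    return np.gradient(np.log(T), phi_H) / np.gradient(np.log(s), phi_H)
```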
Energy density, 2 pressure and the expectation value of the deformation operator are expressed in Appendix A. The transport coefficients are extensively discussed in 5.
Footnote 2: By ‘density’ we mean that we divide by the trivial but infinite volume along the black brane.
In Fig. 1 we show \(T(\phi_{H})\) for the different choices \(B_{4}=(0,-0.0098,-0.02)\), corresponding to the cross-over, second-order and first-order phase transition, respectively. In the cross-over EoS, the temperature has a smooth fall-off with \(\phi_{H}\), while in the second-order and first-order EoSs it has a more complex pattern near the phase transition point. It possesses a flat profile in the second-order case, while in the first-order case there is a dip and a bump, corresponding to the inhomogeneous and mixed states near the phase transition, which is a vital property of every first-order phase transition.
In Fig. 2 we sketch \(p/T^{4}\) vs. \(T/T_{c}\) for every phase transition. As we expect, in the cross-over and second-order cases the pressure rises near the phase transition, while in the first-order case the pressure drops to negative values, signaling the thermodynamic instability of the states. This instability is due to the multiple available states which can tunnel between them. At large temperatures, the pressure approaches the Stefan-Boltzmann value, and this indicates that in this limit the corresponding states in all cases are close to the CFT\({}_{4}\).
Figure 1: Dimensionless plots of \(T(\phi_{H})\) for the cross-over, second-order and first-order phase transition from left to right, respectively.
To derive the values of the critical temperature and scalar field, namely \(T_{c}\) and \(\phi_{c}\), we examine the minimum points of \(c_{s}^{2}\) or, equivalently, the maxima of \(\xi/s\). In Fig. 3 we sketch \(c_{s}^{2}(\frac{T}{T_{c}})\) and \(\frac{\xi}{s}(\frac{T}{T_{c}})\) for the different phase transitions. In Tab. 1 we show the transition points for the different phase states. Values of \(T_{c}\) and \(\phi_{c}\) are given in units where \(L_{ADS}=1\) and \(8\pi G_{5}=1\). The transition in the crossover case is softer than in the first-order case, where there is a region with \(c_{s}^{2}<0\); this signals imaginary values of the speed of sound, and the quasinormal modes in this region are unstable, which is known as the spinodal instability [47]. In all kinds of transitions in the low-temperature limit, we get \(c_{s}^{2}(T\to 0)\to 1/3\) and \(\xi/\eta(T\to 0)\to 0\).
The other important figure to probe is Fig. 4 which shows \(\langle\mathcal{O}_{\phi}(T)\rangle\) for different transitions. It peaks around the transition point, which according to Eq. (11) the conformal symmetry breaks strongly. However, at high and low temperatures it vanishes, and conformal symmetry is restored again.
For \(B_{4}>-0.00983491\) the system undergoes a smooth crossover as the temperature increases. For \(B_{4}=-0.00983491\) we find a second order phase transition and \(c_{s}^{2}\) vanishes at the critical temperature \(T_{c}\approx 0.11463\) and the entropy density shows critical behavior close to \(T_{c}\) as
\[s(T)=s_{0}+s_{1}\left(\frac{T-T_{c}}{T_{c}}\right)^{1-\tilde{ \gamma}}, \tag{12}\]
\begin{table}
\begin{tabular}{|l|c|c|} \hline & \(T_{c}\approx\) & \(\phi_{c}\approx\) \\ \hline \(B_{4}=0\) (Cross-over) & 0.08888 & 3.779 \\ \hline \(B_{4}=-0.00983\) (Second-order) & 0.11463 & 5.371 \\ \hline \(B_{4}=-0.02\) (First-order) & 0.1447 & \((2.446,4.202,12.715)\) \\ \hline \end{tabular}
\end{table}
Table 1: Table of transition points for different kinds of phase transitions. \(T_{c}\) and \(\phi_{c}\) values are given in units of \(L_{ADS}=1\) and \(8\pi G_{5}=1\).
Figure 2: Plots of \(\frac{p}{T^{4}}\) for the cross-over, second-order and first-order phase transition from left to right, respectively. Negative values of the pressure near the phase transition point in the first-order plot reflect the instability of the corresponding thermodynamic states.
where we estimate \(\tilde{\gamma}=2/3\) for the critical exponent [36; 37]. For \(B_{4}<-0.00983491\) the system has a first-order phase transition. On the gravity side, it follows from the existence of three different black brane solutions around the critical point. They have the same Hawking temperature but different free energies. In this family, we choose \(B_{4}=-0.02\) as an example, which leads to a first-order phase transition at \(T_{c}\approx 0.1447\) between a large and small black brane geometry.
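The critical exponent can be estimated by fitting the entropy ansatz quoted above to the numerically computed \(s(T)\) slightly above \(T_{c}\). The schematic fit below is purely illustrative: the data array is synthetic, generated with \(\tilde{\gamma}=2/3\) only to show the procedure, and the generic least-squares call is not the fitting routine used for the results of this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def entropy_near_Tc(T, s0, s1, gamma, Tc=0.11463):
    """Critical ansatz for the entropy just above T_c (the s(T) form quoted above)."""
    return s0 + s1 * ((T - Tc) / Tc) ** (1.0 - gamma)

# (T, s) pairs would come from the numerical background solutions; here we use
# synthetic data generated with gamma = 2/3 purely to illustrate the fit.
T_vals = np.linspace(0.1150, 0.1180, 40)
s_vals = 0.5 + 2.0 * ((T_vals - 0.11463) / 0.11463) ** (1.0 / 3.0)
popt, _ = curve_fit(entropy_near_Tc, T_vals, s_vals, p0=[0.5, 2.0, 0.6])
print("fitted exponent gamma ~", popt[2])   # recovers ~2/3 on this synthetic data
```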
## 4 Linearized equations
The linear response of the system to external sources is analyzed through the equations of motion for perturbations in channels classified by the underlying symmetries [48]. In this section, we formulate the equations of motion for the invariant perturbations, present the corresponding boundary conditions, and thereafter discuss the results in the following parts of the paper.
Generally, the perturbations on top of the background solutions can be written in the following form 3
Footnote 3: Due to the \(SO(3)\) symmetry, we choose the momentum direction to be aligned in the \(z\) direction.
\[g_{MN}(u,t,z)=g^{(0)}_{MN}(u)+h_{MN}(u)e^{-i\omega t+ikz},\] \[\phi(u,t,z)=u+\psi(u)e^{-i\omega t+ikz}, \tag{30}\]
where \(g^{(0)}_{MN}(u)\) are the background metric components of the ansatz (3.7). The perturbations \(h_{MN}(u)\) and \(\psi(u)\) can be classified according to the infinitesimal diffeomorphism transformations, i.e. \(x^{A}\mapsto x^{A}+\xi^{A}\), where \(\xi_{A}=\xi_{A}(u)e^{-i\omega t+ikz}\) is the infinitesimal displacement vector and the metric and scalar fields vary in the familiar way
\[h_{MN}\mapsto h_{MN}-\nabla_{M}\xi_{N}-\nabla_{N}\xi_{M}\,\qquad\quad\phi \mapsto\phi-\xi^{A}\nabla_{A}\phi\, \tag{31}\]
To obtain the invariant perturbations, we have to combine them to remain intact under the diffeomorphism transformations. This enables us to decompose the perturbations into spin-2, spin-1 and spin-0 sectors due to the \(SO(3)\) symmetry rules which are inherent in the Eq. (30). Moreover, it is convenient to write the equations of linearized perturbations in a master form [41; 42]
\[\Box\Phi^{(s)}_{h}-W^{(s)}_{h,h^{\prime}}\Phi^{(s)}_{h^{\prime}}=0. \tag{32}\]
Here, \(\Box=\frac{1}{\sqrt{-g}}\partial_{M}\left(\sqrt{-g}g^{MN}\partial_{N}\right)\) and "\(s\)" and "\(h\)" refer to the spin and helicity number, respectively. \(W^{(s)}_{h,h^{\prime}}\) is the master and symmetric potential that couples different helicity states in a given spin sector. This technique facilitates numerical computations and we use the Eq. (32) for each spin sector in our numerical computations.
In the spin-2 sector, the only independent perturbation is \(h_{xy}(u)\) which is invariant under the diffeomorphism transformations of the Eq. (31). The resulting equation is [42]
\[\Box\Phi^{(2)}(u)=0, \tag{33}\] \[\Box=H(u)e^{-2B(u)}\partial_{u}^{2}+\left(V^{\prime}(u)+2i\omega e ^{-A(u)-B(u)}\right)\partial_{u}+e^{-2A(u)}\left(-k^{2}+3i\omega e^{A(u)-B(u) }A^{\prime}(u)\right),\]
where \(\Phi^{(2)}(u)\equiv h_{y}^{x}(u)=e^{-2A(u)}h_{xy}(u)\). Indeed, the dynamics of the spin-2 mode is identical to a massless scalar field equation.
In the spin-1 sector, the existing perturbations are \((h_{tx}(u),h_{ux}(u),h_{xz}(u))\) and they can be combined in the following way to make invariant fields
\[\mathfrak{h}_{tx}(u)=kh_{tx}(u)+\omega h_{xz}(u),\qquad\mathfrak{h}_{ux}(u)=ikh_ {ux}(u)-h^{\prime}_{xz}(u)+2A^{\prime}(u)h_{xz}(u). \tag{4.5}\]
In terms of the spin-1 master scalar field \(\Phi^{(1)}(u)\), the above expressions can be written as [42]
\[\mathfrak{h}_{tx}(u)=H(u)e^{3A(u)-B(u)}\left(3\Phi^{(1)}(u)A^{ \prime}(u)+\Phi^{\prime(1)}(u)\right),\quad\mathfrak{h}_{ux}(u)=-\frac{e^{A(u )+B(u)}}{H(u)}\partial_{t}\Phi^{(1)}(u). \tag{4.6}\]
We can set \(\mathfrak{h}_{ux}(u)=0\) since the time dependence is factored out and the perturbations depend only on \(u\). Following the steps of [42] in the spin-1 sector then leads to the relation
\[\square\Phi^{(1)}(u)-W^{(1)}(u)\Phi^{(1)}(u)=0, \tag{4.7}\]
where
\[W^{(1)}(u)=\frac{1}{2}e^{-2B(u)}\left(-6A^{\prime}(u)H^{\prime}(u )-6H(u)A^{\prime}(u)^{2}+H(u)\right), \tag{4.8}\]
In the spin-0 sector, we are to work with the perturbations
\((h_{tt}(u),h_{uu}(u),h_{tu}(u),h_{uz}(u),h_{tz}(u),h_{xx}(u)=h_{yy}(u),h_{zz}(u))\) and there are five invariant combinations out of these perturbations [41; 42]4
Footnote 4: In the Eqs. (4.9) the \(g_{ij}\) components refer to the background metric in the Eq. (3.7). Also, we express the invariant fields (4.9) in the Eddington-Finkelstein coordinates that are derived by coordinate transformations with respect to the Schwarzschild forms [41].
\[\mathfrak{h}_{tt}(u) = k^{2}h_{tt}(u)+2\omega kh_{tz}(u)+\omega^{2}h_{zz}(u)+\left(k^{2} \frac{g^{\prime}_{tt}(u)}{g_{tt}(u)}-\omega^{2}\right)h_{xx}(u),\] \[\mathfrak{h}_{tu}(u) = h_{tu}(u)-\frac{g_{ut}(u)}{g_{tt}(u)}h_{tt}(u)+\frac{i}{k} \left(\partial_{u}+i\omega\frac{g_{ut}(u)}{g_{tt}(u)}-i\frac{g^{\prime}_{tt}(u )}{kg_{tt}(u)}\right)h_{tz}(u)\] \[+ \frac{i\omega g_{ut}(u)^{2}}{g_{tt}(u)g^{\prime}_{xx}(u)}h_{xx}(u )+\frac{i\omega}{2k^{2}}\left(-\partial_{u}-i\omega\frac{g_{ut}(u)}{g_{tt}(u)} +\frac{g^{\prime}_{tt}(u)}{g_{tt}(u)}\right)(h_{xx}(u)-h_{zz}(u)),\] \[\mathfrak{h}_{uu}(u) = h_{uu}(u)-2\frac{g_{ut}(u)}{g_{tt}(u)}h_{tu}(u)+\frac{g_{ut}(u) ^{2}}{g_{tt}(u)^{2}}h_{tt}(u)-\frac{2g_{ut}(u)^{2}}{g_{tt}(u)g^{\prime}_{xx}( u)}\left(\partial_{u}+i\omega\frac{g_{ut}(u)}{g_{tt}(u)}\right)h_{xx}(u),\] \[+ \left(\frac{g_{ut}(u)^{2}}{g_{tt}(u)^{2}}\left(\frac{g^{\prime}_{ tt}(u)}{g^{\prime}_{xx}(u)}+\frac{g_{tt}(u)}{g_{xx}(u)}\right)-\frac{2g_{ut}(u)^{2}g_{ xx}(u)}{3g_{tt}(u)g^{\prime}_{xx}(u)^{2}}\right)h_{xx}(u),\] \[\mathfrak{h}_{uz}(u) = h_{uz}(u)-\frac{g_{ut}(u)}{g_{tt}(u)}h_{tz}-\frac{ikg_{ut}(u)^{ 2}}{g_{tt}(u)g^{\prime}_{xx}(u)}h_{xx}(u)\] \[- \frac{i}{2k}\left(\partial_{u}+i\omega\frac{g_{ut}(u)}{g_{tt}(u)} -\frac{g^{\prime}_{xx}(u)}{g_{xx}(u)}\right)(h_{xx}(u)-h_{zz}(u)),\] \[\boldsymbol{\phi}(u) = \psi(u)-\frac{h_{xx}(u)}{g^{\prime}_{xx}(u)}. \tag{4.9}\]
We can write these invariant perturbations in terms of the master fields \(\Phi_{2}^{(0)}\) and \(\Phi_{0}^{(0)}\)[42]. If we choose a gauge in which master fields do not depend on time, then the above combinations reduce to the following relations
\[\mathfrak{h}_{tu}=0,\] \[\mathfrak{h}_{uu}(u)=\Phi_{0}^{(0)}(u)\frac{e^{2B(u)}\left(\frac{ 12k^{2}e^{2B(u)}H(u)A^{\prime}(u)}{3e^{2A(u)}A^{\prime}(u)H^{\prime}(u)+2k^{2 }e^{2B(u)}}-6H(u)A^{\prime}(u)\right)}{18H(u)^{2}A^{\prime}(u)^{2}}+\frac{e^{2 B(u)}\Phi_{2}^{\prime(0)}(u)}{\sqrt{3}H(u)A^{\prime}(u)}\] \[+\Phi_{2}^{(0)}(u)\frac{e^{2B(u)}\left(\frac{24\sqrt{3}k^{2}e^{2 B(u)}H(u)A^{\prime}(u)^{2}}{3e^{2A(u)}A^{\prime}(u)H^{\prime}(u)+2k^{2}e^{2B(u)}}-3 \sqrt{3}A^{\prime}(u)H^{\prime}(u)+\sqrt{3}H(u)\right)}{18H(u)^{2}A^{\prime}( u)^{2}},\] \[\mathfrak{h}_{uz}(u)=\Phi_{0}^{(0)}(u)\frac{e^{2(A(u)+B(u))}}{3e^ {2A(u)}A^{\prime}(u)H^{\prime}(u)+2k^{2}e^{2B(u)}}+\frac{\sqrt{3}e^{2A(u)} \Phi_{2}^{\prime(0)}(u)}{2k^{2}}\] \[+\Phi_{2}^{(0)}(u)\frac{e^{2B(u)}\left(3e^{2A(u)}A^{\prime}(u)H^{ \prime}(u)+12e^{2A(u)}H(u)A^{\prime}(u)^{2}+2k^{2}e^{2B(u)}\right)}{2\sqrt{3} H(u)A^{\prime}(u)\left(3e^{2A(u)}A^{\prime}(u)H^{\prime}(u)+2k^{2}e^{2B(u)}\right)},\] \[\mathfrak{h}_{tt}=e^{2A(u)}H(u)\bigg{(}e^{-2B(u)}H(u)\left( \mathfrak{h}_{uu}(u)-2\mathfrak{h}^{\prime}_{uz}(u)\right)+2\mathfrak{h}_{uz}(u )\left(2A^{\prime}(u)e^{-2B(u)}H(u)-V^{\prime}(u)\right)\bigg{)},\] \[\boldsymbol{\phi}(u)=-\Phi_{0}^{(0)}(u)+\frac{\Phi_{2}^{(0)}(u)}{2 \sqrt{3}A^{\prime}(u)}. \tag{4.10}\]
We can call the \(\Phi_{2}^{(0)}(u)\) mode the sound mode, while \(\Phi_{0}^{(0)}(u)\) might be called the non-conformal mode since it is intimately related to the scalar field. The equations of motion for the master fields in the spin-0 sector can be written as follows [42]
\[\square\Phi_{2}^{(0)}(u)-W_{22}^{(0)}(u)\Phi_{2}^{(0)}(u)-W_{02}^{( 0)}(u)\Phi_{0}^{(0)}(u)=0,\] \[\square\Phi_{0}^{(0)}(u)-W_{00}^{(0)}(u)\Phi_{0}^{(0)}(u)-W_{02}^{( 0)}(u)\Phi_{2}^{(0)}(u)=0, \tag{4.11}\]
where
\[W_{22}^{(0)}(u)=\frac{k^{4}C_{22,k^{4}}^{(0)}+k^{2}C_{22,k^{2}}^{ (0)}}{3\left(3e^{2A(u)}A^{\prime}(u)H^{\prime}(u)+2k^{2}e^{2B(u)}\right)^{2}},\] \[W_{02}^{(0)}(u)=\frac{k^{4}C_{02,k^{4}}^{(0)}+k^{2}C_{02,k^{2}}^ {(0)}}{\sqrt{3}\left(3e^{2A(u)}A^{\prime}(u)H^{\prime}(u)+2k^{2}e^{2B(u)} \right)^{2}},\] \[W_{00}^{(0)}(u)=\frac{k^{4}C_{00,k^{4}}^{(0)}+k^{2}C_{00,k^{2}}^ {(0)}+C_{00,k^{0}}^{(0)}}{6\left(3e^{2A(u)}A^{\prime}(u)H^{\prime}(u)+2k^{2}e^ {2B(u)}\right)^{2}}. \tag{4.12}\]
The coefficients are given below
\[C_{22,k^{4}}^{(0)}=-8e^{2B(u)}\left(6A^{\prime}(u)H^{\prime}(u)+ H(u)\left(6A^{\prime}(u)^{2}-1\right)\right),\] \[C_{22,k^{2}}^{(0)}=-72e^{2A(u)}A^{\prime}(u)^{2}H^{\prime}(u) \left(3H(u)A^{\prime}(u)+H^{\prime}(u)\right),\] \[C_{02,k^{4}}^{(0)}=-8e^{2B(u)}\left(H(u)\left(A^{\prime}(u)-B^{ \prime}(u)\right)+H^{\prime}(u)\right), \tag{4.13}\] \[C_{02,k^{2}}^{(0)}=-2e^{2A(u)}H^{\prime}(u)\left(H(u)\left(-6A^{ \prime}(u)B^{\prime}(u)+18A^{\prime}(u)^{2}+1\right)+6A^{\prime}(u)H^{\prime}( u)\right),\] \[C_{00,k^{4}}^{(0)}=-8e^{2B(u)}\left(H(u)\left(12A^{\prime}(u)B^{ \prime}(u)+3B^{\prime\prime}(u)-6B^{\prime}(u)^{2}-1\right)+6B^{\prime}(u)H^{ \prime}(u)\right),\] \[C_{00,k^{2}}^{(0)}=-12e^{2A(u)}H^{\prime}(u)\bigg{(}H^{\prime}(u )\left(12A^{\prime}(u)B^{\prime}(u)-1\right)\] \[+H(u)\left(A^{\prime}(u)\left(24A^{\prime}(u)B^{\prime}(u)+6B^{ \prime\prime}(u)-12B^{\prime}(u)^{2}-1\right)+2B^{\prime}(u)\right)\bigg{)},\] \[C_{00,k^{0}}^{(0)}=-3e^{4A(u)-2B(u)}H^{\prime}(u)^{2}\bigg{(}18H (u)A^{\prime}(u)^{2}B^{\prime\prime}(u)\] \[+\left(6A^{\prime}(u)B^{\prime}(u)-1\right)\left(H(u)\left(-6A^{ \prime}(u)B^{\prime}(u)+12A^{\prime}(u)^{2}+1\right)+6A^{\prime}(u)H^{\prime }(u)\right)\bigg{)},\]
An analysis of the equations (4.11) near the conformal boundary leads to the asymptotic behavior as \(u\sim 0\)
\[\Phi_{2}^{(0)}(u)\sim A_{1}+B_{1}\,u^{\frac{4}{4-\Delta}}\,\qquad\quad\Phi_{0}^{(0) }(u)\sim A_{2}\,u+B_{2}\,u^{\frac{\Delta}{4-\Delta}}. \tag{4.14}\]
Transforming to the usual Fefferman-Graham coordinates close to the boundary, \(u\mapsto\rho^{4-\Delta}\), reveals that \(\Phi_{2}^{(0)}(\rho)\) has the same asymptotic behavior as the metric-component perturbations considered in [48]. This perturbation corresponds to the sound mode of the theory. On the other hand, \(\Phi_{0}^{(0)}(\rho)\) has the asymptotics of the background scalar field \(\phi\) and is similar to the case studied in [49]. The correct boundary conditions for the QNM spectrum are (\(A_{1}=0,A_{2}=0\)). Examining Eqs. (4.4) and (4.7) shows that the spin-2 mode \(\Phi^{(2)}(u)\) and the spin-1 mode perturbations \(\Phi^{(1)}(u)\) have asymptotic behavior similar to \(\Phi_{2}^{(0)}(u)\) near the boundary. Thus, a standard Dirichlet boundary condition is demanded at \(u=0\).
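To make the role of these boundary conditions concrete, the following minimal sketch (in Python; the function name, the grid variables and the value of \(\Delta\) are our own illustrative choices, not part of the model's numerical code) fits the near-boundary tail of a numerically obtained master-field profile to the two-term asymptotics of Eq. (4.14) and reads off the coefficient that must vanish for a quasinormal mode.

```python
import numpy as np

def boundary_coefficients(u_vals, phi_vals, Delta, mode="Phi2"):
    """Fit the near-boundary tail of a master-field profile to Eq. (4.14).

    For Phi2:  Phi(u) ~ A + B * u**(4/(4-Delta))
    For Phi0:  Phi(u) ~ A * u + B * u**(Delta/(4-Delta))
    Returns (A, B); a QNM requires the coefficient A to vanish.
    """
    if mode == "Phi2":
        basis = np.column_stack([np.ones_like(u_vals),
                                 u_vals**(4.0 / (4.0 - Delta))])
    else:  # "Phi0"
        basis = np.column_stack([u_vals,
                                 u_vals**(Delta / (4.0 - Delta))])
    # least-squares fit of the tail to the two-term asymptotic form
    coeffs, *_ = np.linalg.lstsq(basis, phi_vals, rcond=None)
    return coeffs[0], coeffs[1]

# hypothetical usage with a profile sampled near u = 0 by the spectral solver:
# A1, B1 = boundary_coefficients(u_vals, phi_vals, Delta=3.0, mode="Phi2")
```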
We will use dimensionless frequencies \(\Omega\equiv\frac{\omega}{2\pi T}\) and dimensionless momenta \(q\equiv\frac{k}{2\pi T}\) to present our results. This helps us to make a reasonable comparison for different regimes of temperatures and various phase structures. In what follows, to illustrate better the results, we separate the sections according to the kinds of phase transitions and in each section we clarify the corresponding results of each spin sector.
## 5 Some remarks
Before we delve into the details of the numerical results, it is useful to discuss some points. In practice, the problem of finding the QNMs is nothing but solving a generalized eigenvalue equation. In our work, we do this task using the spectral method, which writes the solution of the differential equation as a sum of certain "basis functions" and then chooses the coefficients in the sum to satisfy the differential equation. Rapid decay of errors and fast convergence of solutions are among the benefits of this approach. We use spectral discretization with Chebyshev polynomials to solve the complex equations (4.4), (4.7) and (4.11) [43]. The polynomial character of the resulting matrix equation enables us to determine the frequencies for a given \(q^{2}\), i.e. \(\Omega(q^{2})\), by evaluating the determinant of the matrix and setting it to zero. To find physical solutions with the correct boundary conditions given in Eq. (4.14), we fit the tail of the function to the form obtained from the small-\(u\) analysis. To avoid unphysical modes and to resolve the higher modes, we perform the above procedure with two different truncations of the Chebyshev series and keep only the outputs that differ by less than \(0.1\%\). The choice of truncation depends on \(\phi_{H}\): for smaller values we take \((N_{1},N_{2})=(15,20)\), while for larger values of \(\phi_{H}\) at least \((N_{1},N_{2})=(30,40)\) is required. It is noteworthy that all modes with \(\mathrm{Re}\,\Omega\neq 0\) come in pairs due to the parity symmetry
\[\Omega(q^{2})=\pm\mathrm{Re}\,\Omega(q^{2})+i\,\mathrm{Im}\,\Omega(q^{2}). \tag{5.1}\]
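The numerical procedure just described can be sketched as follows (Python; the schematic form of the equation, the helper names and the resolution-doubling filter are our own simplifications, not the actual Eqs. (4.4)-(4.12)): build a Chebyshev differentiation matrix, cast a generic QNM equation that is linear in \(\Omega\) as a generalized eigenvalue problem, and keep only eigenvalues that agree between two truncations to within \(0.1\%\).

```python
import numpy as np
from scipy.linalg import eig

def cheb(N):
    """Chebyshev differentiation matrix and grid on [-1, 1] (Trefethen's cheb)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def qnm_spectrum(background, N, q2):
    """Solve a schematic QNM equation
        A(u) Phi'' + [B0(u) + Omega B1(u)] Phi' + [C0(u) + Omega C1(u)] Phi = 0
    on u in [0, 1] as a generalized eigenvalue problem in Omega.
    `background` is a dict of callables for A, B0, B1, C0, C1 (our own names)."""
    Dxi, xi = cheb(N)
    u = 0.5 * (1.0 - xi)        # map xi in [-1, 1] to u in [0, 1]; u = 0 is the boundary
    D = -2.0 * Dxi              # chain rule for u = (1 - xi) / 2
    diag = lambda key: np.diag(background[key](u, q2))
    M0 = diag("A") @ D @ D + diag("B0") @ D + diag("C0")   # Omega-independent part
    M1 = diag("B1") @ D + diag("C1")                        # part linear in Omega
    # Dirichlet condition at the boundary row (u = 0); horizon regularity is
    # assumed to be built into the background coefficients
    M0[0, :] = 0.0
    M0[0, 0] = 1.0
    M1[0, :] = 0.0
    evals = eig(M0, -M1, right=False)
    return evals[np.isfinite(evals)]

def reliable_modes(background, q2, N1=15, N2=20, rtol=1e-3):
    """Keep only eigenvalues that agree between two truncations to within rtol."""
    w1, w2 = qnm_spectrum(background, N1, q2), qnm_spectrum(background, N2, q2)
    keep = [w for w in w1 if np.min(np.abs(w2 - w)) < rtol * max(abs(w), 1.0)]
    return np.array(sorted(keep, key=abs))
```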
Apart from this doubling, there is another degeneracy in the spin-0 sector because of the coupling between \(\Phi_{2}^{(0)}\) and \(\Phi_{0}^{(0)}\). According to the hydrodynamic description, the
spin-2 mode equation (4.4) has no hydro mode, namely \(\lim_{q\to 0}\Omega\neq 0\) and all modes are non-hydro, per se. However, in the spin-1 and spin-0 equations, we get either the hydro or non-hydro modes.
Non-hydro modes are a universal property once we investigate Green's functions [22]. Indeed, in analyzing (high-order) hydrodynamics one finds poles/cuts in the Borel plane which correspond exactly to the lowest non-hydrodynamic QNM. This reveals that non-hydro excitations have to be included for the self-consistency of the theory. The importance of these modes becomes obvious when we approach the critical points, since the lowest QNMs become comparable to the hydrodynamic ones there. So, the applicability of the effective hydrodynamic description is to be questioned. These phenomena are the focus of the present paper. We find that they become very important in the vicinity of a transition point.
Another important topic is transport coefficients. To derive them from the usual AdS/CFT dictionary, we have to perform certain operations on the bulk solutions [18] and match the result with the hydrodynamic two-point functions [19]. For our model, this procedure yields the following expressions for the shear and bulk transport coefficients
\[\eta=\lim_{\omega\to 0}\lim_{u\to 0}\frac{1}{2\omega}\,e^{4A(u)-B(u)}H(u)\, \text{Im}\left(h_{y}^{x}(u)\,h_{y}^{rx}(u)\right),\] \[\xi=-\lim_{\omega\to 0}\lim_{u\to 0}\frac{1}{2\omega}\,e^{4A(u)-B(u)}H(u)\, \text{Im}\left(h_{x}^{x}(u)\,h_{x}^{rx}(u)\right). \tag{5.2}\]
The fluctuations \((h_{y}^{x}(u),h_{x}^{x}(u))\) should have incoming solutions near the horizon and satisfy regularity conditions on the boundary surface. For numerical backgrounds such as those constructed in the present paper, obtaining such solutions is a tedious job. However, they can be extracted from the hydro modes, i.e. the QNMs in the low-momentum regime. We know that in the spin-1 and spin-0 sectors, the hydro modes have the following dispersion relations
\[\Omega^{(1)}(q^{2})\approx-i2\pi\frac{\eta}{s}q^{2},\qquad\Omega^{(0)}(q^{2}) \approx\pm c_{s}|q|-i2\pi T\Gamma_{s}q^{2}. \tag{5.3}\]
Superscript indices refer to the spin numbers and \(\Gamma_{s}\) is the sound attenuation constant
\[\Gamma_{s}=\frac{1}{2T}\left(\frac{4\eta}{3s}+\frac{\xi}{s}\right). \tag{5.4}\]
Therefore, finding the QNMs at low momenta and matching them to the polynomial behavior (5.3) gives us \(\eta/s\) and \(\xi/s\) as well as \(c_{s}\). In all cases we find \(\eta/s=1/(4\pi)\), and \(c_{s}\) coincides with Eq. (3.18). The results are shown in Fig. 3.
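A minimal sketch of this matching step is given below (Python; the function and variable names are ours, and the input arrays are assumed to contain hydro-mode frequencies extracted at a few small, real momenta).

```python
import numpy as np

def extract_transport(q_vals, omega_shear, omega_sound, T):
    """Fit the small-q hydro modes to the dispersion relations (5.3).

    omega_shear : purely imaginary shear mode Omega^(1)(q) in the spin-1 sector
    omega_sound : one branch of the sound mode Omega^(0)(q) in the spin-0 sector
    Returns (eta/s, c_s, Gamma_s); q_vals must be small for the expansion to apply.
    """
    q2 = q_vals**2
    # shear channel:  Im Omega ~ -2*pi*(eta/s) * q^2
    eta_over_s = -np.polyfit(q2, omega_shear.imag, 1)[0] / (2.0 * np.pi)
    # sound channel:  Re Omega ~ c_s * q,  Im Omega ~ -2*pi*T*Gamma_s * q^2
    c_s = np.polyfit(q_vals, omega_sound.real, 1)[0]
    Gamma_s = -np.polyfit(q2, omega_sound.imag, 1)[0] / (2.0 * np.pi * T)
    return eta_over_s, c_s, Gamma_s

# hypothetical usage with QNM data at a few small momenta:
# eta_s, cs, Gs = extract_transport(q, Om_shear, Om_sound, T)
# xi_s = 2 * T * Gs - 4 * eta_s / 3   # bulk viscosity over entropy from Eq. (5.4)
```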
## 6 Crossover phase transition
According to the numerical setup described above, in this section we illustrate the results for the crossover phase transition with \(B_{4}=0\). Specific results for each spin sector are given in separate subsections, from spin-2 to spin-0 consecutively. Let us emphasize that to find the collisions between the hydro and non-hydro modes, i.e. to detect the radius of convergence of the hydrodynamic series in each part, we apply the complex-momentum paradigm to the QNM spectra [44; 46]. We obtain the QNMs for a complex momentum \(q^{2}=|q^{2}|e^{i\theta}\), and by varying \(|q^{2}|\) and \(\theta\) we can find the position of mode touching. This gives the location where the modes possess a singularity in the complex momentum plane.
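Schematically, this scan can be organized as in the following sketch (Python; `two_lowest_modes` and the grids are hypothetical placeholders for the actual QNM solver and its parameters): for each angle \(\theta\) we increase \(|q^{2}|\) until the two tracked branches coincide within a tolerance, and the smallest such \(|q^{2}|\) defines \(|q_{c}^{2}|\).

```python
import numpy as np

def mode_collision_radius(qnms, theta_grid, qabs_grid, tol=1e-3):
    """Scan the complex momentum plane q^2 = |q^2| e^{i theta} for the first
    collision between two tracked QNM branches.

    `qnms(q2)` is assumed to return the two relevant frequencies (e.g. the hydro
    mode and the closest non-hydro mode) at complex q2; all names here are ours.
    Returns (|q^2|_c, theta_c) for the smallest |q^2| at which the branches touch.
    """
    best = (np.inf, None)
    for theta in theta_grid:
        for qabs in qabs_grid:
            w1, w2 = qnms(qabs * np.exp(1j * theta))
            if abs(w1 - w2) < tol and qabs < best[0]:
                best = (qabs, theta)
                break  # smaller |q^2| already found for this angle
    return best

# hypothetical usage:
# q2c, theta_c = mode_collision_radius(lambda q2: two_lowest_modes(q2, phiH),
#                                      np.linspace(0.0, np.pi, 61),
#                                      np.linspace(0.1, 3.0, 300))
```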
### Spin-2 sector
In Fig. 5 we show the real and imaginary parts of the lowest QNMs for the crossover EoS with \(B_{4}=0\). Note that in these plots \(q^{2}\) is real. Recall that in the spin-2 sector there are no hydro poles. From these figures, we see that at large \(\phi_{H}\), corresponding to the low-temperature limit, each mode asymptotically approaches its known value for the 5D-AdS-Schwarzschild black hole. The same approach is seen at high temperature, or low \(\phi_{H}\). Moreover, it turns out that the dependence of these frequencies on \(q^{2}\) is very mild, especially in the imaginary part, and can be neglected in a first approximation. This property leads to a certain "ultralocality" of the dynamics of the nonequilibrium modes on top of a hydrodynamic flow [50]. Near the phase transition point, \(\phi_{c}\approx 3.77\), and not just right on it, there is a bump in the real and imaginary parts that reflects the deviation from the 5D-AdS-Schwarzschild results. Additionally, the modes at high and low temperatures do not mix. This feature is unique to the crossover transition.
To see how the modes evolve with real \(q^{2}\), we sketch the lowest three QNMs in Fig. 6 for \(\phi_{H}=(3.779,6)\), from left to right. The imaginary parts change slowly with increasing \(q^{2}\), while the real parts vary at a larger rate. Moreover, increasing \(\phi_{H}\) does not alter the modes appreciably. The pattern of modes for real \(q^{2}\) at different values of \(\phi_{H}\) is similar to what is shown in Fig. 6. We therefore infer that the collision of modes that defines the range of validity of hydrodynamics in this sector should occur at negative values of \(q^{2}\).
To illustrate the collisions between the non-hydrodynamic modes in the spin-2 sector, we show in Fig. 7 the collision at the critical temperature of the crossover EoS. Let us emphasize that in this sector the collisions always occur at imaginary momenta, \(q^{2}<0\). In Fig. 7, we consider \(-2\leq q^{2}\leq 0\) and use the rainbow style.
We repeat this calculation for each \(\phi_{H}\) in the complex momentum plane and observe that the collision always happens at \(\theta=\pi\). In Fig. 8 we show \(|q_{c}^{2}|\) in terms of \(\phi_{H}\) and \(T/T_{c}\), where the dashed line stands for the transition point, \(\phi_{c}=3.779\). Near the transition point, \(|q_{c}^{2}|\) increases, which can be viewed as an improvement of the hydrodynamic series for this kind of phase transition. An interesting fact is that the high-temperature and low-temperature limits of \(|q_{c}^{2}|\) seem to be equal, indicating the same validity limit for the hydrodynamic expansion in these two regimes.
Figure 5: The real part (first row) and the imaginary part (second row) of the lowest three QNMs in Spin-2 sector as functions of \(\phi_{H}\) at \(q^{2}=(0,3)\) for the crossover state, \(B_{4}=0\). Dashed lines show the results for the 5D-AdS-Schwarzschild black hole.
Figure 6: The real and imaginary parts of the lowest QNMs in the spin-2 sector as functions of \(q^{2}\) for the crossover transition with \(B_{4}=0\) at \(\phi_{H}=(3.779,6)\), from left to right.
### Spin-1 sector
Similar to the former subsection, in Fig. 9 we show the real and imaginary parts of the lowest modes for real \(q^{2}=(0,3)\) in the spin-1 sector for the \(B_{4}=0\) EoS. Unlike the spin-2 case, in spin-1 we get a hydro mode for each \(q^{2}\), which is purely imaginary. This is the well-known shear mode that, according to Eq. (5.3), at small momenta defines the ratio \(\eta/s\). The magenta lines locate the lowest-lying (hydro) mode at small momenta. Evolving \(q^{2}\) results in small modifications of the real parts, while the imaginary parts of the modes start to move through each other: the hydro mode comes down and a non-hydro mode comes up until they touch. This occurs at two values of \(\phi_{H}\). Similar
Figure 8: The radius of convergence for the lowest non-hydrodynamic modes in the spin-2 sector for the crossover phase transition. The collision always occurs at \(\theta=\pi\). The dashed line indicates the critical point, \(\phi_{c}=3.779\). The left (right) plot shows the dependence \(q^{2}(T)\) (\(q^{2}(\phi_{H})\)).
Figure 7: The collision of the lowest non-hydrodynamic modes at the critical point of the crossover EoS (left), \(B_{4}=0\). The imaginary momentum goes from \(q=0\) (Blue) to \(q=i\sqrt{2}\) (red). The collision is at \(q^{2}=-1.7080\).
to the spin-2 sector, there is no mixing of the modes at high and low temperatures.
To display how this touching occurs at different real \(q^{2}\), we show the real and imaginary parts of the lowest modes for \(\phi_{H}=(3.779,6,15)\) in Fig. 10. Contrary to spin-2, in spin-1 the collision can happen for positive \(q^{2}\).
The resulting radius of convergence, \(q_{c}^{2}\geq 0\), in this sector is shown in Fig. 11. At very high or very low temperatures, it reaches the 5D-AdS value, \(|q^{2}|_{c}\approx 2.224\). On physical grounds one might expect the radius of convergence to decrease at the transition point [32]. However, this is not the case, because the crossover phase transition is not a true transition, and therefore the paradigm _"breakdown of the hydrodynamics near the transition point"_ does not seem to hold everywhere.
In the spin-1 sector, the collision happens whenever the imaginary and real parts of the hydro mode and the closest non-hydro mode are equal to each other. This has to be scanned for each \(\phi_{H}\) to see how the modes interact. Since in spin-1 only metric-field perturbations are involved (see Eq. (4.5)), the collision of the hydro mode with the first gravity non-hydro mode defines the radius of convergence. In Fig. 12 we show this collision for \(\phi_{H}=(3,6)\). The left (right) panels are before (after) the collision. Before the collision, the modes follow their own paths, but the collision leads to path sharing,
Figure 9: The real part (first row) and the imaginary part (second row) of the lowest three QNMs in Spin-1 sector as functions of \(\phi_{H}\) at real \(q^{2}=(0,3)\) for crossover EoS, \(B_{4}=0\). Dashed lines show the results for the 5D-AdS-Schwarzschild black hole. The Magenta line in each plot stands for the hydro modes, while the rest show non-hydro modes.
in which the hydro modes change their trajectory from a roughly circular path to a more complex one. Increasing \(|q^{2}|\) further leads to collisions between the hydro mode and higher non-hydro modes. Our criterion for the radius of convergence is the smallest value of \(|q^{2}|\) at which the hydro and non-hydro modes collide. For instance, in Fig. 13 we show the approach of the hydro mode and the lowest gravity non-hydro mode at \(\phi_{H}=3.779\). The collision occurs at \(\theta=0.364\pi\) and \(|q^{2}|=2.4\), where the real and imaginary parts are equal.
### Spin-0 sector
In Fig. 14 we show the real and imaginary parts of the lowest QNMs for the \(B_{4}=0\) EoS at \(q^{2}=(0,3)\). According to Eq. (4.11), in this sector there are two coupled perturbations, i.e. the gravity and scalar perturbations. This makes the computations more involved, and we expect a doubling of modes with respect to the other sectors. This is seen very clearly in Fig. 14 and cannot be observed in the analogous Figs. 5 and 9. There lie
Figure 11: Left panel: the radius of convergence in spin-1 sector in terms of \(\phi_{H}\), Right panel: the same in terms of \(T/T_{c}\). Plots correspond to the cross-over phase transition with \(B_{4}=0\). The gray dashed line represents the location of the transition point, \(\phi_{H}^{c}\approx 3.779\).
Figure 10: The real and imaginary parts of the lowest three QNMs in spin-1 sector as functions of \(q^{2}\) for the cross-over phase transition with \(B_{4}=0\) at \(\phi_{H}=(3.779,6)\) from left to right. The blue color represents the hydro modes, while the rest show non-hydro modes.
two modes on top of each gray line: one corresponds to the gravity solutions and the other to the scalar-field solutions. Besides this doubling, there is another interesting feature concerning the value of \(q^{2}\). At \(q^{2}=0\) the equations for the gravity and scalar fields decouple, see Eqs. (4.11) and (4.12), and we get an even number of "non-hydro" modes. However, a non-vanishing \(q^{2}\) couples the equations; apart from the doubling of the higher non-hydro modes, one hydro mode appears (the smallest mode), so the total number of modes is odd. The patterns of collision are also different for \(q^{2}=0\) and \(q^{2}\neq 0\). At \(q^{2}=0\) the first collision occurs between two higher modes around \(\phi_{H}\approx 5\), while at \(q^{2}\neq 0\) this collision happens frequently between the smallest mode and higher non-hydro modes. At very high or very low temperatures this touching is between gravity and scalar-field perturbations, while at intermediate points it occurs between the hydro mode and the first gravity non-hydro mode. This feature is seen in the real parts, but in the imaginary parts the collisions always happen
Figure 12: Mode collision for \(B_{4}=0\) in spin-1 sector. The top (bottom) rows indicate \(\phi_{H}=3\) (\(\phi_{H}=6\)) data and the left (right) panels demonstrate the modes before (after) the collision. At \(\phi_{H}=3\), collision occurs at \(|q^{2}|=2.333\) with \(\Omega_{\star}=\pm 1.498-0.437i\) and for \(\phi_{H}=6\) collision occurs at \(|q^{2}|=2.637\) with \(\Omega_{\star}=\pm 1.687-0.3344i\). The marked circles in each plot stand for the location of \(\Omega_{\star}\).
between two similar types of modes.\({}^{5}\) Also, the high-temperature and low-temperature limits of the hydro modes are the same.
Footnote 5: By collision we mean a collision that determines the radius of convergence, i.e. the collision of the smallest mode with the first non-hydro mode. Collisions between other non-hydro modes have nothing to do with the hydrodynamic series.
In Fig. 15 we show the mode configurations for \(\phi_{H}=3\) and \(\phi_{H}=6\), for two choices of complex momentum corresponding to before and after the collision. For \(\phi_{H}=3\) (\(\phi_{H}=6\)) the collision occurs at \(|q^{2}|=2.216\) (\(|q^{2}|=2.39\)), and in both cases it happens between the hydro mode and the first gravity non-hydro mode. For the \(B_{4}=0\) EoS in the spin-0 sector, the first collision, which determines the radius of convergence, always occurs between these two modes.
Finally, Fig. 16 demonstrates the radius of convergence for the \(B_{4}=0\) EoS in the spin-0 sector. Similar to Figs. 8 and 11, at low and high temperatures the radius of convergence approaches an identical value, \(|q^{2}|_{c}=1.486\), which reflects a kind of duality for the use of the hydro expansion. Again, around the transition point \(q_{c}^{2}\) increases, which is special to crossover phase transitions. Furthermore, it seems that the following relation holds between the different spin sectors
\[(\text{Max}|q^{2}|_{c})_{\text{spin-2}}<(\text{Max}|q^{2}|_{c})_{\text{spin-0 }}<(\text{Max}|q^{2}|_{c})_{\text{spin-1}}, \tag{6.1}\]
where \(\text{Max}|q^{2}|_{c}\) denotes the greatest value of \(|q^{2}|_{c}\) over the \(\phi_{H}\) or temperature range. We will see that relation (6.1) is also valid for the second- and first-order phase transitions.
Figure 13: The collision of the lowest hydro mode with a non-hydro mode at \(\phi_{H}=3.779\) for \(B_{4}=0\) in the spin-1 sector. This collision occurs at \(\theta=0.364\pi\) and \(|q^{2}|=2.4\) which is shown by the gray dashed line. The blue (red) dots correspond to the hydro (closest non-hydro) modes.
## 7 Second-order phase transition
In this section, we review the numerical outputs of the second-order phase transition with \(B_{4}=-0.0098\). Similar to the previous section, each spin sector's results are given in separate parts.
### Spin-2 sector
In Fig. 17 we show the real and imaginary parts of the lowest QNMs for the second-order phase transition with \(B_{4}=-0.0098\) for real choices of \(q^{2}\). There are some interesting observations:
* For \(q^{2}\leq 3\) the second lowest mode in the high-temperature regime escapes to infinity in the low-temperature limit, while the QNM structure of the lowest frequencies approaches that of the 5D-AdS-Schwarzschild black hole. This happens such that the \((i+1)\)-th mode at high temperature becomes the \(i\)-th mode at low temperature, \(\Omega_{i+1}(T\gg T_{c})\to\Omega_{i}(T\ll T_{c})\) for \(i>2\).
Figure 14: The real part (first row) and the imaginary part (second row) of the lowest QNMs in Spin-0 sector as functions of \(\phi_{H}\) at \(q^{2}=(0,3)\) for \(B_{4}=0\). The magenta line denotes the smallest mode value. The gray dashed lines are the results of the 5D-AdS-Schwarzschild black hole.
* There is no collision between the modes.
To further illustrate the collision, in Fig. 18 we show the real and imaginary parts of modes for real momenta at \(\phi_{H}=(5.371,11)\). It can be seen that no collision occurs
Figure 16: Left panel: the \(\phi_{H}\) dependency of the radius of convergence in the spin-zero sector, Right panel: dependency of the radius of convergence in terms of \(T/T_{c}\). Both plots are for cross-over phase transition with \(B_{4}=0\). The Red dashed line represents the location of the critical point.
Figure 15: Mode collision for \(B_{4}=0\) in spin-0 sector. The top (bottom) rows indicate \(\phi_{H}=3\) (\(\phi_{H}=6\)) data and the left (right) panels demonstrate the modes before (after) the collision. At \(\phi_{H}=3\), collision occurs at \(|q^{2}|=2.216\) with \(\Omega_{\star}=\pm 1.192-0.487i\) and for \(\phi_{H}=6\) collision occurs at \(|q^{2}|=2.39\) with \(\Omega_{\star}=\pm 1.293-0.378i\). The marked circles stand for the \(\Omega_{\star}\).
for \(q^{2}>0\), so the collision must happen at \(q^{2}<0\).

In Fig. 19 we show these collisions at the critical temperature of the second-order phase transition. In this sector, the collisions always occur at imaginary momenta, i.e. \(q^{2}<0\). In this figure, we consider \(-2\leq q^{2}\leq 0\) and use the rainbow style. The collision at \(\phi_{c}\) occurs at \(q^{2}=-1.6148\). In the spin-2
Figure 17: The real part (first row) and the imaginary part (second row) of the lowest three QNMs in Spin-2 sector as functions of \(\phi_{H}\) at \(q^{2}=0,3\) for the second-order phase transition, \(B_{4}=-0.0098\). The magenta line denotes the lowest non-hydro mode.
Figure 18: The real and imaginary parts of the lowest QNMs in the spin-2 sector as functions of \(q^{2}\) for the \(2^{\rm nd}\)-order phase transition, \(B_{4}=-0.00983491\), at \(\phi_{H}=(5.371,11)\), from left to right.
sector, the first collision always occurs at negative \(q^{2}\), and it is between the closest gravity non-hydro modes.
The radius of convergence is sketched in Fig. 20 in terms of \(T/T_{c}\) and \(\phi_{H}\). Unlike the crossover transition, we see a dip near the transition point. The values of \(q_{c}^{2}\) are almost equal to their crossover counterparts.
### Spin-1 sector
In Fig. 21 we show the real and imaginary parts of the lowest three modes for real \(q^{2}=(0,3)\) for the second-order EoS with \(B_{4}=-0.0098\). Mixing of the modes between high and low temperatures is seen. However, there are some similarities and differences between Figs. 21 and 17. In Fig. 17 only the gravity non-hydro modes play a role, and the lowest non-hydro mode remains the same either at low temperature
Figure 19: The collision of the lowest non-hydrodynamic modes at the transition point of the second-order phase transition. The imaginary momentum goes from \(q=0\) (purple) to \(q=i\sqrt{2}\) (red). The collision is at \(q^{2}=-1.6148\).
Figure 20: The radius of convergence for the lowest non-hydrodynamic modes in the spin-2 sector for second-order phase transition.
or high temperature in both the real and imaginary parts of the modes. But in Fig. 21 the hydro modes come into play, so that at \(q^{2}=0\) the lowest mode does not change and the second (third) mode goes to infinity in the real (imaginary) parts. At \(q^{2}\neq 0\) the latter behavior persists; in addition, the hydro mode (the third mode in the imaginary parts at high temperatures) collides with the others in between and becomes the second mode at low temperatures.
To see whether the collision happens at positive \(q^{2}\), and hence whether \(q_{c}^{2}\geq 0\) is possible, we plot the real and imaginary parts of the modes for \(\phi_{H}=(5.371,11)\) in Fig. 22. It can be inferred that positive values of \(q^{2}\), in contrast to the spin-0 sector outcomes, are eligible for determining \(q_{c}^{2}\geq 0\) in spin-1.
Similar to the previous parts, to show the configuration of mode collisions for complex momenta, we present in Fig. 23 the mode trajectories before and after the collision for \(\phi_{H}=10\) and \(\phi_{H}=12\). Before the collision, the paths of the hydro and non-hydro modes are separate, but afterwards they share their path; accordingly, by following the path after the collision we move up to the next Riemann surface [44].
Our main result, namely the radius of convergence in this part, is shown in Fig. 24. Similar to the crossover transition, spin-1 has the greatest \(q_{c}^{2}\), which occurs around
Figure 21: The real part (first row) and the imaginary part (second row) of the lowest three QNMs in Spin-1 sector as functions of \(\phi_{H}\) at \(q^{2}=(0,3)\) for the second-order EoS with \(B_{4}=-0.0098\). The Magenta line stands for the hydro modes, while the rest show non-hydro modes.
\(\phi_{H}=11\), which is shown by the gray dashed line. This is the point where the pattern of mode collision changes from the hydro mode colliding with the second non-hydro mode to the hydro mode colliding with the first non-hydro mode. Likewise, near the transition point, marked by the black line, \(q_{c}^{2}\) falls down, which can serve
as an observation to detect a critical point. This conclusion is in agreement with the paradigm _"breakdown of the hydrodynamic series near the transition point."_
The final plot of this subsection is devoted to Fig. 25 which demonstrates the approach of the hydro and the closest non-hydro mode towards each other at \(\phi_{H}=5.371\). The collision occurs at \(|q^{2}|=1.855\) and \(\theta=0.575\pi\).
### Spin-0 sector
In each section, the spin-0 part has a rich and complex structure. For instance, in Fig. 26 the real and imaginary parts of the lowest QNMs for \(B_{4}=-0.0098\) at real \(q^{2}=(0,3)\) are shown. Apart from the mode-doubling structure due to the coupling of scalar and gravity perturbations, the general structure of the real and imaginary parts is
Figure 24: Left panel: the \(\phi_{H}\) dependency of the radius of convergence, and the right panel: the same in terms of \(T/T_{c}\). Both plots refer to second-order phase transition with \(B_{4}=-0.0098\) in the spin-1 sector. The Red dashed line represents the location of the critical point.
Figure 25: The collision of lowest non-hydro mode with the hydro mode at \(\phi_{H}=5.371\) for the second-order phase transition with \(B_{4}=-0.0098\) in spin-1 sector. It occurs at \(\theta=0.575\pi\) and \(|q^{2}|=1.855\).
similar to Fig. 21.
Additionally, in Fig. 27 we show the mode configurations before and after the collision for \(\phi_{H}=3\) and \(\phi_{H}=12\). It is worthwhile to mention that at both \(\phi_{H}=3\) and \(\phi_{H}=12\) the collision occurs between the hydro mode and the smallest scalar non-hydro mode.
Our numerical computations show that the radius of convergence for the second-order EoS in the spin-0 sector takes the form shown in Fig. 28. There is a dip near the transition point, which is a sign of reduced validity of the hydrodynamic series. We expect \(q_{c}^{2}\) to approach the same value at high and low temperatures. We also observe that Eq. (6.1) holds for the second-order results; this appears to be independent of the kind of phase transition.
## 8 First-order phase transition
The first-order results are obtained with \(B_{4}=-0.02\). Due to the more complex nature of the first-order transition compared to the other ones, we expect more involved and intricate curves.
### Spin-2 sector
In Fig. 29 we show the real and imaginary parts of the lowest QNMs for the first-order phase transition with \(B_{4}=-0.02\) at real \(q^{2}\). We observe some similarities compared
Figure 26: The real part (first row) and the imaginary part (second row) of the lowest three QNMs in the spin-0 sector as functions of \(\phi_{H}\) at \(q^{2}=(0,3)\) for the EoS with \(B_{4}=-0.0098\).
to Fig. 17. For \(|q^{2}|<3\) the first three non-hydro modes escape to infinity at low temperatures, and \(\Omega_{2k+1}(T\gg T_{c})\to\Omega_{k}(T\ll T_{c})\) for \(k\geq 1\). Also, we do not see any
Figure 28: Left panel: Dependency \(\phi_{H}\) for the radius of convergence in the spin-zero sector, Right panel: dependency of the radius of convergence in terms of \(T/T_{c}\). Both plots are for second-order phase transition with \(B_{4}=-0.0098\). The Red dashed line represents the location of the transition point.
Figure 27: Mode collision for second-order EoS with \(B_{4}=-0.0098\) in spin-0 sector. Top (bottom) rows correspond to \(\phi_{H}=3\) (\(\phi_{H}=12\)) in which left(right) plots indicate before (after) the collision situation. At \(\phi_{H}=3\) collision occurs at \(|q^{2}|=1.77\) with \(\Omega_{\star}=\pm 0.619+0.277i\) and at \(\phi_{H}=12\) it happens at \(|q^{2}|=2.525\) with \(\Omega_{\star}=\pm 1.343-0.136i\).
mode collision (i.e. equal real and imaginary parts at a given \(\phi_{H}\)). Therefore, we expect the collision to happen for negative \(q^{2}\), and this is indeed the case.
For further illustration, we plot the real and imaginary parts of the modes in the real \(q^{2}\) plane in Fig. 30 at \(\phi_{H}=(2.4460,12.7157)\). This figure indicates again that the mode collision should happen for negative \(q^{2}\), i.e. \(q^{2}=|q^{2}|e^{i\pi}\).
To illustrate the collisions between the non-hydrodynamic modes in the spin-2 sector, in Fig. 31 we show the mode patterns at \(\phi_{H}=12.71\) for purely negative \(q^{2}\). As can be seen, the collision occurs at two frequencies, \(\Omega=-0.0167i\) and \(\Omega=-0.8100i\), and it is a universal feature that in the spin-2 sector the collision always happens for negative \(q^{2}\).
To close this subsection, we show in Fig. 32 the radius of convergence obtained from the non-hydro mode collisions in the spin-2 sector with \(B_{4}=-0.02\). Due to the gravity and scalar-field coupling, the collision is between the gravity and scalar non-hydro modes. Fig. 32 shows that near the transition point the radius of convergence increases, similar to what was obtained in Fig. 8. There is a difference, however: in the first-order phase transition three distinct transition values of \(\phi_{H}\) exist, and at the largest of them \(|q^{2}|_{c}\) is a local maximum, while the other two have smaller \(|q_{c}^{2}|\).
Figure 29: The real part (first row) and the imaginary part (second row) of the lowest three QNMs in Spin-2 sector as functions of \(\phi_{H}\) at real \(q^{2}=(0,3)\) for the first-order phase transition with \(B_{4}=-0.02\).
### Spin-1 sector
In Fig. 33 we show the real and imaginary parts of the lowest three modes for real \(q^{2}=(0,3)\) for the first-order equation of state in the spin-1 sector. In the real parts, besides the lowest mode, the \((2^{\rm nd},3^{\rm rd},5^{\rm th})\) modes go to infinity at low temperatures, and the remaining modes move down to take their places. In the imaginary parts, the story is a little different. With increasing \(|q^{2}|\) the hydro mode is restored, evolves with \(\phi_{H}\), and returns to its place at low temperatures. The first non-hydro mode goes to zero at low temperatures, while some of the modes satisfy \(\Omega_{k}(T\gg T_{c})\to-\infty\) \((T\ll T_{c})\) for \(k\geq 3\). In Fig. 34 we show the real and imaginary parts of the modes in the real \(q^{2}\) plane for \(\phi_{H}=(2.446,12.7157)\) in the spin-1
Figure 31: The collision of the lowest non-hydrodynamic modes at \(\phi_{H}=12.71\) of the first-order phase transition in the spin-2 sector. The collision is at \(q^{2}=-1.740\) at which two sets of modes collide at different frequencies \(\Omega=(-0.0167i,-0.8100i)\). For larger temperatures (smaller \(\phi_{H}\)) the modes with the lowest imaginary are the ones that collide in the smallest \(|q^{2}|\) (similar to Fig. 7). For lower temperatures (larger \(\phi_{H}\)) the other set of modes collide at smaller \(|q^{2}|\) (see right panel in Fig. 32).
Figure 30: The real and imaginary parts of the lowest QNMs in the spin-2 sector as functions of real \(q^{2}\) for the first-order phase transition at the two transition points of the black holes, \(\phi_{H}=(2.4460,12.7157)\), from left to right.
sector. We can infer that the modes collide with each other at positive \(q^{2}\) in this sector.
It is very interesting to see how the modes collide with each other for different \(\phi_{H}\) in the complex plane in this sector. In Fig. 35 we plot the mode collision for two complex \(q^{2}\) at each of \(\phi_{H}=12\) and \(\phi_{H}=14\); one of them corresponds to before and the
Figure 32: The radius of convergence for the lowest non-hydrodynamic modes in the spin-2 sector for first-order phase transitions. Left panel represents the profile \(|q^{2}(T/T_{c})|\) and the right one refers to the \(|q^{2}(\phi_{H})|\). The inside plot focuses on near-transition points.
Figure 33: The real part (first row) and the imaginary part (second row) of the lowest three QNMs in the spin-1 sector of the first-order transition as functions of \(\phi_{H}\) for real \(q^{2}=(0,3)\). The magenta lines stand for the lowest modes, while the rest show non-hydro modes.
other refers to after the collision. In spin-1 the collision involves the gravity hydro and non-hydro modes; at \(\phi_{H}=12\) it occurs very close to \(\theta=\pi\), namely purely imaginary momenta with \(\mathrm{Im}\,\Omega>0\), while at \(\phi_{H}=14\) it happens for \(\mathrm{Im}\,\Omega<0\) around \(\theta=0.4\pi\).
Figure 34: The real and imaginary parts of the lowest three QNMs in the spin-1 sector as functions of \(q^{2}\) at \(\phi_{H}=(2.446,12.7157)\) for the first-order phase transition, from left to right. The blue color represents the hydro modes, while the rest show non-hydro modes.
Figure 35: Mode collision for first-order equation of state with \(B_{4}=-0.02\) in spin-1 sector. Top (bottom) row plots indicate \(\phi_{H}=12\) (\(\phi_{H}=14\)) data in which the left (right) panels demonstrate the modes before (after) the collision. At \(\phi_{H}=12\), collision occurs at \(|q^{2}|=2.117\) with \(\Omega_{\star}=\pm 0.122+0.887i\) and for \(\phi_{H}=14\) collision occurs at \(|q^{2}|=2.714\) with \(\Omega_{\star}=\pm 1.678-0.286i\).
For further demonstration, in Fig. 36 we show the real and imaginary parts of the gravity hydro and first non-hydro modes at \(\phi_{H}=4.2023\) in terms of \(|q^{2}|\) near the collision point. The collision occurs at \(|q^{2}|=1.684\) and \(\theta=0.53\pi\).
Eventually, in Fig. 37 we show the radius of convergence in the spin-1 sector over the whole EoS of the first-order phase transition in terms of \(\phi_{H}\) and \(T/T_{c}\). Something strange happens near the transition points: \(|q_{c}^{2}|\) increases, as is also seen in Fig. 32.
### Spin-0 sector
In this subsection, we first look at the real and imaginary parts of the modes for real \(q^{2}=(0,3)\), which are shown in Fig. 38. For the real parts at \(q^{2}=0\), all modes escape
Figure 37: Radius of convergence in spin-1 sector for the left panel: the \(\phi_{H}\) dependency and the right panel: \(T/T_{c}\) dependency. Both plots are for first-order phase transition with \(B_{4}=-0.02\). In the left panel, the red Dashed line represents the location of critical points and in the right one, the inside plot demonstrates the near-transition point.
Figure 36: Collision of the lowest non-hydro mode with hydro mode at \(\phi_{H}=4.2023\) for the first-order phase transition, \(B_{4}=-0.02\) in spin-1 sector. This collision occurs at \(\theta=0.53\pi\).
to infinity at low temperatures except the lowest mode, while in the imaginary parts this happens only for a few modes. With increasing \(q^{2}\) the hydro modes come into play, and in \(3\lesssim\phi_{H}\lesssim 8\) the real part of the hydro modes vanishes and the imaginary part splits. This is the sign of mode collision that is well known in hydrodynamics. Apart from the hydro modes, the behavior of the real and imaginary parts is similar to the \(q^{2}=0\) case.
To see whether the modes collide in the spin-0 sector, in Fig. 39 we show the pattern of modes before/after the collision for \(\phi_{H}=(6,10)\). For \(\phi_{H}=6\) the collision happens at \(|q^{2}|=0.29\) with \(\Omega_{\star}=\pm 0.397+0.089i\), and for \(\phi_{H}=10\) it occurs at \(|q^{2}|=1.28\) with \(\Omega_{\star}=\pm 0.152+0.417i\). Both are determined by the collision of the hydro mode with the lowest gravity non-hydro mode.
Last but not least is the radius of convergence of the spin-0 sector in the first-order phase transition, which is shown in Fig. 40. Similar to the spin-1 and spin-2 sectors, there is a rise near the transition point, which seems to be unique to first-order transitions. Likewise, Eq. (6.1) appears to hold for the first-order EoS.
## 9 Pole-skipping
It has been confirmed that pole-skipping is a general feature of every quantum field theory that has a dual gravity interpretation [51; 52; 53; 54]. It is the other side of quantum
Figure 38: The real part (first row) and the imaginary part (second row) of the lowest three QNMs in the spin-0 sector as functions of \(\phi_{H}\) at \(q^{2}=(0,3)\) for the first-order phase transition.
chaos, manifested in the linearized equations of motion. It works as follows. In the linearized equations of motion for gravity perturbations in the spin-0 sector, the "\(vv\)" component of Einstein's equations becomes trivial at the chaos point, namely \(\omega=i\lambda_{L}=2\pi Ti\) and \(k=ik_{0}=i\frac{2\pi T}{v_{B}}\), where \(v_{B}\) is the butterfly velocity, i.e. the velocity of information propagation [51; 52]. The result is that the hydro poles skip around this
Figure 40: Radius of convergence in the spin-0 sector for the first-order phase transition. The left(right) panel shows the \(q^{2}(\phi_{H})(q^{2}(T))\) dependence of this variable.
Figure 39: Mode collision in the spin-0 sector with first-order transition for two \(\phi_{H}=(6,10)\) in before and after the collision. At \(\phi_{H}=6\) the collision occurs at \(|q^{2}|=0.29\) with \(\Omega_{\star}=\pm 0.397+0.089i\) and for \(\phi_{H}=10\) this occurs at \(|q^{2}|=1.28\) with \(\Omega_{\star}=\pm 0.152+0.417i\).
point and the corresponding Green's function becomes multivalued, in such a way that different trajectories with various slopes converge to this point. This is very interesting because the out-of-equilibrium properties of a many-body quantum system can be probed even at the level of a near-equilibrium situation.
Motivated by this, we study the chaotic nature of the linearized equations in the spin-0 sector for each kind of phase transition. To do this, we have to modify Eq. (4.11) and take the fluctuations in the Eddington-Finkelstein coordinates as follows [54]
\[\delta\Phi=\bigg{(}\delta g_{vv},\delta g_{rv},\delta g_{rr},\delta g_{rz}, \delta g_{vz},\delta g_{xx},\delta g_{zz},\delta\phi\bigg{)}. \tag{9.1}\]
Solving the linearized equations is a very hard task. However, a series solution exists around every single point in the bulk. Among them, we scrutinize the near-horizon region and assume a series ansatz for each physical perturbation
\[\delta g_{MN}(u)=\sum_{n=0}^{\infty}\delta g_{MN}^{(n)}(u_{H})\,(u -u_{H})^{n},\] \[\delta\phi(u)=\sum_{n=0}^{\infty}\delta\phi^{(n)}(u_{H})\,(u-u_{H })^{n}, \tag{9.2}\]
and insert these into the equations. We observe that the "\(vv\)" component of Eq. (3.6) at the lowest non-vanishing order takes the following form
\[\delta g_{vv}^{(0)}(u_{H})\left(k^{2}+\frac{i\omega e^{2A(u_{H})}V(u_{H})}{4 \pi T}\right)+(\omega-2i\pi T)(2k\delta g_{zv}^{(0)}(u_{H})+\omega\delta g_{x^ {i}x^{i}}^{(0)}(u_{H}))=0, \tag{9.3}\]
where \(i=x,y,z\). For general \(\omega\) and \(k\), Eq. (9.3) imposes a non-trivial constraint on the near-horizon components \(\delta g_{vv}^{(0)}(u_{H})\), \(\delta g_{zv}^{(0)}(u_{H})\), \(\delta g_{x^{i}x^{i}}^{(0)}(u_{H})\). But at the point \(\omega_{*}=i\lambda_{L}=2\pi Ti\) the metric component \(\delta g_{vv}^{(0)}(u_{H})\) decouples from the others, and additionally at \(k=\sqrt{e^{2A(u_{H})}V(u_{H})/2}\), or \(k=ik_{0}=i\sqrt{6\pi Te^{A(u_{H})-B(u_{H})}A^{\prime}(u_{H})}\), Eq. (9.3) becomes identically zero [54]. Therefore, it does not imply any constraint on the near-horizon components. The message is that there exists one extra ingoing mode at this point, and it leads to pole-skipping in the retarded energy-density correlation function \(G_{T^{00}T^{00}}^{R}(\omega,k)\) at the chaos point. In other words, slightly away from the chaos point, with \(\omega=i\lambda_{L}+\epsilon\,\delta\omega\) and \(k=ik_{0}+\epsilon\,\delta k\) where \(|\epsilon|\ll 1\), and at leading order in \(\epsilon\), we have a family of different ingoing modes with different slopes \(\frac{\delta\omega}{\delta k}\). This slope can be chosen such that it corresponds to an ingoing mode near the chaos point with a different asymptotic solution at the boundary. If one chooses \(\frac{\delta\omega}{\delta k}\) as follows
\[\frac{\delta\omega}{\delta k}=\frac{2k_{0}\delta g_{vv}^{(0)}(u_{H})}{\frac{k_{ 0}^{2}}{2\pi T}\delta g_{vv}^{(0)}(u_{H})-2k_{0}\delta g_{zv}^{(0)}(u_{H})-2 \pi T\delta g_{x^{i}x^{i}}^{(0)}(u_{H})}, \tag{9.4}\]
we will get an ingoing mode that matches continuously onto the normalizable solution at the boundary. All these lines pass through the chaos point, and away from that point we see different lines with slopes (9.4) in the \(T_{00}\) correlation functions.
The multivaluedness of the boundary retarded Green's function has another manifestation. Recently, it has been reported that at higher Matsubara frequencies, i.e. \(\omega=\omega_{n}=-2in\pi T\), the equations of motion of scalar-field perturbations exhibit a pole-skipping property [53; 55]. This is because at these points the equations give no constraints on \(\delta\phi^{(n)}(u_{H})\), and these unknown coefficients reflect many hydrodynamic poles around \(\omega_{n}\) with special slopes [55]. We would like to explore these features in our model. To do this, we expand the equations of motion for the scalar perturbation around the horizon. The result is as follows
\[\mathcal{I}_{1} =M_{11}(\omega,k^{2})\delta\phi^{(0)}(u_{H})+(2\pi T-i\omega) \delta\phi^{(1)}(u_{H}),\] \[\mathcal{I}_{2} =M_{21}(\omega,k^{2})\delta\phi^{(0)}(u_{H})+M_{22}(\omega,k^{2} )\delta\phi^{(1)}(u_{H})+(4\pi T-i\omega)\delta\phi^{(2)}(u_{H}), \tag{9.5}\] \[\mathcal{I}_{3} =M_{31}(\omega,k^{2})\delta\phi^{(0)}(u_{H})+M_{32}(\omega,k^{2} )\delta\phi^{(1)}(u_{H})+M_{33}(\omega,k^{2})\delta\phi^{(2)}(u_{H})+(6\pi T-i \omega)\delta\phi^{(3)}(u_{H}),\]
where the coefficients \(M_{ij}(\omega,k^{2})\) take the following form
\[M_{ij}(\omega,k^{2})=i\omega a_{ij}+k^{2}b_{ij}+c_{ij}, \tag{9.6}\]
with \(a_{ij},b_{ij},c_{ij}\) determined by the background solutions in (10) and their derivatives at the horizon. Their specific form is very complicated and is not needed for our purposes. The expressions \(\mathcal{I}_{i}\) are combinations of gravity perturbations with specific coefficients. Eq. (9.5) shows that at the frequencies \(\omega=\omega_{n}\) it is not possible to read off the coefficients iteratively from \(\delta\phi^{(0)}(u_{H})\). It means that the \(\delta\phi^{(n)}(u_{H})\) are free parameters near the horizon. Also, at the point \(\omega=\omega_{n}\) the first \(n\) equations decouple, and we can solve a simple matrix equation as follows
\[\mathcal{M}^{n}(\omega,k^{2})\cdot\delta\tilde{\phi}=\mathcal{I}, \tag{9.7}\]
for \(\delta\tilde{\phi}=\big{(}\delta\phi^{(0)}(u_{H}),\cdots,\delta\phi^{(n-1)}(u_{H})\big{)}\). However, we observe that at \(k=ik_{0}\) and \(\omega=\omega_{n}\), \(\det\mathcal{M}^{n}(\omega_{n},k_{0})=0\). Therefore, the solutions of the linear equations (9.5) are labeled by two free parameters [46; 53; 55].
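The determinant condition can be checked numerically along the lines of the following sketch (Python; the coefficient matrices `a`, `b`, `c` of Eq. (9.6) are assumed to have already been extracted from the near-horizon expansion of the background, and the sampling window is an arbitrary illustrative choice).

```python
import numpy as np

def pole_skipping_momenta(a, b, c, n, T):
    """Locate pole-skipping momenta of the scalar channel at the Matsubara
    frequency omega_n = -2*pi*i*n*T by solving det M^n(omega_n, k^2) = 0.

    a, b, c are the n x n coefficient matrices of Eq. (9.6),
    M_ij = i*omega*a_ij + k^2*b_ij + c_ij (hypothetical numerical inputs here).
    """
    omega_n = -2j * np.pi * n * T
    M = lambda k2: 1j * omega_n * a + k2 * b + c
    # det M is a polynomial of degree <= n in k^2: sample it and take the roots
    samples = np.linspace(-5.0, 5.0, 2 * n + 1)
    dets = np.array([np.linalg.det(M(k2)) for k2 in samples])
    poly = np.polyfit(samples, dets, n)
    return np.roots(poly)
```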
In Fig. 41 we plot \(\ln(q_{ps}^{2})=\ln(k_{0}^{2}/(2\pi T)^{2})\) for the different phase transitions in terms of \(\phi_{H}\). At very high temperatures, no remarkable difference in \(q_{ps}^{2}\) is seen between the various kinds of phase transitions. However, in the region \(1\lesssim\phi_{H}\lesssim 6\), it seems that \((q_{ps}^{2})_{\rm FO}<(q_{ps}^{2})_{\rm SO}<(q_{ps}^{2})_{\rm CO}\). Besides that, it decreases with increasing \(\phi_{H}\). It can be observed that \(q_{ps}^{2}\) has merit in pinpointing the location of the phase transition. Furthermore, we compare the chaos momenta and the radius of convergence for the different kinds of phase
transition in Fig. 42, where the cyan points refer to the \(q_{ps}^{2}\) values. It is seen that \(q_{c}^{2}\) and \(q_{ps}^{2}\) may collide at high temperatures, around \(\phi_{H}\sim 0.27\), independently of the phase transition. Apart from that, we always have \(q_{ps}^{2}<q_{c}^{2}\), i.e. the chaos momentum lies within the range of hydrodynamic validity.
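For reference, the chaos-point data entering Fig. 41 can be assembled from the horizon values of the background as in the sketch below (Python; `A_h`, `B_h` and `Aprime_h` are hypothetical placeholders for the numerically computed metric functions and \(A'(u)\) evaluated at the horizon).

```python
import numpy as np

def chaos_data(T, A_h, B_h, Aprime_h):
    """Chaos point and pole-skipping momentum from horizon data, following Sec. 9:
        omega_* = 2*pi*T*i,   k_0 = sqrt(6*pi*T*exp(A_h - B_h)*A'_h).
    """
    lambda_L = 2.0 * np.pi * T                        # Lyapunov exponent
    k0 = np.sqrt(6.0 * np.pi * T * np.exp(A_h - B_h) * Aprime_h)
    v_B = lambda_L / k0                               # butterfly velocity
    q_ps2 = (k0 / (2.0 * np.pi * T))**2               # dimensionless, as in Fig. 41
    return lambda_L, v_B, q_ps2
```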
## 10 Conclusion
One of the significant challenges for applications of RH is the near-critical-point situations that can be reached at low-energy colliders. It is very valuable to give an estimate of the validity of the RH series near or at the transition points. Within the AdS/CFT conjecture, we have extensively studied the many-body dynamics of a strongly coupled, critical field theory which is dual to a gravity model with a self-interacting scalar field in one higher dimension. This is the Einstein-Klein-Gordon model, a phenomenological string theory construction. This model represents the critical strong field theory through the parameter \(B_{4}\) in the superpotential function, and it provides us with crossover (\(B_{4}=0\)), second-order (\(B_{4}=-0.0098\)) and first-order (\(B_{4}<-0.0098\)) phase transitions. We run the first-order computations with \(B_{4}=-0.02\). According to the fluid/gravity conjecture, small perturbations of the bulk fields on top of the background are equivalent to studying RH for the field theory on the boundary. Knowing this, we have investigated the dynamics of the linearized fluctuations in the spin-2, spin-1 and spin-0 sectors for each kind of phase transition.
Our main findings are as follows. We obtain \(\eta/s=1/(4\pi)\) for each kind of phase transition and \(\xi/s\) has peaks around the phase transition. Regardless of the phase
Figure 41: Logarithmic sketch of chaos momenta in terms of \(\phi_{H}\) for different kinds of phase transition, second-order (SO), crossover (CO) and first-order (FO).
transition, the first collisions in the spin-2 sector happen at negative \(q^{2}\) between two non-hydro modes, while the collision determining the radius of convergence in the spin-1 sector happens between the hydro mode and the closest gravity non-hydro mode at positive \(q^{2}\). In the spin-0 sector, the metric and scalar perturbations are coupled, and we find that at very small and very large temperatures the collision happens between the hydro mode and a scalar non-hydro mode, while at intermediate temperatures it happens between the hydro mode and a gravity non-hydro mode. At \(q^{2}=0\) the spin-0 sector equations for the gravity and scalar fields decouple, and this leads to an even number of modes. However, at \(q^{2}\neq 0\) the number of modes is odd. The high-temperature and low-temperature limits of the hydro modes in the spin-1 and spin-0 sectors remain the same for real \(q^{2}\). For the second-order and first-order phase transitions, some of the non-hydro modes go to infinity at low temperatures, and other modes take their places. Our results show that the paradigm _"breakdown of the hydrodynamic series near the transition points"_ seems to be (not true, true, not true) for the (crossover, second-order, first-order) phase transition. We have seen that the high-temperature and low-temperature limits of \(q^{2}_{c}\) are equal, which can be a sign of
Figure 42: Comparison of the chaos momenta and the radius of convergence for different kinds of phase transition. The cyan points refer to the \(q^{2}_{ps}\) values and the dashed lines indicate the location of the phase transition.
the same equality when using the low-momentum series. Furthermore, we observe that Eq. (6.1) holds between the different spin sectors irrespective of the kind of phase transition. Additionally, we obtain a closed result for \(q_{c}^{2}\) in the spin-2 sector at high temperatures, regardless of the phase transition kind: \(q_{c}^{2}=1.486\,e^{i\theta_{c}}\) with \(\theta_{c}\approx\pi\). We also find that the lowest non-hydro modes of the scalar field for large \(-|q^{2}|\) approach one of the hydro (gravity) modes at high temperatures. In this sector, the radius of convergence is obtained for scalar fields with different conformal weights. Pole skipping is seen for the gravity perturbations in the "\(vv\)" component of the equations at the chaos point and for the scalar field perturbations at higher Matsubara points. Besides, at high temperatures always \(q_{ps}^{2}<q_{c}^{2}\), and \(q_{ps}^{2}\) decreases as \(\phi_{H}\) increases.
This study can be extended in multiple ways. One way is to explore hydrodynamic solutions with a critical EoS for particular evolution patterns such as Gubser flow or Bjorken flow and to compare the results with ours. Another, particularly valuable, direction is to generalize this work to more realistic systems, namely holographic models that mimic the QCD phase diagram [56]. Gauge fields are present in such models, and it is first necessary to complete and correct the master-formula approach for the coupled gauge, scalar and gravity fields.
## Acknowledgement
We would like to cordially thank H. Soltanpanahi for his earlier contribution to this work, especially for providing the numerical codes and for collaborating on obtaining the background numerical solutions.
## Appendix A Holographic Renormalization
Having a suitable counter-term in the AdS/CFT paradigm is very important for deriving finite results for the one-point functions of the boundary theory [39]. The Hamilton-Jacobi approach is a convenient way to obtain this term order by order in derivatives with respect to "r", which plays the role of the Hamiltonian time \(\tau\)[40]. Moreover, it is highly advantageous to use superpotentials because, in the process of holographic renormalization, they arise naturally. In \(d\) boundary dimensions, the potential is expressed in terms of the superpotential as
\[V(\phi)=2\left(\frac{\partial W(\phi)}{\partial\phi}\right)^{2}- \frac{d}{d-1}W(\phi)^{2}.\] (A.1)
Indeed, the superpotential as a counter-term fixes the ambiguous coefficient of the finite \(\phi^{4}\) term to the unique value that gives zero free energy for the ground state dual to the domain-wall geometry. Therefore, by construction, counter-terms can be obtained in
terms of the superpotentials. We follow the recipe of [40] to derive the counter-terms; the results are shown below
\[S_{\text{ct}} = -\frac{1}{16\pi G_{5}}\int_{\partial M}\text{d}^{4}x\sqrt{\gamma} \,\Big{[}W(\phi)+R(\gamma)\left(I(\phi)+J(\phi)\right)\Big{]},\] (A.2)
where \(\gamma\) is the determinant of the boundary-induced metric and
\[I(\phi) =\frac{1}{2}\int^{\phi}\,d\tilde{\phi}\frac{1}{W^{\prime}(\tilde{ \phi})}=\frac{1}{2}\ln\frac{(1-4B_{4}\phi^{2})^{\frac{1}{2}}}{\phi},\] \[J(\phi) =-e^{-2A(\phi)}\int^{\phi}\,d\tilde{\phi}\frac{e^{2A(\tilde{\phi })}}{W^{\prime}(\tilde{\phi})},\] \[A(\phi) =-\frac{1}{6}\int^{\phi}\,d\tilde{\phi}\frac{W(\tilde{\phi})}{W^{ \prime}(\tilde{\phi})}=-\ln\phi-\frac{\phi^{2}}{48}+\frac{1+96B_{4}}{192B_{4} }\ln\left(1-4B_{4}\phi^{2}\right).\] (A.3)
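As a cross-check of these counter-term functions, one can differentiate the closed forms in (A.3) and use their defining integrals, \(I^{\prime}(\phi)=1/(2W^{\prime}(\phi))\) and \(A^{\prime}(\phi)=-W(\phi)/(6W^{\prime}(\phi))\), to read off the superpotential they encode. The short SymPy sketch below is purely illustrative (it is not part of the numerical codes used in this work); it returns \(W^{\prime}(\phi)=-\phi(1-4B_{4}\phi^{2})\) and \(W(\phi)=-6-\phi^{2}/2+B_{4}\phi^{4}\), with the overall sign and normalization of \(W\) being convention dependent.

```python
# Minimal SymPy sketch (illustrative only, not the numerical code of this work):
# differentiate the closed forms of I(phi) and A(phi) quoted in (A.3) and use
# the defining relations I' = 1/(2 W') and A' = -W/(6 W') to read off W.
import sympy as sp

phi, B4 = sp.symbols("phi B_4", positive=True)

I_ct = sp.Rational(1, 2) * sp.log(sp.sqrt(1 - 4 * B4 * phi**2) / phi)
A_ct = (-sp.log(phi) - phi**2 / 48
        + (1 + 96 * B4) / (192 * B4) * sp.log(1 - 4 * B4 * phi**2))

W_prime = sp.simplify(1 / (2 * sp.diff(I_ct, phi)))   # from I' = 1/(2 W')
W = sp.simplify(-6 * W_prime * sp.diff(A_ct, phi))    # from A' = -W/(6 W')

print(sp.factor(W_prime))  # phi*(4*B_4*phi**2 - 1), i.e. -phi*(1 - 4*B_4*phi**2)
print(sp.expand(W))        # B_4*phi**4 - phi**2/2 - 6
```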
It is worthwhile to mention that these results are obtained in the Gubser gauge where \(u\equiv\phi(u)\). The one-point functions are given as functional derivatives of the renormalized action with respect to boundary fields
\[\langle\mathcal{O}_{\phi}\rangle =\lim_{u\to 0}\frac{u^{-\frac{3}{2}}}{\sqrt{-\gamma}}\frac{ \delta S_{\text{ren}}}{\delta\phi}, \langle T_{ij}\rangle =2\lim_{u\to 0}\frac{u^{-2}}{\sqrt{-\gamma}}\frac{\delta S_{\text{ ren}}}{\delta\gamma^{ij}},\] (A.4)
with the holographically renormalized action consisting of the bulk action \(S_{\text{\tiny bulk}}\), the Gibbons-Hawking-York boundary term \(S_{\text{\tiny GHY}}\) and the holographic counter-term \(S_{\text{\tiny ct}}\)
\[S_{\text{ren}}=S_{\text{bulk}}+S_{\text{GHY}}+S_{\text{ct}}.\] (A.5)
For the metric in the Eq. (3.7), we get \(R(\gamma)=0\). By ignoring the terms related to the equations of motion, the one-point functions are given as
\[\langle T_{ij}\rangle =\frac{1}{8\pi G_{5}}\lim_{u\to 0}u^{-2}\left(K_{ij}-(K+\frac{W}{2}) \gamma_{ij}\right),\] \[\langle\mathcal{O}_{\phi}\rangle =-\frac{1}{16\pi G_{5}}\lim_{u\to 0}u^{-\frac{3}{2}}e^{-B(u)}H(u)^{ \frac{1}{2}}.\] (A.6)
Careful manipulation removes the divergent terms. Substituting the near-boundary expansion of \(G(\phi)\) from the numerical results of Eq. (3.13) yields the one-point functions
\[\varepsilon=\langle T_{tt}\rangle =\frac{1}{8\pi G_{5}}\,\Big{(}-\frac{1}{64}-\frac{9V_{6}}{2}+ \frac{a_{2}}{4}+15a_{2}^{2}+6a_{4}-\frac{B_{4}}{4}\Big{)},\] \[p=\langle T_{xx}\rangle =\frac{1}{8\pi G_{5}}\,\Big{(}\frac{1}{576}-\frac{3V_{6}}{2}+ \frac{a_{2}}{4}+5a_{2}^{2}+2a_{4}+\frac{B_{4}}{4}\Big{)},\] \[\langle\mathcal{O}\rangle =\langle\mathcal{O}_{\phi}\rangle =\frac{1}{8\pi G_{5}}\,\Big{(}\frac{1}{48}+\frac{a_{2}}{4}+B_{4} \Big{)},\] (A.7)
where
\[a_{2}=\frac{\tilde{G}(0)}{2},\qquad a_{4}=\frac{\tilde{G}^{\prime\prime}(0)}{8},\qquad V_{6}=\frac{B_{4}(24B_{4}+1)}{3}. \tag{A.8}\]
These one-point functions respect the anticipated Ward identity.
\[\langle T^{i}_{\phantom{i}i}\rangle=\langle\mathcal{O}_{\phi}\rangle. \tag{A.9}\]
The fact that the trace of the energy-momentum tensor is non-zero in general is due to the breaking of conformal symmetry in the presence of a dimensionful source.
## Appendix B Near-horizon expansions
Near-horizon expansion of solutions is crucial to study the thermodynamics of the system. We expand the functions near the horizon point (\(\phi=\phi_{h}\)) as follows
\[A(\phi) =\sum_{n=0}^{\infty}A_{n}(\phi_{h})\frac{(\phi-\phi_{h})^{n}}{n!},\] \[B(\phi) =\sum_{n=0}^{\infty}B_{n}(\phi_{h})\frac{(\phi-\phi_{h})^{n}}{n!},\] \[H(\phi) =\sum_{n=1}^{\infty}H_{n}(\phi_{h})\frac{(\phi-\phi_{h})^{n}}{n!}. \tag{B.1}\]
According to Eqs. (108), (109), (110) and (111) together with \(H(\phi_{h})=0\), we can solve for the expansion coefficients up to the desired order. The lowest-order results are shown below
\[A_{1}(\phi_{h}) =-\frac{V(\phi_{h})}{3V^{\prime}(\phi_{h})},\quad A_{2}(\phi_{h})=-\frac{1}{6}+\frac{V(\phi_{h})V^{\prime\prime}(\phi_{h})}{6V^{\prime}(\phi_{h})^{2}},\ldots, \tag{B.2}\] \[B_{1}(\phi_{h}) =-\frac{V^{\prime\prime}(\phi_{h})}{2V^{\prime}(\phi_{h})},\quad B_{2}(\phi_{h})=\frac{1}{9}\left(-\frac{3V^{(3)}(\phi_{h})}{V^{\prime}(\phi_{h})}+\frac{V^{\prime\prime}(\phi_{h})\left(3V^{\prime\prime}(\phi_{h})+2V(\phi_{h})\right)}{V^{\prime}(\phi_{h})^{2}}-2\right),\ldots\] \[H_{1}(\phi_{h}) =e^{2B(\phi_{h})}V^{\prime}(\phi_{h}),\quad H_{2}(\phi_{h})=\frac{1}{6}e^{2B(\phi_{h})}\left(8V(\phi_{h})-3V^{\prime\prime}(\phi_{h})\right),\ldots.\]
Moreover, we may obtain the near-horizon expansion of \(\tilde{A},\tilde{B}\) of Eq. (3.15). These functions have finite values near the boundary. For an operator with conformal weight
\(\Delta=3\) the lowest-order coefficients are written as
\[\tilde{A}_{1}(\phi_{h}) =\frac{1}{\phi_{h}}-\frac{V(\phi_{h})}{3V^{\prime}(\phi_{h})},\quad \tilde{A}_{2}(\phi_{h})=-\frac{1}{6}-\frac{1}{\phi_{h}^{2}}+\frac{V(\phi_{h})V^{\prime\prime}(\phi_{h})}{6V^{\prime}(\phi_{h})^{2}},\ldots, \tag{B.3}\] \[\tilde{B}_{1}(\phi_{h}) =\frac{1}{\phi_{h}}-\frac{V^{\prime\prime}(\phi_{h})}{2V^{\prime}(\phi_{h})},\] \[\tilde{B}_{2}(\phi_{h}) =\frac{1}{9}\left(-2-\frac{9}{\phi_{h}^{2}}-\frac{3V^{(3)}(\phi_{h})}{V^{\prime}(\phi_{h})}+\frac{3V^{\prime\prime}(\phi_{h})^{2}}{V^{\prime}(\phi_{h})^{2}}+\frac{2V(\phi_{h})V^{\prime\prime}(\phi_{h})}{V^{\prime}(\phi_{h})^{2}}\right),\ldots\] \[H_{1}(\phi_{h}) =\frac{e^{2\tilde{B}(\phi_{h})}V^{\prime}(\phi_{h})}{\phi_{h}^{2}},\quad H_{2}(\phi_{h})=\frac{1}{6\phi_{h}^{2}}e^{2\tilde{B}(\phi_{h})}\left(8V(\phi_{h})-3V^{\prime\prime}(\phi_{h})\right),\ldots.\]
## Appendix C Low temperature black holes
In this appendix, we find the black hole solutions in the low-temperature regime perturbatively. To this end, we solve the EKG equations near the vacuum solution, the so-called thermal gas. It turns out that the Gubser gauge is suitable for this purpose. We plug the following ansatz
\[A(u)=A_{TG}(u)+\epsilon a(u),\quad B(u)=B_{TG}(u)+\epsilon b(u),\quad H(u)=H_{TG}(u)+\epsilon h(u), \tag{C.1}\]
in the equations of motion and expand them to first order in \(\epsilon\). The thermal gas solution is given by
\[A_{TG}(u) =-\frac{1}{6}\int^{u}d\tilde{u}\frac{W(\tilde{u})}{W^{\prime}(\tilde{u})}=\frac{1}{192}\left(\frac{1}{B_{4}}+96\right)\ln\left(1-4B_{4}u^{2}\right)-\frac{u^{2}}{48}-\ln(u),\] \[B_{TG}(u) =-\ln W^{\prime}(u)=-\ln\left(u(1-4B_{4}u^{2})\right),\] \[H_{TG}(u) =1, \tag{C.2}\]
where the asymptotic boundary is at \(u=0\). By imposing proper boundary conditions for \(a,b,h\) in the large-horizon-radius regime, we find
\[a(u)=\frac{f(u)}{8f(u_{H})},\quad b(u)=-\frac{f(u)}{2f(u_{H})},\quad h(u)=\frac{f(u)}{f(u_{H})}, \tag{C.3}\]
where the auxiliary function \(f(u)\) for \(B_{4}=0\) is
\[f(u)=1+e^{\frac{u^{2}}{6}}\left(1-\frac{u^{2}}{6}\right), \tag{C.4}\]
and for \(B_{4}\neq 0\) is
\[f(u) =\Gamma\left(-2-\frac{1}{48B_{4}},\frac{1}{48B_{4}}\right)-\Gamma\left(-2-\frac{1}{48B_{4}},\frac{1-4u^{2}B_{4}}{48B_{4}}\right)\] \[+48B_{4}\left(\Gamma\left(-1-\frac{1}{48B_{4}},\frac{1-4u^{2}B_{4}}{48B_{4}}\right)-\Gamma\left(-1-\frac{1}{48B_{4}},\frac{1}{48B_{4}}\right)\right). \tag{C.5}\]
As mentioned, the low-temperature regime in all cases corresponds to large values of \(\phi_{H}:=u_{H}\). To understand the physics in this regime, and also to cross-check our numerical results, in Fig. 43 we show the temperature and the entropy density as functions of \(\phi_{H}\). The dashed lines are the analytical expressions calculated from the solutions in the low-temperature limit. We summarize the analytical results in the following:
\[B_{4}=0:\] \[T=\frac{\phi_{H}}{12\pi}\exp\left(-\frac{\phi_{H}^{2}}{24}\right), \tag{C.6a}\] \[s=\frac{2\pi}{\phi_{H}^{3}}\exp\left(-\frac{\phi_{H}^{2}}{8}\right), \tag{C.6b}\] \[c_{s}^{2}=\frac{1}{3}-\frac{8}{\phi_{H}^{2}}+\mathcal{O}\left({\phi_{H}}^{-3}\right), \tag{C.6c}\] \[B_{4}<0\ \left(\text{and}\ -\ln|B_{4}|\ll\phi_{H}\right):\] \[T=\frac{2^{\frac{1}{96B_{4}}}\left(-B_{4}\right)^{\frac{1}{192B_{4}}+\frac{3}{2}}e^{-\frac{\phi_{H}^{2}}{48}}\phi_{H}^{\frac{1}{96B_{4}}+4}}{3\pi}, \tag{C.7a}\] \[s=2\pi\frac{2^{\frac{1}{32B_{4}}+3}e^{-\frac{\phi_{H}^{2}}{16}}\left(-B_{4}\phi_{H}^{2}\right)^{\frac{1}{64}\left(\frac{1}{B_{4}}+96\right)}}{\phi_{H}^{3}}, \tag{C.7b}\] \[c_{s}^{2}=\frac{1}{3}-\frac{32}{\phi_{H}^{2}}-\frac{8}{B_{4}\phi_{H}^{4}}+\mathcal{O}\left({\phi_{H}}^{-5}\right). \tag{C.7c}\]
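As a quick consistency check of the \(B_{4}=0\) expressions above, the speed of sound can be recomputed from the standard thermodynamic relation \(c_{s}^{2}=\mathrm{d}\ln T/\mathrm{d}\ln s\). The SymPy sketch below is illustrative only (it is not the code used to produce the figures) and reproduces the \(1/3-8/\phi_{H}^{2}\) behaviour of Eq. (C.6c) at large \(\phi_{H}\).

```python
# Illustrative SymPy check (not the production code behind Fig. 43): for the
# B_4 = 0 expressions above, c_s^2 = d ln T / d ln s should reproduce
# 1/3 - 8/phi_H^2 + O(phi_H^{-3}) at large phi_H.
import sympy as sp

phiH = sp.symbols("phi_H", positive=True)

T = phiH / (12 * sp.pi) * sp.exp(-phiH**2 / 24)   # Eq. (C.6a)
s = 2 * sp.pi / phiH**3 * sp.exp(-phiH**2 / 8)    # Eq. (C.6b)

cs2 = sp.simplify(sp.diff(sp.log(T), phiH) / sp.diff(sp.log(s), phiH))
print(sp.simplify(cs2))                 # (phi_H**2 - 12)/(3*(phi_H**2 + 12))
print(sp.series(cs2, phiH, sp.oo, 6))   # 1/3 - 8/phi_H**2 + ...
```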
We infer from Fig. 43 that for the crossover transition the analytical results converge very rapidly to the numerical ones, while for the second- and first-order transitions this convergence happens only at larger \(\phi_{H}\).
## Appendix D Radius of convergence in high temperatures
In the spin-0 sector (sound channel), there are two coupled equations that come from the metric perturbation and the scalar field perturbation. Unlike in the other sectors and for other observables, in the high-temperature regime the radius of convergence of the hydrodynamic series does not approach its CFT value computed in [46]. Due to the coupling of the above two equations, the radius of convergence is set by the collision of the hydro mode with the lowest scalar field non-hydro mode. One key observation is that the collision happens at complex momentum with the phase \(\theta\) close to \(\pi\) for \(\phi_{H}=0.1\) (independent of the value of \(B_{4}\) in the potential)6
Footnote 6: With the SUGRA potential \(V=-4-8\cosh\left(\frac{\phi}{\sqrt{2}}\right)+\sinh^{2}\left(\frac{\phi}{ \sqrt{2}}\right)\) and with the simplest potential \(V=-12-\frac{3}{2}\phi^{2}\) we found the same result for \(\phi_{H}=0.1\).
\[q_{c}^{2}=1.486\exp(i\theta_{c}),\qquad\theta_{c}=0.988\pi. \tag{D.1}\]
To understand the underlying mechanism, remember that the lowest collision between the non-hydro modes in the external scalar field channel and in the spin-0 sector occurs always at \(\theta=\pi\). This is also what happens for the lowest collision of the hydro-modes with each other in the spin-0 sector for AdS-Schwarzschild black hole [46]. Therefore one may expect that if we compare the QNMs in the spin-0 sector of an AdS-Schwarzschild with purely imaginary momenta (\(\theta=\pi\)) and the QNMs of an external scalar field on the same geometry we should be able to understand the above observation.
In Fig. 44 we show the QNMs of an external scalar field (red lines) with various
Figure 43: Plots for the entropy density (right column) and the temperature (left column). The first row corresponds to the crossover (\(B_{4}=0\)), the second row to the second-order (\(B_{4}=-0.00983491\)) and the third row to the first-order (\(B_{4}=-0.02\)) phase transition. Blue plots correspond to numerical results obtained from Eq. (3.17), while red plots result from Eqs. (C.6) and (C.7).
conformal weights (masses) and the hydrodynamic sound modes (green lines) for purely imaginary momenta. Note that the radius of convergence of the sound hydro mode is \(|q^{2}|=2\)[46]. Therefore, we look at the regime \(|q^{2}|\leq 2\). As we can see, there is no coincidence of the modes for \(\Delta=(3.5,4)\) in the selected range of \(q^{2}\). However, for smaller \(\Delta\)'s it happens. In particular, for \(\Delta=3\) the modes coincide at \(q^{2}=1.485\), which is very close to what we observe in our model, Eq. (D.1). The small deviation from (D.1) is because of the coupling between the channels even at extremely high temperatures.
Another important point is that it is easy to show that in the high-temperature regime the linearized equation for the gravity modes in the sound channel is decoupled from the scalar field perturbation, but not vice versa. We found similar behavior in the small-momentum limit in our earlier studies [57]. The important underlying message of this observation is as follows. If we want to study perturbations around a state in the high-temperature regime of a deformed CFT, i.e. \(\mathcal{L}=\mathcal{L}_{\rm CFT}+j^{4-\Delta}O_{\Delta}\), taking the high-temperature limit and investigating the linearized dynamics do not commute. In other words, one must find the linearized equations of the full theory and then take the
Figure 44: Dimensionless QNM frequencies \(\Omega\equiv\frac{\omega}{2\pi T}\) as functions of purely imaginary momentum in units of temperature, \(q\equiv\frac{k}{2\pi T}\). Green lines: sound mode. Red lines: the lowest non-hydro modes for an external scalar field with mass \(m^{2}=\Delta(\Delta-4)\).
high-temperature limit. What we found is that after these two steps, one still finds a coupling between the perturbations of the sound channel and the scalar channel.
In the left panel of Fig. 45 we show the results for a full range of conformal weights and in the right plot of this figure, we show the critical value of the complex momenta as a function of conformal weight. One interesting feature in Fig. 45 is that it seems that at \(-q^{2}\to\infty\) the lowest non-hydro mode of an external scalar field with conformal weight \(2\leq\Delta\leq 4\) is bounded between the sound (gravity) hydro modes. Also one may guess that the collision between the sound mode and the non-hydro scalar mode for \(\Delta=4\) (massless scalar field) occurs at \(|q_{c}|\gg 2\). That can be seen both in the left and right panels. The left panel also shows that the non-hydro mode for \(\Delta=2\) is asymptotic to the other sound hydro mode.
The radius of convergence fixes the applicability of the hydrodynamic expansion in the theory. Since the radius of convergence even at high temperatures is fixed by the conformal dimension of the deformation term in the Lagrangian, one cannot simply take the high-temperature limit first and then apply the hydrodynamic description. Especially for \(\Delta<2.71\), where \(|q_{c}^{2}|<1\), the gradient expansion should be treated carefully. The smallest radius of convergence is associated with the operator saturating the BF bound, with \(|q_{c}^{2}|=0.365\).
### Large momenta
At large purely imaginary momenta, the sound hydro modes can put some "bounds" on the range of \(\Delta\). In other words, the sound hydro modes carry some information about the non-hydro modes of an external scalar field. Recall that \(\Delta=2\) is the BF bound and \(\Delta=4\) corresponds to a marginal operator. However, this bounding does not persist. At extremely
Figure 45: Left panel: Similar to Fig. 44, but showing only the relevant non-hydro mode for each \(\Delta\) from 2 to 4. Right panel: the critical value of the purely imaginary momentum as a function of conformal weight.
large momenta (e.g. \(q^{2}=-1000\)) all the lowest non-hydro modes of the external scalar field approach the upper dashed line in Fig. 46.
|
2309.06377 | Adversarial attacks on hybrid classical-quantum Deep Learning models for
Histopathological Cancer Detection | We present an effective application of quantum machine learning in
histopathological cancer detection. The study here emphasizes two primary
applications of hybrid classical-quantum Deep Learning models. The first
application is to build a classification model for histopathological cancer
detection using the quantum transfer learning strategy. The second application
is to test the performance of this model for various adversarial attacks.
Rather than using a single transfer learning model, the hybrid
classical-quantum models are tested using multiple transfer learning models,
especially ResNet18, VGG-16, Inception-v3, and AlexNet as feature extractors
and integrate it with several quantum circuit-based variational quantum
circuits (VQC) with high expressibility. As a result, we provide a comparative
analysis of classical models and hybrid classical-quantum transfer learning
models for histopathological cancer detection under several adversarial
attacks. We compared the performance accuracy of the classical model with the
hybrid classical-quantum model using pennylane default quantum simulator. We
also observed that for histopathological cancer detection under several
adversarial attacks, Hybrid Classical-Quantum (HCQ) models provided better
accuracy than classical image classification models. | Biswaraj Baral, Reek Majumdar, Bhavika Bhalgamiya, Taposh Dutta Roy | 2023-09-08T06:37:54Z | http://arxiv.org/abs/2309.06377v1 | Adversarial attacks on hybrid classical-quantum Deep Learning models for Histopathological Cancer Detection
###### Abstract
We present an effective application of quantum machine learning in histopathological cancer detection. The study here emphasizes two primary applications of hybrid classical-quantum Deep Learning models. The first application is to build a classification model for histopathological cancer detection using the quantum transfer learning strategy. The second application is to test the performance of this model for various adversarial attacks. Rather than using a single transfer learning model, the hybrid classical-quantum models are tested using multiple transfer learning models, especially ResNet18, VGG-16, Inception-v3, and AlexNet as feature extractors, and integrating them with several variational quantum circuits (VQC) with high expressibility. As a result, we provide a comparative analysis of classical models and hybrid classical-quantum transfer learning models for histopathological cancer detection under several adversarial attacks. We compared the performance accuracy of the classical model with the hybrid classical-quantum model using the PennyLane default quantum simulator. We also observed that for histopathological cancer detection under several adversarial attacks, Hybrid Classical-Quantum (HCQ) models provided better accuracy than classical image classification models.
Adversarial, Hybrid Quantum Transfer Learning, Histopathological Cancer Detection, Variational Quantum Circuits (VQC), Adversarial attacks, Quantum Processing Unit (QPU), Machine learning (ML), and Artificial intelligence (AI).
## I Introduction
Predictive models face the reality of encountering various attacks from malicious entities. Adversarial attacks are one such class of attacks, which mainly target AI models such as Deep Learning (DL) or Machine Learning (ML) models. These attacks involve deliberately perturbing original input images with carefully crafted noise, resulting in incorrect image classification by the model. The perturbations are imperceptible to the human eye, but they confuse the model and lead to misclassification. As per a recent study [1], adversarial machine learning is a critical aspect of the ML field, pressing the need for practitioners and researchers to acknowledge and address the potential threats posed by adversarial attacks to the effectiveness and trustworthiness [2] of machine learning models. The use of machine learning in healthcare systems is increasing in order to make diagnosis and decision systems more robust. Due to the widespread use of machine learning models in healthcare systems, such systems are at high risk of adversarial attacks. One common impact of adversarial attacks in the healthcare system is misleading insurance approval systems. Insurance companies use predictive models to confirm the approval of insurance reimbursement. Fraudsters may inject perturbed data into the insurance data, leading to false insurance claims [3]. It is crucial for next-generation DL models to mitigate these attacks and solve the image misclassification problem.
In this study, we aim to investigate the impact of adversarial attacks [4] on classical Deep Learning (C-DL) and hybrid classical-quantum Deep Learning (HCQ-DL) models. The primary goal here is to present HCQ-DL models that are more resilient than C-DL models and maintain better accuracy under adversarial attacks. Our HCQ-DL models are trained with quantum simulators. As a result, we provide a comparative study of C-DL and HCQ-DL models for histopathological adversarial images.
The paper is organized as follows. The details of model creation, QNN layer integration, and the generation of adversarial images using different types of adversarial attack algorithms are discussed in Section III. Results obtained from our experiments on different classical and hybrid classical-quantum models are included in Section IV. The study is summarized in the conclusion, Section V.
## II Literature Review
Recent developments in computing technology and the availability of powerful GPUs have led to the extensive use of machine learning models in different sectors. Due to the availability of large collections of health datasets, machine learning models are widely used in health fields such as gastroenterology, ophthalmology, pathology and dermatology for diagnosis and decision-making. Research on adversaries in various applications of machine learning has shown that almost all deployed machine learning models are extremely vulnerable to
adversarial attacks. Formally, the term 'adversarial input' was first described in 2004 by Dalvi et al., when they designed a framework to defend a spam classifier against adversarial manipulation by spammers [5].
Finlayson et al. [6] used different white-box and black-box attacks to generate adversarial perturbations. The experiment was performed on three use cases of medical image classification: fundoscopy, chest X-ray and dermoscopy. An attack success rate of up to \(100\%\) with a confidence score of \(100\%\) was achieved in the experiment. An adversarial experiment performed on a real-time smart healthcare system deteriorated the performance of the system [7]. The experiment used 4 different black-box and white-box attack methods to generate adversarial perturbations. There was a significant drop in classification accuracy under both targeted and untargeted attacks. The highest success rate achieved under adversarial attacks was \(15.68\%\). An adversarial experiment performed on the ISIC dataset shows that there is a huge difference between the classification accuracy with and without adversarial perturbations [8]. Selvakkumar et al. used a pre-trained VGG19 transfer learning model for binary image classification. The study used the Fast Gradient Sign Method (FGSM) algorithm for adversarial image generation, which dropped the classification accuracy from \(88\%\) to \(11\%\).
There are several techniques that can be used for **adversarial attacks** on machine learning models. These threat models are categorized into black-box and white-box adversarial attack methods. In a white-box attack, the attacker has information about the deployed model, such as its inputs, architecture, internal gradients, weights and other parameters, while in a black-box attack the attacker has no access to such parameters. Some of the most common adversarial attack techniques are listed in TABLE I below.
In this experiment, we generate adversarial perturbations using only gradient-based attack methods and boundary attacks.
**Gradient-based attacks**: In gradient-based attacks, an attacker computes the gradients of the model with respect to the input data and then modifies the input data to maximize the loss function. This can be achieved using techniques such as the fast gradient sign method, the projected gradient descent method, or the momentum iterative method.
**Boundary attack**: In boundary-based attacks, an attacker generates a series of inputs that lie near the decision boundary of the model, and then perturbs these inputs in such a way that they are misclassified by the model. This technique can be more effective than other techniques because it does not require knowledge of the model's parameters or gradients. The DeepFool attack is an example of such an attack.
_Figure 1 represents the three white-box attacks used in this study._
## III Method/ Framework
### _Original Data Images_
Various research has been done on the automated classification of histopathological cancer using different datasets. For our experiment, we used the benchmark dataset known as PatchCamelyon (PCam) [21]. This is a large-scale patch-level data set derived from the Camelyon16 [22] data.
The aggregate of the patches makes up the slide-level image, which can be used to predict the likelihood of metastases and to stage the cancer. Examples of patch data samples showing the likelihood of cancer are shown in Fig. 2. The data set contains a total of 327,680 images; however, we used 10k images in this work.
### _Classical and Hybrid Classical-Quantum Binary Image Classification Models_
Convolutional Neural Networks (CNNs) are widely used in image-related operations due to their formidable performance. Instead of designing and training neural networks from scratch, different pre-trained transfer learning models [23] are used to enhance image classification performance. In our experiment, we used well-known transfer learning models such as VGG16
Fig. 1: Three different types of white box adversarial attacks analyzed in this study
Fig. 2: Data samples: non-cancerous and cancerous images
[24], InceptionV3 [25], ResNet18 [26] and AlexNet [27]. These models are trained on the ImageNet dataset with 1000 target categories. The initial layers of pre-trained models can act as feature extraction layers when customizing image classification tasks for new datasets. In our experiment, we designed the classical and hybrid classical-quantum neural networks using the pre-trained transfer learning models mentioned above. These networks are fine-tuned by replacing the final fully connected layer with a classical or quantum layer while keeping the weights of the initial layers frozen for feature extraction.
For the classical model, 'N' features are extracted from the input image using the initial layers of a specific pre-trained transfer learning model. These features are fed into a hidden layer consisting of a fully connected layer of 'N' neurons with an activation function such as ReLU or sigmoid. Finally, an output layer is introduced with as many neurons as target classes, which in our study is two. This gives an architecture comparable to the hybrid classical-quantum models, since there we introduce the quantum layer between the fully connected hidden layer and the output layer.
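A minimal PyTorch sketch of this classical setup is shown below; it is illustrative only, and the choice of ResNet18 and the layer sizes are assumptions for the example rather than a verbatim reproduction of our training code.

```python
# Illustrative sketch of the classical transfer-learning classifier
# (hyper-parameters here are assumptions, not the exact ones used in the paper).
import torch
import torch.nn as nn
from torchvision import models

n_features, n_classes = 512, 2          # ResNet18 outputs 512 features; 2 target classes

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():         # freeze the feature-extraction layers
    p.requires_grad = False

backbone.fc = nn.Sequential(            # replace the final fully connected layer
    nn.Linear(n_features, n_features),  # hidden layer of N neurons
    nn.ReLU(),
    nn.Linear(n_features, n_classes),   # output layer with 2 neurons
)

logits = backbone(torch.randn(1, 3, 224, 224))   # -> shape (1, 2)
```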
For the hybrid classical-quantum model, a Quantum Neural Network (QNN) layer based on variational quantum circuits (VQC) is sandwiched between two classical neural network layers [28]. The features extracted by the initial layers are reduced to between 2 and 8, since an equivalent number of qubits is initialized for the quantum layer and the features are embedded in its quantum state. This is done keeping various quantum hardware constraints in mind. These features are equal in number to the qubits used in the VQC and are the inputs to the QNN layer. The outputs from the QNN layer are fed into the final output layer.
From the model architecture shown in Fig. 4, we can observe that the quantum operation is performed in three different parts of the QNN layer. The first is the embedding layer, which is responsible for mapping the classical feature vector into a quantum state. To map classical data into a quantum state, single-qubit gates such as the Hadamard gate, the U1, U2 and U3 gates, and the rotational X, Y and Z gates are used. The next part is the variational quantum circuit, which is a concatenation of quantum layers of depth 'd'. Two-qubit gates such as controlled-Z, controlled-NOT and controlled-RX are used together with parameterized single-qubit gates to design the parameterized circuits. The next step is to map the outputs of the quantum circuit back to the classical domain. For this, the expectation values of the quantum circuit are measured in one of the X, Y or Z bases. The result of this layer is the input to the next classical layer, which is the output layer. In our case, the output layer is a fully connected layer with two neurons for binary classification with sigmoid activation.
In our experiment, we used pre-trained transfer learning models from torchvision [29] and designed the classification models using PyTorch [30]. For the hybrid classical-quantum models, the circuit is designed using PennyLane [31], and the integration of the quantum node with the classical PyTorch layers is
Fig. 4: Model Architecture: Hybrid Classical-Quantum
Fig. 5: VQC-1 used in our hybrid classical-quantum model, which achieved the highest classification accuracy on the 1000-image subset of data
Fig. 3: Model Architecture: Classical
Fig. 6: VQC-6 used in our hybrid classical-quantum model, which achieved the highest classification accuracy on the 10000-image subset of data
done using the TorchLayer class of the qnn module from PennyLane. The quantum circuits are executed on the PennyLane default simulator.
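The snippet below is a minimal, hedged sketch of how such a quantum layer can be attached to a frozen feature extractor with PennyLane's TorchLayer; the particular embedding template, entangling template, qubit count and depth are illustrative placeholders and do not reproduce the exact VQC-1 to VQC-6 circuits of Figs. 5-6.

```python
# Minimal sketch of the dressed quantum layer (embedding -> variational layers
# -> Pauli-Z expectation values) integrated with PyTorch via qml.qnn.TorchLayer.
# The template choices, n_qubits and depth below are illustrative assumptions.
import pennylane as qml
import torch.nn as nn

n_qubits, depth = 4, 6
dev = qml.device("default.qubit", wires=n_qubits)   # PennyLane default simulator

@qml.qnode(dev, interface="torch")
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))          # embedding layer
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))   # variational layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (depth, n_qubits)}
quantum_layer = qml.qnn.TorchLayer(qnode, weight_shapes)

model_head = nn.Sequential(
    nn.Linear(512, n_qubits),   # classical pre-processing: 512 features -> n_qubits
    quantum_layer,              # QNN layer executed on the default.qubit simulator
    nn.Linear(n_qubits, 2),     # classical post-processing: 2-class output
)
```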
### _Preparing Adversarial Images_
Deep learning models based on CNNs have achieved high accuracy in histopathological cancer detection [21][22]. However, these classification models are highly vulnerable to different kinds of adversarial attacks, which lead to unexpected predictions with high confidence scores. The analysis of adversarial attacks on such models helps to better estimate the reliability of the classifier model and to design methods to defend against such attacks. The general scenario of an adversarial attack on an image classification model is depicted in Figure 7.
Under normal conditions, the input image \(X_{i}\) is fed into the classifier \(C\), which gives the output \(Y_{i}\), the predicted target class corresponding to the input sample \(X_{i}\). Under an adversarial attack, the input sample is intentionally adulterated with crafted noise, commonly known as an adversarial perturbation. The noise added to the images is unobtrusive to humans, but it leads the classifier to misclassify the input sample with a high confidence score.
In this work, we have evaluated the performance of different classical and hybrid classical-quantum models under three types of white-box adversarial attacks: the Fast Gradient Sign Method (FGSM) attack, the DeepFool attack and the Projected Gradient Descent (PGD) attack. For the generation of adversarial images, we used the untargeted attack setting. In the FGSM attack, adversarial images are generated using the sign of the gradient [10]. Input images are fed into the classifier to generate the prediction and the loss. Then the gradient of the loss is calculated with respect to the input, and the sign function is applied to this gradient. The process of generating adversarial perturbations using FGSM is expressed in Equation 1.
\[X_{adv}=X+\epsilon*sign(\nabla_{x}\mathcal{L}(C,X,Y)) \tag{1}\]
where,
\(X_{adv}\) = the generated adversarial image
\(\epsilon\) = perturbation coefficient, which is small enough to be undetectable by the human eye yet large enough to fool the classifier
\(\mathcal{L}\) = loss function for classifier \(C\) with input \(X\) and target \(Y\)
The PGD attack is another variant of gradient-based attack. Perturbations are generated by running FGSM multiple times with a small step size, and the adversarial values are clipped after each step to the predefined perturbation constraint [32][33]. The DeepFool attack algorithm uses decision boundaries to generate perturbations [16]. The algorithm iteratively computes the gradient of the classification model's output with respect to the input image and then determines the direction of the gradient that leads to the smallest change in the image classification. This process is repeated until the prediction label of the input changes or the maximum number of iterations is reached.
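For concreteness, a minimal PyTorch sketch of untargeted FGSM, following Equation 1, and of PGD is given below; it is an illustration of the attack recipes under the assumption of inputs normalized to [0, 1], and the step size and iteration count are not the settings used for Table II.

```python
# Hedged sketch of untargeted FGSM (Eq. 1) and PGD; step size alpha and the
# number of steps are illustrative, not the settings used in this study.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # X_adv = X + eps * sign(grad_x L), kept in the valid image range [0, 1]
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha=0.01, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```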
Figure 8 shows samples of adversarial images under different values of epsilon (\(\epsilon\)); '\(\epsilon\)' is the perturbation coefficient used in each of the adversarial attack algorithms.
## IV Results
We evaluated the performance of different classical and hybrid classical-quantum binary image classification models with and without adversarial perturbations. For the classical computation, we created 4 models, of which 2 are custom-defined convolutional neural networks and the other 2 are ResNet18 transfer-learning-based models. For the hybrid classical-quantum transfer learning models, we used 4 widely used pre-trained transfer learning models: VGG16, ResNet18, AlexNet and InceptionV3. We created and trained 7 hybrid classical-quantum models with 6 different VQCs. These classical and hybrid models are trained and tested on different subsets of data.
Table II outlines the performance of the different classical and hybrid classical-quantum models from our experiment. Column I lists the names of the models. The computation type of each model architecture is mentioned in column II; the computation is either classical or hybrid classical-quantum. Column III represents the number
Fig. 8: Adversarial Images: each row contains different types of attacks and each column contains perturbed images under different values of epsilon(\(\epsilon\))
Fig. 7: Adversarial Attack Scenario: \(X_{i}\) is the input image, \(Y_{i}\) is the prediction without attack and \(Y_{p}\) is the prediction with adversarial perturbation
of images included in each subset of data. We used 3 subsets of data with 1000, 5000 and 10000 images. The subset with 1000 images was split into train and test sets at an 80:20 ratio. The larger subsets were split into train, validation and test sets at a 60:20:20 ratio. The test accuracy achieved by each of these models without adversarial perturbations is included in column IV. Column V lists the VQCs used in our hybrid classical-quantum transfer learning models. The expressibility [34] and the number of qubits of the VQC in column V are given in columns VI and VII, respectively. The different values of the perturbation coefficient (\(\epsilon\)) used in each of the attack models are included in column VIII. In our experiment, we evaluated the performance of each classical and hybrid classical-quantum model under 3 perturbation coefficients: 0.05, 0.15 and 0.25. The accuracy of each model tested on adversarial samples generated using FGM, DeepFool and PGD with perturbation coefficient (\(\epsilon\)) is listed in columns IX, X and XI, respectively.
### _Experimental results of classical models_
From Table II we can see that for the classical models without adversarial perturbation, the classification accuracies of the CNN models with transfer learning are higher than those of the classical models without transfer learning. The highest classification accuracy achieved by the classical models is 89.5 percent, obtained by the ResNet18 transfer-learning-based model trained on the subset of data with 10000 images. In contrast, when evaluating the classical models with adversarial perturbations, the models trained without pre-trained transfer learning models achieved higher classification accuracies. Under the adversarial perturbations generated using the FGM attack, the highest classification accuracy is 52.9 percent, obtained by the classical CNN model trained on the subset of 5000 images. Using the DeepFool attack to generate perturbations, the highest classification accuracy of 44.0 percent is achieved by the classical CNN model trained on the subset of 10000 images. Under the PGD attack, the classical model trained on the subset of 5000 images achieved the highest classification accuracy, which is 61.3 percent.
### _Experimental results of Hybrid classical-quantum models_
For the hybrid classical-quantum models, we first evaluated the performance of each transfer learning model on a small subset of images (1000 images). The model with the highest classification accuracy was then selected for training with a larger number of images. In our experiment, the ResNet18 transfer learning model outperformed the other 3 transfer learning models. The hybrid model
trained on the subset of 1000 images achieved the highest classification accuracy, which is found to be 88.50 percent without adversarial perturbations. For the subset with 10000 images, the highest classification accuracy without adversarial perturbations is 84.30 percent. A classification accuracy of 77.75 percent under the FGM attack is achieved by the hybrid classical-quantum model with VQC-1. The hybrid classical-quantum model based on the ResNet18 transfer learning model with VQC-1 outperformed the other hybrid models under the FGM and DeepFool adversarial attacks. Under the FGM and DeepFool attacks, the classification accuracies achieved are 77.75 (\(\epsilon\)=0.25) and 48.80 (\(\epsilon\)=0.25), respectively. Under the PGD attack, the hybrid classical-quantum model with VQC-6 achieved the highest classification accuracy, which is found to be 55.65 (\(\epsilon\)=0.05).
## V Conclusion and Future Work
Machine learning models have achieved state-of-the-art performance on different medical image-related operations such as image classification and image segmentation. However, the use of these models in medical sectors is extremely vulnerable to different kinds of malicious attacks, commonly known as adversarial attacks. In this work, we explored the impact of adversarial attacks on different classical and hybrid classical-quantum image classification models for histopathological cancer detection. For the evaluation of classical models, we chose 4 classical models trained on varying subsets of images. Similarly, we chose 7 hybrid classical-quantum models to evaluate their performance under different adversarial attacks. The experiments we performed show that both the classical and hybrid classical-quantum models deployed are highly vulnerable to adversarial attacks. However, the success rate of the defence of the hybrid classical-quantum models against such adversarial perturbations is higher than that of the classical classification models.
Currently, we have performed our experiments on the default quantum simulator from PennyLane. As future work, we plan to test the impact of different adversarial attacks on real quantum hardware. The current experiments show that there is potential to develop models resilient to different kinds of malicious attacks using the hybrid classical-quantum architecture.
|
2306.17399 | Japanese Lexical Complexity for Non-Native Readers: A New Dataset | Lexical complexity prediction (LCP) is the task of predicting the complexity
of words in a text on a continuous scale. It plays a vital role in simplifying
or annotating complex words to assist readers. To study lexical complexity in
Japanese, we construct the first Japanese LCP dataset. Our dataset provides
separate complexity scores for Chinese/Korean annotators and others to address
the readers' L1-specific needs. In the baseline experiment, we demonstrate the
effectiveness of a BERT-based system for Japanese LCP. | Yusuke Ide, Masato Mita, Adam Nohejl, Hiroki Ouchi, Taro Watanabe | 2023-06-30T04:37:43Z | http://arxiv.org/abs/2306.17399v1 | # Japanese Lexical Complexity for Non-Native Readers: A New Dataset
###### Abstract
Lexical complexity prediction (LCP) is the task of predicting the complexity of words in a text on a continuous scale. It plays a vital role in simplifying or annotating complex words to assist readers. To study lexical complexity in Japanese, we construct the first Japanese LCP dataset. Our dataset provides separate complexity scores for Chinese/Korean annotators and others to address the readers' L1-specific needs. In the baseline experiment, we demonstrate the effectiveness of a BERT-based system for Japanese LCP.
## 1 Introduction
Reading comprehension requires a certain level of vocabulary knowledge. The results reported by Hu and Nation (2000) suggest that most English learners need to understand 98% of tokens in a text to comprehend it. A follow-up study by Komori et al. (2004) estimates the percentage to be 96% for Japanese learners to comprehend text. Acquiring vocabulary to reach such levels, in turn, is a lengthy and challenging task for learners. This opens up opportunities for assistive applications, such as simplification or annotation of complex words. The first step necessary for such applications is to predict the complexity of the words. The task of **lexical complexity prediction (LCP)** is defined as predicting how difficult to comprehend words or phrases in a text are on a continuous scale (Shardlow et al., 2020). This differentiates LCP from complex word identification (CWI), i.e., binary classification of complex words (Yimam et al., 2018). As complexity is naturally perceived as continuous, a continuous scale used in LCP allows to represent it without loss of information.
The LCP research so far has been limited to English, for which two LCP datasets have been constructed (Shardlow et al., 2020, 2022), and no such dataset has been created for Japanese. Meanwhile, there are a number of features specific to the Japanese language that could affect lexical complexity, and their effects have yet to be studied. For example, the Chinese characters, which are used extensively in Japanese, lower text readability (Tateisi et al., 1988).
Previous studies on Japanese lexical complexity used pedagogical word lists to estimate complexity level. Nishihara and Kajiwara (2020) modeled lexical complexity of words based on the Japanese Educational Vocabulary List (Sunakawa et al., 2012). The word list assigns a degree of difficulty to each item, based on the subjective judgment of Japanese language teachers, not learners themselves, and does not consider the learners' L1 background.
In light of this, we present JaLeCoN1, Dataset of **J**apanese **L**exical **C**omplexity for **N**on-Native Readers. Our dataset has the following key features:
Footnote 1: JaLeCoN is available at [https://github.com/naist-nlp/jalecon](https://github.com/naist-nlp/jalecon).
1. Complexity scores for single words as well as multi-word expressions (MWEs);
2. Separate complexity scores from Chinese/Korean annotators and others, addressing the considerable advantage of the former in Japanese reading comprehension.
Our analysis reveals that the non-Chinese/Korean annotators perceive words of Chinese origin or containing Chinese characters as especially complex. In the baseline experiment, we investigate the effectiveness of a BERT-based system in the Japanese LCP task, and how it varies according to the word complexity and L1 background.
## 2 Task Setting
Since Japanese has no explicit word boundaries, word segmentation is the first prerequisite for LCP. We use short unit words (**SUWs**) as the basic word unit, combining them into longer word units in the case of multi-word expressions (**MWEs**):
**SUW:** SUWs consist of one or two smallest lexical units Ogura et al. (2011), and are commonly used for segmentation of Japanese.
**MWE:** We understand MWEs as multi-_SUW_ expressions that are semantically opaque or institutionalized (see Appendix C) and consequently may have higher complexity than their components. We identify MWEs either using long unit word (LUW)2 segmentation, or manually (see Section 3).
Footnote 2: The LUW is defined as a syntactic word by Omura et al. (2021).
Consequently, a **word**, can be either an SUW or an MWE (see Figure 1 for examples).
A **complexity score** represents perceived complexity based on the annotators' judgment on a scale from 0 (least complex) to 1 (most complex). We exclude proper nouns from our target because their complexity is influenced by factors unrelated to reading proficiency or vocabulary knowledge.3
Footnote 3: Sequences containing segmentation errors are also excluded (see Appendix D).
We annotate the words in an **in-context dense** setting. In-context here means including both intra-sentence and extra-sentence context of each word. Context is important for lexical complexity for two reasons Gooding and Kochmar (2019); Shardlow et al. (2021): (1) As polysemous words can have different complexity levels for each sense, context is necessary to differentiate between possible meanings of these words. (2) Presenting a word without context could increase its complexity. In particular, the recognition of abstract words relies on context Schwanenflugel et al. (1988). **Dense** means annotating each word of the text with a complexity label, instead of annotating one specific word in each sentence Shardlow et al. (2022). We adopt the dense setting to avoid any bias that could arise from targeting specific words.
## 3 Construction of JaLeCoN
In order to include both written and spoken language and a variety of vocabulary, we sourced texts from two different genres:
**News** comes from the Japanese-English data of the WMT22 General Machine Translation Task Kocmi et al. (2022). It contains a variety of news texts written for the general Japanese reader.
**Government** is composed of press conference transcripts from Japanese ministries or agencies.4
Footnote 4: The transcripts were retrieved from the websites of five organizations: JMA, JTA, MOJ, MOFA, and MLHW.
The whole dataset is composed of sequences of sentences constituting either the beginning of an article (News) or a question-answer pair (Government). We restricted the length of the sequences to at least 6 and at most 11 sentences to obtain similar amounts of text, and presented each sequence as a whole for annotation.
### Word Segmentation
We used Comainu 0.80 Kozawa et al. (2014) to perform two-level segmentation. The low-level SUW segmentation was done using MeCab Kudo et al. (2004), a Japanese morphological analyzer, and the UniDic 2.3.0 Den et al. (2007) dictionary. At the second level, Comainu chunked the SUWs into LUWs. Based on the two segmentations, we segmented the text into words as follows:
Footnote 5: [https://github.com/skozawa/Comainu](https://github.com/skozawa/Comainu)
1. If an LUW is a noun, we use the constituting SUWs as words. Transparent noun compounds are ubiquitous in Japanese (e.g., "next meteorological satellite"), and we do not consider them MWEs.
2. If an LUW is not a noun, we use the LUW as a word. Such an LUW may be a single SUW, or a sequence of SUWs, which we consider an MWE. Such MWEs most importantly include functional words, such as compound particles (e.g., "about") and auxiliary verbs (e.g., "have to").
We also identified other MWEs manually, as explained in Section 3.3.
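A schematic sketch of this chunking rule is shown below; it assumes SUW and LUW spans (with LUW part-of-speech tags) have already been produced by MeCab/Comainu, and the data structures are illustrative rather than the exact ones used to build the dataset.

```python
# Illustrative sketch of the SUW/LUW-based word chunking rule described above.
# The input format (SUW surfaces per LUW, plus the LUW's POS tag) is an
# assumption; the real pipeline reads MeCab/Comainu output. Manually identified
# MWEs (Section 3.3) are merged in a later step.
from typing import List, Tuple

def chunk_words(luws: List[Tuple[str, List[str]]]) -> List[str]:
    """luws: list of (luw_pos, suw_surfaces) pairs in sentence order."""
    words = []
    for luw_pos, suws in luws:
        if luw_pos.startswith("名詞"):      # noun LUW: keep the component SUWs as words
            words.extend(suws)
        elif len(suws) == 1:                # single-SUW LUW: one word
            words.append(suws[0])
        else:                               # multi-SUW, non-noun LUW: treat as an MWE
            words.append("".join(suws))
    return words
```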
Figure 1: Example of text segmented as SUWs and as words (either \(\underline{\text{SUW}}\) or \(\underline{\text{MWE}}\)). Semantically opaque sequences are chunked into MWEs. Abbreviations in glosses: ADVerbializer, GERund, PReSent, PRoGressive.
### Complexity Annotation
To capture the lexical complexity for a non-native Japanese reader with intermediate or advanced reading ability, we recruited 15 annotators per sentence with Japanese reading proficiency ranging from CEFR (Common European Framework of Reference for Languages) level B1 to C2. We required at least intermediate proficiency, as it has been shown that complexity judgments made by intermediate or advanced learners can be used to adequately predict the needs of beginners but not vice versa Gooding et al. (2021). The proficiency levels were self-reported (see Appendix A for details). We used the annotations made by 14 of them, after removing one outlier, whose annotations had over 70% higher mean than those of any other annotator, clearly not corresponding to the reported reading proficiency.
Approximately half of the annotators we recruited have a Chinese/Korean L1 background (CK).7 CK learners have a considerable advantage in comprehension of words of Chinese origin, which also form a large part of Chinese and Korean vocabulary Koda (1989).
Footnote 7: On average, the CK annotators reported higher Japanese reading proficiency than the non-CK (see Appendix A).
The annotators were asked to assign one of the following labels to each span if they find it complex: 3 (Very Difficult), 2 (Difficult), or 1 (Not Easy); otherwise the annotators were to leave the span unlabeled and we interpreted it as 0 (Easy).8 Annotators could label a span of any length if it was complex as a whole, but were asked to create as short a span as possible. To calculate the average, the labels were converted to numerical values as follows: 3 \(\rightarrow\) 1, 2 \(\rightarrow\) 0.67, 1 \(\rightarrow\) 0.33, 0 \(\rightarrow\) 0. The averaging hinges on the assumption that the labels have an equal distance between them. We always presented the labels together with the values 0 to 3 to reinforce the perception of equal distance.
Footnote 8: See Appendix B for detailed definitions of each label.
### MWE Annotation
In parallel with the complexity annotation, we annotated MWEs not identified by LUW segmentation (see Section 3.1). Given the absence of an MWE detector for Japanese of sufficient quality, the annotation was performed manually by a native Japanese speaker and a non-native speaker with a degree in the Japanese language. The expression categories we consider MWEs are described in Appendix C.
### Complexity Scoring
Using annotations from the previous steps, we assigned complexity scores to words according to the following rules:
1. If a span contains one or more words, each word receives the complexity value of the span.
2. If an MWE (manually annotated according to Section 3.3) overlaps with or contains multiple spans, the MWE receives the maximum of the complexity values of the spans.
Finally, for each word, we calculated the complexity score for each L1 group as the average of the individual values from the annotators in that group.
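The scoring procedure can be summarized by the sketch below (illustrative Python; the span and word representations are assumptions for the example, not the released preprocessing code).

```python
# Illustrative sketch of the span-to-word scoring rules and per-group averaging.
# Spans and words are represented by (start, end) character offsets, which is
# an assumption for this example.
LABEL_VALUE = {3: 1.0, 2: 0.67, 1: 0.33, 0: 0.0}

def word_value(word_span, annotated_spans):
    """Max labeled value over spans overlapping the word or MWE (0 if none)."""
    values = [LABEL_VALUE[label] for span, label in annotated_spans
              if span[0] < word_span[1] and word_span[0] < span[1]]
    return max(values, default=0.0)

def complexity_score(word_span, annotations_by_group):
    """Average the per-annotator values within one L1 group (CK or non-CK)."""
    vals = [word_value(word_span, spans) for spans in annotations_by_group]
    return sum(vals) / len(vals)
```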
## 4 Statistics and Analysis
Overall statistics for both genres and L1 groups are shown in Table 1.9 MWEs have higher mean complexity than single words for both L1 groups and are more frequent in the Government genre. There is a tendency towards perceiving higher complexity in the non-CK group, which corresponds to slightly lower average Japanese proficiency of the non-CK annotators (see Appendix A).
Footnote 9: See Appendix E for the complexity scores and annotation distributions of several words in the non-CK group.
We measured inter-annotator agreement (IAA) using Krippendorff's \(\alpha\) for interval values Krippendorff (1970). The IAA is 0.32 in the CK group, and 0.31 in the non-CK group, while it would be 0.19 if we merged the groups. As lexical complexity is
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & & & & \multicolumn{2}{c}{CK} & \multicolumn{2}{c}{Non-CK} \\ \cline{5-8} Genre & Sentences & Words & MWE Ratio & All Words & MWEs & All Words & MWEs \\ \hline News & 400 & 10,256 & 7.9\% &.009 &.020 &.024 &.072 \\ Government & 200 & 7,964 & 14.4\% &.005 &.009 &.028 &.047 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of JaLeCoN. The CK and Non-CK columns show the mean complexity scores by L1 group.
highly subjective Gooding et al. (2021), the low agreement does not imply low reliability, but it indicates that perception of complexity is more alike within the L1 groups than across all annotators.
The complexity score distribution in each L1 group is shown in Figure 2. No words achieved a score greater than 0.81 and 0.86 in the CK and non-CK groups, respectively, which reflects that words are rarely labeled as Difficult or Very Difficult by all annotators in a group.
In addition to the aforementioned difference in proficiency, there is also a clear difference in how the two L1 groups perceive complexity of words based on their origin and whether they contain Chinese characters10, as analyzed in Table 2. For the CK group, the mean complexity of words of Japanese and Chinese origin was similar. For the non-CK group, however, words of Chinese origin were markedly more complex (0.062) than words of Japanese origin (0.010), and both categories of words were more complex when they contained Chinese characters.11
Footnote 10: Japanese vocabulary consists of words of Japanese origin, Chinese (Sino-Japanese) origin, and foreign words from other languages (_gairaigo_). The first two categories can be written using Chinese characters (_kanji_), Japanese syllabary (_kana_), or a combination thereof, while other foreign words are usually written in syllabary only. (See Appendix F for examples.)
## 5 Experiments
The newly created dataset can be used to evaluate performance of LCP for non-native Japanese readers of different L1 backgrounds (CK and non-CK). We developed a baseline system based on a fine-tuned BERT Devlin et al. (2019) model, and evaluated it using cross-validation. We fine-tuned a Japanese pre-trained BERT model released by Tohoku University, namely the base model for UniDic Lite segmentation12.
Footnote 12: Available from [https://huggingface.co/cl-tohoku/bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2).
For each word \(w\) in our dataset and the sentence \(s\) that contains it at token indices \(i\) to \(j-1\), we construct an input sequence ([CLS], \(s_{0}^{i-1}\), <Unused1>, \(w\), <Unused2>, \(s_{j}^{|s|-1}\), [SEP], \(w\), [SEP]). The target word occurs first delimited by unused tokens (<Unusedn>) in the sentence context, and then on its own following the first [SEP] token.13 To predict the complexity score, we feed the final hidden representation of the [CLS] token into a linear layer with a single output. A similar fine-tuning approach, but without the special tokens, was used for English LCP by Taya et al. (2021), achieving one of the highest \(R^{2}\) values in the single-word subtask of SemEval-2021 Task 1 Shardlow et al. (2021).
Footnote 13: Due to a different segmentation (version of UniDic) used by Tohoku BERT and our dataset, we have to enforce segmentation at the word’s boundaries using spaces.
We fine-tune and evaluate models for CK and non-CK complexity separately. See Appendix G for the hyperparameters and cross-validation scheme.
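A hedged sketch of the input construction and regression head is shown below; the exact unused-token strings and the word-boundary handling depend on the Tohoku BERT vocabulary and on our segmentation, so the [unused1]/[unused2] placeholders and the character-offset marking are assumptions rather than the verbatim implementation.

```python
# Illustrative sketch of the LCP fine-tuning setup: the target word is marked
# in context with unused tokens, then repeated after [SEP]; the [CLS] vector
# feeds a single-output regression head. Token strings and the character-offset
# marking are assumptions for this example.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL = "cl-tohoku/bert-base-japanese-v2"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def build_input(sentence, start, end):
    word = sentence[start:end]
    marked = f"{sentence[:start]} [unused1] {word} [unused2] {sentence[end:]}"
    return marked, word          # encoded as a text pair: (marked context, word)

class LCPRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = AutoModel.from_pretrained(MODEL)
        self.head = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, **enc):
        cls = self.bert(**enc).last_hidden_state[:, 0]   # [CLS] representation
        return self.head(cls).squeeze(-1)                # predicted complexity score

marked, word = build_input("気象衛星を打ち上げた。", 0, 4)
enc = tokenizer(marked, word, return_tensors="pt")       # [CLS] ... [SEP] word [SEP]
score = LCPRegressor()(**enc)
```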
The results are reported in Table 3. In addition to \(R^{2}\) (coefficient of determination)14, we report the mean absolute error (MAE) by complexity score tier to draw the full picture of the models' performance at different complexity levels. The score ranges of the
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Japanese} & \multicolumn{2}{c}{Chinese} & \multicolumn{2}{c}{Other} \\ \cline{2-7} & All & CC & All & CC & All & CC \\ \hline CK &.003 &.009 &.004 &.004 &.071 &.000 \\ Non-CK &.010 &.032 &.062 &.072 &.007 &.143 \\ \hline Frequency & 52\% & 10\% & 26\% & 22\% & 4\% & 0\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean complexity (by L1 group) and frequency, according to (1) word origin: Japanese (_wago_), Chinese/Sino-Japanese (_kango_), and Other (_gairaigo_, borrowings from languages other than Chinese), and (2) whether the words contain Chinese characters only (denoted by CC). The origin was classified using MeCab and Comainu (see Section 3.1), excluding words of mixed or unknown origin.
Figure 2: Histogram of complexity scores by L1 group.
tiers are centered at annotation values as illustrated in Figure 3. We handle zero as a special tier, and merge Very Difficult with Difficult due to a low number of words.
The fine-tuned BERT model for CK and Non-CK achieves \(R^{2}\) of 0.4351 and 0.6142, respectively. For both L1 groups, the MAE value increases markedly in each successive complexity tier, as the number of training examples (shown in Table 4) diminishes. Similarly, the CK model achieves lower error than non-CK only in tier Zero, where it has more examples available than the non-CK model. This suggests that the scarcity of words with complexity above zero is a factor contributing to worse performance on CK data, as measured by \(R^{2}\).
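For reference, a minimal sketch of this evaluation protocol (an illustration, not the authors' scripts) might look as follows, with gold and predicted scores as NumPy arrays in \([0,1]\) and the tier boundaries taken from Figure 3.

```python
# Sketch of the evaluation: R^2 plus MAE restricted to gold-score tiers (Figure 3).
import numpy as np

TIERS = {
    "Zero": (0.0, 0.0),
    "Easy >0": (0.0, 0.165),
    "Not Easy": (0.165, 0.5),
    "(Very) Difficult": (0.5, 1.0),
}

def r_squared(gold, pred):
    ss_res = np.sum((gold - pred) ** 2)
    ss_tot = np.sum((gold - np.mean(gold)) ** 2)
    return 1.0 - ss_res / ss_tot

def mae_by_tier(gold, pred):
    out = {}
    for name, (lo, hi) in TIERS.items():
        mask = gold == 0.0 if name == "Zero" else (gold > lo) & (gold <= hi)
        if mask.any():
            out[name] = float(np.mean(np.abs(gold[mask] - pred[mask])))
    return out
```

In the cross-validation setting of Table 3 these quantities would be averaged over the five folds.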
## 6 Conclusion
In this paper, we presented the first dataset for Japanese LCP. It provides separate complexity scores based on the CK/non-CK distinction of annotators' L1 background. Our analysis corroborates our conjecture that special consideration of L1 background is useful for the Japanese LCP task in particular. We believe it could benefit LCP in other languages as well.
In the baseline experiment, we demonstrated the efficacy of our BERT-based system for both CK and non-CK readers. Even after separating CK and non-CK annotators, however, notable inter-annotator disagreement remains within these groups. Therefore personalized systems analogous to Gooding and Tagut (2022) could improve on our system. Future research should study this possibility, analyzing both its costs and benefits.
Models trained on JaLeCoN can be used as part of a lexical simplification pipeline for Japanese, both to identify complex words and to rank candidate simplifications. JaLeCoN itself can be further used as a basis for a lexical simplification dataset targeting words actually perceived as complex, similar to TSAR-ST datasets for English and Spanish Stajner et al. (2022).
## Limitations
Our task setting and baseline system requires that the input is already segmented into words including MWEs. The MWE identification step in the construction process of our dataset involved time-consuming manual annotation. Building a high-quality system that fully automates the process is an issue for future work. Our dataset can be used to evaluate such a Japanese MWE identification system.
Additionally, as shown in Section 5, our baseline model performed relatively poorly in the higher complexity tiers. This is an effect of the dense annotation setting; it results in uneven distributions of
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Zero & Easy \(>0\) & Not Easy & (Very) Difficult \\ \hline CK & 17,563 & 393 & 223 & 41 \\ Non-CK & 15,209 & 2,067 & 837 & 107 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Word counts in the whole dataset by L1 group and MAE tier.
Figure 3: Illustrated score ranges of the MAE tiers: \(\{0\}\) for Zero, \((0,0.165]\) for Easy \(>0\), \((0.165,0.5]\) for Not Easy, and \((0.5,1]\) for (Very) Difficult.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{4}{c}{MAE by Gold Complexity Score Tier} \\ \cline{2-6} & Zero & Easy \(>0\) & Not Easy & (Very) Difficult & \(R^{2}\) \\ \hline CK & 0.0034 & 0.0676 & 0.1913 & 0.2954 & 0.4351 \\ Non-CK & 0.0066 & 0.0510 & 0.1169 & 0.2932 & 0.6142 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of the fine-tuned BERT model by L1 group (means over 5 cross-validation folds).
complexity as shown in Figure 2, where easy words greatly outnumber difficult words. One possible solution would be creating another LCP dataset using sparse annotation, where target words are selected using frequency bands so that the words are distributed across a wide range of frequency (Shardlow et al., 2022). Our data could provide insights as to what kind of words should be targeted by sparse annotation for such a dataset.
## Acknowledgments
We would like to express our gratitude to Justin Vasselli and the anonymous reviewers for their insightful feedback. This work was supported by JSPS KAKENHI grant number JP19K20351 and NAIST Foundation.
|
2309.09722 | The analysis of vertex feedback stabilisability of a star-shaped network
of fluid-conveying pipes | It is an outstanding problem whether a pipe-flow system on a star-shaped
network is stabilisable by a feedback control on the common vertex. In the
present paper we deal with this problem. In particular, we study the equation
governing the small vibrations of a stretched elastic pipe conveying fluid in a
star-shaped network and examine the question of vertex feedback stabilisability
of such a system via control moments. Finding an answer to the question is not
straightforward, for the system operator associated with the corresponding
closed-loop system is unbounded and nonselfadjoint. An approach to the study of
the stabilisation problem for the closed-loop system is presented based on the
spectral approach previously introduced by the authors for star graphs of
stretched elastic beams. When the tension in the pipes is greater than the
square of the fluid-flow velocity, we establish a positive result that in fact
gives the strong property of uniform exponential stability of the closed-loop
system. | Xiao Xuan Feng, Gen Qi Xu, Mahyar Mahinzaeim | 2023-09-18T12:35:59Z | http://arxiv.org/abs/2309.09722v3 | Exponential stabilisation of a star-shaped network of fluid-conveying pipes induced via vertex control
###### Abstract.
In this paper we study the equation governing the small vibrations of a stretched fluid-conveying pipe in a star-shaped network. We examine the question of exponential stabilisability of such a system by means of feedback control torques applied at the inner vertex of the system. An approach to the stabilisability problem for the closed-loop system is presented, based on the spectral method previously introduced by the authors for star graphs of beams. When the tension in the pipes is large compared with the square of the fluid-flow velocity, we establish a positive result that gives exponential decay of the energy of solutions of the closed-loop system.
MSC2020: 37L15, 93D23, 37C10, 34B45, 35P10, 47B06
Keywords: pipe conveying fluid, pipe network, spectral problem on a metric star graph, spectral analysis, Riesz basis property, exponential stability
## 1. Introduction and system description
In recent years one very active area of mathematical systems theory has been the investigation of control, or control-related, problems in mechanics and mathematical physics on networks or metric graphs. Much of the research has focused on the problems of controllability and stabilisability of vibrating elastic (string, beam, and plate) networks by insertion of control action into the boundary and vertex conditions, and there are already adequate texts on these subjects, e.g. [2, 4, 9, 17]. The reason for investigating such systems is not difficult to explain: remarkable results are obtained in networked elastic systems when control is exercised. For example, elastic systems that are unstable can be stabilised by boundary or vertex control when a network setup is considered, and vice versa. A conclusive answer as to whether or not all vibrating elastic networks can be stabilised by such control action cannot be expected, of course; the question must be considered individually for each problem. This is especially true when it comes to the treatment of flow-induced oscillation problems on networks and is typical of the particular kind of stabilisability problem we are interested in here.
In a recent paper [10], the authors have studied (among other things) the stabilisation problem for a single vibrating fluid-conveying pipe, represented by the partial differential equation
\[\frac{\partial^{4}w\left(s,t\right)}{\partial s^{4}}-\left(\gamma-\eta^{2} \right)\frac{\partial^{2}w\left(s,t\right)}{\partial s^{2}}+2\beta\eta\frac{ \partial^{2}w\left(s,t\right)}{\partial s\partial t}+\frac{\partial^{2}w \left(s,t\right)}{\partial t^{2}}=0\]
which together with boundary conditions
\[w\left(0,t\right)=\left.\frac{\partial^{2}w\left(s,t\right)}{ \partial s^{2}}\right|_{s=0} =0,\] \[\left.\left(\frac{\partial^{2}w\left(s,t\right)}{\partial s^{2}}+ \kappa\frac{\partial^{2}w\left(s,t\right)}{\partial s\partial t}\right)\right|_ {s=1} =0,\] \[\left.\left(\frac{\partial^{3}w\left(s,t\right)}{\partial s^{3}}- \left(\gamma-\eta^{2}\right)\frac{\partial w\left(s,t\right)}{\partial s} \right)\right|_{s=1} =0\]
and given initial conditions \(w\left(s,0\right),\ \left(\partial w/\partial t\right)\left(s,t\right)\big{|}_{t=0}\) forms what we called in [10] a _closed-loop system_. Here \(w\left(s,t\right)\) for \(0\leq s\leq 1\), \(t\geq 0\), represents the deflection of a thin horizontal pipe of unit length, subjected to a "nonfollowing" external tensile force proportional to a parameter \(\gamma>0\) and carrying the stationary flow of an ideal incompressible fluid with velocity \(\eta\geq 0\), pinned at \(s=0\), and subjected at \(s=1\) to a torque feedback control proportional to \(\kappa\geq 0\). The parameter \(\beta\in\left(0,1\right)\) depends only on the pipe and fluid densities. It was shown that the energy of solutions of the closed-loop system decreases exponentially fast as \(t\to\infty\) via the control for \(\kappa>0\) as long as \(\gamma>\eta^{2}\), the latter restriction meaning that the tension in the pipe exceeds the square of the fluid-flow velocity.
In a different paper [11], vertex feedback stabilisability for a star-network setup of Beck's Problem with damped, pinned-elastic ends is studied (refer to [3, 19] for a physical description of Beck's Problem and its variants). In its essential characteristics - partial differential equations, boundary and vertex conditions - this system is similar to an interpretation as a star-network setup of the above closed-loop system with \(\eta=0\) (i.e. when there is no flow). The purpose of the paper is to show that our approach in [11], suitably modified, can accomplish the same end of studying feedback stabilisability for a star-network setup of the closed-loop system when \(\eta>0\).
(We note that recently Aissa et al. [1] and Khemmoudj [8] have considered an alternative approach to the question of feedback stabilisability in a single pipe conveying fluid, which assumes that the controls are achieved via time-varying boundary feedbacks. For a review of early work on pipe problems, see [13, 14].)
### System description
Let us now precisely describe the system we study in this paper. Referring to Fig. 1 we consider an equilateral 3-edge metric star graph \(\mathbf{G}\coloneqq\left\{\mathbf{V},\mathbf{E}\right\}\), where \(\mathbf{V}\) and \(\mathbf{E}\) are the sets of vertices \(\left\{0\right\}\cup\left\{a_{k}\right\}_{k=1}^{3}\) and edges \(\left\{e_{k}\right\}_{k=1}^{3}\), respectively, such that each edge \(e_{k}\) connecting the inner vertex at the origin \(0\) to the outer vertex \(a_{k}\) is of unit length and is identified with the interval \(0\leq s_{k}\leq 1\). The value \(s_{k}=1\) corresponds to the outer vertices, and \(s_{k}=0\) corresponds to the inner vertex.
Let \(w_{k}\left(s_{k},t\right)\) be the deflection of the edge \(e_{k}\) for \(0\leq s_{k}\leq 1\) at time \(t\geq 0\) from the equilibrium position of the network which we identify with \(\mathbf{G}\). Consider the partial differential equation
\[\frac{\partial^{4}w_{k}\left(s_{k},t\right)}{\partial s_{k}^{4}}-\left(\gamma -\eta^{2}\right)\frac{\partial^{2}w_{k}\left(s_{k},t\right)}{\partial s_{k}^{ 2}}+2\beta\eta\frac{\partial^{2}w_{k}\left(s_{k},t\right)}{\partial s_{k} \partial t}+\frac{\partial^{2}w_{k}\left(s_{k},t\right)}{\partial t^{2}}=0, \quad k=1,2,3. \tag{1.1}\]
As initial conditions we require that
\[w_{k}\left(s_{k},0\right)=g_{k}\left(s_{k}\right),\quad\left.\frac{\partial w _{k}\left(s_{k},t\right)}{\partial t}\right|_{t=0}=h_{k}\left(s_{k}\right), \quad s_{k}\in e_{k},\quad k=1,2,3, \tag{1.2}\]
where the known functions \(g_{k}\), \(h_{k}\) are smooth. For boundary and vertex conditions we proceed as in [11] (to which we refer the reader if an explanation of their physical meaning is desired). The pinned boundary conditions at the outer vertices \(a_{k}\) imply
\[w_{k}\left(1,t\right)=\frac{\partial^{2}w_{k}\left(s_{k},t\right)}{\partial s _{k}^{2}}\Big{|}_{s_{k}=1}=0,\quad k=1,2,3. \tag{1.3}\]
Further, the deflections \(w_{k}\left(s_{k},t\right)\) are continuous at the inner vertex so that
\[w_{j}\left(0,t\right)=w_{k}\left(0,t\right),\quad j,k=1,2,3. \tag{1.4}\]
The remaining connectivity conditions at the inner vertex have to do with the control torques applied to the system, and also with systems which, in the absence of control, may be called "energy-conservative systems". Thus we have
\[\left.\left(\frac{\partial^{2}w_{k}\left(s_{k},t\right)}{\partial s _{k}^{2}}-\alpha\frac{\partial w_{k}\left(s_{k},t\right)}{\partial s_{k}}- \kappa\frac{\partial^{2}w_{k}\left(s_{k},t\right)}{\partial s_{k}\partial t} \right)\right|_{s_{k}=0} =0,\quad k=1,2,3, \tag{1.5}\] \[\sum_{k=1}^{3}\left.\left[\frac{\partial^{3}w_{k}\left(s_{k},t \right)}{\partial s_{k}^{3}}-\left(\gamma-\eta^{2}\right)\frac{\partial w_{k} \left(s_{k},t\right)}{\partial s_{k}}+\beta\eta\frac{\partial w_{k}\left(s_{k},t\right)}{\partial t}\right]\right|_{s_{k}=0} =0, \tag{1.6}\]
where the feedback control parameters \(\alpha,\kappa\geq 0\), conditioned on the restriction \(\gamma>\eta^{2}\), should be chosen so as to exponentially stabilise the resulting closed-loop system (1.1)-(1.6) in an appropriate sense.
We point out that one important difference between these conditions and those in [11] is the "addition" of extra terms \(\beta\eta\left.\left(\partial w_{k}/\partial t\right)\left(s_{k},t\right) \right|_{s_{k}=0}\) to the condition (1.6), which, as we have noted above, states that the system is energy-conservative when there is no control, \(\kappa=0\). In fact, it may be verified readily that a variational derivation (conservative Hamilton's principle) of (1.1) will lead to the vertex condition (1.6). Then (1.6) also has the physical interpretation as the force balance condition at the inner vertex just as in [11].
Although there are several approaches to verification of exponential stabilisability, a direct _spectral approach_ is the natural one to follow. In particular, this approach is used in [10, 11] (see also the many other papers by the second author which appeared over the past years) and should be of considerable interest to the engineer wishing to use only spectral information - location and asymptotics of eigenvalues, typically - for the boundary-eigenvalue problem associated with the closed-loop system to study its stability, as is the case for lumped parameter systems.
For the spectral approach to work for a semigroup formulation of the closed-loop system, in principle, we must show that it has a system operator which has a compact resolvent and whose root vectors (eigen- and associated vectors) form a Riesz basis for the underlying Hilbert state space. Then the familiar Spectrum Determined Growth Assumption (definition at the end of Section 2) is satisfied and one is justified in using spectral information alone as a criterion for exponential stabilisability, as explained in detail in [10, 11] and elsewhere.
Figure 1. A star-shaped network with vertex control.
The organisation of the paper is as follows. In Section 2 an operator formalism in an appropriate Hilbert state space is given so that it will be possible to study well-posedness of the closed-loop system in the setting of strongly continuous semigroups of bounded linear operators (abbreviated henceforth to \(C_{0}\)-semigroups). Section 3 deals with a complete spectral analysis (existence, location, multiplicity and asymptotics for eigenvalues) of the closed-loop system operator. The completeness, minimality, and Riesz basis properties of its root vectors are investigated in Section 4, where the satisfaction of the Spectrum Determined Growth Assumption is verified. We conclude in Section 5 with a positive result for the exponential stability property of the closed-loop system.
## 2. Operator formulation and well-posedness
We begin this section with a summary of the restrictions on the parameters specified in the Introduction:
\[\alpha,\kappa\geq 0,\quad\beta\in\left(0,1\right),\quad\eta\geq 0,\quad\gamma> \eta^{2}.\]
We shall henceforth put \(s\) in place of the variable \(0\leq s_{k}\leq 1\), \(k=1,2,3\), and set
\[v_{k}\left(s,t\right)=\frac{\partial w_{k}\left(s,t\right)}{\partial t},\quad x _{k}\left(s,t\right)=\left(w_{k}\left(s,t\right),v_{k}\left(s,t\right)\right), \quad k=1,2,3,\]
and
\[x\left(s,t\right)=\left(x_{1}\left(s,t\right),x_{2}\left(s,t\right),x_{3} \left(s,t\right)\right).\]
We then introduce the space \(H_{*}^{2}\left(0,1\right)\coloneqq\left\{w\in H^{2}\left(0,1\right)\,\big{|} \,\,w\left(1\right)=0\right\}\), where \(H^{r}\left(0,1\right)\), \(r\in\mathbb{N}_{0}\), denotes the usual Sobolev space of order \(r\) related to \(L^{2}\left(0,1\right)\). It follows that \(H_{*}^{2}\left(0,1\right)\), equipped with the inner product
\[\left(w,\widetilde{w}\right)=\int_{0}^{1}w^{\prime\prime}\left(s\right) \overline{\widetilde{w}^{\prime\prime}\left(s\right)}\,ds+\left(\gamma-\eta^ {2}\right)\int_{0}^{1}w^{\prime}\left(s\right)\overline{\widetilde{w}^{\prime }\left(s\right)}\,ds+\alpha w^{\prime}\left(0\right)\overline{\widetilde{w}^{ \prime}\left(0\right)},\]
is a Hilbert space.
Let now \(L^{2}\left(\mathbf{G}\right)\) be the metric space of vector-valued functions \(v=\left(v_{1},v_{2},v_{3}\right)\) for which \(v_{k}\in L^{2}\left(0,1\right)\), \(k=1,2,3\). We similarly define the space \(H_{*}^{2}\left(\mathbf{G}\right)\) of vector-valued functions \(w=\left(w_{1},w_{2},w_{3}\right)\) for which \(w_{k}\in H_{*}^{2}\left(0,1\right)\), \(k=1,2,3\), and \(w_{j}\left(0\right)=w_{k}\left(0\right)\), \(j,k=1,2,3\). In the Hilbert space \(\mathscr{X}=H_{*}^{2}\left(\mathbf{G}\right)\times L^{2}\left(\mathbf{G}\right)\), i.e.,
\[\mathscr{X}\coloneqq\left\{x=\left\{x_{k}\right\}_{k=1}^{3}\,\left|\begin{array} []{c}x_{k}=\left(w_{k},v_{k}\right)\in H_{*}^{2}\left(0,1\right)\times L^{2} \left(0,1\right),\\ w_{j}\left(0\right)=w_{k}\left(0\right),\quad j,k=1,2,3\end{array}\right.\right\}\]
with the inner product (inducing an energy-motivated norm on \(\mathscr{X}\))
\[\left(x,\widetilde{x}\right)_{\mathscr{X}}\coloneqq\left(w,\widetilde{w} \right)_{H_{*}^{2}\left(\mathbf{G}\right)}+\left(v,\widetilde{v}\right)_{L^{2} \left(\mathbf{G}\right)},\]
where
\[\left(w,\widetilde{w}\right)_{H_{*}^{2}\left(\mathbf{G}\right)} =\sum_{k=1}^{3}\left[\int_{0}^{1}w_{k}^{\prime\prime}\left(s \right)\overline{\widetilde{w}_{k}^{\prime\prime}\left(s\right)}\,ds+\left( \gamma-\eta^{2}\right)\int_{0}^{1}w_{k}^{\prime}\left(s\right)\overline{ \widetilde{w}_{k}^{\prime}\left(s\right)}\,ds+\alpha w_{k}^{\prime}\left(0 \right)\overline{\widetilde{w}_{k}^{\prime}\left(0\right)}\right],\] \[\left(v,\widetilde{v}\right)_{L^{2}\left(\mathbf{G}\right)} =\sum_{k=1}^{3}\int_{0}^{1}v_{k}\left(s\right)\overline{\widetilde{v}_{k} \left(s\right)}\,ds,\]
we define the operators \(\mathcal{A}\), \(\mathcal{B}\) on the domains
\[\mathscr{D}\left(\mathcal{A}\right)=\left\{x=\left\{x_{k}\right\}_{k=1}^{3}\in \mathscr{X}\,\left|\begin{array}{c}x_{k}=\left(w_{k},v_{k}\right)\in\left(H^{4 }\left(0,1\right)\cap H_{*}^{2}\left(0,1\right)\right)\times H_{*}^{2}\left(0, 1\right),\\ w_{k}^{\prime\prime}\left(1\right)=0,\quad w_{k}^{\prime\prime}\left(0 \right)-\alpha w_{k}^{\prime}\left(0\right)-\kappa v_{k}^{\prime}\left(0 \right)=0,\quad k=1,2,3,\\ \sum_{k=1}^{3}\left[w_{k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{2 }\right)w_{k}^{\prime}\left(0\right)+\beta\eta v_{k}\left(0\right)\right]=0 \end{array}\right.\right\}, \tag{2.1}\]
\[\mathscr{D}\left(\mathcal{B}\right)=\left\{x=\left\{x_{k}\right\}_{k=1}^{3}\in\mathscr{X}\,\left|\;x_{k}=\left(w_{k},v_{k}\right)\in H_{*}^{2}\left(0,1\right)\times H_{*}^{1}\left(0,1\right),\quad k=1,2,3\right.\right\} \tag{2.2}\]
by
\[\mathcal{A}x \coloneqq\left\{\left(v_{k},-w_{k}^{\left(4\right)}+\left(\gamma- \eta^{2}\right)w_{k}^{\prime\prime}\right)\right\}_{k=1}^{3}, \tag{2.3}\] \[\mathcal{B}x \coloneqq\left\{\left(0,-2\beta\eta v_{k}^{\prime}\right)\right\} _{k=1}^{3}, \tag{2.4}\]
respectively. Clearly \(\mathscr{D}\left(\mathcal{A}\right)\subset\mathscr{D}\left(\mathcal{B}\right)\), and one may verify quite readily that \(\mathcal{B}\) is relatively compact with respect to \(\mathcal{A}\) (in the sense of [7, Section IV.1.3]). The closed-loop system may be formulated abstractly thus:
\[\left\{\begin{aligned} &\dot{x}\left(t\right)=\mathcal{T}x \left(t\right),\quad\mathcal{T}:=\mathcal{A}+\mathcal{B},\quad\mathscr{D} \left(\mathcal{T}\right)=\mathscr{D}\left(\mathcal{A}\right),\\ & x\left(0\right)=x_{0},\end{aligned}\right. \tag{2.5}\]
where
\[x\left(t\right)=\left\{\left(w_{k}\left(\,\cdot\,,t\right),v_{k}\left(\,\cdot,t\right)\right)\right\}_{k=1}^{3},\quad x_{0}=\left\{\left(g_{k},h_{k}\right) \right\}_{k=1}^{3}.\]
Substitute \(x\left(t\right)=x\exp\left(\lambda t\right)\), \(x\in\mathscr{X}\), in (2.5) and note for later reference that
\[\mathcal{T}x=\lambda x,\quad x\in\mathscr{D}\left(\mathcal{A}\right),\quad \lambda\in\mathbb{C}, \tag{2.6}\]
which is the spectral problem for the closed-loop system with spectral parameter \(\lambda\). (Refer to standard textbooks for the standard definitions from functional analysis of the spectral theory of linear operators in Hilbert space.)
Our central well-posedness result is the following:
**Theorem 2.1**.: _The closed-loop system is well posed in the sense that (2.5) has a unique solution \(x\in C^{1}\left(\left(0,\infty\right);\mathscr{X}\right)\cap C\left(\left[0, \infty\right);\mathscr{D}\left(\mathcal{A}\right)\right)\) given by_
\[x\left(t\right)=\mathbb{S}\left(t\right)x_{0},\quad x_{0}\in\mathscr{D}\left( \mathcal{A}\right),\]
_where \(\mathbb{S}\left(t\right)\) is a contractive \(C_{0}\)-semigroup on \(\mathscr{X}\) with infinitesimal generator \(\mathcal{T}\)._
We will prove the theorem with the help of the following lemma.
**Lemma 2.1**.: _The following statements hold:_
1. \(\mathcal{T}\) _is closed and densely defined._
2. \(0\in\varrho\left(\mathcal{T}\right)\) _(the resolvent set of_ \(\mathcal{T}\)_) and_ \(\mathcal{T}^{-1}\) _is compact._
3. \(\mathcal{T}\) _is maximal dissipative for_ \(\kappa>0\) _and skewadjoint for_ \(\kappa=0\)_._
Proof.: To prove statement (2) it will be shown, first of all, that \(\mathcal{T}\) is injective and surjective. To this end for \(\lambda=0\) we consider the solution \(x\) of (2.6). Then we clearly have \(v_{k}=0\), \(k=1,2,3\), and the \(w_{k}=w_{k}\left(\lambda,s\right)\) satisfy the boundary-eigenvalue problem
\[\left\{\begin{aligned} w_{k}^{\left(4\right)}-\left(\gamma-\eta^{2} \right)w_{k}^{\prime\prime}&=0,& k=1,2,3,\\ w_{k}\left(1\right)=w_{k}^{\prime\prime}\left(1\right)& =0,& k=1,2,3,\\ w_{j}\left(0\right)&=w_{k}\left(0\right),& j,k=1,2,3,\\ w_{k}^{\prime\prime}\left(0\right)-\alpha w_{k}^{\prime}\left(0\right)& =0,& k=1,2,3,\\ \sum_{k=1}^{3}\left[w_{k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{2 }\right)w_{k}^{\prime}\left(0\right)\right]&=0.\end{aligned}\right. \tag{2.7}\]
Multiply the differential equations in (2.7) by the conjugate of \(w_{k}\), \(k=1,2,3\), and integrate from \(0\) to \(1\). Integrating by parts, making use of the boundary and vertex conditions \(w_{k}\left(1\right)=w_{k}^{\prime\prime}\left(1\right)=0\), \(w_{k}^{\prime\prime}\left(0\right)-\alpha w_{k}^{\prime}\left(0\right)=0\), \(k=1,2,3\), we get
\[0=\int_{0}^{1}\left|w_{k}^{\prime\prime}\left(s\right)\right|^{2}ds+\left( \gamma-\eta^{2}\right)\int_{0}^{1}\left|w_{k}^{\prime}\left(s\right)\right|^{2 }ds+\alpha\left|w_{k}^{\prime}\left(0\right)\right|^{2}\]
Subsequent summation over \(k=1,2,3\), using the vertex conditions \(w_{j}\left(0\right)=w_{k}\left(0\right)\), \(j,k=1,2,3\), and \(\sum_{k=1}^{3}\left[w_{k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{2 }\right)w_{k}^{\prime}\left(0\right)\right]=0\), yields
\[0=\sum_{k=1}^{3}\left[\int_{0}^{1}\left|w_{k}^{\prime\prime}\left(s\right) \right|^{2}ds+\left(\gamma-\eta^{2}\right)\int_{0}^{1}\left|w_{k}^{\prime} \left(s\right)\right|^{2}ds+\alpha\left|w_{k}^{\prime}\left(0\right)\right|^{ 2}\right]=\left\|w\right\|_{H^{2}\left(\mathbf{G}\right)}^{2}.\]
The fact that \(v_{k}=0\), \(k=1,2,3\), implies that \(\left\|v\right\|_{L^{2}\left(\mathbf{G}\right)}=0\) and hence that \(\left\|x\right\|_{\mathscr{X}}=0\), or equivalently, that \(x=0\). It follows from this that \(\ker\mathcal{T}=0\) and so \(\mathcal{T}\) is injective. Thus, \(0\) is not an eigenvalue.
For the surjectivity, we proceed as follows. Let \(\widetilde{x}\in\mathscr{X}\), \(x\in\mathscr{D}\left(\mathcal{A}\right)\) (arbitrary) and consider the equation
\[\mathcal{T}x=\widetilde{x}, \tag{2.8}\]
or equivalently in coordinates,
\[\left\{\begin{aligned} v_{k}&=\widetilde{w}_{k},& k=1,2,3,\\ -w_{k}^{\left(4\right)}+\left(\gamma-\eta^{2}\right)w_{k}^{ \prime\prime}-2\beta\eta v_{k}^{\prime}&=\widetilde{v}_{k},& k=1,2,3,\\ w_{k}\left(1\right)&=w_{k}^{\prime\prime}\left(1 \right)&=0,& k=1,2,3,\\ w_{j}\left(0\right)&=w_{k}\left(0\right),& j,k=1,2,3,\\ w_{k}^{\prime\prime}\left(0\right)-\alpha w_{k}^{\prime}\left(0 \right)-\kappa v_{k}^{\prime}\left(0\right)&=0,& k=1,2,3,\\ \sum_{k=1}^{3}\left[w_{k}^{\left(3\right)}\left(0\right)-\left( \gamma-\eta^{2}\right)w_{k}^{\prime}\left(0\right)+\beta\eta v_{k}\left(0 \right)\right]&=0,\end{aligned}\right. \tag{2.9}\]
Write the differential equations in (2.9) in weak form as
\[\int_{0}^{1}\!\left[-w_{k}^{\left(4\right)}\left(s\right)+\left(\gamma-\eta^ {2}\right)w_{k}^{\prime\prime}\left(s\right)-2\beta\eta v_{k}^{\prime}\left(s \right)\right]\overline{\phi_{k}\left(s\right)}\,ds=\int_{0}^{1}\widetilde{v} _{k}\left(s\right)\overline{\phi_{k}\left(s\right)}\,ds,\quad k=1,2,3,\]
for the \(\phi_{k}\) in an appropriate class of test functions satisfying the conditions \(\phi_{k}\left(1\right)=0\), \(k=1,2,3\), and \(\phi_{j}\left(0\right)=\phi_{k}\left(0\right)\), \(j,k=1,2,3\). Again,
\[\sum_{k=1}^{3}\int_{0}^{1}\left(\widetilde{v}_{k}\left(s\right)+ 2\beta\eta\widetilde{w}_{k}^{\prime}\left(s\right)\right)\overline{\phi_{k} \left(s\right)}\,ds+\beta\eta\sum_{k=1}^{3}\widetilde{w}_{k}\left(0\right) \overline{\phi_{k}\left(0\right)}+\kappa\sum_{k=1}^{3}\widetilde{w}_{k}^{ \prime}\left(0\right)\overline{\phi_{k}^{\prime}\left(0\right)}\] \[=-\sum_{k=1}^{3}\left[\int_{0}^{1}w_{k}^{\prime\prime}\left(s \right)\overline{\phi_{k}^{\prime\prime}\left(s\right)}\,ds+\left(\gamma-\eta^ {2}\right)\int_{0}^{1}w_{k}^{\prime}\left(s\right)\overline{\phi_{k}^{\prime} \left(s\right)}\,ds+\alpha w_{k}^{\prime}\left(0\right)\overline{\phi_{k}^{ \prime}\left(0\right)}\right]\]
employing integration by parts and using the boundary and vertex conditions, and where we have taken into account that \(v_{k}=\widetilde{w}_{k}\), \(k=1,2,3\). We define the bilinear form \(\left\langle\,\cdot\,,\,\cdot\,\right\rangle\) on \(H_{*}^{2}\left(\mathbf{G}\right)\) by
\[\left\langle u,\widetilde{u}\right\rangle\coloneqq\sum_{k=1}^{3}\left[\int_{0}^ {1}u_{k}^{\prime\prime}\left(s\right)\overline{\widetilde{u}_{k}^{\prime \prime}\left(s\right)}\,ds+\left(\gamma-\eta^{2}\right)\int_{0}^{1}u_{k}^{ \prime}\left(s\right)\overline{\widetilde{u}_{k}^{\prime}\left(s\right)}\,ds+ \alpha u_{k}^{\prime}\left(0\right)\overline{\widetilde{u}_{k}^{\prime}\left( 0\right)}\right]\]
for any \(u,\widetilde{u}\in H_{*}^{2}\left(\mathbf{G}\right)\). Clearly from Schwarz's inequality,
\[\left|\left\langle u,\widetilde{u}\right\rangle\right|\leq\left\|u\right\|_{H_{ *}^{2}\left(\mathbf{G}\right)}\left\|\widetilde{u}\right\|_{H_{*}^{2}\left( \mathbf{G}\right)},\quad u,\widetilde{u}\in H_{*}^{2}\left(\mathbf{G}\right).\]
Moreover, \(\left\langle u,u\right\rangle=\left\|u\right\|_{H_{*}^{2}\left(\mathbf{G}\right)}\), and so \(\left\langle\,\cdot\,,\,\cdot\,\right\rangle\) is coercive. Now define
\[f\left(\phi\right)\coloneqq\sum_{k=1}^{3}\int_{0}^{1}\left(\widetilde{v}_{k} \left(s\right)+2\beta\eta\widetilde{w}_{k}^{\prime}\left(s\right)\right) \overline{\phi_{k}\left(s\right)}\,ds+\beta\eta\sum_{k=1}^{3}\widetilde{w}_{k }\left(0\right)\overline{\phi_{k}\left(0\right)}+\kappa\sum_{k=1}^{3} \widetilde{w}_{k}^{\prime}\left(0\right)\overline{\phi_{k}^{\prime}\left(0 \right)},\]
a bounded conjugate linear functional of \(\phi\) in \(H_{*}^{2}\left(\mathbf{G}\right)\) for fixed \(\widetilde{x}=\left\{\left(\widetilde{w}_{k},\widetilde{v}_{k}\right)\right\}_ {k=1}^{3}\). Then there exists a unique \(w=\left\{w_{k}\right\}_{k=1}^{3}\in H_{*}^{2}\left(\mathbf{G}\right)\) such that \(\left\langle w,\phi\right\rangle+f\left(\phi\right)=0\) (by the Lax-Milgram theorem) and therefore that \(w_{k}=w_{k}\left(s\right)\), \(k=1,2,3\), satisfies the differential equations in (2.9). Since \(x=\left\{\left(w_{k},v_{k}\right)\right\}_{k=1}^{3}=\left\{\left(w_{k}, \widetilde{w}_{k}\right)\right\}_{k=1}^{3}\), we have \(\left\{\left(w_{k},\widetilde{w}_{k}\right)\right\}_{k=1}^{3}\in\mathscr{D} \left(\mathcal{A}\right)\) and (2.8) is satisfied for any given \(\widetilde{x}=\left\{\left(\widetilde{w}_{k},\widetilde{v}_{k}\right)\right\} _{k=1}^{3}\in\mathscr{X}\). This proves surjectivity, and hence bijectivity of \(\mathcal{T}\). The closed graph theorem now shows that the inverse \(\mathcal{T}^{-1}\) of \(\mathcal{T}\) is closed and bounded, so it remains only to show that \(\mathcal{T}^{-1}\) is compact. But this is clear, for \(\mathscr{D}\left(\mathcal{A}\right)\subset\mathscr{X}\) and the spaces \(\mathscr{D}\left(\mathcal{A}\right)\), \(\mathscr{X}\) are closed subspaces of \(H^{4}\left(\mathbf{G}\right)\times H^{2}\left(\mathbf{G}\right)\), \(H^{2}\left(\mathbf{G}\right)\times L^{2}\left(\mathbf{G}\right)\), respectively, and hence by the Sobolev embedding theorem, \(\mathscr{D}\left(\mathcal{A}\right)\) is compactly embedded in \(\mathscr{X}\).
In order to establish statement (3), we compute for any \(x\in\mathscr{D}\left(\mathcal{A}\right)\),
\[\Re\left(\mathcal{T}x,x\right)_{\mathscr{X}}=\tfrac{1}{2}\left[\left(\mathcal{T}x,x\right)_{\mathscr{X}}+\left(x,\mathcal{T}x\right)_{\mathscr{X}}\right]=-\kappa\sum_{k=1}^{3}\left|v_{k}^{\prime}\left(0\right)\right|^{2}, \tag{2.10}\]
after the usual integrations by parts and using the boundary and vertex conditions. The previous arguments show that \(\mathcal{T}\) is maximal dissipative for \(\kappa>0\). Indeed, since \(0\in\varrho\left(\mathcal{T}\right)\), there exists \(\lambda>0\) in \(\varrho\left(\mathcal{T}\right)\) and \(\operatorname{Ran}\left(\lambda\mathcal{I}-\mathcal{T}\right)=\mathscr{X}\) (because \(\varrho\left(\mathcal{T}\right)\) is open); hence the maximality of \(\mathcal{T}\). It is skewsymmetric when \(\kappa=0\) for then we have \(\left(\mathcal{T}x,x\right)_{\mathscr{X}}+\left(x,\mathcal{T}x\right)_{ \mathscr{X}}=0\) from (2.10). The resulting skewadjointness of \(\mathcal{T}\) for \(\kappa=0\) can be obtained by the same standard arguments as in the proof of [11, Lemma 3.2].
Finally, statements (2) and (3) imply statement (1).
Proof of Theorem 2.1.: Since, by Lemma 2.1, \(\mathcal{T}\) is a closed, densely defined, maximal dissipative operator in \(\mathscr{X}\), the proof follows immediately from the Lumer-Phillips theorem, see [15, Theorem 1.4.3].
In preparation for what follows, we recall that for any \(C_{0}\)-semigroup \(\mathbb{S}\left(t\right)\) on \(\mathscr{X}\), there exist constants \(M\), \(\varpi\) such that \(\left\|\mathbb{S}\left(t\right)\right\|_{\mathscr{X}}\leq Me^{\varpi t}\), \(t\geq 0\). The semigroup is said to be exponentially stable if \(\varpi<0\). In this case, solutions of the closed-loop system are such that \(\left\|\mathbb{S}\left(t\right)x_{0}\right\|_{\mathscr{X}}\leq Me^{\varpi t} \left\|x_{0}\right\|_{\mathscr{X}}\) and so, as \(t\to\infty\), \(\left\|x\left(t\right)\right\|_{\mathscr{X}}\to 0\) exponentially (recall that \(\left\|x\left(t\right)\right\|_{\mathscr{X}}^{2}\) is a measure of the energy of the closed-loop system at a given time \(t\)). The goal of the paper is to prove the exponential stability of the closed-loop system for which the growth bound \(\varpi_{0}\) satisfies \(\varpi_{0}=\sup\left\{\Re\left(\lambda\right)\,\left|\,\,\lambda\in\sigma \left(\mathcal{T}\right)\right\}\), \(\sigma\left(\mathcal{T}\right)\) being the spectrum of \(\mathcal{T}\), and hence the Spectrum Determined Growth Assumption will be satisfied.
## 3. Spectral analysis
We return to the spectral problem (2.6) and analyse the spectrum of the closed-loop system operator \(\mathcal{T}\) in detail. It follows from Lemma 2.1 that \(\mathcal{T}\) has compact resolvent and \(0\in\varrho\left(\mathcal{T}\right)\). It is known then (see, e.g., [5, Corollary XI.8.4]) that the spectrum \(\sigma\left(\mathcal{T}\right)\) is a purely discrete set, consisting only of eigenvalues of finite algebraic multiplicity which accumulate only at infinity. Moreover, because \(\mathcal{T}\) is maximal dissipative, for any \(\lambda\in\sigma\left(\mathcal{T}\right)\) we have \(\Re\left(\lambda\right)\leq 0\).
Further information about the location of the eigenvalues are obtained in the next result.
**Theorem 3.1**.: _The spectrum of \(\mathcal{T}\coloneqq\mathcal{A}+\mathcal{B}\) as defined by (2.1)-(2.4) is symmetric with respect to the real axis of the complex plane, the eigenvalues of \(\mathcal{T}\) being confined to the open left half-plane when \(\kappa>0\)._
Proof.: Since \(\mathcal{T}\) is a real operator we obtain the first assertion about symmetry of the spectrum. Indeed, conjugation of (2.6) shows that \(\overline{x}\) satisfies its conjugate spectral problem and is the
eigenvector of \(\mathcal{T}\) which corresponds to the eigenvalue \(\overline{\lambda}\). Next, we prove the second assertion. To do this, we take the inner product of (2.6) with the corresponding \(x\) to obtain for the real parts of the resulting expression, taking into account (2.10),
\[\Re\left(\lambda\right)=\frac{\Re\left(\mathcal{T}x,x\right)_{\mathscr{X}}}{ \|x\|_{\mathscr{X}}^{2}}\leq 0.\]
We must show that if \(\kappa>0\) then \(\Re\left(\lambda\right)<0\). Let \(\lambda\) be an eigenvalue with \(\Re\left(\lambda\right)=0\) and let \(x=\left\{\left(w_{k},v_{k}\right)\right\}_{k=1}^{3}\) be the corresponding eigenvector. Then, because \(v_{k}=\lambda w_{k}\), \(k=1,2,3\), replacing the \(v_{k}^{\prime}\left(0\right)\) in (2.10) by \(\lambda w_{k}^{\prime}\left(0\right)\) we arrive at
\[\Re\left(\mathcal{T}x,x\right)_{\mathscr{X}}=-\kappa\left|\lambda\right|^{2} \sum_{k=1}^{3}\left|w_{k}^{\prime}\left(0\right)\right|^{2}=0\]
and
\[\sum_{k=1}^{3}\left|w_{k}^{\prime}\left(0\right)\right|^{2}=0\]
since \(\kappa>0\), \(\lambda\neq 0\). So \(w_{k}^{\prime}\left(0\right)=0\), \(k=1,2,3\), and the \(w_{k}=w_{k}\left(\lambda,s\right)\) satisfy the boundary-eigenvalue problem
\[\left\{\begin{aligned} w_{k}^{\left(4\right)}-\left( \gamma-\eta^{2}\right)w_{k}^{\prime\prime}+2\lambda\beta\eta w_{k}^{\prime}& =-\lambda^{2}w_{k},\ \ \ \ k=1,2,3,\\ w_{k}\left(1\right)=w_{k}^{\prime\prime}\left(1\right)& =0,\ \ \ \ \ \ \ \ \ \ k=1,2,3,\\ w_{j}\left(0\right)&=w_{k}\left(0\right),\ \ \ \ j,k=1,2,3,\\ w_{k}^{\prime}\left(0\right)&=w_{k}^{\prime\prime} \left(0\right)&=0,\ \ \ \ \ \ \ \ \ \ \ \ k=1,2,3,\\ \sum_{k=1}^{3}\left(w_{k}^{\left(3\right)}\left(0\right)+\lambda \beta\eta w_{k}\left(0\right)\right)&=0.\end{aligned}\right. \tag{3.1}\]
Setting \(\lambda=i\mu\), \(\mu\in\mathbb{R}\), consider the boundary-eigenvalue problem
\[\left\{\begin{aligned} \varphi^{\left(4\right)}-\left( \gamma-\eta^{2}\right)\varphi^{\prime\prime}+2i\beta\eta\mu\varphi^{\prime}& =\mu^{2}\varphi,\\ \varphi\left(1\right)&=\varphi^{\prime\prime}\left(1 \right)&=0,\\ \varphi^{\prime}\left(0\right)&=\varphi^{\prime\prime} \left(0\right)&=0.\end{aligned}\right. \tag{3.2}\]
Let \(\varphi=\varphi\left(\lambda,s\right)\neq 0\) be a solution of (3.2). Then solutions of (3.1) are of the form \(w_{k}\left(\lambda,s\right)=c_{k}\varphi\left(\lambda,s\right)\), \(k=1,2,3\), where \(c_{k}\) are arbitrary constants. The vertex condition \(w_{j}\left(0\right)=w_{k}\left(0\right)\), \(j,k=1,2,3\), together with \(\sum_{k=1}^{3}\left(w_{k}^{\left(3\right)}\left(0\right)+i\beta\eta\mu w_{k} \left(0\right)\right)=0\) implies that \(c_{j}=c_{k}\equiv c\), \(j,k=1,2,3\), and hence that
\[3c\left(\varphi^{\left(3\right)}\left(0\right)+i\beta\eta\mu\varphi\left(0 \right)\right)=0.\]
Using \(\varphi^{\left(3\right)}\left(0\right)+i\beta\eta\mu\varphi\left(0\right)\neq 0\) gives \(c=0\). This shows that (3.1) has only zero solutions. Thus \(\mathcal{T}\) has no eigenvalues on the imaginary axis, \(\Re\left(\lambda\right)<0\), if \(\kappa>0\).
### Eigenvalues and eigenvectors
We have shown that studying the spectrum of \(\mathcal{T}\) reduces to studying its discrete spectrum, consisting only of eigenvalues. To pursue this further, we will show in the next theorem how to determine these eigenvalues and the corresponding eigenvectors.
**Theorem 3.2**.: _Let \(\varphi=\varphi\left(\lambda,s\right)\) be a nonzero solution of the differential equation_
\[\varphi^{\left(4\right)}-\left(\gamma-\eta^{2}\right)\varphi^{\prime\prime}+2 \lambda\beta\eta\varphi^{\prime}=-\lambda^{2}\varphi \tag{3.3}\]
_for \(\lambda\in\mathbb{C}\) satisfying the boundary conditions_
\[\varphi\left(1\right)=\varphi^{\prime\prime}\left(1\right) =0, \tag{3.4}\] \[\varphi^{\prime\prime}\left(0\right)-\left(\alpha+\lambda\kappa \right)\varphi^{\prime}\left(0\right) =0. \tag{3.5}\]
_Define by_
\[D_{1}\left(\lambda\right)\coloneqq\varphi^{\left(3\right)}\left(\lambda,0\right)- \left(\gamma-\eta^{2}\right)\varphi^{\prime}\left(\lambda,0\right)+\lambda \beta\eta\varphi\left(\lambda,0\right),\quad D_{2}\left(\lambda\right)\coloneqq \varphi\left(\lambda,0\right)\]
_the corresponding characteristic functions. The eigenvalues of \(\mathcal{T}\) are the solutions of \(D_{1}\left(\lambda\right)=0\) and \(D_{2}\left(\lambda\right)=0\),_
\[\sigma\left(\mathcal{T}\right)=\left\{\lambda\in\mathbb{C}\,\left|\,\,D_{1}\left(\lambda\right)=0\right.\right\}\cup\left\{\lambda\in\mathbb{C}\,\left|\,\,D_{2}\left(\lambda\right)=0\right.\right\}.\]
_If \(D_{1}\left(\lambda\right)=0\) and \(D_{2}\left(\lambda\right)\neq 0\), then an eigenvector \(x=x\left(\lambda\right)\) corresponding to the eigenvalue \(\lambda\) is given by_
\[x\left(\lambda\right)=\big{\{}\left(\varphi\left(\lambda,\,\cdot\,\right), \lambda\varphi\left(\lambda,\,\cdot\,\right)\right);\left(\varphi\left( \lambda,\,\cdot\,\right),\lambda\varphi\left(\lambda,\,\cdot\,\right)\right); \left(\varphi\left(\lambda,\,\cdot\,\right),\lambda\varphi\left(\lambda,\, \cdot\,\right)\right)\big{\}}. \tag{3.6}\]
_If \(D_{1}\left(\lambda\right)\neq 0\) and \(D_{2}\left(\lambda\right)=0\), then there are two linearly independent eigenvectors \(x_{1}=x_{1}\left(\lambda\right)\), \(x_{2}=x_{2}\left(\lambda\right)\) for the eigenvalue \(\lambda\) (i.e., \(\lambda\) has geometric multiplicity two) given by_
\[x_{1}\left(\lambda\right) =\big{\{}\left(\varphi\left(\lambda,\,\cdot\,\right),\lambda \varphi\left(\lambda,\,\cdot\,\right)\right);-\frac{1}{2}\left(\varphi\left( \lambda,\,\cdot\,\right),\lambda\varphi\left(\lambda,\,\cdot\,\right)\right); -\frac{1}{2}\left(\varphi\left(\lambda,\,\cdot\,\right),\lambda\varphi\left( \lambda,\,\cdot\,\right)\right)\big{\}}, \tag{3.7}\] \[x_{2}\left(\lambda\right) =\big{\{}\left(0,0\right);\left(\varphi\left(\lambda,\,\cdot\, \right),\lambda\varphi\left(\lambda,\,\cdot\,\right)\right);-\left(\varphi \left(\lambda,\,\cdot\,\right),\lambda\varphi\left(\lambda,\,\cdot\,\right) \right)\big{\}}, \tag{3.8}\]
_respectively._
Proof.: Let us first note from (2.6) that if \(\lambda\in\mathbb{C}\) is an eigenvalue of \(\mathcal{T}\) with corresponding eigenvector \(x\), then the \(w_{k}=w_{k}\left(\lambda,s\right)\) are nonzero solutions of the boundary-eigenvalue problem
\[\left\{\begin{aligned} w_{k}^{\left(4\right)}-\left( \gamma-\eta^{2}\right)w_{k}^{\prime\prime}+2\lambda\beta\eta w_{k}^{\prime}& =-\lambda^{2}w_{k},&\quad k=1,2,3,\\ w_{k}\left(1\right)=w_{k}^{\prime\prime}\left(1\right)& =0,&\quad k=1,2,3,\\ w_{j}\left(0\right)&=w_{k}\left(0\right),& \quad j,k=1,2,3,\\ w_{k}^{\prime\prime}\left(0\right)-\left(\alpha+\lambda\kappa \right)w_{k}^{\prime}\left(0\right)&=0,&\quad k=1,2,3,\\ \sum_{k=1}^{3}\left[w_{k}^{\left(3\right)}\left(0\right)-\left( \gamma-\eta^{2}\right)w_{k}^{\prime}\left(0\right)+\lambda\beta\eta w_{k} \left(0\right)\right]&=0.\end{aligned}\right.\right. \tag{3.9}\]
Comparing (3.3)-(3.5) with (3.9) we observe that \(w_{k}\left(\lambda,s\right)=c_{k}\varphi\left(\lambda,s\right)\), \(k=1,2,3\). Substitution in \(w_{j}\left(0\right)=w_{k}\left(0\right)\), \(j,k=1,2,3\), and \(\sum_{k=1}^{3}\left[w_{k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{2} \right)w_{k}^{\prime}\left(0\right)+\lambda\beta\eta w_{k}\left(0\right)\right]=0\) gives \(c_{j}\varphi\left(\lambda,0\right)=c_{k}\varphi\left(\lambda,0\right)\), \(j,k=1,2,3\), and
\[\sum_{k=1}^{3}c_{k}\left[\varphi^{\left(3\right)}\left(\lambda,0\right)-\left( \gamma-\eta^{2}\right)\varphi^{\prime}\left(\lambda,0\right)+\lambda\beta\eta \varphi\left(\lambda,0\right)\right]=0. \tag{3.10}\]
Thus two cases arise.
**Case 1.**\(D_{2}\left(\lambda\right)\neq 0\). Here \(c_{j}=c_{k}\equiv c\neq 0\), \(j,k=1,2,3\), since \(\varphi\left(\lambda,0\right)\neq 0\), and from (3.10) we have therefore
\[\varphi^{\left(3\right)}\left(\lambda,0\right)-\left(\gamma-\eta^{2}\right) \varphi^{\prime}\left(\lambda,0\right)+\lambda\beta\eta\varphi\left(\lambda,0 \right)=0. \tag{3.11}\]
Then \(\lambda\) is an eigenvalue if and only if the boundary-eigenvalue problem (3.3)-(3.5), (3.11) has a nonzero solution, in which case \(\lambda\) is a zero of \(D_{1}\) and (3.6) follows since
\[x\left(\lambda\right)=\left\{c_{k}\left(\varphi\left(\lambda,\,\cdot\,\right), \lambda\varphi\left(\lambda,\,\cdot\,\right)\right)\right\}_{k=1}^{3} \tag{3.12}\]
with the \(c_{k}\equiv c\).
**Case 2.**\(D_{2}\left(\lambda\right)=0\). In this case we consider (3.3)-(3.5) subject to the additional boundary condition
\[\varphi\left(\lambda,0\right)=0. \tag{3.13}\]
Therefore, \(\lambda\) is an eigenvalue if and only if the boundary-eigenvalue problem (3.3)-(3.5), (3.13) admits a nonzero solution. Using (3.13) in (3.10), we obtain
\[\sum_{k=1}^{3}c_{k}\left[\varphi^{\left(3\right)}\left(\lambda,0\right)-\left( \gamma-\eta^{2}\right)\varphi^{\prime}\left(\lambda,0\right)\right]=\left[ \varphi^{\left(3\right)}\left(\lambda,0\right)-\left(\gamma-\eta^{2}\right) \varphi^{\prime}\left(\lambda,0\right)\right]\,\sum_{k=1}^{3}c_{k}=0.\]
Let
\[\varphi^{\left(3\right)}\left(\lambda,0\right)-\left(\gamma-\eta^{2}\right)\varphi^ {\prime}\left(\lambda,0\right)=0. \tag{3.14}\]
It is easy to see that any solution (3.3)-(3.5), (3.13), (3.14) must be the zero solution. Consequently, \(\sum_{k=1}^{3}c_{k}=0\) and \(\varphi^{\left(3\right)}\left(\lambda,0\right)-\left(\gamma-\eta^{2}\right) \varphi^{\prime}\left(\lambda,0\right)\neq 0\), and it follows that associated with \(\lambda\) there exist two linearly independent eigenvectors \(x_{1}\left(\lambda\right)\) and \(x_{2}\left(\lambda\right)\) given by (3.7) and (3.8), respectively. With this choice of eigenvectors \(x_{1}\left(\lambda\right)\), \(x_{2}\left(\lambda\right)\) we have in fact \(x_{1}\left(\lambda\right)\perp x_{2}\left(\lambda\right)\). The theorem is proven.
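Before turning to asymptotics, we note that the characteristic functions of Theorem 3.2 can also be evaluated numerically. The following is a minimal sketch (an illustration added here, not part of the original analysis), assuming SciPy; the parameter values are placeholders chosen only so that \(\gamma>\eta^{2}\).

```python
# Sketch: evaluate D1(lambda), D2(lambda) of Theorem 3.2 by backward shooting.
# Parameter values are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

gamma, eta, beta, alpha, kappa = 10.0, 1.0, 0.5, 1.0, 1.0   # assumed, gamma > eta**2

def characteristic_functions(lam):
    g = gamma - eta**2

    def rhs(s, y):
        # y = (phi, phi', phi'', phi'''); equation (3.3) rewritten for phi''''.
        return [y[1], y[2], y[3], g * y[2] - 2 * lam * beta * eta * y[1] - lam**2 * y[0]]

    # Two solutions spanning all solutions with phi(1) = phi''(1) = 0 (condition (3.4)),
    # integrated backwards from s = 1 to s = 0.
    endpoint_values = []
    for y1 in ([0, 1, 0, 0], [0, 0, 0, 1]):
        sol = solve_ivp(rhs, (1.0, 0.0), np.array(y1, dtype=complex), rtol=1e-10, atol=1e-12)
        endpoint_values.append(sol.y[:, -1])      # (phi, phi', phi'', phi''') at s = 0
    u, v = endpoint_values

    # Impose condition (3.5): phi''(0) - (alpha + lambda*kappa) phi'(0) = 0.
    bu = u[2] - (alpha + lam * kappa) * u[1]
    bv = v[2] - (alpha + lam * kappa) * v[1]
    phi0 = bv * u - bu * v                        # combination satisfying (3.5) at s = 0

    D1 = phi0[3] - g * phi0[1] + lam * beta * eta * phi0[0]
    D2 = phi0[0]
    return D1, D2

# Eigenvalues of T are the zeros of D1 and of D2; e.g. scan a vertical line in the
# left half-plane and look for local minima of |D1| and |D2|.
print(characteristic_functions(-1.0 + 25.0j))
```

Such a numerical check is of course no substitute for the asymptotic analysis below, but it can be used to locate low-order eigenvalues for specific parameter values.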
### Asymptotics of eigenvalues
Our aim in this subsection is to discuss eigenvalue asymptotics. The approach we take here is based on the "asymptotic spectral problem" given in [10] for the single pipe case. By Theorem 3.1, we need only consider eigenvalues in the left half-plane \(\Re\left(\lambda\right)\leq 0\). Moreover, those eigenvalues with nonzero imaginary part occur in conjugate pairs \(\lambda\), \(\overline{\lambda}\), so that we may restrict attention to \(\frac{\pi}{2}\leq\arg\left(\lambda\right)\leq\pi\). As usual, we use the standard substitution \(\lambda=i\rho^{2}\) and take \(0\leq\arg\left(\rho\right)\leq\frac{\pi}{4}\). Define the sector \(S\) in the complex plane by
\[S\coloneqq\left\{\rho\in\mathbb{C}\ \Big{|}\ 0\leq\arg\left(\rho\right)\leq \frac{\pi}{4}\right\}.\]
For \(S\) the four roots of \(-1\) can be ordered so that
\[\Re\left(-\rho\right)\leq\Re\left(i\rho\right)\leq\Re\left(-i\rho\right)\leq \Re\left(\rho\right),\quad\rho\in S,\]
and from elementary considerations we have
\[\Re\left(-\rho\right)=-\left|\rho\right|\cos\left(\arg\left(\rho\right)\right)\leq-\frac{\sqrt{2}}{2}\left|\rho\right|<0\]
and
\[\Re\left(i\rho\right)=\left|\rho\right|\cos\left(\arg\left(\rho\right)+\frac{ \pi}{2}\right)=-\left|\rho\right|\sin\left(\arg\left(\rho\right)\right)\leq 0.\]
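For completeness we record the remaining real parts as well (an added remark): writing \(\theta=\arg\left(\rho\right)\in\left[0,\frac{\pi}{4}\right]\),
\[\Re\left(-\rho\right)=-\left|\rho\right|\cos\theta,\quad\Re\left(i\rho\right)=-\left|\rho\right|\sin\theta,\quad\Re\left(-i\rho\right)=\left|\rho\right|\sin\theta,\quad\Re\left(\rho\right)=\left|\rho\right|\cos\theta,\]
and the ordering displayed above follows from \(0\leq\sin\theta\leq\cos\theta\) on this interval.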
Using this, it follows that for \(\rho\in S\) we have as \(\left|\rho\right|\to\infty\)
\[\left|e^{i\rho}\right|\leq 1,\quad\left|e^{-\rho}\right|=\mathcal{O}\left(e^{-b \left|\rho\right|}\right)\to 0,\]
for some constant \(b>0\). We shall use these observations henceforth without explicit mention.
The underlying idea in the proof of the next theorem is that it is enough to base it on asymptotic properties in \(\left|\lambda\right|\) of the solutions to (3.3), according to Theorem 3.2. We are here concerned with the case \(\lambda=i\rho^{2}\), and (3.3) becomes therefore
\[\varphi^{\left(4\right)}-\left(\gamma-\eta^{2}\right)\varphi^{\prime\prime}+2 i\beta\eta\rho^{2}\varphi^{\prime}=\rho^{4}\varphi. \tag{3.15}\]
For convenience, we state a result from [10].
**Lemma 3.1**.: _In the sector \(S\), there exists a fundamental system \(\left\{\varphi_{r}\left(\rho,\,\cdot\,\right)\right\}_{r=1}^{4}\) of the differential equation (3.15) which has the following asymptotic expressions for large \(\left|\rho\right|\):_
\[\varphi_{r}^{\left(m\right)}\left(\rho,s\right)=\left(i^{r}\rho\right)^{m}e^{ i^{r}\rho s}\left(1+\Phi_{r}\left(s\right)+\frac{i^{r}\Phi_{r1}\left(s \right)+m\Phi_{r}^{\prime}\left(s\right)}{i^{r}\rho}+\mathcal{O}\left(\rho^{-2 }\right)\right),\quad r=1,2,3,4,\]
_for \(m=0,1,2,3\), \(0\leq s\leq 1\), where_
\[\Phi_{r}\left(s\right)=-1+e^{\left(-1\right)^{r+1}\frac{i\beta\eta}{2}s},\quad\Phi_{r1}\left(s\right)=\frac{\left(-i\right)^{r}}{4}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}\right)se^{\left(-1\right)^{r+1}\frac{i\beta\eta}{2}s},\quad r=1,2,3,4.\]
Let us note from the lemma that, in \(S\), since \(\omega_{r}=i^{r}\) and \(\omega_{r}^{2}=\left(-1\right)^{r}\), with \(\omega_{1}=-\omega_{3}=i\) and \(\omega_{2}=-\omega_{4}=-1\)
\[\varphi_{r}^{\left(m\right)}\left(\rho,s\right)=\left(\rho\omega_{r}\right)^{ m}e^{\rho\omega_{r}s}\left(1+\Phi_{r}\left(s\right)+\frac{\omega_{r}\Phi_{r1} \left(s\right)+m\Phi_{r}^{\prime}\left(s\right)}{\rho\omega_{r}}+\mathcal{O} \left(\rho^{-2}\right)\right),\quad r=1,2,3,4,\]
which we shall use subsequently.
We know that linear combinations of the form \(\varphi\left(\rho,s\right)=\sum_{r=1}^{4}a_{r}\varphi_{r}\left(\rho,s\right)\) (the constants \(a_{r}\) possibly depending on the spectral parameter) satisfy the differential equation (3.15). It is
immediately obvious from Theorem 3.2 that \(D_{1}\left(\lambda\right)=0\) is equivalent to the condition that the system of equations
\[\left\{\begin{aligned} &\sum_{r=1}^{4}a_{r}\varphi_{r}\left(\rho,1\right)=0,\\ &\sum_{r=1}^{4}a_{r}\varphi_{r}^{\prime\prime}\left(\rho,1\right)=0,\\ &\sum_{r=1}^{4}a_{r}\left[\varphi_{r}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{r}^{\prime}\left(\rho,0\right)\right]=0,\\ &\sum_{r=1}^{4}a_{r}\left[\varphi_{r}^{\left(3\right)}\left(\rho,0\right)-\left(\gamma-\eta^{2}\right)\varphi_{r}^{\prime}\left(\rho,0\right)+i\beta\eta\rho^{2}\varphi_{r}\left(\rho,0\right)\right]=0\end{aligned}\right. \tag{3.16}\]
should have nonzero solutions. Similarly, \(D_{2}\left(\lambda\right)=0\) in Theorem 3.2 is equivalent to the condition that
\[\left\{\begin{aligned} &\sum_{r=1}^{4}a_{r}\varphi_{r}\left(\rho,1\right)=0,\\ &\sum_{r=1}^{4}a_{r}\varphi_{r}^{\prime\prime}\left(\rho,1\right)=0,\\ &\sum_{r=1}^{4}a_{r}\left[\varphi_{r}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{r}^{\prime}\left(\rho,0\right)\right]=0,\\ &\sum_{r=1}^{4}a_{r}\varphi_{r}\left(\rho,0\right)=0\end{aligned}\right. \tag{3.17}\]
should have nonzero solutions. Let
\[\Delta_{1}\left(\lambda\right)\coloneqq\begin{vmatrix}\varphi_{1}\left(\rho,1\right)&\varphi_{2}\left(\rho,1\right)&\varphi_{3}\left(\rho,1\right)&\varphi_{4}\left(\rho,1\right)\\
\varphi_{1}^{\prime\prime}\left(\rho,1\right)&\varphi_{2}^{\prime\prime}\left(\rho,1\right)&\varphi_{3}^{\prime\prime}\left(\rho,1\right)&\varphi_{4}^{\prime\prime}\left(\rho,1\right)\\
\varphi_{1}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{1}^{\prime}\left(\rho,0\right)&\varphi_{2}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{2}^{\prime}\left(\rho,0\right)&\varphi_{3}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{3}^{\prime}\left(\rho,0\right)&\varphi_{4}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{4}^{\prime}\left(\rho,0\right)\\
\varphi_{1}^{\left(3\right)}\left(\rho,0\right)-\left(\gamma-\eta^{2}\right)\varphi_{1}^{\prime}\left(\rho,0\right)+i\beta\eta\rho^{2}\varphi_{1}\left(\rho,0\right)&\varphi_{2}^{\left(3\right)}\left(\rho,0\right)-\left(\gamma-\eta^{2}\right)\varphi_{2}^{\prime}\left(\rho,0\right)+i\beta\eta\rho^{2}\varphi_{2}\left(\rho,0\right)&\varphi_{3}^{\left(3\right)}\left(\rho,0\right)-\left(\gamma-\eta^{2}\right)\varphi_{3}^{\prime}\left(\rho,0\right)+i\beta\eta\rho^{2}\varphi_{3}\left(\rho,0\right)&\varphi_{4}^{\left(3\right)}\left(\rho,0\right)-\left(\gamma-\eta^{2}\right)\varphi_{4}^{\prime}\left(\rho,0\right)+i\beta\eta\rho^{2}\varphi_{4}\left(\rho,0\right)\end{vmatrix}\]
and
\[\Delta_{2}\left(\lambda\right)\coloneqq\begin{vmatrix}\varphi_{1}\left(\rho,1\right)&\varphi_{2}\left(\rho,1\right)&\varphi_{3}\left(\rho,1\right)&\varphi_{4}\left(\rho,1\right)\\
\varphi_{1}^{\prime\prime}\left(\rho,1\right)&\varphi_{2}^{\prime\prime}\left(\rho,1\right)&\varphi_{3}^{\prime\prime}\left(\rho,1\right)&\varphi_{4}^{\prime\prime}\left(\rho,1\right)\\
\varphi_{1}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{1}^{\prime}\left(\rho,0\right)&\varphi_{2}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{2}^{\prime}\left(\rho,0\right)&\varphi_{3}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{3}^{\prime}\left(\rho,0\right)&\varphi_{4}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{4}^{\prime}\left(\rho,0\right)\\
\varphi_{1}\left(\rho,0\right)&\varphi_{2}\left(\rho,0\right)&\varphi_{3}\left(\rho,0\right)&\varphi_{4}\left(\rho,0\right)\end{vmatrix}\]
be the characteristic determinants associated with (3.16) and (3.17), respectively. Clearly the zeros of the characteristic functions \(D_{1}\), \(D_{2}\) coincide (counted with multiplicities) with those of \(\Delta_{1}\), \(\Delta_{2}\), respectively. So there are again two cases.
**Case 1**.: _Asymptotic zeros of \(\Delta_{1}\)._ It is a straightforward calculation to show that
\[\varphi_{r}\left(\rho,1\right) =e^{\rho\omega_{r}}e^{-\frac{i\beta\eta}{2}\omega_{r}^{2}}\] \[\quad\times\left[1+\frac{1}{4\rho\omega_{r}}\left(\frac{\beta^{2 }\eta^{2}}{2}+\gamma-\eta^{2}\right)+\mathcal{O}\left(\rho^{-2}\right)\right],\] \[\varphi_{r}^{\prime\prime}\left(\rho,1\right) =\left(\rho\omega_{r}\right)^{2}e^{\rho\omega_{r}}e^{-\frac{i \beta\eta}{2}\omega_{r}^{2}}\left[1+\frac{1}{4\rho\omega_{r}}\left(\frac{\beta ^{2}\eta^{2}}{2}+\gamma-\eta^{2}\right.\right.\] \[\quad\left.\left.-4i\beta\eta\omega_{r}^{2}\right)+\mathcal{O} \left(\rho^{-2}\right)\right],\] \[\varphi_{r}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i \kappa\rho^{2}\right)\varphi_{r}^{\prime}\left(\rho,0\right) =i\kappa\rho^{2}\left(\rho\omega_{r}\right)\left(-1+\frac{i\beta \eta}{2\rho}\omega_{r}+\frac{\omega_{r}}{i\kappa\rho}+\mathcal{O} \left(\rho^{-2}\right)\right),\] \[\varphi_{r}^{\left(3\right)}\left(\rho,0\right)-\left(\gamma- \eta^{2}\right)\varphi_{r}^{\prime}\left(\rho,0\right)+i\beta\eta\rho^{2} \varphi_{r}\left(\rho,0\right) =\left(\rho\omega_{r}\right)^{3}\left(1-\frac{3i\beta\eta}{2 \rho}\omega_{r}+\frac{i\beta\eta}{\rho}\omega_{r}+\mathcal{O}\left(\rho^{-2} \right)\right)\]
for \(r=1,2,3,4\). Substituting these expressions in the equation \(\Delta_{1}\left(\lambda\right)=0\) and performing elementary computations, we obtain that for \(\left|\rho\right|\) large,
\[\begin{vmatrix}e^{i\rho}e^{\frac{i\beta\eta}{2}}\left[1+\frac{1}{4i\rho}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}\right)+\mathcal{O}\left(\rho^{-2}\right)\right]&0&e^{-i\rho}e^{\frac{i\beta\eta}{2}}\left[1-\frac{1}{4i\rho}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}\right)+\mathcal{O}\left(\rho^{-2}\right)\right]&e^{-\frac{i\beta\eta}{2}}\left[1+\frac{1}{4\rho}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}\right)+\mathcal{O}\left(\rho^{-2}\right)\right]\\
-e^{i\rho}e^{\frac{i\beta\eta}{2}}\left[1+\frac{1}{4i\rho}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}+4i\beta\eta\right)+\mathcal{O}\left(\rho^{-2}\right)\right]&0&-e^{-i\rho}e^{\frac{i\beta\eta}{2}}\left[1-\frac{1}{4i\rho}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}+4i\beta\eta\right)+\mathcal{O}\left(\rho^{-2}\right)\right]&e^{-\frac{i\beta\eta}{2}}\left[1+\frac{1}{4\rho}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}-4i\beta\eta\right)+\mathcal{O}\left(\rho^{-2}\right)\right]\\
-\kappa\left(-1-\frac{\beta\eta}{2\rho}+\frac{1}{\kappa\rho}+\mathcal{O}\left(\rho^{-2}\right)\right)&-i\kappa\left(-1-\frac{i\beta\eta}{2\rho}-\frac{1}{i\kappa\rho}+\mathcal{O}\left(\rho^{-2}\right)\right)&\kappa\left(-1+\frac{\beta\eta}{2\rho}-\frac{1}{\kappa\rho}+\mathcal{O}\left(\rho^{-2}\right)\right)&0\\
-i\left(1+\frac{\beta\eta}{2\rho}+\mathcal{O}\left(\rho^{-2}\right)\right)&-\left(1+\frac{i\beta\eta}{2\rho}+\mathcal{O}\left(\rho^{-2}\right)\right)&i\left(1-\frac{\beta\eta}{2\rho}+\mathcal{O}\left(\rho^{-2}\right)\right)&0\end{vmatrix}+\mathcal{O}\left(e^{-b\left|\rho\right|}\right)=0,\]
where we have used that \(\omega_{1}^{2}=\omega_{3}^{2}=-1\), \(\omega_{2}^{2}=\omega_{4}^{2}=1\) and \(e^{\rho\omega_{2}}=e^{-\rho\omega_{4}}=e^{-\rho}\), \(e^{\rho\omega_{1}}=e^{-\rho\omega_{3}}=e^{i\rho}\) in \(S\). So \(\Delta_{1}\left(\lambda\right)\) has an asymptotic representation of the form
\[\begin{split}\Delta_{1}\left(\lambda\right)=-8\cos\rho+\frac{ \cos\rho}{\rho}\left[-\beta^{2}\eta^{2}-2\left(\gamma-\eta^{2}\right)+\frac{4 i}{\kappa}\right]\\ \qquad\qquad\qquad+\frac{i\sin\rho}{\rho}\left[i\beta^{2}\eta^{2 }+2i\left(\gamma-\eta^{2}\right)-\frac{4}{\kappa}\right]+\mathcal{O}\left( \rho^{-2}\right)+\mathcal{O}\left(e^{-b|\rho|}\right).\end{split}\]
Let the sequence \(\left\{\lambda_{n}\right\}\) represent the roots of the equation \(\frac{1}{2}\Delta_{1}\left(\lambda\right)=0\) and set \(\rho_{n}=\left(n+\frac{1}{2}\right)\pi+z_{n}\). Since
\[\begin{split}\cos\rho_{n}&=\cos\left(n+\frac{1}{2} \right)\pi\cos z_{n}-\sin\left(n+\frac{1}{2}\right)\pi\sin z_{n}=-\left(-1 \right)^{n}\sin z_{n},\\ \sin\rho_{n}&=\sin\left(n+\frac{1}{2}\right)\pi\cos z _{n}+\cos\left(n+\frac{1}{2}\right)\pi\sin z_{n}=\left(-1\right)^{n}\cos z_{n},\end{split}\]
it follows that the \(z_{n}\) satisfy
\[\sin z_{n}=\frac{\cos z_{n}}{4\rho_{n}}\left(\frac{\beta^{2}\eta^{2}}{2}+ \gamma-\eta^{2}+\frac{2i}{\kappa}\right)+\mathcal{O}\left(\rho_{n}^{-2}\right) +\mathcal{O}\left(e^{-b|\rho_{n}|}\right).\]
Thus, for large \(n\),
\[z_{n}=\frac{\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}+\frac{2i}{\kappa}}{4 \left(n+\frac{1}{2}\right)\pi}+\mathcal{O}\left(n^{-2}\right).\]
We write \(\rho_{n}=\tau_{n}+z_{n}\) where \(\tau_{n}=\left(n+\frac{1}{2}\right)\pi\) to obtain, taking into account \(\lambda_{n}=i\rho_{n}^{2}=i\left(\tau_{n}+z_{n}\right)^{2}\),
\[\lambda_{n}=-\frac{1}{\kappa}+i\left(\tau_{n}^{2}+\frac{\frac{\beta^{2}\eta^{2 }}{2}+\gamma-\eta^{2}}{2}\right)+\mathcal{O}\left(\tau_{n}^{-1}\right)\text{, \ \ \ }\tau_{n}=\left(n+\frac{1}{2}\right)\pi\text{.}\]
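Spelling out the last step (an added intermediate computation):
\[2i\tau_{n}z_{n}=\frac{i}{2}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}+\frac{2i}{\kappa}\right)+\mathcal{O}\left(\tau_{n}^{-1}\right)=-\frac{1}{\kappa}+\frac{i}{2}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}\right)+\mathcal{O}\left(\tau_{n}^{-1}\right),\quad iz_{n}^{2}=\mathcal{O}\left(\tau_{n}^{-2}\right),\]
so the real part \(-1/\kappa\) in the display above comes from the cross term in \(i\left(\tau_{n}+z_{n}\right)^{2}=i\tau_{n}^{2}+2i\tau_{n}z_{n}+iz_{n}^{2}\).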
**Case 2**.: _Asymptotic zeros of \(\Delta_{2}\)._ In this case,
\[\varphi_{r}\left(\rho,1\right) =e^{\rho\omega_{r}}e^{-\frac{i\beta\eta}{2}\omega_{r}^{2}}\] \[\quad\times\left[1+\frac{1}{4\rho\omega_{r}}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}\right)+\mathcal{O}\left(\rho^{-2}\right)\right],\] \[\varphi_{r}^{\prime\prime}\left(\rho,1\right) =\left(\rho\omega_{r}\right)^{2}e^{\rho\omega_{r}}e^{-\frac{i\beta\eta}{2}\omega_{r}^{2}}\left[1+\frac{1}{4\rho\omega_{r}}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}\right.\right.\] \[\quad\left.\left.-4i\beta\eta\omega_{r}^{2}\right)+\mathcal{O}\left(\rho^{-2}\right)\right],\] \[\varphi_{r}^{\prime\prime}\left(\rho,0\right)-\left(\alpha+i\kappa\rho^{2}\right)\varphi_{r}^{\prime}\left(\rho,0\right) =i\kappa\rho^{2}\left(\rho\omega_{r}\right)\left(-1+\frac{i\beta\eta}{2\rho}\omega_{r}+\frac{\omega_{r}}{i\kappa\rho}+\mathcal{O}\left(\rho^{-2}\right)\right),\] \[\varphi_{r}\left(\rho,0\right) =1+\mathcal{O}\left(\rho^{-2}\right)\]
for \(r=1,2,3,4\) and hence, using arguments analogous to those given in Case 1 we compute
\[\Delta_{2}\left(\lambda\right)=4\sqrt{2}\cos\left(\rho+\frac{\pi}{4}\right)+ \frac{\cos\rho}{\rho}\left[\beta^{2}\eta^{2}+2\left(\gamma-\eta^{2}\right) \right]+\frac{i\sin\rho}{\rho}\left(\frac{8}{\kappa}\right)+\mathcal{O}\left( \rho^{-2}\right)+\mathcal{O}\left(e^{-b|\rho|}\right)\text{.}\]
Let the sequence \(\left\{\lambda_{n}\right\}\) be the roots of the equation \(\Delta_{2}\left(\lambda\right)=0\). Set \(\rho_{n}=\left(n+\frac{1}{4}\right)\pi+z_{n}\). Since
\[\cos\rho_{n} =\cos\left(n+\frac{1}{4}\right)\pi\cos z_{n}-\sin\left(n+\frac{1 }{4}\right)\pi\sin z_{n}=\frac{\sqrt{2}}{2}\left(-1\right)^{n}\cos z_{n}- \frac{\sqrt{2}}{2}\left(-1\right)^{n}\sin z_{n}\text{,}\] \[\sin\rho_{n} =\sin\left(n+\frac{1}{4}\right)\pi\cos z_{n}+\cos\left(n+\frac{1 }{4}\right)\pi\sin z_{n}=\frac{\sqrt{2}}{2}\left(-1\right)^{n}\cos z_{n}+ \frac{\sqrt{2}}{2}\left(-1\right)^{n}\sin z_{n}\text{,}\]
proceeding as before, we have that the \(z_{n}\) in this case satisfy
\[\sin z_{n}=\frac{\cos z_{n}}{4\rho_{n}}\left(\frac{\beta^{2}\eta^{2}}{2}+ \gamma-\eta^{2}+\frac{4i}{\kappa}\right)+\mathcal{O}\left(\rho_{n}^{-2}\right) +\mathcal{O}\left(e^{-b|\rho_{n}|}\right)\text{.}\]
Therefore, for \(n\) large,
\[z_{n}=\frac{\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}+\frac{4i}{\kappa}}{4 \left(n+\frac{1}{4}\right)\pi}+\mathcal{O}\left(n^{-2}\right)\text{,}\]
and a similar calculation to the above shows that
\[\lambda_{n}=-\frac{2}{\kappa}+i\left(\tau_{n}^{2}+\frac{\frac{\beta^{2}\eta^{2} }{2}+\gamma-\eta^{2}}{2}\right)+\mathcal{O}\left(\tau_{n}^{-1}\right)\text{, \ \ \ }\tau_{n}=\left(n+\frac{1}{4}\right)\pi\text{.}\]
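The appearance of \(-\frac{2}{\kappa}\) here, rather than \(-\frac{1}{\kappa}\) as in Case 1, is due solely to the term \(\frac{4i}{\kappa}\) now occurring in \(z_{n}\): with \(\tau_{n}=\left(n+\frac{1}{4}\right)\pi\),
\[2i\tau_{n}z_{n}=\frac{i}{2}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}+\frac{4i}{\kappa}\right)+\mathcal{O}\left(\tau_{n}^{-1}\right)=\frac{i}{2}\left(\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}\right)-\frac{2}{\kappa}+\mathcal{O}\left(\tau_{n}^{-1}\right).\]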
We collect the foregoing results together into the following theorem.
**Theorem 3.3**.: _Let \(\kappa>0\). The spectrum of \(\mathcal{T}\) consists of two branches of a discrete set of eigenvalues, \(\sigma\left(\mathcal{T}\right)=\sigma^{\left(1\right)}\left(\mathcal{T} \right)\cup\sigma^{\left(2\right)}\left(\mathcal{T}\right)\), where asymptotically, for large \(n\),_
\[\sigma^{\left(1\right)}\left(\mathcal{T}\right)=\left\{\lambda\in\mathbb{C}\ |\ \Delta_{1} \left(\lambda\right)=0\right\}=\left\{\lambda_{n}^{\left(1\right)},\overline{ \lambda_{n}^{\left(1\right)}}\right\}_{n=0}^{\infty}\text{,}\]
\[\sigma^{\left(2\right)}\left(\mathcal{T}\right)=\left\{\lambda\in\mathbb{C}\ |\ \Delta_{2} \left(\lambda\right)=0\right\}=\left\{\lambda_{n}^{\left(2\right)},\overline{ \lambda_{n}^{\left(2\right)}}\right\}_{n=0}^{\infty}\text{,}\]
_with the sequences enumerated properly (in the sense of [11, Remark 4.2]). Both branches are confined to a vertical strip in the open left half-plane such that, as \(n\rightarrow\infty\),_
\[|\Re\left(\lambda_{n}^{\left(j\right)}\right)|\leq C<\infty\text{, \ \ \ }\Im\left(\lambda_{n}^{\left(j\right)}\right) \rightarrow\infty\text{, \ \ \ }j=1,2\text{,}\]
some constant \(C\). The eigenvalues belonging to \(\sigma_{1}\left(\mathcal{T}\right)\) have asymptotic representations_
\[\lambda_{n}^{\left(1\right)}=-\frac{1}{\kappa}+i\left(\left(\tau_{n}^{\left(1 \right)}\right)^{2}+\frac{\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}}{2} \right)+\mathcal{O}\left(\left(\tau_{n}^{\left(1\right)}\right)^{-1}\right),\quad\tau_{n}^{\left(1\right)}=\left(n+\frac{1}{2}\right)\pi,\quad n\to\infty,\]
_and those belonging to \(\sigma_{2}\left(\mathcal{T}\right)\) have asymptotic representations_
\[\lambda_{n}^{\left(2\right)}=-\frac{2}{\kappa}+i\left(\left(\tau_{n}^{\left(2 \right)}\right)^{2}+\frac{\frac{\beta^{2}\eta^{2}}{2}+\gamma-\eta^{2}}{2} \right)+\mathcal{O}\left(\left(\tau_{n}^{\left(2\right)}\right)^{-1}\right),\quad\tau_{n}^{\left(2\right)}=\left(n+\frac{1}{4}\right)\pi,\quad n\to\infty.\]
### Multiplicity of Eigenvalues
For the study of eigenvalue multiplicities, we quote the following useful result from [16, Corollary 4.2.2].
**Lemma 3.2**.: _Let \(\mathcal{A}\) be an operator on a Hilbert space \(\mathscr{X}\), and let \(\lambda\) be an eigenvalue of \(\mathcal{A}\) with corresponding eigenvector \(x\), i.e. \(\left(\lambda\mathcal{I}-\mathcal{A}\right)x=0\), \(x\neq 0\). If there is a nonzero element \(z\in\ker\left(\overline{\lambda}\mathcal{I}-\mathcal{A}^{*}\right)\) such that the inner product \(\left(x,z\right)\neq 0\), then \(\lambda\) is a simple eigenvalue if \(\dim\ker\left(\lambda\mathcal{I}-\mathcal{A}\right)=1\)._
In order to use Lemma 3.2, we need to consider the spectral problem for the adjoint operator for \(\mathcal{T}\), \(\mathcal{T}^{*}\). First we prove a proposition.
**Proposition 3.1**.: _The adjoint operator \(\mathcal{T}^{*}\) is defined on the domain_
\[\mathscr{D}\left(\mathcal{T}^{*}\right)=\left\{z=\left\{z_{k}\right\}_{k=1}^{3 }\in\mathscr{X}\,\left|\begin{array}{c}z_{k}=\left(\widetilde{w}_{k}, \widetilde{v}_{k}\right)\in\left(H^{4}\left(0,1\right)\cap H_{*}^{2}\left(0,1 \right)\right)\times H_{*}^{2}\left(0,1\right),\\ \widetilde{w}_{k}^{\prime\prime}\left(1\right)=0,\quad\widetilde{w}_{k}^{ \prime\prime}\left(0\right)-\alpha\widetilde{w}_{k}^{\prime}\left(0\right)+ \kappa\widetilde{v}_{k}^{\prime}\left(0\right)=0,\quad k=1,2,3,\\ \sum_{k=1}^{3}\left[\widetilde{w}_{k}^{\left(3\right)}\left(0\right)-\left( \gamma-\eta^{2}\right)\widetilde{w}_{k}^{\prime}\left(0\right)+\beta\eta \widetilde{v}_{k}\left(0\right)\right]=0\end{array}\right\}, \tag{3.18}\]
_by_
\[\mathcal{T}^{*}z:=\left\{\left(-\widetilde{v}_{k},\widetilde{w}_{k}^{\left(4\right)}-\left(\gamma-\eta^{2}\right)\widetilde{w}_{k}^{\prime\prime}+2\beta\eta\widetilde{v}_{k}^{\prime}\right)\right\}_{k=1}^{3}. \tag{3.19}\]
Proof.: The proof is a formal calculation. First note that \(x\in\mathscr{D}\left(\mathcal{T}\right)(=\mathscr{D}\left(\mathcal{A}\right))\) implies \(v_{k}\left(1\right)=0\), \(k=1,2,3\), and \(v_{j}\left(0\right)=v_{k}\left(0\right)\), \(j,k=1,2,3\). Using this, it follows from integration by parts that if \(x\in\mathscr{D}\left(\mathcal{T}\right)\), then for any \(z=\left\{\left(\widetilde{w}_{k},\widetilde{v}_{k}\right)\right\}_{k=1}^{3} \in H^{4}\left(\mathbf{G}\right)\times H^{2}\left(\mathbf{G}\right)(\supset \mathscr{D}\left(\mathcal{T}^{*}\right))\),
\[\left(\mathcal{T}x,z\right)_{\mathscr{X}} =-\sum_{k=1}^{3}\left[\int_{0}^{1}w_{k}^{\prime\prime}\left(s \right)\overline{\widetilde{v}_{k}^{\prime\prime}\left(s\right)}ds+\left( \gamma-\eta^{2}\right)\int_{0}^{1}w_{k}^{\prime}\left(s\right)\overline{ \widetilde{v}_{k}^{\prime}\left(s\right)}ds+\alpha w_{k}^{\prime}\left(0 \right)\overline{\widetilde{v}_{k}^{\prime}\left(0\right)}\right]\] \[\quad+\sum_{k=1}^{3}\int_{0}^{1}v_{k}\left(s\right)\left[ \overline{\widetilde{w}_{k}^{\left(4\right)}\left(s\right)}-\left(\gamma- \eta^{2}\right)\overline{\widetilde{w}_{k}^{\prime\prime}\left(s\right)}+2 \beta\eta\overline{\widetilde{v}_{k}^{\prime}\left(s\right)}\right]ds\] \[\quad+\sum_{k=1}^{3}v_{k}^{\prime}\left(1\right)\overline{ \widetilde{w}_{k}^{\prime\prime}\left(1\right)}-\sum_{k=1}^{3}\biggl{[}w_{k}^{ \left(3\right)}\left(1\right)-\left(\gamma-\eta^{2}\right)w_{k}^{\prime}\left(1 \right)+2\beta\eta v_{k}\left(1\right)\biggr{]}\overline{\widetilde{v}_{k}\left( 1\right)}\] \[\quad-\sum_{k=1}^{3}v_{k}^{\prime}\left(0\right)\left(\overline{ \widetilde{w}_{k}^{\prime\prime}\left(0\right)}-\alpha\overline{\widetilde{w}_{k }^{\prime}\left(0\right)}+\kappa\widetilde{v}_{k}^{\prime}\left(0\right)\right)\] \[\quad+\sum_{k=1}^{3}v_{k}\left(0\right)\left[\overline{\widetilde{ w}_{k}^{\left(3\right)}\left(0\right)}-\left(\gamma-\eta^{2}\right) \overline{\widetilde{w}_{k}^{\prime}\left(0\right)}+\beta\eta\widetilde{v}_{k}^ {\prime}\left(0\right)\right]\] \[\quad-\sum_{k=1}^{3}\Bigl{(}w_{k}^{\prime\prime}\left(0\right)- \alpha w_{k}^{\prime}\left(0\right)-\kappa v_{k}^{\prime}\left(0\right)\Bigr{)} \overline{\widetilde{v}_{k}^{\prime}\left(0\right)}\] \[\quad+\sum_{k=1}^{3}\biggl{[}w_{k}^{\left(3\right)}\left(0\right)- \left(\gamma-\eta^{2}\right)w_{k}^{\prime}\left(0\right)+\beta\eta v_{k}\left( 0\right)\biggr{]}\overline{\widetilde{v}_{k}\left(0\right)}\]
Let \(\widetilde{v}_{k}\left(1\right)=0\), \(k=1,2,3\), and let \(\widetilde{v}_{j}\left(0\right)=\widetilde{v}_{k}\left(0\right)\), \(j,k=1,2,3\). Further, let \(\widetilde{w}_{k}\left(1\right)=\widetilde{w}_{k}^{\prime\prime}\left(1\right)=0\), \(\widetilde{w}_{k}^{\prime\prime}\left(0\right)-\alpha\widetilde{w}_{k}^{ \prime}\left(0\right)+\kappa\widetilde{v}_{k}^{\prime}\left(0\right)=0\), \(k=1,2,3\), \(\widetilde{w}_{j}\left(0\right)=\widetilde{w}_{k}\left(0\right)\), \(j,k=1,2,3\), and \(\sum_{k=1}^{3}\left[\widetilde{w}_{k}^{\left(3\right)}\left(0\right)-\left( \gamma-\eta^{2}\right)\widetilde{w}_{k}^{\prime}\left(0\right)+\beta\eta \widetilde{v}_{k}\left(0\right)\right]=0\). Then we have \(\left\{\left(\widetilde{w}_{k},\widetilde{v}_{k}\right)\right\}_{k=1}^{3} \in\mathscr{D}\left(\mathcal{T}^{*}\right)\) and
\[\left(\mathcal{T}x,z\right)_{\mathscr{X}} =\left(x,\mathcal{T}^{*}z\right)_{\mathscr{X}}\] \[=-\sum_{k=1}^{3}\left[\int_{0}^{1}w_{k}^{\prime\prime}\left(s \right)\overline{\widetilde{v}_{k}^{\prime\prime}\left(s\right)}ds+\left( \gamma-\eta^{2}\right)\int_{0}^{1}w_{k}^{\prime}\left(s\right)\overline{ \widetilde{v}_{k}^{\prime}\left(s\right)}ds+\alpha w_{k}^{\prime}\left(0 \right)\overline{\widetilde{v}_{k}^{\prime}\left(0\right)}\right]\] \[\quad+\sum_{k=1}^{3}\int_{0}^{1}v_{k}\left(s\right)\left[ \overline{\widetilde{w}_{k}^{\left(4\right)}\left(s\right)}-\left(\gamma-\eta ^{2}\right)\overline{\widetilde{w}_{k}^{\prime\prime}\left(s\right)}+2\beta \eta\overline{\widetilde{v}_{k}^{\prime}\left(s\right)}\right]ds\]
where \(\mathscr{D}\left(\mathcal{T}^{*}\right)\) and \(\mathcal{T}^{*}\) are as in the statement of the proposition, completing the proof.
Let us now consider the spectral problem for \(\mathcal{T}^{*}\) defined by (3.18), (3.19),
\[\mathcal{T}^{*}z=\mu z,\quad z\in\mathscr{D}\left(\mathcal{T}^{*}\right),\quad \mu\in\mathbb{C}, \tag{3.20}\]
which in coordinates is given by
\[\left\{\begin{aligned} \widetilde{w}_{k}^{\left(4\right)}-\left( \gamma-\eta^{2}\right)\widetilde{w}_{k}^{\prime\prime}-2\mu\beta\eta\widetilde{ w}_{k}^{\prime}&=-\mu^{2}\widetilde{w}_{k},& k=1,2,3,\\ \widetilde{w}_{k}\left(1\right)=\widetilde{w}_{k}^{\prime\prime} \left(1\right)&=0,& k=1,2,3,\\ \widetilde{w}_{j}\left(0\right)&=\widetilde{w}_{k} \left(0\right),& j,k=1,2,3,\\ \widetilde{w}_{k}^{\prime\prime}\left(0\right)-\left(\alpha+\mu \kappa\right)\widetilde{w}_{k}^{\prime}\left(0\right)&=0,& k=1,2,3,\\ \sum_{k=1}^{3}\left[\widetilde{w}_{k}^{\left(3\right)}\left(0 \right)-\left(\gamma-\eta^{2}\right)\widetilde{w}_{k}^{\prime}\left(0\right)+ \mu\beta\eta\widetilde{w}_{k}\left(0\right)\right]&=0,\end{aligned} \right.\right. \tag{3.21}\]
where we have taken into account that \(\widetilde{v}_{k}=-\mu\widetilde{w}_{k}\), \(k=1,2,3\). Proceeding formally as in the proof of Theorem 3.2, we let \(\psi=\psi\left(\mu,s\right)\) be a nonzero solution of the differential equation
\[\psi^{\left(4\right)}-\left(\gamma-\eta^{2}\right)\psi^{\prime\prime}-2\mu\beta\eta\psi^{\prime}=-\mu^{2}\psi \tag{3.22}\]
satisfying the boundary conditions
\[\psi\left(1\right)=\psi^{\prime\prime}\left(1\right) =0, \tag{3.23}\] \[\psi^{\prime\prime}\left(0\right)-\left(\alpha+\mu\kappa\right) \psi^{\prime}\left(0\right) =0. \tag{3.24}\]
Obviously, \(\psi\left(\overline{\mu},\,\cdot\,\right)=\overline{\psi\left(\mu,\,\cdot\,\right)}\). It is then clear that the solutions \(\widetilde{w}_{k}=\widetilde{w}_{k}\left(\mu,s\right)\) of (3.21) are of the form \(\widetilde{w}_{k}\left(\mu,s\right)=b_{k}\psi\left(\mu,s\right)\), \(k=1,2,3\), where the \(b_{k}\) are arbitrary constants, and therefore we have in the case \(\psi\left(\mu,0\right)\neq 0\) that \(\mu\) is an eigenvalue if and only if (3.22)-(3.24) with the supplementary requirement
\[\psi^{\left(3\right)}\left(\mu,0\right)-\left(\gamma-\eta^{2}\right)\psi^{ \prime}\left(\mu,0\right)-\mu\beta\eta\psi\left(\mu,0\right)=0\]
has a nonzero solution. The corresponding eigenvector \(z=z\left(\mu\right)\) is given by
\[z\left(\mu\right)=\left\{b_{k}\left(\psi\left(\mu,\,\cdot\,\right),-\mu\psi \left(\mu,\,\cdot\,\right)\right)\right\}_{k=1}^{3} \tag{3.25}\]
with the \(b_{k}\equiv b\neq 0\).
In the case
\[\psi\left(\mu,0\right)=0,\]
again continuing as in the proof of Theorem 3.2, we have \(\sum_{k=1}^{3}b_{k}=0\) and it follows that there are two linearly independent eigenvectors for the eigenvalue \(\mu\), \(z_{1}=z_{1}\left(\mu\right)\), \(z_{2}=z_{2}\left(\mu\right)\), given
by
\[z_{1}\left(\mu\right)=\big{\{}\left(\psi\left(\mu,\,\cdot\,\right),-\mu\psi\left( \mu,\,\cdot\,\right)\right);-\frac{1}{2}\left(\psi\left(\mu,\,\cdot\,\right),- \mu\psi\left(\mu,\,\cdot\,\right)\right);-\frac{1}{2}\left(\psi\left(\mu,\, \cdot\,\right),-\mu\psi\left(\mu,\,\cdot\,\right)\right)\big{\}}, \tag{3.26}\]
\[z_{2}\left(\mu\right)=\big{\{}\left(0,0\right);\left(\psi\left(\mu,\,\cdot\, \right),-\mu\psi\left(\mu,\,\cdot\,\right)\right);-\left(\psi\left(\mu,\, \cdot\,\right),-\mu\psi\left(\mu,\,\cdot\,\right)\right)\big{\}}, \tag{3.27}\]
respectively.
We can now turn to the verification of the condition \(\left(x,z\right)_{\mathscr{X}}\neq 0\) in Lemma 3.2. To do this note first that, since \(\mathscr{X}\) is a Hilbert space, \(\sigma\left(\mathcal{T}^{\ast}\right)=\overline{\sigma\left(\mathcal{T}\right)}\). Let \(\lambda\in\sigma\left(\mathcal{T}\right)\), \(\mu\in\sigma\left(\mathcal{T}^{\ast}\right)\), and let \(x\left(\lambda\right)=\{\left(w_{k}\left(\lambda,\,\cdot\,\right),v_{k} \left(\lambda,\,\cdot\,\right)\right)\}_{k=1}^{3}\) and \(z\left(\mu\right)=\{\left(\widetilde{w}_{k}\left(\mu,\,\cdot\,\right), \widetilde{v}_{k}\left(\mu,\,\cdot\,\right)\right)\}_{k=1}^{3}\) be the eigenvectors corresponding to \(\lambda\) and \(\mu\), respectively. We take the inner product of (2.6) with \(z\left(\mu\right)\) and obtain, taking into account (3.20),
\[\lambda\left(x\left(\lambda\right),z\left(\mu\right)\right)_{\mathscr{X}}= \left(\mathcal{T}x\left(\lambda\right),z\left(\mu\right)\right)_{\mathscr{X}} =\left(x\left(\lambda\right),\mathcal{T}^{\ast}z\left(\mu\right)\right)_{ \mathscr{X}}=\overline{\mu}\left(x\left(\lambda\right),z\left(\mu\right) \right)_{\mathscr{X}}.\]
Consequently, for \(\mu\neq\overline{\lambda}\), \(\left(x\left(\lambda\right),z\left(\mu\right)\right)_{\mathscr{X}}=0\). If we set \(\mu=\overline{\lambda}\),
\[\left(x\left(\lambda\right),z\left(\overline{\lambda}\right)\right)_{ \mathscr{X}}=B\left(\lambda\right)\sum_{k=1}^{3}c_{k}\overline{b_{k}} \tag{3.28}\]
with
\[B\left(\lambda\right)\coloneqq\int_{0}^{1}\varphi^{\prime\prime} \left(\lambda,s\right)\psi^{\prime\prime}\left(\lambda,s\right)ds+\left( \gamma-\eta^{2}\right)\int_{0}^{1}\varphi^{\prime}\left(\lambda,s\right)\psi^ {\prime}\left(\lambda,s\right)ds\] \[\qquad+\alpha\varphi^{\prime}\left(\lambda,0\right)\psi^{\prime} \left(\lambda,0\right)-\lambda^{2}\int_{0}^{1}\varphi\left(\lambda,s\right) \psi\left(\lambda,s\right)ds,\]
which is easily verified on using the fact that, by (3.12) and (3.25), \(w_{k}\left(\lambda,\,\cdot\,\right)=c_{k}\varphi\left(\lambda,\,\cdot\,\right)\), \(v_{k}\left(\lambda,\,\cdot\,\right)=c_{k}\lambda\varphi\left(\lambda,\, \cdot\,\right)\), \(\widetilde{w}_{k}\left(\lambda,\,\cdot\,\right)=b_{k}\psi\left(\overline{ \lambda},\,\cdot\,\right)\), \(\widetilde{v}_{k}\left(\lambda,\,\cdot\,\right)=-b_{k}\overline{\lambda} \psi\left(\overline{\lambda},\,\cdot\,\right)\), \(k=1,2,3\), and that \(\overline{\psi\left(\overline{\lambda},\,\cdot\,\right)}=\psi\left(\lambda,\, \cdot\,\right)\). Now integration by parts in the first and second integrals in \(B\left(\lambda\right)\) yields
\[B\left(\lambda\right)=-\lambda^{2}\int_{0}^{1}\varphi\left( \lambda,s\right)\psi\left(\lambda,s\right)ds+\int_{0}^{1}\!\!\left[\varphi^{ \left(4\right)}\left(\lambda,s\right)-\left(\gamma-\eta^{2}\right)\varphi^{ \prime\prime}\left(\lambda,s\right)\right]\psi\left(\lambda,s\right)ds\] \[\qquad+\varphi^{\prime\prime}\left(\lambda,1\right)\psi^{\prime} \left(\lambda,1\right)-\left[\varphi^{\left(3\right)}\left(\lambda,1\right)- \left(\gamma-\eta^{2}\right)\varphi^{\prime}\left(\lambda,1\right)\right]\psi \left(\lambda,1\right)\] \[\qquad-\left(\varphi^{\prime\prime}\left(\lambda,0\right)-\alpha \varphi^{\prime}\left(\lambda,0\right)\right)\psi^{\prime}\left(\lambda,0 \right)+\left[\varphi^{\left(3\right)}\left(\lambda,0\right)-\left(\gamma-\eta^ {2}\right)\varphi^{\prime}\left(\lambda,0\right)\right]\psi\left(\lambda,0 \right).\]
Inserting the differential equation (3.3) and using that \(\varphi^{\prime\prime}\left(1\right)=0\), \(\varphi^{\prime\prime}\left(0\right)-\left(\alpha+\lambda\kappa\right)\varphi^{\prime}\left(0\right)=0\), \(\psi\left(1\right)=0\), we can write
\[B\left(\lambda\right)=-2\lambda^{2}\int_{0}^{1}\varphi\left( \lambda,s\right)\psi\left(\lambda,s\right)ds-2\lambda\beta\eta\int_{0}^{1} \varphi^{\prime}\left(\lambda,s\right)\psi\left(\lambda,s\right)ds-\lambda \kappa\varphi^{\prime}\left(\lambda,0\right)\psi^{\prime}\left(\lambda,0\right)\] \[\qquad+\left[\varphi^{\left(3\right)}\left(\lambda,0\right)- \left(\gamma-\eta^{2}\right)\varphi^{\prime}\left(\lambda,0\right)+\lambda \beta\eta\varphi\left(\lambda,0\right)\right]\psi\left(\lambda,0\right)- \lambda\beta\eta\varphi\left(\lambda,0\right)\psi\left(\lambda,0\right)\] \[=-2\lambda^{2}\int_{0}^{1}\varphi\left(\lambda,s\right)\psi\left( \lambda,s\right)ds+2\lambda\beta\eta\int_{0}^{1}\psi^{\prime}\left(\lambda,s \right)\varphi\left(\lambda,s\right)ds-\lambda\kappa\varphi^{\prime}\left( \lambda,0\right)\psi^{\prime}\left(\lambda,0\right)\] \[\qquad+\left[\psi^{\left(3\right)}\left(\lambda,0\right)- \left(\gamma-\eta^{2}\right)\psi^{\prime}\left(\lambda,0\right)-\lambda\beta \eta\psi\left(\lambda,0\right)\right]\varphi\left(\lambda,0\right)+\lambda \beta\eta\psi\left(\lambda,0\right)\varphi\left(\lambda,0\right).\]
Note that
\[2\lambda\beta\eta\int_{0}^{1}\varphi^{\prime}\left(\lambda,s\right) \psi\left(\lambda,s\right)ds+\lambda\beta\eta\varphi\left(\lambda,0\right)\psi \left(\lambda,0\right)\] \[=\lambda\beta\eta\int_{0}^{1}\left(\varphi^{\prime}\left(\lambda,s \right)\psi\left(\lambda,s\right)-\varphi\left(\lambda,s\right)\psi^{\prime} \left(\lambda,s\right)\right)ds.\]
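This identity is simply integration by parts: since \(\psi\left(\lambda,1\right)=0\),
\[\int_{0}^{1}\varphi^{\prime}\left(\lambda,s\right)\psi\left(\lambda,s\right)ds=-\varphi\left(\lambda,0\right)\psi\left(\lambda,0\right)-\int_{0}^{1}\varphi\left(\lambda,s\right)\psi^{\prime}\left(\lambda,s\right)ds,\]
and substituting this expression for \(\int_{0}^{1}\varphi^{\prime}\psi\,ds\) on either side reduces both sides to \(-\lambda\beta\eta\varphi\left(\lambda,0\right)\psi\left(\lambda,0\right)-2\lambda\beta\eta\int_{0}^{1}\varphi\psi^{\prime}\,ds\).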
Hence,
\[B\left(\lambda\right) =-2\lambda^{2}\int_{0}^{1}\varphi\left(\lambda,s\right)\psi\left( \lambda,s\right)ds-\lambda\beta\eta\int_{0}^{1}\left(\varphi^{\prime}\left( \lambda,s\right)\psi\left(\lambda,s\right)-\varphi\left(\lambda,s\right)\psi^{ \prime}\left(\lambda,s\right)\right)ds\] \[\qquad-\lambda\kappa\varphi^{\prime}\left(\lambda,0\right)\psi^{ \prime}\left(\lambda,0\right)+\left[\varphi^{\left(3\right)}\left(\lambda,0 \right)-\left(\gamma-\eta^{2}\right)\varphi^{\prime}\left(\lambda,0\right)+ \lambda\beta\eta\varphi\left(\lambda,0\right)\right]\psi\left(\lambda,0\right)\] \[=-2\lambda^{2}\int_{0}^{1}\varphi\left(\lambda,s\right)\psi\left( \lambda,s\right)ds-\lambda\beta\eta\int_{0}^{1}\left(\varphi^{\prime}\left( \lambda,s\right)\psi\left(\lambda,s\right)-\varphi\left(\lambda,s\right) \psi^{\prime}\left(\lambda,s\right)\right)ds\] \[\qquad-\lambda\kappa\varphi^{\prime}\left(\lambda,0\right)\psi^{ \prime}\left(\lambda,0\right)+\Delta_{1}\left(\lambda\right)\Delta_{2}\left( \lambda\right)\]
Therefore, since \(\lambda\in\sigma\left(\mathcal{T}\right)\) and hence \(\Delta_{1}\left(\lambda\right)\Delta_{2}\left(\lambda\right)=0\) (by Theorem 3.3, \(\sigma\left(\mathcal{T}\right)\) is the union of the zero sets of \(\Delta_{1}\) and \(\Delta_{2}\)), the last term drops out and
\[B\left(\lambda\right) =-\lambda\bigg{[}2\lambda\int_{0}^{1}\varphi\left(\lambda,s \right)\psi\left(\lambda,s\right)ds+\beta\eta\int_{0}^{1}\left(\varphi^{ \prime}\left(\lambda,s\right)\psi\left(\lambda,s\right)-\varphi\left(\lambda,s \right)\psi^{\prime}\left(\lambda,s\right)\right)ds\] \[\qquad+\kappa\varphi^{\prime}\left(\lambda,0\right)\psi^{\prime} \left(\lambda,0\right)\bigg{]}.\]
Clearly if \(\lambda\) belongs to the branch \(\sigma_{1}\left(\mathcal{T}\right)\) of the spectrum \(\sigma\left(\mathcal{T}\right)\), then from (3.28) we have
\[\left(x\left(\lambda\right),z\left(\overline{\lambda}\right)\right)_{ \mathscr{X}}=3bcB\left(\lambda\right)\neq 0,\]
as \(bc\neq 0\). If \(\lambda\) belongs to the branch \(\sigma_{2}\left(\mathcal{T}\right)\) of \(\sigma\left(\mathcal{T}\right)\) we have
\[\left(x_{1}\left(\lambda\right),z_{1}\left(\overline{\lambda}\right)\right)_{ \mathscr{X}}=\frac{1}{2}B\left(\lambda\right)\neq 0,\quad\left(x_{2} \left(\lambda\right),z_{2}\left(\overline{\lambda}\right)\right)_{\mathscr{X}} =2B\left(\lambda\right)\neq 0\]
where the eigenvectors \(x_{j}\left(\lambda\right)\), \(z_{j}\left(\overline{\lambda}\right)\), \(j=1,2\), have the forms given by (3.7), (3.26) and (3.8), (3.27), respectively. The calculations above imply that associated with an eigenvector \(x\) corresponding to an eigenvalue \(\lambda\in\sigma_{1}\left(\mathcal{T}\right)\) there is a nontrivial element \(z\in\ker\left(\overline{\lambda}\mathcal{I}-\mathcal{T}^{*}\right)\) such that \(\left(x,z\right)_{\mathscr{X}}\neq 0\). Then each \(\lambda\in\sigma_{1}\left(\mathcal{T}\right)\) is simple since, by Theorem 3.2, it has a one-dimensional geometric eigenspace, \(\dim\ker\left(\lambda\mathcal{I}-\mathcal{T}\right)=1\). Similarly, it also follows from the calculations above that associated with an eigenvector \(x\) corresponding to an eigenvalue \(\lambda\in\sigma_{2}\left(\mathcal{T}\right)\) there is a nontrivial element \(z\in\ker\left(\overline{\lambda}\mathcal{I}-\mathcal{T}^{*}\right)\) such that \(\left(x,z\right)_{\mathscr{X}}\neq 0\). Hence, we obtain that each \(\lambda\in\sigma_{2}\left(\mathcal{T}\right)\) is semisimple and has multiplicity \(2\) since \(\dim\ker\left(\lambda\mathcal{I}-\mathcal{T}\right)=2\). We have proven the following theorem.
**Theorem 3.4**.: _All eigenvalues of \(\mathcal{T}\) are semisimple, i.e. there are no second-order root vectors (associated vectors) corresponding to any eigenvalue \(\lambda\in\sigma\left(\mathcal{T}\right)\). Each eigenvalue belonging to \(\sigma_{1}\left(\mathcal{T}\right)\) has multiplicity \(1\), and each eigenvalue belonging to \(\sigma_{2}\left(\mathcal{T}\right)\) has multiplicity \(2\)._
## 4. Completeness, minimality, and Riesz basis properties of eigenvectors
We begin by collecting three familiar operator results from the literature which we require in the sequel. The first is due to [6, Theorem V.8.1], the second to [12, Lemma 2.4], and the third to [18, Theorem 1.1].
**Lemma 4.1**.: _Let \(\mathcal{K}\) be a compact skewadjoint operator on a Hilbert space \(\mathscr{X}\) with \(\ker\mathcal{K}=\left\{0\right\}\), and let \(\mathcal{S}\) be a real operator on \(\mathscr{X}\) which has finite rank. Let_
\[\mathcal{A}=\mathcal{K}+\kappa\mathcal{S},\quad\kappa\geq 0.\]
_Then the root vectors of the operator \(\mathcal{A}\) are complete in \(\mathscr{X}\)._
**Lemma 4.2**.: _Let \(\mathcal{A}\) be a compact operator on \(\mathscr{X}\) and \(\ker\mathcal{A}=\left\{0\right\}\). Then the root vectors of \(\mathcal{A}\) are minimal in \(\mathscr{X}\)._
**Lemma 4.3**.: _Let \(\mathscr{X}\) be a separable Hilbert space and let \(\mathcal{A}\) be the infinitesimal generator of a \(C_{0}\)-semigroup \(\mathbb{U}\left(t\right)\) on \(\mathscr{X}\). Suppose that the following conditions hold:_
1. \(\sigma\left(\mathcal{A}\right)=\sigma^{\left(1\right)}\left(\mathcal{A} \right)\cup\sigma^{\left(2\right)}\left(\mathcal{A}\right)\) _where the branch_ \(\sigma^{\left(2\right)}\left(\mathcal{A}\right)=\left\{\lambda_{k}\right\}_{k=1}^ {\infty}\)_, consisting entirely of isolated eigenvalues of finite algebraic multiplicity;_
2. \(\sup_{k\geq 1}m_{a}\left(\lambda_{k}\right)<\infty\) _with_ \(m_{a}\left(\lambda_{k}\right)\coloneqq\dim E\left(\lambda_{k},\mathcal{A}\right) \mathscr{X}\)_, the_ \(E\left(\lambda_{k},\mathcal{A}\right)\) _being the Riesz projections associated with the eigenvalues_ \(\lambda_{k}\)_; and_
3. _there is a real constant_ \(\nu\) _such that_ \[\sup\left\{\Re\left(\lambda\right)\;\left|\;\lambda\in\sigma^{\left(1\right)} \left(\mathcal{A}\right)\right.\right\}\leq\nu\leq\inf\left\{\Re\left(\lambda \right)\;\left|\;\lambda\in\sigma^{\left(2\right)}\left(\mathcal{A}\right)\right.\right\},\] _and_ \[\inf_{k\neq j}\left|\lambda_{k}-\lambda_{j}\right|>0.\] (4.1)
_Then the following statements hold:_
1. _There exist two_ \(\mathbb{U}\left(t\right)\)_-invariant closed subspaces_ \(\mathscr{X}_{1}\) _and_ \(\mathscr{X}_{2}\) _with the properties_ 1. \(\sigma\left(\mathcal{A}|_{\mathscr{X}_{1}}\right)=\sigma^{\left(1\right)} \left(\mathcal{A}\right)\) _and_ \(\sigma\left(\mathcal{A}|_{\mathscr{X}_{2}}\right)=\sigma^{\left(2\right)} \left(\mathcal{A}\right)\)_; and_ 2. \(\left\{E\left(\lambda_{k},\mathcal{A}\right)\mathscr{X}_{2}\right\}_{k=1}^{\infty}\) _forms a Riesz basis of subspaces for_ \(\mathscr{X}_{2}\)_, and_ \[\mathscr{X}=\overline{\mathscr{X}_{1}\oplus\mathscr{X}_{2}}.\]
2. _If_ \(\sup_{k\geq 1}\left\|E\left(\lambda_{k},\mathcal{A}\right)\right\|<\infty\)_, then_ \[\mathscr{D}\left(\mathcal{A}\right)\subset\mathscr{X}_{1}\oplus\mathscr{X}_{2 }\subset\mathscr{X}.\]
3. \(\mathscr{X}\) _can be decomposed into the topological direct sum_ \[\mathscr{X}=\mathscr{X}_{1}\oplus\mathscr{X}_{2}\] _if and only if_ \(\sup_{n\geq 1}\left\|\sum\limits_{k=1}^{n}E\left(\lambda_{k},\mathcal{A}\right) \right\|<\infty\)_._
### Completeness and minimality
We shall now show that the root vectors of \(\mathcal{T}\), all of which were shown in Theorem 3.4 to be eigenvectors, are minimal complete in \(\mathscr{X}\). The result can be obtained by applying a method similar to the one in [11] to construct the inverse operator \(\mathcal{T}^{-1}\), which we know by Lemma 2.1 to exist and be compact, and by subsequently applying Lemmas 4.1 and 4.2. To this end, we need first of all the following theorem.
**Theorem 4.1**.: _Let \(\mathcal{T}_{0}\) be the skewadjoint part of \(\mathcal{T}\), i.e., \(\mathcal{T}\coloneqq\mathcal{A}+\mathcal{B}\) as defined by (2.1)-(2.4) with \(\kappa=0\). Then \(\mathcal{T}^{-1}=\mathcal{T}_{0}^{-1}+\kappa\mathcal{S}\), where \(\mathcal{S}\) is a real operator on \(\mathscr{X}\) which is of finite rank._
Proof.: Consider the problem (2.9). If we integrate the differential equation twice from \(0\) to \(1\), making use of the boundary conditions \(w_{k}\left(1\right)=w_{k}^{\prime\prime}\left(1\right)=0\), \(k=1,2,3\), we arrive at
\[\begin{split} w_{k}^{\prime\prime}\left(s\right)-\left(\gamma- \eta^{2}\right)w_{k}\left(s\right)+\left[w_{k}^{\left(3\right)}\left(0\right) -\left(\gamma-\eta^{2}\right)w_{k}^{\prime}\left(0\right)+\beta\eta v_{k} \left(0\right)\right]\\ \times\left(1-s\right)=-\widetilde{V}_{k}\left(s\right)- \widetilde{W}_{k}\left(s\right),\quad k=1,2,3,\end{split} \tag{4.2}\]
with the integral terms
\[\widetilde{V}_{k}\left(s\right)=-\int_{s}^{1}dt\int_{0}^{t}\widetilde{v}_{k} \left(r\right)dr,\quad\widetilde{W}_{k}\left(s\right)=-2\beta\eta\int_{s}^{1} dt\int_{0}^{t}\widetilde{w}_{k}^{\prime}\left(r\right)dr.\]
For brevity, let us set \(\widetilde{\left(V;W\right)}_{k}\left(\,\cdot\,\right)\coloneqq\widetilde{V} _{k}\left(\,\cdot\,\right)+\widetilde{W}_{k}\left(\,\cdot\,\right)\) in this proof. The solutions of (4.2) then take the form
\[\begin{split} w_{k}\left(s\right)=c_{k}&\sinh\sqrt{ \gamma-\eta^{2}}\left(1-s\right)+\left[w_{k}^{\left(3\right)}\left(0\right)- \left(\gamma-\eta^{2}\right)w_{k}^{\prime}\left(0\right)+\beta\eta v_{k} \left(0\right)\right]\\ &\times\frac{1}{\sqrt{\gamma-\eta^{2}}}\int_{s}^{1}\left(1-r \right)\sinh\sqrt{\gamma-\eta^{2}}\left(s-r\right)dr\\ &+\frac{1}{\sqrt{\gamma-\eta^{2}}}\int_{s}^{1}\sinh\sqrt{\gamma- \eta^{2}}\left(s-r\right)\widetilde{\left(V;W\right)}_{k}\left(r\right)dr, \quad k=1,2,3,\end{split} \tag{4.3}\]
with arbitrary constant \(c_{k}\). Thus the following equations are obtained
\[w_{k}\left(0\right)=c_{k} \sinh\sqrt{\gamma-\eta^{2}}-\left[w_{k}^{\left(3\right)}\left(0 \right)-\left(\gamma-\eta^{2}\right)w_{k}^{\prime}\left(0\right)+\beta\eta v_{k }\left(0\right)\right]\] \[\times\frac{1}{\sqrt{\gamma-\eta^{2}}}\int_{0}^{1}\left(1-r \right)\sinh\sqrt{\gamma-\eta^{2}}\,r\,dr\] \[-\frac{1}{\sqrt{\gamma-\eta^{2}}}\int_{0}^{1}\sinh\sqrt{\gamma- \eta^{2}}\,r\,\widetilde{\left(V;W\right)}_{k}\left(r\right)dr,\] \[w_{k}^{\prime}\left(0\right)=-c_{k} \sqrt{\gamma-\eta^{2}}\cosh\sqrt{\gamma-\eta^{2}}+\left[w_{k}^{ \left(3\right)}\left(0\right)-\left(\gamma-\eta^{2}\right)w_{k}^{\prime}\left( 0\right)+\beta\eta v_{k}\left(0\right)\right]\] \[\times\int_{0}^{1}\left(1-r\right)\cosh\sqrt{\gamma-\eta^{2}}\,r \,dr+\int_{0}^{1}\cosh\sqrt{\gamma-\eta^{2}}\,r\,\widetilde{\left(V;W\right) }_{k}\left(r\right)dr,\] \[w_{k}^{\prime\prime}\left(0\right)=c_{k} \left(\gamma-\eta^{2}\right)\sinh\sqrt{\gamma-\eta^{2}}-\left[w_ {k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{2}\right)w_{k}^{\prime} \left(0\right)+\beta\eta v_{k}\left(0\right)\right]\] \[\times\left[\sqrt{\gamma-\eta^{2}}\int_{0}^{1}\left(1-r\right) \sinh\sqrt{\gamma-\eta^{2}}\,r\,dr+1\right]\] \[-\sqrt{\gamma-\eta^{2}}\int_{0}^{1}\sinh\sqrt{\gamma-\eta^{2}}\, r\,\widetilde{\left(V;W\right)}_{k}\left(r\right)dr-\widetilde{\left(V;W \right)}_{k}\left(0\right)\]
for \(k=1,2,3\). If we sum each of these three equations over \(k=1,2,3\), using the vertex condition \(\sum_{k=1}^{3}\left[w_{k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{ 2}\right)w_{k}^{\prime}\left(0\right)+\beta\eta v_{k}\left(0\right)\right]=0\), we obtain
\[\begin{cases}\sum_{k=1}^{3}w_{k}\left(0\right)=\sinh\sqrt{\gamma- \eta^{2}}\,\sum_{k=1}^{3}c_{k}-\frac{1}{\sqrt{\gamma-\eta^{2}}}\int_{0}^{1} \sinh\sqrt{\gamma-\eta^{2}}\,r\sum_{k=1}^{3}\widetilde{\left(V;W\right)}_{k} \left(r\right)dr\\ \sum_{k=1}^{3}w_{k}^{\prime}\left(0\right)=-\sqrt{\gamma-\eta^{2}}\cosh\sqrt{ \gamma-\eta^{2}}\sum_{k=1}^{3}c_{k}+\int_{0}^{1}\cosh\sqrt{\gamma-\eta^{2}}\, r\sum_{k=1}^{3}\widetilde{\left(V;W\right)}_{k}\left(r\right)dr\\ \sum_{k=1}^{3}w_{k}^{\prime\prime}\left(0\right)=\left(\gamma-\eta^{2}\right) \,\sinh\sqrt{\gamma-\eta^{2}}\sum_{k=1}^{3}c_{k}-\sqrt{\gamma-\eta^{2}}\int_{ 0}^{1}\sinh\sqrt{\gamma-\eta^{2}}\,r\sum_{k=1}^{3}\widetilde{\left(V;W\right) }_{k}\left(r\right)dr\\ \qquad\qquad\qquad-\sum_{k=1}^{3}\widetilde{\left(V;W\right)}_{k} \left(0\right).\end{cases} \tag{4.4}\]
Using the vertex condition \(w_{k}^{\prime\prime}\left(0\right)-\alpha w_{k}^{\prime}\left(0\right)-\kappa v _{k}^{\prime}\left(0\right)=0\), \(k=1,2,3\), taking into account that the \(v_{k}=\widetilde{w}_{k}\), and summing over \(k=1,2,3\) we find, again using \(\sum_{k=1}^{3}\left[w_{k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{ 2}\right)w_{k}^{\prime}\left(0\right)+\beta\eta v_{k}\left(0\right)\right]=0\), that
\[\sum_{k=1}^{3}c_{k} =\frac{\sum_{k=1}^{3}\left[\widetilde{\left(V;W\right)}_{k}\left( 0\right)+\int_{0}^{1}\left(\sqrt{\gamma-\eta^{2}}\sinh\sqrt{\gamma-\eta^{2}}\, r+\alpha\cosh\sqrt{\gamma-\eta^{2}}\,r\right)\widetilde{\left(V;W\right)}_{k} \left(r\right)dr\right]}{\left(\gamma-\eta^{2}\right)\,\sinh\sqrt{\gamma-\eta^{2 }}+\alpha\sqrt{\gamma-\eta^{2}}\cosh\sqrt{\gamma-\eta^{2}}}\] \[\quad+\frac{\kappa\sum_{k=1}^{3}\widetilde{w}_{k}^{\prime}\left(0 \right)}{\left(\gamma-\eta^{2}\right)\,\sinh\sqrt{\gamma-\eta^{2}}+\alpha\sqrt{ \gamma-\eta^{2}}\cosh\sqrt{\gamma-\eta^{2}}}\] \[\coloneqq b\left(\widetilde{w},\widetilde{v}\right)+\kappa H\left( \widetilde{w}\right),\]
where
\[H\left(\widetilde{w}\right)\coloneqq\frac{\sum_{k=1}^{3}\widetilde{w}_{k}^{ \prime}\left(0\right)}{\left(\gamma-\eta^{2}\right)\,\sinh\sqrt{\gamma-\eta^{2} }+\alpha\sqrt{\gamma-\eta^{2}}\cosh\sqrt{\gamma-\eta^{2}}}.\]
Clearly the first equation in (4.4) can be written as
\[\sum_{k=1}^{3}w_{k}\left(0\right)=\left(b\left(\widetilde{w},\widetilde{v}\right)+ \kappa H\left(\widetilde{w}\right)\right)\sinh\sqrt{\gamma-\eta^{2}}-\frac{1}{ \sqrt{\gamma-\eta^{2}}}\int_{0}^{1}\sinh\sqrt{\gamma-\eta^{2}}\,r\sum_{k=1}^{3 }\widetilde{\left(V;W\right)}_{k}\left(r\right)dr.\]
Since \(w_{j}\left(0\right)=w_{k}\left(0\right)\equiv w\left(0\right)\), \(j,k=1,2,3\), it follows that
\[w\left(0\right) =\frac{1}{3}\left[\left(b\left(\widetilde{w},\widetilde{v}\right) +\kappa H\left(\widetilde{w}\right)\right)\sinh\sqrt{\gamma-\eta^{2}}-\frac{1} {\sqrt{\gamma-\eta^{2}}}\int_{0}^{1}\sinh\sqrt{\gamma-\eta^{2}}r\sum_{k=1}^{3 }\widetilde{\left(V;W\right)}_{k}\left(r\right)dr\right]\] \[\coloneqq c\left(\widetilde{w},\widetilde{v}\right)+\kappa G\left( \widetilde{w}\right)\]
where
\[G\left(\tilde{w}\right)\coloneqq\frac{H\left(\widetilde{w}\right)}{3}\sinh \sqrt{\gamma-\eta^{2}}.\]
Thus we have
\[\sum_{k=1}^{3}c_{k}=b\left(\widetilde{w},\widetilde{v}\right)+\kappa H\left( \widetilde{w}\right),\quad w\left(0\right)=c\left(\widetilde{w},\widetilde{v} \right)+\kappa G\left(\widetilde{w}\right),\]
and hence the following algebraic equations for \(c_{k}\), \(w_{k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{2}\right)w_{k}^{\prime }\left(0\right)+\beta\eta v_{k}\left(0\right)\):
\[\left\{\begin{aligned} c_{k}\sinh\sqrt{\gamma-\eta^{2}}-\frac{w_{k}^{ \left(3\right)}\left(0\right)-\left(\gamma-\eta^{2}\right)w_{k}^{\prime}\left( 0\right)+\beta\eta v_{k}\left(0\right)}{\sqrt{\gamma-\eta^{2}}}\int_{0}^{1} \left(1-r\right)\sinh\sqrt{\gamma-\eta^{2}}\,r\,dr\\ =c\left(\widetilde{w},\widetilde{v}\right)+\frac{1}{\sqrt{\gamma- \eta^{2}}}\int_{0}^{1}\sinh\sqrt{\gamma-\eta^{2}}\,r\left(\widetilde{V;W} \right)_{k}\left(r\right)dr,\end{aligned}\right. \tag{4.5}\]
for \(k=1,2,3\). We observe that the determinant of the coefficient matrix formed by (4.5) does not vanish,
\[\left|\begin{aligned} \sinh\sqrt{\gamma-\eta^{2}}& -\int_{0}^{1}\left(1-r\right)\sinh\sqrt{\gamma-\eta^{2}}\,r\,dr\\ \alpha\cosh\sqrt{\gamma-\eta^{2}}&-\alpha\int_{0}^{1} \left(1-r\right)\cosh\sqrt{\gamma-\eta^{2}}\,r\,dr-1\end{aligned}\right| \neq 0,\]
whence there are unique solution pairs \(c_{k}\), \(w_{k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{2}\right)w_{k}^{\prime }\left(0\right)+\beta\eta v_{k}\left(0\right)\) of the form
\[c_{k} =c_{k}\left(\widetilde{w},\widetilde{v}\right)+\kappa d_{k}\left( \widetilde{w}\right), k=1,2,3,\] \[w_{k}^{\left(3\right)}\left(0\right)-\left(\gamma-\eta^{2}\right) w_{k}^{\prime}\left(0\right)+\beta\eta v_{k}\left(0\right) =M_{k}\left(\widetilde{w},\widetilde{v}\right)+\kappa N_{k}\left( \widetilde{w}\right), k=1,2,3.\]
Inserting these into (4.3), we find
\[w_{k}\left(s\right)=c_{k} \left(\widetilde{w},\widetilde{v}\right)\sinh\sqrt{\gamma-\eta^{2}} \left(1-s\right)+\frac{M_{k}\left(\widetilde{w},\widetilde{v}\right)}{\sqrt{ \gamma-\eta^{2}}}\int_{s}^{1}\left(1-r\right)\sinh\sqrt{\gamma-\eta^{2}}\left( s-r\right)dr\] \[+\frac{1}{\sqrt{\gamma-\eta^{2}}}\int_{s}^{1}\sinh\sqrt{\gamma- \eta^{2}}\left(s-r\right)\widetilde{\left(V;W\right)}_{k}\left(r\right)dr\] \[+\kappa\left[d_{k}\left(\widetilde{w}\right)\sinh\sqrt{\gamma- \eta^{2}}\left(1-s\right)+\frac{N_{k}\left(\widetilde{w}\right)}{\sqrt{\gamma -\eta^{2}}}\int_{s}^{1}\left(1-r\right)\sinh\sqrt{\gamma-\eta^{2}}\left(s-r \right)dr\right]\] \[\coloneqq w_{k}\left(s,\widetilde{w},\widetilde{v}\right)+\kappa \left(d_{k}\left(\widetilde{w}\right)\phi_{1}\left(s\right)+N_{k}\left( \widetilde{w}\right)\phi_{2}\left(s\right)\right)\]
where \(\phi_{1}\left(s\right)=\sinh\sqrt{\gamma-\eta^{2}}\left(1-s\right)\), \(\phi_{2}\left(s\right)=\frac{1}{\sqrt{\gamma-\eta^{2}}}\int_{s}^{1}\left(1-r \right)\sinh\sqrt{\gamma-\eta^{2}}\left(s-r\right)dr\). Therefore we have
\[\mathcal{T}^{-1}\left\{\left(\widetilde{w}_{k},\widetilde{v}_{k} \right)\right\}_{k=1}^{3} =\left\{\left(w_{k},v_{k}\right)\right\}_{k=1}^{3}\] \[=\left\{\left(w_{k}\left(s,\widetilde{w},\widetilde{v}\right),v_ {k}\right)\right\}_{k=1}^{3}+\kappa\left\{d_{k}\left(\widetilde{w}\right) \left(\phi_{1},0\right)+N_{k}\left(\widetilde{w}\right)\left(\phi_{2},0\right) \right\}_{k=1}^{3}\] \[=\mathcal{T}_{0}^{-1}\left\{\left(\widetilde{w}_{k},\widetilde{v }_{k}\right)\right\}_{k=1}^{3}+\kappa\mathcal{S}\left\{\left(\widetilde{w}_{k},\widetilde{v}_{k}\right)\right\}_{k=1}^{3},\]
taking into account that \(v_{k}=\widetilde{w}_{k}\), \(k=1,2,3\), where \(\mathcal{T}_{0}^{-1}\) is a compact skewadjoint operator and \(\mathcal{S}\) is a bounded operator of rank \(2\) on \(\mathscr{X}\). In particular if \(\mathcal{T}x=y\), \(y\in\mathscr{X}\), \(x\in\mathscr{D}\left(\mathcal{A}\right)\), it is easily seen that
\[\Re\left(\mathcal{T}x,x\right)_{\mathscr{X}}=\Re\left(\mathcal{T}^{-1}y,y \right)_{\mathscr{X}}=\Re\left(\left(\mathcal{T}_{0}^{-1}+\kappa\mathcal{S} \right)y,y\right)_{\mathscr{X}}=\kappa\Re\left(\mathcal{S}y,y\right)_{ \mathscr{X}}=\kappa\left(\mathcal{S}y,y\right)_{\mathscr{X}}.\]
This completes the proof of the theorem.
Combining Theorem 4.1 with Lemmas 4.1 and 4.2, identifying \(\mathcal{A}\) with \(\mathcal{T}^{-1}\) (recall Lemma 2.1), and noting that the geometric eigenspaces of \(\mathcal{A}\) and \(\mathcal{T}\) corresponding, respectively, to the eigenvalues \(\lambda\) and \(\lambda^{-1}\) coincide, we obtain the following result.
**Theorem 4.2**.: _The eigenvectors of \(\mathcal{T}\) are minimal complete in \(\mathscr{X}\)._
### Riesz basis property
There remains the verification of the Riesz basis property of the eigenvectors. We use the result of [18, Theorem 1.1] to prove the following theorem.
**Theorem 4.3**.: _There exists a sequence of eigenvectors corresponding to a properly enumerated sequence of eigenvalues of \(\mathcal{T}\) which is a Riesz basis for \(\mathscr{X}\)._
Proof.: We identify the operator \(\mathcal{A}\) in [18, Theorem 1.1] with \(\mathcal{T}\) and begin by noting that, by Theorem 3.4, for any \(\lambda\in\sigma\left(\mathcal{T}\right)\), \(E\left(\lambda,T\right)\mathscr{X}\) is equal to the geometric eigenspace \(\ker\left(\lambda\mathcal{I}-\mathcal{T}\right)\) and so \(\dim E\left(\lambda,T\right)\mathscr{X}=1\) or \(\dim E\left(\lambda,T\right)\mathscr{X}=2\). Denote by \(\{\lambda_{\pm n}\}_{n=1}^{\infty}\) the properly enumerated sequence of eigenvalues of \(\mathcal{T}\), i.e. indexed in such a way that \(\lambda_{n}=\overline{\lambda_{-n}}\) whenever \(\Im\left(\lambda_{n}\right)\neq 0\). Let us take \(\sigma^{\left(2\right)}\left(\mathcal{T}\right)=\sigma\left(\mathcal{T} \right)=\left\{\lambda_{\pm n}\right\}_{n=1}^{\infty}\) and \(\sigma^{\left(1\right)}\left(\mathcal{T}\right)=\{-\infty\}\). It now follows from Theorem 3.3 that all conditions of Lemma 4.3 are satisfied. In particular, by virtue of Theorem 3.3, the eigenvalues are simple for large enough \(n\), and thus their sequence is interpolating (because \(|\Re\left(\lambda_{\pm n}\right)|<\infty\) and the separation condition (4.1) holds). It also follows from Theorem 4.2 that
\[\overline{\operatorname{span}\left\{E\left(\lambda,\mathcal{T}\right)\mathscr{X }\ \big{|}\ \lambda\in\sigma^{\left(2\right)}\left(\mathcal{T}\right)\right\}}=\mathscr{X} _{2}=\mathscr{X}.\]
Hence, combining these results, we have by Lemma 4.3 that there exists a sequence of eigenvectors of \(\mathcal{T}\) corresponding to \(\{\lambda_{\pm n}\}_{n=1}^{\infty}\) which is a Riesz basis for \(\mathscr{X}\).
As a corollary of the theorem we obtain the following result.
**Theorem 4.4**.: \(\mathcal{T}\) _satisfies the Spectrum Determined Growth Assumption._
## 5. Exponential stability result
We now have all the ingredients to apply the spectral approach to obtain the final result of the paper.
**Theorem 5.1**.: _If \(\kappa>0\), then there exist constants \(M,\varepsilon>0\) such that_
\[\left\|\mathbb{S}\left(t\right)x_{0}\right\|_{\mathscr{X}}\leq Me^{-\varepsilon t }\left\|x_{0}\right\|_{\mathscr{X}},\quad x_{0}\in\mathscr{D}\left(\mathcal{A }\right),\quad t\geq 0,\]
_where \(\mathbb{S}\left(t\right)\) is the \(C_{0}\)-semigroup of contractions generated by \(\mathcal{T}\coloneqq\mathcal{A}+\mathcal{B}\) defined by (2.1)-(2.4); and consequently we have for the solutions of the closed-loop system (2.5), as \(t\to\infty\), \(\left\|\boldsymbol{x}\left(t\right)\right\|_{\mathscr{X}}\to 0\) exponentially._
Proof.: Since, by Theorem 4.4, \(\mathcal{T}\) satisfies the Spectrum Determined Growth Assumption, we have \(\varpi_{0}=\sup\left\{\Re\left(\lambda\right)\mid\lambda\in\sigma\left( \mathcal{T}\right)\right\}\). It follows from Theorem 3.1 that \(i\mathbb{R}\subset\varrho\left(\mathcal{T}\right)\), and according to Theorem 3.3 the two branches \(\sigma^{\left(1\right)}\left(\mathcal{T}\right)\), \(\sigma^{\left(2\right)}\left(\mathcal{T}\right)\) have asymptotes \(\Re\left(\lambda\right)\sim-\frac{1}{\kappa}\), \(\Re\left(\lambda\right)\sim-\frac{2}{\kappa}\), respectively. Thus \(\sup\left\{\Re\left(\lambda\right)\mid\lambda\in\sigma\left(\mathcal{T} \right)\right\}<0\) and hence \(\sup\left\{\Re\left(\lambda\right)\mid\lambda\in\sigma\left(\mathcal{T} \right)\right\}\leq-\varepsilon<0\) for some \(\varepsilon>0\). With this the proof is complete.
|
2310.00103 | Linkage principle for small quantum groups | We consider small quantum groups with root systems of Cartan, super and
modular type, among others. These are constructed as Drinfeld doubles of
finite-dimensional Nichols algebras of diagonal type. We prove a linkage
principle for them by adapting techniques from the work of Andersen, Jantzen
and Soergel in the context of small quantum groups at roots of unity.
Consequently we characterize the blocks of the category of modules. We also
find a notion of (a)typicality similar to the one in the representation theory
of Lie superalgebras. The typical simple modules turn out to be the simple and
projective Verma modules. Moreover, we deduce a character formula for
1-atypical simple modules. | Cristian Vay | 2023-09-29T19:25:48Z | http://arxiv.org/abs/2310.00103v1 | # Linkage principle for small quantum groups
###### Abstract.
We consider small quantum groups with root systems of Cartan, super and modular type, among others. These are constructed as Drinfeld doubles of finite-dimensional Nichols algebras of diagonal type. We prove a linkage principle for them by adapting techniques from the work of Andersen, Jantzen and Soergel in the context of small quantum groups at roots of unity. Consequently we characterize the blocks of the category of modules. We also find a notion of (a)typicality similar to the one in the representation theory of Lie superalgebras. The typical simple modules turn out to be the simple and projective Verma modules. Moreover, we deduce a character formula for \(1\)-atypical simple modules.
_2020 Mathematics Subject Classification._ 16T05, 17B37, 22E47, 17B10, 17B35, 20G05.
_Key words._ Quantum group, Nichols algebra, Weyl groupoid, Root system, Representation theory.
This work is partially supported by CONICET PIP 11220200102916CO, Foncyt PICT 2020-SERIA-02847 and Secyt (UNC)
particular, they have associated (generalized) root systems and, besides those of Cartan type, we find among the examples root systems of finite-dimensional contragredient Lie superalgebras in characteristic 0, 3 and 5, and root systems of finite-dimensional contragredient Lie algebras in positive characteristic, as it was observed by Andruskiewitsch, Angiono and Yamane [8, 3]. Consequently, we get small quantum groups with these more general root systems.
### Main results
Let \(u_{\mathfrak{q}}\) be as in Figure 2. Then it admits a triangular decomposition \(u_{\mathfrak{q}}=u_{\mathfrak{q}}^{-}u_{\mathfrak{q}}^{0}u_{\mathfrak{q}}^{+}\) which gives rise to Verma type modules \(M(\pi)\) for every algebra map \(\pi:u_{\mathfrak{q}}^{0}\longrightarrow\mathbb{C}\). Moreover, every \(M(\pi)\) has a unique simple quotient \(L(\pi)\) and any simple \(u_{\mathfrak{q}}\)-module can be obtained in this way. For instance, all these were computed for \(\mathfrak{q}\) of type \(\mathfrak{uf}\mathfrak{o}(7)\) in [5]. The linkage principle gives us information about the composition factors of the Verma modules as we explain after introducing some notation.
Let \(\{\alpha_{1},...,\alpha_{\theta}\}\) be the canonical \(\mathbb{Z}\)-basis of \(\mathbb{Z}^{\mathbb{I}}\) with \(\mathbb{I}=\{1,...,\theta\}\). The matrix \(\mathfrak{q}\) defines a bicharacter \(\mathbb{Z}^{\mathbb{I}}\times\mathbb{Z}^{\mathbb{I}}\longrightarrow\mathbb{ C}^{\times}\) which we denote also \(\mathfrak{q}\). Given \(\beta\in\mathbb{Z}^{\mathbb{I}}\), we set \(q_{\beta}=\mathfrak{q}(\beta,\beta)\) and \(b^{\mathfrak{q}}(\beta)=\operatorname{ord}q_{\beta}\). We denote \(\rho^{\mathfrak{q}}:\mathbb{Z}^{\mathbb{I}}\longrightarrow\mathbb{C}^{\times}\) the group homomorphism such that \(\rho^{\mathfrak{q}}(\alpha_{i})=q_{\alpha_{i}}\) for all \(i\in\mathbb{I}\). We notice \(u_{\mathfrak{q}}^{0}\) is an abelian group algebra generated by \(K_{i},L_{i}\), \(i\in\mathbb{I}\) (in Figure 2, \(\Gamma\) is generated by the \(K_{i}\)'s) and unlike \(u_{q}(\mathfrak{g})\), they yield two copies of the (finite) torus. If \(\alpha=n_{1}\alpha_{1}+\cdots+n_{\theta}\alpha_{\theta}\in\mathbb{Z}^{\mathbb{ I}}\), we denote \(K_{\alpha}=K_{1}^{n_{1}}\cdots K_{\theta}^{n_{\theta}}\) and \(L_{\alpha}=L_{1}^{n_{1}}\cdots L_{\theta}^{n_{\theta}}\). The algebra map \(\pi\widetilde{\mu}:u_{\mathfrak{q}}^{0}\longrightarrow\mathbb{C}\) is defined by \(\pi\widetilde{\mu}(K_{\alpha}L_{\beta})=\frac{\mathfrak{q}(\alpha,\mu)}{ \mathfrak{q}(\mu,\beta)}\pi(K_{\alpha}L_{\beta})\),
Figure 2. We construct small quantum groups from matrices with finite-dimensional Nichols algebras of diagonal type; \(\Gamma\) is a group quotient of \(\mathbb{Z}^{\theta}\). For instance, the positive part of \(u_{q}(\mathfrak{g})\) is the Nichols algebra of \(\mathfrak{q}=(q^{d_{i}c_{ij}})_{i,j}\) where \(C=(c_{ij})_{i,j}\) is the Cartan matrix of \(\mathfrak{g}\) and \((d_{i}c_{ij})_{i,j}\) is symmetric. Thus, the corresponding bosonization is the Borel subalgebra and \(u_{q}(\mathfrak{g})\) is a quotient of \(u_{\mathfrak{q}}\) by a central group subalgebra.
Figure 1. Algebras involved in Lusztig’s conjectures. Given a finite root system \(\Delta\), we write \(\mathfrak{g}\) and \(\mathfrak{g}_{k}\) for the associated Lie algebras over \(\mathbb{C}\) and over an algebraically closed field \(k\) of characteristic \(p\) odd (\(\neq 3\) if \(\mathfrak{g}\) has a component of type \(G_{2}\)). We write \(q\) for a \(p\)-th root of unity in \(\mathbb{C}\).
\(\alpha,\beta\in\mathbb{Z}^{\mathbb{I}}\). Let \(\Delta_{+}^{\mathfrak{q}}\subset\mathbb{Z}_{\geqslant 0}^{\mathbb{I}}\) be the set of positive roots of the Nichols algebra \(\mathfrak{B}_{\mathfrak{q}}\). If \(\beta\in\Delta_{+}^{\mathfrak{q}}\), we define
\[\beta\downarrow\mu=\mu-n_{\beta}^{\pi}(\mu)\beta\]
where \(n_{\beta}^{\pi}(\mu)\) is the unique \(n\in\{1,...,b^{\mathfrak{q}}(\beta)-1\}\) such that \(q_{\beta}^{n}-\rho^{\mathfrak{q}}(\beta)\,\pi\widehat{\mu}(K_{\beta}L_{\beta}^ {-1})=0\), if it exists, and otherwise \(n_{\beta}^{\pi}(\mu)=0\).
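For a concrete illustration of these quantities, consider the rank-one case \(\theta=1\), \(\mathfrak{q}=(q)\) with \(q\) a primitive root of unity of order \(\ell\geqslant 3\), and \(\pi\) the trivial algebra map; then \(\Delta_{+}^{\mathfrak{q}}=\{\alpha_{1}\}\) and \(b^{\mathfrak{q}}(\alpha_{1})=\ell\). For \(\mu=m\alpha_{1}\) with \(m\in\mathbb{Z}\) we have \(\rho^{\mathfrak{q}}(\alpha_{1})\,\pi\widehat{\mu}(K_{\alpha_{1}}L_{\alpha_{1}}^{-1})=q\cdot\mathfrak{q}(\alpha_{1},\mu)\,\mathfrak{q}(\mu,\alpha_{1})=q^{2m+1}\), so
\[n_{\alpha_{1}}^{\pi}(m\alpha_{1})\equiv 2m+1\pmod{\ell},\qquad\alpha_{1}\downarrow m\alpha_{1}=\left(m-n_{\alpha_{1}}^{\pi}(m\alpha_{1})\right)\alpha_{1},\]
with \(1\leqslant n_{\alpha_{1}}^{\pi}(m\alpha_{1})\leqslant\ell-1\), provided \(2m+1\not\equiv 0\pmod{\ell}\); otherwise \(n_{\alpha_{1}}^{\pi}(m\alpha_{1})=0\) and \(\alpha_{1}\downarrow m\alpha_{1}=m\alpha_{1}\).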
**Theorem 1.1** (Strong linkage principle).: _Let \(L\) be a composition factor of \(M(\pi\widehat{\mu})\), \(\mu\in\mathbb{Z}^{\mathbb{I}}\). Then \(L\simeq L(\pi\widehat{\mu})\) or \(L\simeq L(\pi\widehat{\lambda})\) with \(\lambda=\beta_{r}\downarrow\cdots\beta_{1}\downarrow\mu\) for some \(\beta_{1},...,\beta_{r}\in\Delta_{+}^{\mathfrak{q}}\)._
A first consequence of this principle is that it defines an equivalence relation which completely characterizes the blocks of the category. It is also the starting point to imagine character formulas for the simple modules. In this direction, we deduce that \(M(\pi\widehat{\mu})=L(\pi\widehat{\mu})\) is simple if and only if
\[\prod_{\begin{subarray}{c}\beta\in\Delta_{+}^{\mathfrak{q}}\\ 1\leqslant t<b^{\mathfrak{q}}(\beta)\end{subarray}}\ \left(q_{\beta}^{t}-\rho^{ \mathfrak{q}}(\beta)\,\pi\widehat{\mu}(K_{\beta}L_{\beta}^{-1})\right)\neq 0;\]
this was also proved in [26, Proposition 5.16]. Moreover, we give a character formula for \(L(\pi\widehat{\mu})\) if the above expression is zero for a unique \(\beta\in\Delta_{+}^{\mathfrak{q}}\) and for \(t=n_{\beta}^{\pi}(\mu)\). Explicitly,
\[\mathrm{ch}L(\pi\widehat{\mu})=\quad\frac{1-e^{-n_{\beta}^{\pi}(\mu)\beta}}{1 -e^{-\beta}}\prod_{\gamma\in\Delta_{+}^{\mathfrak{q}}\setminus\{\beta\}} \frac{1-e^{-b^{\mathfrak{q}}(\gamma)\gamma}}{1-e^{-\gamma}}.\]
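In the rank-one illustration above the product consists of the factors \(q^{t}-q^{2m+1}\), \(1\leqslant t\leqslant\ell-1\); it is nonzero exactly when \(2m+1\equiv 0\pmod{\ell}\), in which case \(M(\pi\widehat{\mu})=L(\pi\widehat{\mu})\) is simple. Otherwise \(\mu=m\alpha_{1}\) is \(1\)-atypical and the formula specializes to
\[\mathrm{ch}L(\pi\widehat{\mu})=\frac{1-e^{-n\alpha_{1}}}{1-e^{-\alpha_{1}}}=1+e^{-\alpha_{1}}+\cdots+e^{-(n-1)\alpha_{1}},\qquad n=n_{\alpha_{1}}^{\pi}(m\alpha_{1}),\]
so that \(\dim L(\pi\widehat{\mu})=n\).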
By analogy with the theory of Lie superalgebras [27, 39], we say that the number of zeros of the former product measures the degree of atypicality. This is similar to [41], where Yamane gives a Weyl-Kac character formula for typical simple modules over quantum groups of Nichols algebras of diagonal type with finite root systems (when the Nichols algebra is finite-dimensional, his typical modules coincide with ours).
When I discussed a preliminary version of Theorem 1.1 with Nicolas Andruskiewitsch and Simon Riche, they asked me how the Weyl groupoid and the affine Weyl group, respectively, come into play. In fact, the linkage principle is usually encoded in the dot action of the affine Weyl group, and the Weyl groupoid is its replacement in the theory of Nichols algebras [21]. We partially answer their questions in the next corollary. We also discover a phenomenon similar to what occurs in the setting of Lie superalgebras: there, both the action of the affine Weyl group generated by the even reflections and the translations by odd roots take part in the linkage principle, see _e. g._[17].
Let \(\Delta_{+,\mathrm{car}}^{\mathfrak{q}}\) be the set of positive Cartan roots of \(\mathfrak{q}\) and \(s_{\beta}\) the associated reflection. We set \(\varrho^{\mathfrak{q}}=\frac{1}{2}\sum_{\beta\in\Delta_{+}^{\mathfrak{q}}}(b^{ \mathfrak{q}}(\beta)-1)\beta\). For \(\mu\in\mathbb{Z}^{\mathbb{I}}\), \(\beta\in\Delta_{+,\mathrm{car}}^{\mathfrak{q}}\) and \(m\in\mathbb{Z}\), we define
\[s_{\beta,m}\bullet\mu=s_{\beta}(\mu+mb^{\mathfrak{q}}(\beta)\beta-\varrho^{ \mathfrak{q}})+\varrho^{\mathfrak{q}}.\]
We denote \(\mathcal{W}^{\mathfrak{q}}_{\mathrm{link}}\) the group generated by all the affine reflections \(s_{\beta,m}\). We recall that \(\mathfrak{q}\) is of Cartan type if \(\Delta^{\mathfrak{q}}_{+}=\Delta^{\mathfrak{q}}_{+,\mathrm{car}}\), and \(\mathfrak{q}\) is of super type if its root system is isomorphic to the root system of a finite-dimensional contragredient Lie superalgebra in characteristic \(0\). If \(\mathfrak{q}\) is of super type, then \(\Delta^{\mathfrak{q}}_{+,\mathrm{odd}}:=\Delta^{\mathfrak{q}}_{+}\backslash \Delta^{\mathfrak{q}}_{+,\mathrm{car}}\) is not empty and \(\mathrm{ord}\,q_{\beta}=2\) for every root \(\beta\in\Delta^{\mathfrak{q}}_{+,\mathrm{odd}}\); in this case \(\beta\downarrow\mu=\mu\) or \(\mu-\beta\).
**Corollary 1.2** (Linkage principle).: _Assume \(\pi\) is the trivial algebra map. Let \(L(\pi\widetilde{\lambda})\) be a composition factor of \(M(\pi\widetilde{\mu})\). Then_
1. \(\lambda\in\mathcal{W}^{\mathfrak{q}}_{\mathrm{link}}\bullet\mu\) _if_ \(\mathfrak{q}\) _is of Cartan type._
2. \(\lambda\in\mathcal{W}^{\mathfrak{q}}_{\mathrm{link}}\bullet(\mu+\mathbb{Z} \Delta^{\mathfrak{q}}_{+,\mathrm{odd}})\) _if_ \(\mathfrak{q}\) _is of super type._
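As a quick consistency check, in the rank-one illustration above (which is of Cartan type, the unique positive root being a Cartan root) we have \(\varrho^{\mathfrak{q}}=\frac{\ell-1}{2}\,\alpha_{1}\) and
\[s_{\alpha_{1},k}\bullet(m\alpha_{1})=\left(\ell-1-m-k\ell\right)\alpha_{1},\qquad k\in\mathbb{Z},\]
so \(\alpha_{1}\downarrow m\alpha_{1}=(m-n)\alpha_{1}\) with \(n\equiv 2m+1\pmod{\ell}\) is of this form for a suitable \(k\), in agreement with item 1 of Corollary 1.2.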
This corollary holds more generally for matrices of standard type, that is, those with a constant bundle of root systems. However, there are matrices which are not of standard type, such as the matrices of modular type. The assumption on \(\pi\) is not particularly restrictive as we deal with all the simple modules of highest weight \(\widetilde{\mu}:u^{0}_{\mathfrak{q}}\longrightarrow\mathbb{C}\), \(K_{\alpha}L_{\beta}\mapsto\frac{\mathfrak{q}(\alpha,\mu)}{\mathfrak{q}(\mu,\beta)}\), for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\). In the case of \(u_{q}(\mathfrak{g})\), these form the category of modules of type \(1\) in the sense of Lusztig, cf. [1, SS2.4].
### Sketch of the proof
As we mentioned at the beginning, we imitate the ideas of [1, SS1-SS7]. There, the authors consider \(\mathbb{Z}^{\mathbb{I}}\)-graded algebras admitting a triangular decomposition \(U=U^{-}U^{0}U^{+}\) with \(U^{0}\) commutative and satisfying additional conditions which are fulfilled by \(u_{q}(\mathfrak{g})\) and \(U^{[p]}(\mathfrak{g}_{k})\). Then, given a Noetherian commutative algebra \(\mathbf{A}\) and an algebra map \(\pi:U^{0}\longrightarrow\mathbf{A}\), they define certain categories \(\mathcal{C}_{\mathbf{A}}\) of \(\mathbb{Z}^{\mathbb{I}}\)-graded \((U,\mathbf{A})\)-bimodules. We observe here that we can consider these categories also for \(u_{\mathfrak{q}}\). Roughly speaking, in the case \(\mathbf{A}=\mathbb{C}\), this identifies with the abelian subcategory generated by the simple modules \(L(\pi\widetilde{\mu})\) for \(\mu\in\mathbb{Z}^{\mathbb{I}}\).
A powerful tool used in _loc. cit._ to study the categories \(\mathcal{C}_{\mathbf{A}}\) are the so-called Lusztig automorphisms \(T_{w}\) of \(u_{q}(\mathfrak{g})\), where \(w\) runs in the Weyl group of \(\mathfrak{g}\). We find a difference between \(u_{q}(\mathfrak{g})\) and \(u_{\mathfrak{q}}\) at this point. Indeed, we have Lusztig isomorphisms but connecting possibly different algebras, _i.e._\(T_{w}:u_{w^{-\mathfrak{q}}}\longrightarrow u_{\mathfrak{q}}\) and the matrices \(w^{-\mathfrak{s}}\mathfrak{q}\) and \(\mathfrak{q}\) are not necessarily equal. These isomorphisms were defined in [23] for each \(w\in{}^{\mathfrak{q}}\mathcal{W}\), the Weyl groupoid of \(\mathfrak{q}\)[21]. Nevertheless, we can carefully use them as in [1].
First, we produce different triangular decompositions on \(u_{\mathfrak{q}}\) and then each triangular decomposition gives rise to new Verma modules. Namely, for \(w\in{}^{\mathfrak{q}}\mathcal{W}\) and \(\mu\in\mathbb{Z}^{\mathbb{I}}\), we denote \(Z^{w}_{\mathbb{C}}(\mu)\) the Verma module induced by \(\pi\widetilde{\mu}\) using the triangular decomposition \(u_{\mathfrak{q}}=T_{w}(u^{-}_{w^{-\mathfrak{s}}\mathfrak{q}})\,u^{0}_{ \mathfrak{q}}\,T_{w}(u^{+}_{w^{-\mathfrak{s}}\mathfrak{q}})\). Let \(L^{w}_{\mathbb{C}}(\mu)\) denote the unique simple quotient of \(Z^{w}_{\mathbb{C}}(\mu)\). For instance, \(Z_{\mathbb{C}}(\mu):=Z^{\mathrm{id}}_{\mathbb{C}}(\mu)=M(\pi\widetilde{\mu})\) and \(L_{\mathbb{C}}(\mu):=L^{\mathrm{id}}_{\mathbb{C}}(\mu)=L(\pi\widetilde{\mu})\).
Now, we set \(\mu\langle w\rangle=\mu+w(\varrho^{w^{-\mathfrak{s}}\mathfrak{q}})-\varrho^{\mathfrak{q}}\in\mathbb{Z}^{\mathbb{I}}\); we notice that \(\varrho^{w^{-\mathfrak{s}}\mathfrak{q}}\) and \(\varrho^{\mathfrak{q}}\) could be different. We show that the Verma modules \(Z^{w}_{\mathbb{C}}(\mu\langle w\rangle)\) and \(Z^{x}_{\mathbb{C}}(\mu\langle x\rangle)\) have identical characters and that the Hom-space between them is one-dimensional, for all \(w,x\in{}^{\mathfrak{q}}\mathcal{W}\). Moreover, we construct inductively a generator for each space and compute its kernel.
We also prove that the image of \(\Phi:Z_{\mathbb{C}}(\mu)\longrightarrow Z_{\mathbb{C}}^{w_{0}}(\mu\langle w_{0} \rangle)\) is isomorphic to \(L_{\mathbb{C}}(\mu)\), cf. Figure 3.
We are ready to outline the last step of the proof of Theorem 1.1. Let \(L_{\mathbb{C}}(\lambda)\) be a composition factor of \(Z_{\mathbb{C}}(\mu)\) not isomorphic to \(L_{\mathbb{C}}(\mu)\). Then \(L_{\mathbb{C}}(\lambda)\) is a composition factor of \(\operatorname{Ker}\Phi\) and hence also of \(\operatorname{Ker}\varphi=\operatorname{Im}\psi\), cf. Figure 3 (for some \(s\)). Since the Verma modules in the same row of Figure 3 have identical characters, we conclude that \(L_{\mathbb{C}}(\lambda)\) is a composition factor of \(Z_{\mathbb{C}}(\beta\downarrow\mu)\) as well. We observe that \(-\beta_{top}^{\mathfrak{q}}=\sum_{\beta\in\Delta_{+}^{\mathfrak{q}}}(b^{ \mathfrak{q}}(\beta)-1)\beta\leqslant\lambda\leqslant\beta\downarrow\mu<\mu\) because \(\beta_{top}^{\mathfrak{q}}\) is the maximum \(\mathbb{Z}^{\mathbb{I}}\)-degree of \(\mathfrak{B}_{\mathfrak{q}}\). Therefore, by repeating this procedure, we will find \(\beta_{1},...,\beta_{r}\in\Delta_{+}^{\mathfrak{q}}\) such that \(\lambda=\beta_{r}\downarrow\cdots\beta_{1}\downarrow\mu\) as desired.
### Relations with other algebras in the literature
We would like to remark that the representations of the present small quantum groups can be helpful for other algebras. For instance, this was pointed out by Andruskiewitsch, Angiono and Yakimov [7, SS1.3.3] for the large quantum groups studied in [11, 13, 7] which are analogous to the quantized enveloping algebras of De Concini-Kac-Procesi. The small quantum groups are particular quotients of them. As in [7], we highlight that our results apply to small quantum groups at even roots of unity.
Another example is given by the braided Drinfeld doubles of Nichols algebras recently constructed by Laugwitz and Sanmarco [32]. These are quotients of \(u_{\mathfrak{q}}\) with only one copy of the finite torus. Since the corresponding projections preserve the triangular decompositions [32, Proposition 3.16], a linkage principle for them can be deduced from Theorem 1.1.
Finally, it is worth noting that Pan and Shu [37] have obtained results comparable to ours, as well as their proofs, for modular Lie superalgebras. This could suggest a relationship between small quantum groups of super type and the corresponding modular Lie superalgebras as it happens between \(u_{q}(\mathfrak{g})\) and \(U^{[p]}(\mathfrak{g}_{k})\).
### Organization
The exposition is mostly self-contained. In Section 2 we set up general conventions. In Section 3 we collect the main concepts regarding Nichols algebras (PBW basis, Weyl groupoid, root system) and their properties. We illustrate them in super type \(A(1|1)\). In Section 4 we recall the construction of the Drinfeld doubles and
their Lusztig isomorphisms. In Section 5 we introduce the categories defined in [1] and summarize their general features. Next we investigate these categories over the Drinfeld doubles of Section 4 using their Lusztig isomorphisms: in Section 6, we construct the different Verma modules mentioned previously in this introduction, and we study the morphisms between them in Section 7. Finally, we prove our main results in Section 8.
### Acknowledgments
I am very grateful to Nicolas Andruskiewitsch, for motivating me to study the representations of Hopf algebras since my PhD, to Ivan Angiono, for patiently answering all my questions about Nichols algebras of diagonal type, and to Simon Riche, for teaching me so much about representation theory. I also thank them for the fruitful discussions.
## 2. Conventions
Throughout our work \(\Bbbk\) denotes an algebraically closed field of characteristic zero. For \(q\in\Bbbk^{\times}\) and \(n\in\mathbb{N}\), we recall the quantum numbers
\[(n)_{q}=\sum_{j=0}^{n-1}q^{j}\quad\text{and the identity}\quad(n)_{q}=q^{n-1}(n)_ {q^{-1}}.\]
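For instance, for \(n=3\) the identity reads \((3)_{q}=1+q+q^{2}=q^{2}(1+q^{-1}+q^{-2})=q^{2}(3)_{q^{-1}}\).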
Let \(\theta\in\mathbb{N}\). We set \(\mathbb{I}=\mathbb{I}_{\theta}=\{1,2,\dots,\theta\}\). We denote \(\Pi=\{\alpha_{1},\dots,\alpha_{\theta}\}\) the canonical \(\mathbb{Z}\)-basis of \(\mathbb{Z}^{\mathbb{I}}\). We will write \(0=0\alpha_{1}+\dots+0\alpha_{\theta}\). We will use \(\Pi\) to identify the matrices of size \(\theta\times\theta\) and the bicharacters on \(\mathbb{Z}^{\mathbb{I}}\) with values in \(\Bbbk^{\times}\). Explicitly, given \(\mathfrak{q}=(q_{ij})_{i,j\in\mathbb{I}}\in(\Bbbk^{\times})^{\mathbb{I}\times \mathbb{I}}\), by abuse of notation, we will denote \(\mathfrak{q}:\mathbb{Z}^{\mathbb{I}}\times\mathbb{Z}^{\mathbb{I}}\longrightarrow \Bbbk^{\times}\) the bicharacter defined by
\[\mathfrak{q}(\alpha_{i},\alpha_{j})=q_{ij}\quad\forall\,i,j\in\mathbb{I}.\]
Let \(\beta\in\mathbb{Z}^{\mathbb{I}}\). We will write \(q_{\beta}=\mathfrak{q}(\beta,\beta)\). In particular, \(q_{\alpha_{i}}=q_{ii}\) for all \(i\in\mathbb{I}\). The bound function [26, (2.12)] is
\[b^{\mathfrak{q}}(\beta)=\begin{cases}\min\{m\in\mathbb{N}\mid(m)_{q_{\beta}}= 0\}&\text{if $(m)_{q_{\beta}}=0$ for some $m\in\mathbb{N}$},\\ \infty&\text{otherwise}.\end{cases} \tag{2.1}\]
Of course, \(b^{\mathfrak{q}}(\beta)\) is finite if and only if \(q_{\beta}\) is a primitive root of \(1\) of order \(b^{\mathfrak{q}}(\beta)\).
We will consider the dual action of \(\operatorname{Aut}_{\mathbb{Z}}(\mathbb{Z}^{\mathbb{I}})\) on bicharacters. That is, if \(w\in\operatorname{Aut}_{\mathbb{Z}}(\mathbb{Z}^{\mathbb{I}})\), then the bicharacter \(w^{*}\mathfrak{q}:\mathbb{Z}^{\mathbb{I}}\times\mathbb{Z}^{\mathbb{I}} \longrightarrow\Bbbk^{\times}\) is defined by
\[w^{*}\mathfrak{q}(\alpha,\beta)=\mathfrak{q}(w^{-1}(\alpha),w^{-1}(\beta)) \quad\forall\alpha,\beta\in\mathbb{Z}^{\mathbb{I}}.\]
The partial order \(\leqslant^{w}\) in \(\mathbb{Z}^{\mathbb{I}}\) is defined by \(\lambda\leqslant^{w}\mu\) if and only if \(w^{-1}(\mu-\lambda)\in\mathbb{Z}^{\mathbb{I}}_{\geqslant 0}\). When \(w=\operatorname{id}\), we simply write \(\leqslant\).
Let \(M=\oplus_{\nu\in\mathbb{Z}^{\mathbb{I}}}M_{\nu}\) be a \(\mathbb{Z}^{\mathbb{I}}\)-graded module over a ring \(\mathbf{A}\). We call the homogeneous component \(M_{\nu}\) a _weight space_. The support \(\sup(M)\) is the set of all \(\nu\in\mathbb{Z}^{\mathbb{I}}\) such that \(M_{\nu}\neq 0\). If \(w\in\operatorname{Aut}_{\mathbb{Z}}(\mathbb{Z}^{\mathbb{I}})\), the _\(w\)-twisted_ module \(M[w]\) is the \(\mathbb{Z}^{\mathbb{I}}\)-graded \(\mathbf{A}\)-module whose weight spaces are \(M[w]_{\nu}=M_{w^{-1}\nu}\) for all \(\nu\in\mathbb{Z}^{\mathbb{I}}\). This defines an endofunctor on the category of \(\mathbb{Z}^{\mathbb{I}}\)-graded \(\mathbf{A}\)-modules. If the weight spaces are \(\mathbf{A}\)-free, the formal character of \(M\) is
\[\operatorname{ch}\!M=\sum_{\mu\in\mathbb{Z}^{\mathbb{I}}}\operatorname{rank}_{ \mathbf{A}}(M_{\mu})\,e^{\mu}\]
which belongs to the \(\mathbb{Z}\)-algebra generated by the symbols \(e^{\mu}\) with multiplication \(e^{\mu}\cdot e^{\nu}=e^{\mu+\nu}\). We denote \(\overline{(-)}\) and \(w(-)\) the automorphisms of it given by \(\overline{e^{\mu}}=e^{-\mu}\) and \(w(e^{\mu})=e^{w\mu}\) for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and \(w\in\operatorname{Aut}_{\mathbb{Z}}(\mathbb{Z}^{\mathbb{I}})\). Therefore
\[\operatorname{ch}(M[w])=w(\operatorname{ch}M). \tag{2.2}\]
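For instance, if \(M=\mathbf{A}m_{1}\oplus\mathbf{A}m_{2}\) is a hypothetical graded module with \(m_{1}\in M_{\alpha_{1}}\) and \(m_{2}\in M_{\alpha_{2}}\), then \(\operatorname{ch}M=e^{\alpha_{1}}+e^{\alpha_{2}}\) and \(\operatorname{ch}(M[w])=e^{w\alpha_{1}}+e^{w\alpha_{2}}=w(\operatorname{ch}M)\), in agreement with (2.2).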
## 3. Finite-dimensional Nichols algebras of diagonal type
The finite-dimensional Nichols algebras of diagonal type were classified by Heckenberger in [22]1. They are parameterized by matrices of scalars. Their classification and structure are governed by certain Lie-type objects, such as PBW bases, the generalized Cartan matrix, the Weyl groupoid and the generalized root system, among others. We recall here their main features needed for our work. We refer to [3] for an overview of the theory.
Footnote 1: Indeed, he classifies a larger family, but here we only consider finite-dimensional Nichols algebras.
We fix \(\theta\in\mathbb{N}\) and set \(\mathbb{I}=\mathbb{I}_{\theta}\). Throughout this section \(\mathfrak{q}=(q_{ij})_{i,j\in\mathbb{I}}\in(\Bbbk^{\times})^{\mathbb{I}\times \mathbb{I}}\) denotes a matrix with finite-dimensional Nichols algebra \(\mathfrak{B}_{\mathfrak{q}}\). This is constructed as a quotient of the free \(\Bbbk\)-algebra generated by \(E_{1},...,E_{\theta}\), that is
\[\mathfrak{B}_{\mathfrak{q}}=\Bbbk\langle E_{1},...,E_{\theta}\rangle/ \mathcal{J}_{\mathfrak{q}}\]
for certain ideal \(\mathcal{J}_{\mathfrak{q}}\). It is a \(\mathbb{Z}^{\mathbb{I}}\)-graded algebra with
\[\deg E_{i}=\alpha_{i}\quad\forall i\in\mathbb{I}\]
and \(\mathbb{Z}\)-graded with \(\deg E_{i}=1\) for all \(i\in\mathbb{I}\). Moreover, \(\mathfrak{B}_{\mathfrak{q}}\) is a braided Hopf algebra and a minimal set of generators of \(\mathcal{J}_{\mathfrak{q}}\) is known [11, 12] but we do not need more details.
**Example 3.1**.: Let \(\mathfrak{g}\) be a finite dimensional semisimple Lie algebra over \(\mathbb{C}\) with Cartan matrix \(C=(c_{ij})_{i,j\in\mathbb{I}}\) and \(D=(d_{i})_{i\in\mathbb{I}}\) such that \(DC\) is symmetric. Let \(q\) be a primitive root of unity and \(\mathfrak{q}=(q^{d_{i}c_{ij}})_{i,j\in\mathbb{I}}\). Then \(\mathfrak{B}_{\mathfrak{q}}\) is finite-dimensional. Moreover, if \(\operatorname{ord}q\) is an odd prime, not \(3\) if \(\mathfrak{g}\) has a component of type \(G_{2}\), then \(\mathfrak{B}_{\mathfrak{q}}\) is the positive part of the small quantum group \(u_{q}(\mathfrak{g})\), one of the algebras analyzed by Andersen-Jantzen-Soergel [1, Section 1.3. Case 2].
**Example 3.2**.: Let \(q\in\Bbbk\) be a primitive root of unity of order \(N>2\) and
\[\mathfrak{q}=\begin{pmatrix}-1&-q\\ -1&-1\end{pmatrix}.\]
Set \(E_{12}=\operatorname{ad}_{c}E_{1}(E_{2})=(E_{1}E_{2}-q_{12}E_{2}E_{1})\). Then
\[\mathfrak{B}_{\mathfrak{q}}=\langle E_{1},E_{2}\mid E_{1}^{2}=E_{2}^{2}=E_{12 }^{N}=0\rangle.\]
This is a Nichols algebra of super type \(A(1|1)\). This term refers to the root system which will be introduced in SS3.3; see Example 3.5. Throughout this section we will use this example to illustrate the different concepts. See also [3, SS5.1.11].
### PBW basis
In [28], Kharchenko proves the existence of homogeneous elements \(E_{\beta_{1}},...,E_{\beta_{n}}\in\mathfrak{B}_{\mathfrak{q}}\) with \(\deg E_{\beta_{\nu}}=\beta_{\nu}\in\mathbb{Z}_{\geqslant 0}^{\mathbb{I}}\) and \(b^{\mathfrak{q}}(\beta_{\nu})<\infty\), recall (2.1), such that
\[\left\{E_{\beta_{1}}^{m_{1}}\cdots E_{\beta_{n}}^{m_{n}}\mid 0\leqslant m_{\nu} <b^{\mathfrak{q}}(\beta_{\nu}),\,1\leqslant\nu\leqslant n\right\}\]
is a linear basis of \(\mathfrak{B}_{\mathfrak{q}}\). We will see in the next section a way of constructing these elements \(E_{\beta_{\nu}}\). The set of _positive roots_ of \(\mathfrak{q}\) is
\[\Delta_{+}^{\mathfrak{q}}=\{\beta_{1},...,\beta_{n}\}. \tag{3.1}\]
It turns out that the elements of \(\Delta_{+}^{\mathfrak{q}}\) are pairwise different [18, Proposition 2.12]. The set of (positive) _simple roots_ is \(\Pi^{\mathfrak{q}}=\{\alpha_{1},...,\alpha_{\theta}\}\). We set
\[\beta_{top}^{\mathfrak{q}}=\sum_{\beta\in\Delta_{+}^{\mathfrak{q}}}(b^{ \mathfrak{q}}(\beta)-1)\beta. \tag{3.2}\]
This is the weight of the homogeneous component of maximum \(\mathbb{Z}^{\mathbb{I}}\)-degree of \(\mathfrak{B}_{\mathfrak{q}}\).
**Example 3.3**.: The positive roots of \(\mathfrak{q}\) in Example 3.2 are
\[\Delta_{+}^{\mathfrak{q}}=\{\alpha_{1},\alpha_{1}+\alpha_{2},\alpha_{2}\}\]
with
\[b^{\mathfrak{q}}(\alpha_{1})=2,\quad b^{\mathfrak{q}}(\alpha_{1}+\alpha_{2})= N,\quad b^{\mathfrak{q}}(\alpha_{2})=2.\]
The associated PBW basis is
\[\{E_{1}^{m_{1}}E_{12}^{m_{2}}E_{2}^{m_{3}}\mid 0\leqslant m_{1},m_{3}<2,\,0 \leqslant m_{2}<N\}.\]
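In particular, this basis gives \(\dim\mathfrak{B}_{\mathfrak{q}}=2\cdot N\cdot 2=4N\) and, by (3.2), \(\beta_{top}^{\mathfrak{q}}=\alpha_{1}+(N-1)(\alpha_{1}+\alpha_{2})+\alpha_{2}=N(\alpha_{1}+\alpha_{2})\).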
### Weyl groupoid
For distinct \(i,j\in\mathbb{I}\), by [38] there exists \(m\in\mathbb{N}_{0}\) such that
\[(m+1)_{q_{ii}}(q_{ii}^{m}q_{ij}q_{ji}-1)=0.\]
Thus, it is possible to define the _generalized Cartan matrix_\(C^{\mathfrak{q}}=(c^{\mathfrak{q}}_{ij})_{i,j\in\mathbb{I}}\) of \(\mathfrak{q}\) as
\[c^{\mathfrak{q}}_{ij}=\begin{cases}2,&\text{if $i=j$;}\\ -\min\{m\in\mathbb{N}_{0}\mid(m+1)_{q_{ii}}(q_{ii}^{m}q_{ij}q_{ji}-1)=0\},&\text {otherwise.}\end{cases} \tag{3.3}\]
This matrix leads to the reflection \(\sigma^{\mathfrak{q}}_{i}\in\operatorname{Aut}_{\mathbb{Z}}(\mathbb{Z}^{ \mathbb{I}})\), \(i\in\mathbb{I}\), given by
\[\sigma^{\mathfrak{q}}_{i}(\alpha_{j})=\alpha_{j}-c^{\mathfrak{q}}_{ij}\alpha_ {i}\quad\forall\,j\in\mathbb{I}. \tag{3.4}\]
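For instance, for \(\mathfrak{q}\) as in Example 3.2 one has \(q_{11}=q_{22}=-1\) and \(q_{12}q_{21}=q\neq 1\); hence the minimum in (3.3) is attained at \(m=1\) because \((2)_{-1}=0\), so \(c^{\mathfrak{q}}_{12}=c^{\mathfrak{q}}_{21}=-1\). Thus \(C^{\mathfrak{q}}\) is the Cartan matrix of type \(A_{2}\), and \(\sigma^{\mathfrak{q}}_{1}(\alpha_{2})=\alpha_{1}+\alpha_{2}\).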
It turns out that the Nichols algebra of \((\sigma^{\mathfrak{q}}_{i})^{\ast}\mathfrak{q}\) is also finite-dimensional for all \(i\in\mathbb{I}\). However, \(\mathfrak{B}_{(\sigma^{\mathfrak{q}}_{i})^{\ast}\mathfrak{q}}\) is not necessarily isomorphic to \(\mathfrak{B}_{\mathfrak{q}}\).
Let \(\mathcal{H}\) be the family of matrices with finite-dimensional Nichols algebras and \(r_{i}:\mathcal{H}\longrightarrow\mathcal{H}\) the bijection given by
\[r_{i}(\mathfrak{q})=(\sigma_{i}^{\mathfrak{q}})^{*}\mathfrak{q}. \tag{3.5}\]
for all \(\mathfrak{q}\in\mathcal{H}\) and \(i\in\mathbb{I}\). For instance, \(r_{p}(\mathfrak{q})(\alpha_{i},\alpha_{j})=q_{ij}q_{ip}^{-c_{pj}^{\mathfrak{q}}}q_{pj}^{-c_{pi}^{\mathfrak{q}}}q_{pp}^{c_{pi}^{\mathfrak{q}}c_{pj}^{\mathfrak{q}}}\) for all \(p,i,j\in\mathbb{I}\) and hence
\[c_{ij}^{r_{i}(\mathfrak{q})}=c_{ij}^{\mathfrak{q}}. \tag{3.6}\]
It is immediate that \(\sigma_{i}^{r_{i}(\mathfrak{q})}=\sigma_{i}^{\mathfrak{q}}\) and \(r_{i}^{2}(\mathfrak{q})=\mathfrak{q}\). We notice that
\[r_{i_{k}}(\cdots(r_{i_{1}}(\mathfrak{q})))=\left(\sigma_{i_{k}}^{r_{i_{k-1}}\cdots r_{i_{1}}(\mathfrak{q})}\cdots\sigma_{i_{2}}^{r_{i_{1}}(\mathfrak{q})}\sigma_{i_{1}}^{\mathfrak{q}}\right)^{*}\mathfrak{q}.\]
We denote \(\mathcal{G}=\langle r_{i}\mid i\in\mathbb{I}\rangle\) which is a subgroup of the group of bijections on \(\mathcal{H}\).
Let \(\mathcal{X}\) be a \(\mathcal{G}\)-orbit. The _Weyl groupoid_\(\mathcal{W}\) of \(\mathcal{X}\) is the category whose objects are the matrices belonging to \(\mathcal{X}\), and whose morphisms are generated by the arrows \(\sigma_{i}^{\mathfrak{q}}:\mathfrak{q}\to r_{i}(\mathfrak{q})\) for all \(\mathfrak{q}\in\mathcal{X}\) and \(i\in\mathbb{I}\). We denote \(1^{\mathfrak{q}}\) the identity of \(\mathfrak{q}\) and set \(\mathcal{W}_{\mathfrak{q}}=\operatorname{Hom}_{\mathcal{W}}(\mathfrak{q},\mathfrak{q})\). Thus, a morphism in \(\mathcal{W}\) is of the form
\[\sigma_{i_{k}}^{r_{i_{k-1}}\cdots r_{i_{1}}(\mathfrak{q})}\cdots\sigma_{i_{2}}^{r_{i_{1}}(\mathfrak{q})}\sigma_{i_{1}}^{\mathfrak{q}}:\mathfrak{q}\longrightarrow r_{i_{k}}(\cdots(r_{i_{1}}(\mathfrak{q}))).\]
To shorten notation, we observe that a morphism in \(\mathcal{W}\) is determined either by specifying the source, \(\sigma_{i_{k}}\cdots\sigma_{i_{1}}^{\mathfrak{q}}:\mathfrak{q}\longrightarrow r _{i_{k}}\cdots r_{i_{1}}(\mathfrak{q})\), or by specifying the target, \(1^{\mathfrak{q}}\sigma_{i_{k}}\cdots\sigma_{i_{1}}:r_{i_{1}}\cdots r_{i_{k}}( \mathfrak{q})\longrightarrow\mathfrak{q}\). For \(w=\sigma_{i_{k}}\cdots\sigma_{i_{1}}^{\mathfrak{q}}\), we will write
\[w^{*}\mathfrak{q}=r_{i_{k}}(\cdots(r_{i_{1}}(\mathfrak{q})))\quad\text{and} \quad w^{-*}\mathfrak{q}=(w^{-1})^{*}\mathfrak{q}.\]
We set \({}^{\mathfrak{q}}\mathcal{W}\), resp. \(\mathcal{W}^{\mathfrak{q}}\), the family of morphisms in \(\mathcal{W}\) whose target, resp. source, is \(\mathfrak{q}\).
Clearly, \((\sigma_{p}^{\mathfrak{q}})^{*}\mathfrak{q}(\sigma_{p}^{\mathfrak{q}}(\alpha), \sigma_{p}^{\mathfrak{q}}(\alpha))=\mathfrak{q}(\alpha,\alpha)\). Hence, for all \(w\in\mathcal{W}^{\mathfrak{q}}\) and \(\alpha\in\mathbb{Z}^{\mathbb{I}}\), it holds
\[b^{w^{*}\mathfrak{q}}(w\alpha)=b^{\mathfrak{q}}(\alpha). \tag{3.7}\]
By [25, Theorem 1], the defining relations of the Weyl groupoid are of Coxeter type:
\[(\sigma_{i}\sigma_{i})1^{\mathfrak{q}}=1^{\mathfrak{q}}\quad\text{and}\quad( \sigma_{i}\sigma_{j})^{m_{ij}^{\mathfrak{q}}}1^{\mathfrak{q}}=1^{\mathfrak{q} }\quad\forall\mathfrak{q}\in\mathcal{X},\,\forall i,j\in\mathbb{I},\,i\neq j;\]
for certain exponents \(m_{ij}^{\mathfrak{q}}\). Similar to Coxeter groups, there exists a length function \(\ell:\mathcal{W}\longrightarrow\mathbb{N}_{0}\) given by
\[\ell(w)=\min\bigl{\{}k\in\mathbb{N}_{0}\mid\exists i_{1},\dots,i_{k}\in \mathbb{I},\mathfrak{q}\in\mathcal{X}:w=\sigma_{i_{k}}\cdots\sigma_{i_{1}}^{ \mathfrak{q}}\bigr{\}}\]
for \(w\in\mathcal{W}\). Also, for each \(\mathfrak{q}\in\mathcal{X}\), there exists a unique morphism \(w_{0}\in{}^{\mathfrak{q}}\mathcal{W}\) of maximal length. Let \(w_{0}=1^{\mathfrak{q}}\sigma_{i_{1}}\cdots\sigma_{i_{n}}\) be a reduced expression of the longest element in \({}^{\mathfrak{q}}\mathcal{W}\). The set of positive roots can be constructed as follows
\[\Delta_{+}^{\mathfrak{q}}=\biggl{\{}\beta_{\nu}=1^{\mathfrak{q}}\sigma_{i_{1}} \cdots\sigma_{i_{\nu-1}}(\alpha_{i_{\nu}})\mid 1\leq\nu\leq n\biggr{\}}, \tag{3.8}\]
see for instance [18, Proposition 2.12].
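For instance, for \(\mathfrak{q}\) as in Example 3.2 one can take the reduced expression \(w_{0}=1^{\mathfrak{q}}\sigma_{1}\sigma_{2}\sigma_{1}\) of the longest element (all the generalized Cartan matrices in its orbit are of type \(A_{2}\), cf. Example 3.4 below); then (3.8) recovers the roots of Example 3.3:
\[\beta_{1}=\alpha_{1},\quad\beta_{2}=1^{\mathfrak{q}}\sigma_{1}(\alpha_{2})=\alpha_{1}+\alpha_{2},\quad\beta_{3}=1^{\mathfrak{q}}\sigma_{1}\sigma_{2}(\alpha_{1})=\sigma_{1}(\alpha_{1}+\alpha_{2})=\alpha_{2}.\]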
**Example 3.4**.: Let \(\mathfrak{q}\) be as in Example 3.2. We set
\[\mathfrak{p}=\left(\begin{array}{cc}-1&q^{-1}\\ 1&q\end{array}\right)\quad\text{and}\quad\mathfrak{r}=\left(\begin{array}{cc}q &q^{-1}\\ 1&-1\end{array}\right).\]
Then \(\mathcal{X}=\{\mathfrak{q},\mathfrak{p},\mathfrak{r},\mathfrak{q}^{t}, \mathfrak{p}^{t},\mathfrak{r}^{t}\}\) is a \(\mathcal{G}\)-orbit. Indeed, the generalized Cartan matrices of them are all equal to
\[\left(\begin{array}{cc}2&-1\\ -1&2\end{array}\right)\]
and the Weyl groupoid of \(\mathcal{X}\) is generated by the reflections \(\sigma_{1}^{\mathfrak{z}}\) and \(\sigma_{2}^{\mathfrak{z}}\), \(\mathfrak{z}\in\mathcal{X}\), connecting these six matrices.
The corresponding Nichols algebras are not necessarily isomorphic. For instance,
\[\mathfrak{B}_{\mathfrak{p}} =\langle E_{1},E_{2}\mid E_{1}^{2}=E_{2}^{N}=E_{221}^{2}=0\rangle \quad\text{with}\quad E_{221}=(\operatorname{ad}_{c}E_{2})^{2}(E_{1}),\] \[\mathfrak{B}_{\mathfrak{r}} =\langle E_{1},E_{2}\mid E_{1}^{N}=E_{2}^{2}=E_{112}^{2}=0\rangle \quad\text{with}\quad E_{112}=(\operatorname{ad}_{c}E_{1})^{2}(E_{2}).\]
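For instance, since \(\sigma_{1}^{\mathfrak{q}}\) is an involution of \(\mathbb{Z}^{\mathbb{I}}\) with \(\sigma_{1}^{\mathfrak{q}}(\alpha_{1})=-\alpha_{1}\) and \(\sigma_{1}^{\mathfrak{q}}(\alpha_{2})=\alpha_{1}+\alpha_{2}\), one can check entry by entry that \(r_{1}(\mathfrak{q})=\mathfrak{p}\); e.g.
\[r_{1}(\mathfrak{q})(\alpha_{2},\alpha_{2})=\mathfrak{q}(\alpha_{1}+\alpha_{2},\alpha_{1}+\alpha_{2})=q_{11}\,q_{12}\,q_{21}\,q_{22}=q=\mathfrak{p}(\alpha_{2},\alpha_{2}).\]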
### Root systems
The bundle \(\mathcal{R}=\{\Delta^{\mathfrak{q}}\}_{\mathfrak{q}\in\mathcal{X}}\) with
\[\Delta^{\mathfrak{q}}=\Delta^{\mathfrak{q}}_{+}\cup-\Delta^{\mathfrak{q}}_{+}\]
is the so-called _(generalized) root system of \(\mathcal{X}\)_ (or of \(\mathfrak{q}\in\mathcal{X}\)).
We highlight that, unlike classical root systems, \(\Delta^{\mathfrak{q}}\) and \(\Delta^{\mathfrak{p}}\) are not necessarily equal for distinct \(\mathfrak{q},\mathfrak{p}\in\mathcal{X}\). When it is needed, we will write \(\beta^{\mathfrak{q}}\) to emphasize that \(\beta\in\mathbb{Z}^{\mathbb{I}}\) belongs to \(\Delta^{\mathfrak{q}}\). However, they share other characteristics with classical root systems which will be useful, cf. [18, 26]. In particular, for all \(i\in\mathbb{I}\), it holds that
\[\sigma^{\mathfrak{q}}_{i}(\Delta^{\mathfrak{q}}_{+}\backslash\{\alpha_{i}\})=\Delta^{r_{i}(\mathfrak{q})}_{+}\backslash\{\alpha_{i}\},\quad\sigma^{\mathfrak{q}}_{i}(\alpha_{i})=-\alpha_{i}\quad\text{and}\quad w(\Delta^{w^{-*}\mathfrak{q}})=\Delta^{\mathfrak{q}}. \tag{3.9}\]
Also, \(\ell(w)=\left|w(\Delta^{\mathfrak{p}}_{+})\cap-\Delta^{\mathfrak{q}}_{+}\right|\) for any \(w:\mathfrak{p}\to\mathfrak{q}\)[25, Lemma 8 \((iii)\)]. This implies that
\[w_{0}(\beta^{w_{0}^{-*}\mathfrak{q}}_{top})=-\beta^{\mathfrak{q}}_{top}. \tag{3.10}\]
**Example 3.5**.: The generalized root system of \(\mathfrak{q}\) from Example 3.2 is constant: \(\Delta^{\mathfrak{z}}=\{\pm\alpha_{1},\pm(\alpha_{1}+\alpha_{2}),\pm\alpha_{2}\}\) for all \(\mathfrak{z}\in\mathcal{X}\).
We have introduced here generalized root systems for Nichols algebras. However, this is a combinatorial object appearing in other contexts. According to [3, Theorem 2.34], contragredient Lie superalgebras have associated generalized root systems. For instance, the one associated to \(\mathfrak{sl}(1|1)\) is equal to \(\mathcal{R}\).
### A shift operation on weights
Recall the element \(\beta_{top}^{\mathfrak{q}}\) in (3.2). We set
\[\varrho^{\mathfrak{q}}=\frac{1}{2}\beta_{top}^{\mathfrak{q}}. \tag{3.11}\]
This element will play the role of the semi-sum of the positive roots in Lie theory. The following definition is analogous to [1, 4.7(4)].
**Definition 3.6**.: For \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and \(w:w^{-\ast}\mathfrak{q}\to\mathfrak{q}\) in \(\mathcal{W}\), we set
\[\mu\langle w\rangle=\mu+w(\varrho^{w^{-\ast}\mathfrak{q}})-\varrho^{ \mathfrak{q}}\in\mathbb{Z}^{\mathbb{I}}.\]
We observe that
\[w(\varrho^{w^{-\ast}\mathfrak{q}})-\varrho^{\mathfrak{q}}=-\sum_{\beta\in \Delta_{+}^{\mathfrak{q}}\,:\,w^{-1}\beta\in\Delta_{-}^{w^{-\ast}\mathfrak{q} }}(b^{\mathfrak{q}}(\beta)-1)\beta \tag{3.12}\]
since \(w(\Delta^{w^{-\ast}\mathfrak{q}})=\Delta^{\mathfrak{q}}\) and \(b^{w^{-\ast}\mathfrak{q}}(w^{-1}\beta)=b^{\mathfrak{q}}(\beta)\). For instance, \(\mu\langle w_{0}\rangle=\mu-\beta_{top}^{\mathfrak{q}}\). This operation satisfies that
\[\mu\langle w\sigma_{i}\rangle=\mu\langle w\rangle-(b^{\mathfrak{q}}(w\alpha_ {i})-1)w\alpha_{i} \tag{3.13}\]
where \(w\sigma_{i}:\sigma_{i}^{\ast}w^{-\ast}\mathfrak{q}\to\mathfrak{q}\) and \(i\in\mathbb{I}\). Indeed,
\[w\sigma_{i}\left(\varrho^{\sigma_{i}^{\ast}w^{-\ast}\mathfrak{ q}}\right)-\varrho^{\mathfrak{q}} = w\left(\sigma_{i}(\varrho^{\sigma_{i}^{\ast}w^{-\ast}\mathfrak{ q}})-\varrho^{w^{-\ast}\mathfrak{q}}+\varrho^{w^{-\ast}\mathfrak{q}}\right)- \varrho^{\mathfrak{q}}\] \[= w\left(-(b^{w^{-\ast}\mathfrak{q}}(\alpha_{i})-1)\alpha_{i}+ \varrho^{w^{-\ast}\mathfrak{q}}\right)-\varrho^{\mathfrak{q}}\] \[= w\left(\varrho^{w^{-\ast}\mathfrak{q}}\right)-\varrho^{ \mathfrak{q}}-\left(b^{w^{-\ast}\mathfrak{q}}(\alpha_{i})-1\right)w\alpha_{i}.\]
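In particular, taking \(w=1^{\mathfrak{q}}\) in (3.13) gives \(\mu\langle 1^{\mathfrak{q}}\sigma_{i}\rangle=\mu-(b^{\mathfrak{q}}(\alpha_{i})-1)\alpha_{i}\) for all \(i\in\mathbb{I}\), since \(\mu\langle 1^{\mathfrak{q}}\rangle=\mu\).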
**Example 3.7**.: Even if the generalized root system is constant, the corresponding elements \(\varrho^{\mathfrak{q}}\) could be different. For instance,
\[\varrho^{\mathfrak{q}} =\frac{1}{2}\left(\alpha_{1}+(N-1)(\alpha_{1}+\alpha_{2})+\alpha _{2}\right)\] \[\varrho^{\mathfrak{p}} =\frac{1}{2}\left(\alpha_{1}+(\alpha_{1}+\alpha_{2})+(N-1)\alpha _{2}\right)\] \[\varrho^{\mathfrak{r}} =\frac{1}{2}\left((N-1)\alpha_{1}+(\alpha_{1}+\alpha_{2})+\alpha _{2}\right)\]
where \(\mathfrak{q}\), \(\mathfrak{p}\) and \(\mathfrak{r}\) are as in Example 3.4.
## 4. Small quantum groups
In this section, we introduce the Hopf algebras in which we are interested. We will follow [23]. We fix \(\theta\in\mathbb{N}\) and a matrix \(\mathfrak{q}=(q_{ij})_{i,j\in\mathbb{I}}\in(\Bbbk^{\times})^{\mathbb{I}\times\mathbb{I}}\) with finite-dimensional Nichols algebra \(\mathfrak{B}_{\mathfrak{q}}\), where \(\mathbb{I}=\mathbb{I}_{\theta}\). Let \(\mathcal{X}\) be the \(\mathcal{G}\)-orbit of \(\mathfrak{q}\) and \(\mathcal{W}\) its Weyl groupoid.
We denote \(U_{\mathfrak{q}}\) the Hopf algebra generated by the symbols \(K_{i}\), \(K_{i}^{-1}\), \(L_{i}\), \(L_{i}^{-1}\), \(E_{i}\) and \(F_{i}\), with \(i\in\mathbb{I}\), subject to the relations:
\[K_{i}E_{j}=q_{ij}E_{j}K_{i}, L_{i}E_{j}=q_{ji}^{-1}E_{j}L_{i}\] \[K_{i}F_{j}=q_{ij}^{-1}F_{j}K_{i}, L_{i}F_{j}=q_{ji}F_{j}L_{i}\]
\[E_{i}F_{j}-F_{j}E_{i} =\delta_{i,j}(K_{i}-L_{i})\] \[XY =YX,\] \[K_{i}K_{i}^{-1} =L_{i}L_{i}^{-1}=1 \tag{4.1}\]
for all \(i,j\in\mathbb{I}\) and \(X,Y\in\{K_{i}^{\pm 1},L_{i}^{\pm 1}\mid i\in\mathbb{I}\}\). Also, the generators \(E_{i}\) (resp. \(F_{i}\)) are subject to the relations given by \(\mathcal{J}_{\mathfrak{q}}\) (resp. \(\tau(\mathcal{J}_{\mathfrak{q}})\), cf. (4.13) below). However, we will not need these last relations, nor the counit, the comultiplication or the antipode of \(U_{\mathfrak{q}}\).
We have that \(U_{\mathfrak{q}}=\oplus_{\mu\in\mathbb{Z}^{\mathbb{I}}}\,(U_{\mathfrak{q}})_{\mu}\) is a \(\mathbb{Z}^{\mathbb{I}}\)-graded Hopf algebra with
\[\deg E_{i}=-\deg F_{i}=\alpha_{i}\quad\text{and}\quad\deg K_{i}^{\pm 1}=\deg L _{i}^{\pm 1}=0\quad\forall\,i\in\mathbb{I}.\]
For \(\alpha=n_{1}\alpha_{1}+\cdots+n_{\theta}\alpha_{\theta}\in\mathbb{Z}^{\mathbb{ I}}\), we set
\[K_{\alpha}=K_{1}^{n_{1}}\cdots K_{\theta}^{n_{\theta}}\quad\text{and}\quad L_{ \alpha}=L_{1}^{n_{1}}\cdots L_{\theta}^{n_{\theta}}.\]
In particular, \(K_{\alpha_{i}}=K_{i}\) for \(i\in\mathbb{I}\).
Given \(\mu\in\mathbb{Z}^{\mathbb{I}}\), we define \(\mathfrak{q}_{\mu}\in\text{Alg}(U_{\mathfrak{q}}^{0},\Bbbk)\) and \(\widetilde{\mu}\in\text{Aut}_{\Bbbk-alg}(U_{\mathfrak{q}}^{0})\) by
\[\mathfrak{q}_{\mu}(K_{\alpha}L_{\beta})=\frac{\mathfrak{q}(\alpha,\mu)}{ \mathfrak{q}(\mu,\beta)}\quad\text{and}\quad\widetilde{\mu}(K_{\alpha}L_{ \beta})=\mathfrak{q}_{\mu}(K_{\alpha}L_{\beta})\,K_{\alpha}L_{\beta} \tag{4.2}\]
for all \(\alpha,\beta\in\mathbb{Z}^{\mathbb{I}}\). Then, it follows easily that
\[s\,u=u\,\widetilde{\mu}(s)\quad\forall u\in(U_{\mathfrak{q}})_{\mu},\,s\in U_ {\mathfrak{q}}^{0} \tag{4.3}\]
and
\[\widetilde{\nu}\circ\widetilde{\mu}=\widetilde{\nu+\mu}\quad\forall\nu,\mu\in \mathbb{Z}^{\mathbb{I}}. \tag{4.4}\]
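For instance, for \(s=K_{i}\) and \(u=E_{j}\in(U_{\mathfrak{q}})_{\alpha_{j}}\), we have \(\widetilde{\alpha_{j}}(K_{i})=\mathfrak{q}(\alpha_{i},\alpha_{j})K_{i}=q_{ij}K_{i}\), so (4.3) is just the defining relation \(K_{i}E_{j}=q_{ij}E_{j}K_{i}\).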
The multiplication of \(U_{\mathfrak{q}}\) induces a linear isomorphism
\[U_{\mathfrak{q}}^{-}\otimes U_{\mathfrak{q}}^{0}\otimes U_{\mathfrak{q}}^{+} \longrightarrow U_{\mathfrak{q}} \tag{4.5}\]
where
\[U_{\mathfrak{q}}^{+}=\Bbbk\langle E_{i}\mid i\in\mathbb{I}\rangle\simeq \mathfrak{B}_{\mathfrak{q}},\quad U_{\mathfrak{q}}^{0}=\Bbbk\langle K_{i},L _{i}\mid i\in\mathbb{I}\rangle,\quad U_{\mathfrak{q}}^{-}=\Bbbk\langle F_{i }\mid i\in\mathbb{I}\rangle\]
are subalgebras of \(U_{\mathfrak{q}}\). We notice that \(U_{\mathfrak{q}}^{0}\) identifies with the group algebra \(\Bbbk(\mathbb{Z}^{\mathbb{I}}\times\mathbb{Z}^{\mathbb{I}})\) for any matrix \(\mathfrak{q}\).
**Definition 4.1**.: Let \(p:U_{\mathfrak{q}}\longrightarrow u_{\mathfrak{q}}\) be a \(\mathbb{Z}^{\mathbb{I}}\)-graded algebra projection and set \(u_{\mathfrak{q}}^{\pm,0}:=p(U_{\mathfrak{q}}^{\pm,0})\). We call \(u_{\mathfrak{q}}\) a _small quantum group_ if the multiplication \(u_{\mathfrak{q}}^{-}\otimes u_{\mathfrak{q}}^{0}\otimes u_{\mathfrak{q}}^{+}\longrightarrow u_{\mathfrak{q}}\) induces a linear isomorphism and \(u_{\mathfrak{q}}^{\pm}\simeq U_{\mathfrak{q}}^{\pm}\).
We say they are "small" because \(u_{\mathfrak{q}}^{\pm}\) is finite-dimensional, but we do not make any assumption on the size of \(u_{\mathfrak{q}}^{0}\). Thus, even \(U_{\mathfrak{q}}\) itself is a small quantum group.
**Example 4.2**.: Let \(\mathfrak{q}\) be as in Example 3.1. Then \(U_{\mathfrak{q}}/\langle K_{i}-L_{i}^{-1},K_{i}^{2\operatorname{ord}q}-1\mid i\in\mathbb{I}\rangle\) is a small quantum group. Moreover, if \(\operatorname{ord}q\) is an odd prime, not \(3\) if \(\mathfrak{g}\) has a component of type \(G_{2}\), then \(u_{q}(\mathfrak{g})\simeq U_{\mathfrak{q}}/\langle K_{i}-L_{i}^{-1},K_{i}^{2\operatorname{ord}q}-1\mid i\in\mathbb{I}\rangle\).
**Example 4.3**.: Let us explain how to construct the small quantum group \(u_{\mathfrak{q}}\) of Figure 2. By [23, Corollary 5.9], there is a skew Hopf pairing \(\eta\) between \(U_{\mathfrak{q}}^{+}\#\Bbbk\langle K_{i}\mid i\in\mathbb{I}\rangle\) and \(U_{\mathfrak{q}}^{-}\#\Bbbk\langle L_{i}\mid i\in\mathbb{I}\rangle\), and the corresponding Drinfeld double is isomorphic to \(U_{\mathfrak{q}}\). Let \(\Gamma=\overline{\langle K_{i}\mid i\in\mathbb{I}\rangle}\) be a group quotient and set \(g_{i}=\overline{K_{i}}\), \(i\in\mathbb{I}\). Suppose the character \(\chi_{i}:\Gamma\longrightarrow\Bbbk^{\times}\), \(\chi_{i}(g_{j})=q_{ij}\) for all \(i,j\in\mathbb{I}\), is well-defined and set \(\widetilde{\Gamma}=\langle\chi_{i}\mid i\in\mathbb{I}\rangle\). Then \(\langle L_{i}\mid i\in\mathbb{I}\rangle\longrightarrow\widetilde{\Gamma}\), \(L_{i}\mapsto\chi_{i}\), is a group quotient. Moreover, \(\eta\) induces a pairing between \(U_{\mathfrak{q}}^{+}\#\Bbbk\Gamma\) and \(U_{\mathfrak{q}}^{-}\#\Bbbk\widetilde{\Gamma}\), and the corresponding Drinfeld double is \(u_{\mathfrak{q}}\).
**Example 4.4**.: The braided Drinfeld doubles introduced in [32, SS3.2] are small quantum groups.
### Lusztig isomorphisms
In [23, Theorem 6.11], Heckenberger constructs certain algebra isomorphisms
\[T_{i}=T_{i}^{(\sigma_{i}^{\mathfrak{q}})^{\mathfrak{g}}\mathfrak{q}}:U_{( \sigma_{i}^{\mathfrak{q}})^{\mathfrak{g}}\mathfrak{q}}\longrightarrow U_{ \mathfrak{q}} \tag{4.6}\]
for all \(i\in\mathbb{I}\). They emulate some properties of the Lusztig automorphisms of small quantum groups. We emphasize that these isomorphisms depend on the matrix defining the Drinfeld double of the domain, but we will omit this in the notation when no confusion can arise. We do not need the precise definition of these maps; we just recall some properties which will be useful for us.
Let \(w:w^{-*}\mathfrak{q}\rightarrow\mathfrak{q}\) be a morphism in \(\mathcal{W}\). We choose a reduced expression \(w=1^{\mathfrak{q}}\sigma_{i_{k}}\cdots\sigma_{i_{1}}\) and denote
\[T_{w}=T_{i_{k}}\cdots T_{i_{1}}:U_{w^{-*}\mathfrak{q}}\longrightarrow U_{ \mathfrak{q}}. \tag{4.7}\]
This isomorphism depends on the chosen reduced expression for \(w\). However, if \(w=1^{\mathfrak{q}}\sigma_{j_{k}}\cdots\sigma_{j_{1}}\) is another reduced expression, there exists \(\underline{a}=(a_{i})_{i\in\mathbb{I}}\in(\Bbbk^{\times})^{\mathbb{I}}\) such that
\[T_{i_{k}}\cdots T_{i_{1}}=T_{j_{k}}\cdots T_{j_{1}}\varphi_{\underline{a}} \tag{4.8}\]
where \(\varphi_{\underline{a}}\) is the algebra automorphism of \(U_{w^{-*}\mathfrak{q}}\) given by
\[\varphi_{\underline{a}}(K_{i})=K_{i},\quad\varphi_{\underline{a}}(L_{i})=L_{i },\quad\varphi_{\underline{a}}(E_{i})=a_{i}E_{i}\quad\text{and}\quad\varphi_{ \underline{a}}(F_{i})=a_{i}^{-1}F_{i}\quad\forall i\in\mathbb{I}.\]
Indeed, both reduced expressions of \(w\) can be transformed into each other using only the Coxeter-type relations by [25, Theorem 5]. Then, (4.8) follows using [23, Theorem 6.19 and Proposition 6.8\((ii)\)].
These isomorphisms permute the weight spaces in the following way:
\[T_{w}\left((U_{w^{-*}\mathfrak{q}})_{\alpha}\right)=(U_{\mathfrak{q}})_{w\alpha} \tag{4.9}\]
for all \(\alpha\in\mathbb{Z}^{\mathbb{I}}\), cf. [26, Proposition 4.2]. They also have a good behavior on the middle subalgebras \(U^{0}_{w^{-*}\mathfrak{q}}=U^{0}_{\mathfrak{q}}\). Explicitly,
\[T_{w}(K_{\alpha}L_{\beta})=K_{w\alpha}L_{w\beta} \tag{4.10}\]
for all \(\alpha,\beta\in\mathbb{Z}^{\mathbb{I}}\) because \(T_{i}^{\pm 1}(K_{j})=K_{\sigma_{i}^{\mathfrak{q}}(\alpha_{j})}\) and \(T_{i}^{\pm 1}(L_{j})=L_{\sigma_{i}^{\mathfrak{q}}(\alpha_{j})}\) by definition [23, Lemma 6.6]. It follows that
\[T_{w}\circ\widetilde{\mu}\circ T_{w}^{-1}{}_{|U^{0}_{\mathfrak{q}}}= \widetilde{w(\mu)} \tag{4.11}\]
for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\); keep in mind that \(\widetilde{\mu}\) depends on \(w^{-*}\mathfrak{q}\) and \(\widetilde{w(\mu)}\) depends on \(\mathfrak{q}\).
For the longest element \(w_{0}\), there is a permutation \(f\) of \(\mathbb{I}\) such that \(w_{0}^{-*}\mathfrak{q}(\alpha_{i},\alpha_{j})=f^{*}\mathfrak{q}(\alpha_{i},\alpha_{j})=\mathfrak{q}(\alpha_{f(i)},\alpha_{f(j)})\). Also, there is \(\underline{b}=(b_{i})_{i\in\mathbb{I}}\in(\Bbbk^{\times})^{\mathbb{I}}\) such that
\[T_{w_{0}}=\phi_{1}\,\varphi_{f}\,\varphi_{\underline{b}} \tag{4.12}\]
by [23, Corollary 6.21]; where \(\phi_{1}\) is the algebra automorphism of \(U_{\mathfrak{q}}\) given by
\[\phi_{1}(K_{i})=K_{i}^{-1},\quad\phi_{1}(L_{i})=L_{i}^{-1},\quad\phi_{1}(E_{i})=F_{i}L_{i}^{-1}\quad\text{and}\quad\phi_{1}(F_{i})=K_{i}^{-1}E_{i}\]
for all \(i\in\mathbb{I}\) and \(\varphi_{f}:U_{\mathfrak{q}}\longrightarrow U_{f^{*}\mathfrak{q}}\) is the algebra isomorphism given by
\[\varphi_{f}(K_{i})=K_{f(i)},\quad\varphi_{f}(L_{i})=L_{f(i)},\quad\varphi_{f} (E_{i})=E_{f(i)},\quad\text{and}\quad\varphi_{f}(F_{i})=F_{f(i)}\]
for all \(i\in\mathbb{I}\).
Let \(\tau\) be the algebra antiautomorphism of \(U_{\mathfrak{q}}\) defined by
\[\tau(K_{i})=K_{i},\quad\tau(L_{i})=L_{i},\quad\tau(E_{i})=F_{i}\quad\text{and }\quad\tau(F_{i})=E_{i} \tag{4.13}\]
for all \(i\in\mathbb{I}\), see [23, Proposition 4.9 (7)]. Notice that \(\tau^{2}=\operatorname{id}\).
The generators of the PBW basis of the Nichols algebras can be constructed using the Lusztig isomorphisms as follows. Let \(w_{0}=1^{\mathfrak{q}}\sigma_{i_{1}}\cdots\sigma_{i_{n}}\) be a reduced expression of the longest element in \({}^{\mathfrak{q}}\mathcal{W}\) and recall from (3.8) that \(\beta_{\nu}=1^{\mathfrak{q}}\sigma_{i_{1}}\cdots\sigma_{i_{\nu-1}}(\alpha_{i_ {\nu}})\), \(1\leqslant\nu\leqslant n\), are the positive roots of \(\mathfrak{q}\). We set
\[E_{\beta_{\nu}}=T_{i_{1}}\cdots T_{i_{\nu-1}}(E_{i_{\nu}})\in(U^{+}_{ \mathfrak{q}})_{\beta_{\nu}}\quad\text{and}\quad F_{\beta_{\nu}}=T_{i_{1}} \cdots T_{i_{\nu-1}}(F_{i_{\nu}})\in(U^{-}_{\mathfrak{q}})_{-\beta_{\nu}}, \tag{4.14}\]
for all \(1\leqslant\nu\leqslant n\); by an abuse of notation, \(E_{i_{\nu}}\) and \(F_{i_{\nu}}\) denote the generators of \(U_{(\sigma_{i_{\nu-1}}\cdots\sigma_{i_{1}})^{*}\mathfrak{q}}\). These elements depend on the reduced expression of \(w_{0}\).
By [26, Theorem 4.9], we know that
\[\begin{split}\left\{E^{m_{1}}_{\beta_{f(1)}}\cdots E^{m_{n}}_{ \beta_{f(n)}}\ |\ 0\leqslant m_{\nu}<b^{\mathfrak{q}}(\beta_{\nu}),\,1\leqslant\nu \leqslant n\right\}\quad\text{and}\\ \left\{F^{m_{1}}_{\beta_{f(1)}}\cdots F^{m_{n}}_{\beta_{f(n)}}\ |\ 0 \leqslant m_{\nu}<b^{\mathfrak{q}}(\beta_{\nu}),\,1\leqslant\nu \leqslant n\right\}\end{split} \tag{4.15}\]
are linear bases of \(U^{+}_{\mathfrak{q}}\) and \(U^{-}_{\mathfrak{q}}\), respectively, for any permutation \(f\) of \(\mathbb{I}\). It is immediate that
\[\operatorname{ch}U^{-}_{\mathfrak{q}}=\prod_{\beta\in\Delta^{\mathfrak{q}}_{+}}\frac{1-e^{-b^{\mathfrak{q}}(\beta)\beta}}{1-e^{-\beta}}=\prod_{\beta\in\Delta^{\mathfrak{q}}_{+}}\left(1+e^{-\beta}+\cdots+e^{(1-b^{\mathfrak{q}}(\beta))\beta}\right). \tag{4.16}\]
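For instance, for \(\mathfrak{q}\) as in Example 3.2 this reads
\[\operatorname{ch}U^{-}_{\mathfrak{q}}=(1+e^{-\alpha_{1}})\,(1+e^{-\alpha_{2}})\,\bigl(1+e^{-(\alpha_{1}+\alpha_{2})}+\cdots+e^{-(N-1)(\alpha_{1}+\alpha_{2})}\bigr).\]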
We point out that the weight space of minimum degree of \(U_{\mathfrak{q}}^{-}\) is one-dimensional and spanned by
\[F_{top}^{\mathfrak{q}}=F_{\beta_{1}}^{b^{\mathfrak{q}}(\beta_{1})-1}\cdots F_{ \beta_{n}}^{b^{\mathfrak{q}}(\beta_{n})-1}. \tag{4.17}\]
**Example 4.5**.: Keeping the notation of Example 3.4, \(T_{1}^{\mathfrak{p}}:U_{\mathfrak{p}}\to U_{\mathfrak{q}}\) is defined by
\[T_{1}^{\mathfrak{p}}(E_{1}) =F_{1}L_{1}^{-1},\quad T_{1}^{\mathfrak{p}}(E_{2})=E_{12},\quad T_{1}^{\mathfrak{p}}(K_{\alpha}L_{\beta})=K_{\sigma_{1}^{\mathfrak{p}}(\alpha)}L_{\sigma_{1}^{\mathfrak{p}}(\beta)},\] \[T_{1}^{\mathfrak{p}}(F_{1}) =K_{1}^{-1}E_{1},\quad T_{1}^{\mathfrak{p}}(F_{2})=\frac{1}{q-1}(F_{1}F_{2}+F_{2}F_{1}).\]
### Parabolic subalgebras
We fix \(i\in\mathbb{I}\) and denote \(U_{\mathfrak{q}}(\alpha_{i})\) and \(U_{\mathfrak{q}}(-\alpha_{i})\) the subalgebras of \(U_{\mathfrak{q}}\) generated by \(E_{i}\) and \(F_{i}\), respectively. We set
\[P_{\mathfrak{q}}(\alpha_{i})=U_{\mathfrak{q}}(-\alpha_{i})U_{\mathfrak{q}}^{ 0}U_{\mathfrak{q}}^{+}. \tag{4.18}\]
This is a subalgebra of \(U_{\mathfrak{q}}\) thanks to the defining relations.
By the definition of the Lusztig isomorphisms, the restriction \(T_{i}:P_{\sigma_{i}^{*}\mathfrak{q}}(\alpha_{i})\longrightarrow P_{ \mathfrak{q}}(\alpha_{i})\) is an isomorphism and we can decompose \(P_{\mathfrak{q}}(\alpha_{i})\) as
\[P_{\mathfrak{q}}(\alpha_{i})=U_{\mathfrak{q}}(\alpha_{i})\,U_{\mathfrak{q}}^{ 0}\,T_{i}(U_{(\sigma_{i}^{\mathfrak{q}})^{*}\mathfrak{q}}^{+}). \tag{4.19}\]
Indeed, \(T_{i}(E_{i})=F_{i}L_{i}^{-1}\), \(T_{i}(E_{j})=(\operatorname{ad}_{c}E_{i})^{-c_{ij}^{\mathfrak{q}}}(E_{j})\) and \(T_{i}(F_{i})=K_{i}^{-1}E_{i}\) [23, Lemma 6.6].
### Some definitions
We introduce some elements which will be key in the analysis of the representations of \(U_{\mathfrak{q}}\).
**Definition 4.6**.: Let \(\beta\in\Delta^{\mathfrak{q}}\) and \(n\in\mathbb{N}\). We set
\[[\beta;n]=(n)_{q_{\beta}^{-1}}K_{\beta}-(n)_{q_{\beta}}L_{\beta}\quad\text{ and}\quad[n;\beta]=(n)_{q_{\beta}^{-1}}L_{\beta}-(n)_{q_{\beta}}K_{\beta}.\]
It follows from the defining relations that
\[E_{i}F_{i}^{n}=F_{i}^{n}E_{i}+F_{i}^{n-1}[\alpha_{i};n]\quad\text{and}\quad F _{i}E_{i}^{n}=E_{i}^{n}F_{i}+E_{i}^{n-1}[n;\alpha_{i}]. \tag{4.20}\]
for all \(i\in\mathbb{I}\). Moreover, once we have fixed a PBW basis as in (4.15), we can apply the corresponding Lusztig isomorphisms to the above identities and obtain that
\[E_{\beta}F_{\beta}^{n}=F_{\beta}^{n}E_{\beta}+F_{\beta}^{n-1}[\beta;n]\quad \text{and}\quad F_{\beta}E_{\beta}^{n}=E_{\beta}^{n}F_{\beta}+E_{\beta}^{n-1}[ n;\beta]\]
for all \(\beta\in\Delta^{\mathfrak{q}}_{+}\).
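For \(n=1\) and \(\beta=\alpha_{i}\), the first identity in (4.20) reduces to the defining relation \(E_{i}F_{i}-F_{i}E_{i}=K_{i}-L_{i}\), since \([\alpha_{i};1]=K_{i}-L_{i}\).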
**Definition 4.7**.: Given \(\beta\in\Delta^{\mathfrak{q}}\), \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and a \(U_{\mathfrak{q}}^{0}\)-algebra \(\mathbf{A}\) with structural map \(\pi:U_{\mathfrak{q}}^{0}\longrightarrow\mathbf{A}\), we define \(t_{\beta}^{\pi}(\mu)\) as the unique \(t\in\{1,...,b^{\mathfrak{q}}(\beta)-1\}\) such that
\[1=q_{\beta}^{1-t}\pi\tilde{\mu}(K_{\beta}L_{\beta}^{-1}),\]
if it exists, and otherwise \(t_{\beta}^{\pi}(\mu)=0\).
Equivalently, we can say that, modulo \(b^{\mathfrak{q}}(\beta)\), \(t_{\beta}^{\pi}(\mu)\) is the unique \(t\in\{1,...,b^{\mathfrak{q}}(\beta)\}\) such that \(\pi\tilde{\mu}([\beta;t])=0\). In fact, \(\pi\tilde{\mu}([\beta;b^{\mathfrak{q}}(\beta)])=0\) and
\[[\beta;t]=(t)_{q_{\beta}}L_{\beta}\left(q_{\beta}^{1-t}K_{\beta}L_{\beta}^{-1 }-1\right). \tag{4.21}\]
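Indeed, (4.21) follows from the identity \((t)_{q_{\beta}^{-1}}=q_{\beta}^{1-t}(t)_{q_{\beta}}\) recalled in Section 2, together with the fact that \(K_{\beta}\) and \(L_{\beta}\) commute.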
We also observe that
\[[t;\beta]=(t)_{q_{\beta}^{-1}}L_{\beta}\left(1-q_{\beta}^{t-1}K_{\beta}L_{\beta }^{-1}\right). \tag{4.22}\]
Given a \(U_{\mathfrak{q}}^{0}\)-algebra \(\mathbf{A}\) with structural map \(\pi:U_{\mathfrak{q}}^{0}\longrightarrow\mathbf{A}\) and a morphism \(w\in\mathcal{W}^{\mathfrak{q}}\), we denote \(\mathbf{A}[w]\) the \(U_{\mathfrak{q}}^{0}\)-algebra with structural map \(\pi[w]:U_{\mathfrak{q}}^{0}\longrightarrow\mathbf{A}\) defined by
\[\pi[w](K_{\alpha}L_{\beta})=\pi(K_{w^{-1}\alpha}L_{w^{-1}\beta}) \tag{4.23}\]
for all \(\alpha,\beta\in\mathbb{Z}^{\mathbb{I}}\). We highlight that \(\pi[w]=\pi\circ T_{w}^{-1}{}_{|U_{\mathfrak{q}}^{0}}\) for any Lusztig isomorphism \(T_{w}:U_{\mathfrak{q}}\longrightarrow U_{w^{*}\mathfrak{q}}\) associated to \(w\) by (4.10).
If \(\beta=w\alpha_{i}\in\Delta^{w^{*}\mathfrak{q}}\) for some \(i\in\mathbb{I}\), it holds that
\[t_{w\alpha_{i}}^{\pi[w]}(w\mu)=t_{\alpha_{i}}^{\pi}(\mu). \tag{4.24}\]
In fact, we have that
\[\begin{split}\pi[w]\widetilde{w\mu}([\beta;t])&=\pi[w]\left((t)_{w^{*}\mathfrak{q}(\beta,\beta)^{-1}}\,w^{*}\mathfrak{q}(\beta,w\mu)\,K_{\beta}-(t)_{w^{*}\mathfrak{q}(\beta,\beta)}\,w^{*}\mathfrak{q}(w\mu,-\beta)\,L_{\beta}\right)\\ &=\pi\left((t)_{q_{\alpha_{i}}^{-1}}\,\mathfrak{q}(\alpha_{i},\mu)\,K_{\alpha_{i}}-(t)_{q_{\alpha_{i}}}\,\mathfrak{q}(\mu,-\alpha_{i})\,L_{\alpha_{i}}\right)\\ &=\pi\tilde{\mu}([\alpha_{i};t]). \tag{4.25}\end{split}\]
As in [26, Definition 2.16], we define the group homomorphism \(\rho^{\mathfrak{q}}:\mathbb{Z}^{\mathbb{I}}\longrightarrow\Bbbk^{\times}\) by \(\rho^{\mathfrak{q}}(\alpha_{i})=\mathfrak{q}(\alpha_{i},\alpha_{i})\) for all \(i\in\mathbb{I}\).
**Definition 4.8**.: Given \(\beta\in\Delta^{\mathfrak{q}}\), \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and a \(U_{\mathfrak{q}}^{0}\)-algebra \(\mathbf{A}\) with structural map \(\pi:U_{\mathfrak{q}}^{0}\longrightarrow\mathbf{A}\), we define \(n_{\beta}^{\pi}(\mu)\) as the unique \(n\in\{1,...,b^{\mathfrak{q}}(\beta)-1\}\) such that
\[q_{\beta}^{n}=\rho^{\mathfrak{q}}(\beta)\,\pi\widetilde{\mu}(K_{\beta}L_{ \beta}^{-1}),\]
if it exists, and otherwise \(n_{\beta}^{\pi}(\mu)=0\).
The above numbers are related in the following way.
**Lemma 4.9**.: _If \(\beta=w\alpha_{i}\in\Delta^{\mathfrak{q}}\) for some \(i\in\mathbb{I}\), then \(n_{\beta}^{\pi}(\mu)=t_{\beta}^{\pi}(\mu\langle w\rangle)\)._
Proof.: We first claim that
\[\rho^{\mathfrak{q}}(w\alpha_{i})=\mathfrak{q}(w\alpha_{i},w\alpha_{i})\, \mathfrak{q}(0\langle w\rangle,w\alpha_{i})\,\mathfrak{q}(w\alpha_{i},0\langle w \rangle). \tag{4.26}\]
We prove it by induction on the length of \(w\). If \(w=1^{\mathfrak{q}}\), then \(n_{\alpha_{i}}^{\pi}(\mu)=t_{\alpha_{i}}^{\pi}(\mu)\) by (4.21). We now assume the equality holds for all bicharacters and morphisms in \(\mathcal{W}\) of length \(r\). Thus, if \(w=\sigma_{j}w_{1}\) with \(j\in\mathbb{I}\) and \(\ell(\sigma_{j}w_{1})=1+\ell(w_{1})=1+r\), similar to (3.12), one can
check that
\[0\langle\sigma_{j}w_{1}\rangle=-\sum_{\gamma\in\Delta_{+}^{\sigma_{j}^{-\ast}\mathfrak{q}}\,:\,w_{1}^{-1}\gamma\in\Delta_{-}^{(\sigma_{j}w_{1})^{-\ast}\mathfrak{q}}}(b^{\mathfrak{q}}(\sigma_{j}\gamma)-1)\sigma_{j}\gamma+(b^{\mathfrak{q}}(\alpha_{j})-1)\sigma_{j}\alpha_{j}.\]
Therefore
\[\mathfrak{q}(\sigma_{j}w_{1}\alpha_{i},\sigma_{j}w_{1}\alpha_{i}) \,\mathfrak{q}(0\langle\sigma_{j}w_{1}\rangle,\sigma_{j}w_{1}\alpha_{i})\,\mathfrak{q}(\sigma_{j}w_{1}\alpha_{i},0\langle\sigma_{j}w_{1}\rangle)=\] \[=\sigma_{j}^{-\ast}(\mathfrak{q})(w_{1}\alpha_{i},w_{1}\alpha_{i})\,\sigma_{j}^{-\ast}(\mathfrak{q})(0\langle w_{1}\rangle,w_{1}\alpha_{i})\,\sigma_{j}^{-\ast}(\mathfrak{q})(w_{1}\alpha_{i},0\langle w_{1}\rangle)\times\] \[\sigma_{j}^{-\ast}(\mathfrak{q})(\alpha_{j},w_{1}\alpha_{i})^{b^{\sigma_{j}^{-\ast}(\mathfrak{q})}(\alpha_{j})-1}\,\sigma_{j}^{-\ast}(\mathfrak{q})(w_{1}\alpha_{i},\alpha_{j})^{b^{\sigma_{j}^{-\ast}(\mathfrak{q})}(\alpha_{j})-1}\] \[\stackrel{{(\star)}}{{=}}\rho^{\sigma_{j}^{-\ast}(\mathfrak{q})}(w_{1}\alpha_{i})\,\frac{\rho^{\mathfrak{q}}(\sigma_{j}w_{1}\alpha_{i})}{\rho^{\sigma_{j}^{-\ast}(\mathfrak{q})}(w_{1}\alpha_{i})}=\rho^{\mathfrak{q}}(\sigma_{j}w_{1}\alpha_{i});\]
(\(\star\)) follows from the inductive hypothesis and [26, Lemma 2.17]. This concludes the induction and our claim holds.
In consequence, we have that
\[q_{\beta}\pi\widehat{\mu\langle w\rangle}(K_{\beta}L_{\beta}^{-1}) =\mathfrak{q}(\beta,\beta)\mathfrak{q}(0\langle w\rangle,\beta) \mathfrak{q}(\beta,0\langle w\rangle)\,\pi\widehat{\mu}(K_{\beta}L_{\beta}^{- 1})\] \[=\rho^{\mathfrak{q}}(\beta)\,\pi\widehat{\mu}(K_{\beta}L_{\beta}^ {-1})\]
which implies the lemma.
## 5. Andersen-Jantzen-Soergel categories
In [1, Section 2], Andersen, Jantzen and Soergel have defined certain categories of modules over algebras sharing the most remarkable features of the small quantum groups at roots of unity. We will call them _AJS categories_. They consider any \(\mathbb{Z}^{\mathbb{I}}\)-graded \(\Bbbk\)-algebra \(U\) endowed with a triangular decomposition
\[U^{-}\otimes U^{0}\otimes U^{+}\longrightarrow U,\]
_i.e._ this is a \(\Bbbk\)-linear isomorphism induced by the multiplication and \(U^{-}\), \(U^{0}\) and \(U^{+}\) are \(\mathbb{Z}^{\mathbb{I}}\)-graded subalgebras satisfying the following properties, cf. [1, SS1.1 and SS2.1]:
\[U^{0}\subset U_{0},\quad(U^{\pm})_{0}=\Bbbk; \tag{5.1}\]
\[(U^{\pm})_{\nu}\neq 0\Rightarrow\pm\nu\geq 0; \tag{5.2}\]
\[\sup(U^{\pm})\text{ is finite}; \tag{5.3}\]
\[\text{Each }(U^{\pm})_{\nu},\text{ and hence }U^{\pm},\text{ is finite-dimensional over }\Bbbk. \tag{5.4}\]
They also assume that \(U^{0}\) is commutative and the existence of a group homomorphism \(\mathbb{Z}^{\mathbb{I}}\longrightarrow\text{Aut}_{\Bbbk-alg}(U^{0})\), \(\mu\mapsto\widetilde{\mu}\), such that
\[s\,u=u\,\widetilde{\mu}(s)\quad\forall u\in U_{\mu},\,s\in U^{0}. \tag{5.5}\]
From (5.1), we deduce that there are augmentation maps \(U^{\pm}\longrightarrow(U^{\pm})_{0}=\Bbbk\). We denote both of them \(\varepsilon\).
**Example 5.1**.: A small quantum group satisfies all the previous properties.
We fix a Noetherian commutative \(U^{0}\)-algebra \(\mathbf{A}\) with structural map \(\pi:U^{0}\longrightarrow\mathbf{A}\). We now present the AJS category \(\mathcal{C}_{\mathbf{A}}\), cf. [1, 2.3]. An object of \(\mathcal{C}_{\mathbf{A}}\) is a \(\mathbb{Z}^{\mathbb{I}}\)-graded \(U\otimes\mathbf{A}\)-module \(M\), or equivalently a left \(U\)-module and right \(\mathbf{A}\)-module, such that
\[M\text{ is finitely generated over }\mathbf{A}; \tag{5.6}\]
\[M_{\mu}\mathbf{A}\subset M_{\mu}, \tag{5.7}\]
\[U_{\nu}M_{\mu}\subset M_{\nu+\mu}, \tag{5.8}\]
\[s\,m=m\,\pi\widetilde{\mu}(s), \tag{5.9}\]
for all \(\mu,\nu\in\mathbb{Z}^{\mathbb{I}}\), \(m\in M_{\mu}\) and \(s\in U^{0}\). The last compatibility means that the \(U^{0}\)-action is determined by the \(\mathbf{A}\)-action and the \(\mathbb{Z}^{\mathbb{I}}\)-grading. The morphisms between two objects are the morphisms of \(\mathbb{Z}^{\mathbb{I}}\)-graded \(U\otimes\mathbf{A}\)-modules.
The authors also defined categories \(\mathcal{C}^{\prime}_{\mathbf{A}}\) and \(\mathcal{C}^{\prime\prime}_{\mathbf{A}}\) in similar fashion by replacing \(U\) with \(U^{0}U^{+}\) and \(U^{0}\), respectively. There are obvious induced functors \(\mathcal{C}^{\prime\prime}_{\mathbf{A}}\longrightarrow\mathcal{C}^{\prime}_ {\mathbf{A}}\) and \(\mathcal{C}^{\prime}_{\mathbf{A}}\longrightarrow\mathcal{C}_{\mathbf{A}}\) which are left adjoints of the forgetful functors. Moreover, the categories \(\mathcal{C}_{\mathbf{A}}\) and \(\mathcal{C}^{\prime}_{\mathbf{A}}\) have enough projectives [1, Lemma 2.7].
We next summarize the most important attributes of the AJS categories.
### Verma modules
Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\). We denote \(\mathbf{A}^{\mu}\) the free right \(\mathbf{A}\)-module of rank one concentrated in degree \(\mu\) and generated by the symbol \(\left|\mu\right\rangle=\left|\mu\right\rangle_{\mathbf{A}}\).
We consider \(\mathbf{A}^{\mu}\) as an object of \(\mathcal{C}^{\prime}_{\mathbf{A}}\) with left \(U^{+}\)-action given by the augmentation map and left \(U^{0}\)-action determined by (5.9). Explicitly,
\[su\cdot\left|\mu\right\rangle=\varepsilon(u)\,\pi\widetilde{\mu}(s)\left|\mu \right\rangle\quad\forall s\in U^{0},\,u\in U^{+}. \tag{5.10}\]
We call _Verma modules_ the induced modules
\[Z_{\mathbf{A}}(\mu)=U\otimes_{U^{0}U^{+}}\mathbf{A}^{\mu} \tag{5.11}\]
with \(U\) and \(\mathbf{A}\) acting by left and right multiplication on the left and right factor, respectively. It is isomorphic to \(U^{-}\otimes\mathbf{A}\) as \(U^{-}\otimes\mathbf{A}\)-module and its weight spaces are
\[Z_{\mathbf{A}}(\mu)_{\beta}=(U^{-})_{\beta-\mu}\otimes\left|\mu\right\rangle \tag{5.12}\]
for all \(\beta\in\mathbb{Z}^{\mathbb{I}}\). Therefore
\[\operatorname{ch}Z_{\mathbf{A}}(\mu)=e^{\mu}\operatorname{ch}U^{-}. \tag{5.13}\]
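In particular, \(Z_{\mathbf{A}}(\mu)\) is a free \(\mathbf{A}\)-module of rank \(\dim_{\Bbbk}U^{-}\), which is finite since \(U^{-}\) is finite-dimensional.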
By an abuse of notation, we also denote \(\left|\mu\right\rangle\) the generator \(1\otimes\left|\mu\right\rangle\) of \(Z_{\mathbf{k}}(\mu)\).
A _\(Z\)-filtration_ of a module in \(\mathcal{C}_{\mathbf{A}}\) is a filtration whose subquotients are isomorphic to Verma modules. In [1, SS2.11-SS2.16] it is proved that projective modules admit \(Z\)-filtrations. Several other properties of these modules are proved in [1, Sections 2-4].
### Simple modules
Assume that \(\mathbf{A}=\mathbf{k}\) is a field. Then \(Z_{\mathbf{k}}(\mu)\) has a unique simple quotient denoted \(L_{\mathbf{k}}(\mu)\)[1, SS4.1]. This object is characterized as the unique (up to isomorphism) simple _highest-weight module_\(L\) in \(\mathcal{C}_{\mathbf{k}}\), that is, \(L\) is generated by some \(v\in L_{\mu}\) with \((U^{+})_{\nu}v=0\) for all \(\nu>0\). We say that \(v\) is a _highest-weight vector of weight_ \(\mu\). Moreover, each simple module in \(\mathcal{C}_{\mathbf{k}}\) is isomorphic to a unique simple highest-weight module. This characterization of the simple modules implies that their characters are linearly independent.
We notice that all modules have composition series of finite length. For \(M\in\mathcal{C}_{\mathbf{k}}\), \([M:L_{\mathbf{k}}(\lambda)]\) denotes the number of composition factors isomorphic to \(L_{\mathbf{k}}(\lambda)\). Two important properties of the Verma modules are
\[[Z_{\mathbf{k}}(\mu):L_{\mathbf{k}}(\mu)]=1\quad\text{and}\quad[Z_{\mathbf{k} }(\mu):L_{\mathbf{k}}(\lambda)]\neq 0\Rightarrow\lambda\leqslant\mu.\]
The next lemma is standard. It says that we can read the composition factors of a module from its character. In particular, modules with equal characters have the same composition factors.
**Lemma 5.2**.: _Let \(M\in\mathcal{C}_{\mathbf{k}}\). It holds that \(\operatorname{ch}M=\sum_{\lambda}a_{\lambda}\operatorname{ch}L_{\mathbf{k}}(\lambda)\) if and only if \(a_{\lambda}=[M:L_{\mathbf{k}}(\lambda)]\) for all \(\lambda\in\mathbb{Z}^{\mathbb{I}}\). _
**Example 5.3**.: Let \(U=U_{\mathfrak{q}}\) be a Drinfeld double, \(\mathbf{k}=\Bbbk\) and \(\mu=0\). Then \(K_{\alpha}L_{\beta}\cdot|0\rangle=\pi(K_{\alpha}L_{\beta})|0\rangle\) and hence \(Z_{\Bbbk}(0)=\mathcal{M}^{\mathfrak{q}}(\pi)\) is the Verma module of [26, Definition 5.1] and \(L_{\Bbbk}(0)=L^{\mathfrak{q}}(\pi)\) its quotient as in [26, (5.7)].
**Example 5.4**.: \(L_{\mathbf{k}}(\mu)=\mathbf{k}^{\mu}\) is one-dimensional if and only if
\[\pi\tilde{\mu}(K_{i}L_{i}^{-1})=1\quad\forall i\in\mathbb{I}. \tag{5.14}\]
Indeed, using (4.1) and (5.10), \(\mathbf{k}^{\mu}\) is the simple quotient of \(Z_{\mathbf{k}}(\mu)\) if and only if
\[0=(K_{i}-L_{i})\cdot|\mu\rangle=\left(\mathfrak{q}(\alpha_{i},\mu)\pi(K_{i})- \frac{1}{\mathfrak{q}(\mu,\alpha_{i})}\pi(L_{i})\right)|\mu\rangle\]
which is equivalent to our claim.
### Blocks
Let \(\sim_{b}\) denote the smallest equivalence relation in \(\mathbb{Z}^{\mathbb{I}}\) such that \(\lambda\sim_{b}\mu\) if \(\operatorname{Hom}_{\mathcal{C}_{\mathbf{A}}}(Z_{\mathbf{A}}(\lambda),Z_{\mathbf{A}}(\mu))\neq 0\) or \(\operatorname{Ext}^{1}_{\mathcal{C}_{\mathbf{A}}}(Z_{\mathbf{A}}(\lambda),Z_{\mathbf{A}}(\mu))\neq 0\). The equivalence classes of \(\sim_{b}\) are called blocks [1, SS6.9]. In case \(\mathbf{A}=\mathbf{k}\) is a field, this definition coincides with the usual definition of blocks via simple modules [1, Lemma 6.12]. Namely, \(\lambda\) and \(\mu\) belong to the same block if \(L_{\mathbf{k}}(\lambda)\) and \(L_{\mathbf{k}}(\mu)\) have a non-trivial extension.
Let \(\mathcal{D}_{\mathbf{A}}\) denote the full subcategory of \(\mathcal{C}_{\mathbf{A}}\) of all objects admitting a \(Z\)-filtration. If \(b\) is a block, \(\mathcal{D}_{\mathbf{A}}(b)\) is the full subcategory of all objects admitting a \(Z\)-filtration whose factors are \(Z_{\mathbf{A}}(\mu)\) with \(\mu\in b\). Likewise \(\mathcal{C}_{\mathbf{A}}(b)\) denotes the full subcategory of all objects in \(\mathcal{C}_{\mathbf{A}}\) which are the homomorphic image of an object in \(\mathcal{D}_{\mathbf{A}}(b)\). The abelian categories \(\mathcal{D}_{\mathbf{A}}\) and \(\mathcal{C}_{\mathbf{A}}\) decompose into the sums \(\oplus_{b}\mathcal{D}_{\mathbf{A}}(b)\) and \(\oplus_{b}\mathcal{C}_{\mathbf{A}}(b)\), respectively. These and other properties are proved in [1, 6.10]. We will also call the subcategories \(\mathcal{D}_{\mathbf{A}}(b)\) and \(\mathcal{C}_{\mathbf{A}}(b)\) blocks. The _principal block_, denoted \(\mathcal{C}_{\mathbf{k}}(0)\), is the block containing \(L_{\mathbf{k}}(0)\).
### Duals
Let \(M\in\mathcal{C}_{\mathbf{A}}\). Then \(M^{\tau}=\operatorname{Hom}_{\mathbf{A}}(M,\mathbf{A})\) is an object in \(\mathcal{C}_{\mathbf{A}}\) with \(U\)-action
\[(uf)(m)=f(\tau(u)m),\]
for all \(m\in M\) and \(f\in M^{\tau}\), and homogeneous components
\[(M^{\tau})_{\lambda}=\{f\in\operatorname{Hom}_{\mathbf{A}}(M,\mathbf{A})\mid f (M_{\mu})=0\,\forall\mu\neq\lambda\}\]
for all \(\lambda\in\mathbb{Z}^{\mathbb{I}}\). If \(M\) is \(\mathbf{A}\)-free, then
\[\operatorname{ch}(M^{\tau})=\operatorname{ch}(M). \tag{5.15}\]
From the characterization of the simple objects in \(\mathcal{C}_{\mathbf{k}}\), \(\mathbf{k}\) is a field, we deduce that
\[L_{\mathbf{k}}(\mu)^{\tau}\simeq L_{\mathbf{k}}(\mu).\]
for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\).
### AJS categories versus usual module categories
Assume that \(U^{0}\) is finite-dimensional. By forgetting the right \(\mathbf{A}\)-action, we obtain a fully faithful exact functor from the AJS categories \(\mathcal{C}_{\mathbf{A}}\) to the category \({}_{U}\mathcal{G}\) of finite-dimensional \(\mathbb{Z}^{\mathbb{I}}\)-graded \(U\)-modules. Roughly speaking, the next proposition says that we know all the simple objects and the blocks of the category \({}_{U}\mathcal{G}\) if we know the simple module \(L_{\Bbbk}(0)\) and the principal block \(\mathcal{C}_{\Bbbk}(0)\) for all the algebra maps \(\pi:U^{0}\longrightarrow\Bbbk\).
**Proposition 5.5**.: _Every block in \({}_{U}\mathcal{G}\) is equivalent as an abelian category to the principal block of \(\mathcal{C}_{\Bbbk}\) for some algebra map \(\pi:U^{0}\longrightarrow\Bbbk\)._
Proof.: One can construct Verma and simple modules in \({}_{U}\mathcal{G}\) like in the AJS categories, see for instance [16]. Given an algebra map \(\pi:U^{0}\longrightarrow\Bbbk\), let us denote by \(\Delta(\pi)\) and \(L(\pi)\) the corresponding Verma and simple modules in \({}_{U}\mathcal{G}\). Namely, \(L(\pi)\) coincides with the image of \(L_{\Bbbk}(0)\) under the forgetful functor. We claim that every simple module in \({}_{U}\mathcal{G}\) belonging to the same block of \(L(\pi)\) is isomorphic to the image of \(L_{\Bbbk}(\mu)\) under the forgetful functor for some \(\mu\) in the principal block of \(\mathcal{C}_{\Bbbk}\). In fact, suppose that \(L(\mu)\) is a simple module in \({}_{U}\mathcal{G}\) and
\[0\longrightarrow L(\mu)\longrightarrow M\longrightarrow L(\pi)\longrightarrow 0\]
is a non trivial extension. Let \(m_{\mu}\) be a highest-weight vector in \(L(\mu)\) and \(m\in M\) such that its image in \(L(\pi)\) is a highest-weight vector. Then \(m\) generates \(M\) and there is \(u_{\mu}\in U_{\mu}\) such that \(m_{\mu}=u_{\mu}\cdot m\). Let \(s\in U^{0}\). Hence
\[s\cdot m_{\mu}=su_{\mu}\cdot m\overset{(5.5)}{=}u_{\mu}\widetilde{\mu}(s)\cdot m\overset{(5.9)}{=}(\pi\widetilde{\mu})(s)\,m_{\mu}.\]
This implies that \(L(\mu)\) is isomorphic to the image of \(L_{\Bbbk}(\mu)\) under the forgetful functor. Therefore every simple module of \({}_{U}\mathcal{G}\) belonging to the same block of \(L(\pi)\) also is in the image of the principal block, and the proposition follows.
### AJS categories and quotients
Let \(p:U\longrightarrow\overline{U}\) be a \(\mathbb{Z}^{\mathbb{I}}\)-graded algebra projection and set \(\overline{U}^{\pm,0}:=p(U^{\pm,0})\). Assume that \(\overline{U}^{-}\otimes\overline{U}^{0}\otimes\overline{U}^{+}\longrightarrow\overline{U}\) induces a linear isomorphism, \(\overline{U}^{\pm}\simeq U^{\pm}\) and \(\widetilde{\mu}\) induces an algebra automorphism on \(\overline{U}^{0}\). Thus, given an algebra map \(\overline{\pi}:\overline{U}^{0}\longrightarrow\mathbf{A}\) we can consider the corresponding AJS category which we denote \(\overline{\mathcal{C}}_{\mathbf{A}}\). We write \(\overline{Z}_{\mathbf{A}}(\mu)\) and \(\overline{L}_{\mathbf{k}}(\mu)\) for the Verma and simple modules in \(\overline{\mathcal{C}}_{\mathbf{A}}\) and \(\overline{\mathcal{C}}_{\mathbf{k}}\), respectively. If \(\pi=\overline{\pi}\circ p:U^{0}\longrightarrow\mathbf{A}\), we get an obvious functor \(\mathrm{Inf}_{\overline{U}}^{U}:\overline{\mathcal{C}}_{\mathbf{A}}\longrightarrow\mathcal{C}_{\mathbf{A}}\) such that \(\mathrm{Inf}_{\overline{U}}^{U}(\overline{Z}_{\mathbf{A}}(\mu))\simeq Z_{\mathbf{A}}(\mu)\) and \(\mathrm{Inf}_{\overline{U}}^{U}(\overline{L}_{\mathbf{k}}(\mu))\simeq L_{\mathbf{k}}(\mu)\) for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\), by the assumptions on \(p\). Then \(\mathrm{ch}\overline{Z}_{\mathbf{k}}(\mu)=\mathrm{ch}Z_{\mathbf{k}}(\mu)\) and \(\mathrm{ch}\overline{L}_{\mathbf{k}}(\mu)=\mathrm{ch}L_{\mathbf{k}}(\mu)\) and therefore
\[\big{[}\overline{Z}_{\mathbf{k}}(\mu):\overline{L}_{\mathbf{k}}(\lambda) \big{]}=[Z_{\mathbf{k}}(\mu):L_{\mathbf{k}}(\lambda)] \tag{5.16}\]
for all \(\mu,\lambda\in\mathbb{Z}^{\mathbb{I}}\) by Lemma 5.2.
Moreover, if \(\widetilde{\mu}\) does not descend to an algebra automorphism on \(\overline{U}^{0}\), we have an analogue of (5.16) by considering the category \({}_{\overline{U}}\mathcal{G}\) instead of \(\overline{\mathcal{C}}_{\mathbf{k}}\).
## 6. Twisted Verma modules
Throughout this section we fix a matrix \(\mathfrak{q}=(q_{ij})_{i,j\in\mathbb{I}}\in(\Bbbk^{\times})^{\mathbb{I}\times\mathbb{I}}\) with finite-dimensional Nichols algebra \(\mathfrak{B}_{\mathfrak{q}}\), where \(\mathbb{I}=\mathbb{I}_{\theta}\). Let \(\mathcal{X}\) be the \(\mathcal{G}\)-orbit of \(\mathfrak{q}\) and \(\mathcal{W}\) its Weyl groupoid. Let \(U_{\mathfrak{q}}\) be the Hopf algebra introduced in Section 4. In the sequel \(\mathbf{A}\) denotes a Noetherian commutative ring and \(\mathbf{k}\) denotes a field. We assume that both are algebras over \(U_{\mathfrak{q}}^{0}\) and, by abuse of notation, we denote their structural maps \(\pi:U_{\mathfrak{q}}^{0}\longrightarrow\mathbf{A}\) and \(\pi:U_{\mathfrak{q}}^{0}\longrightarrow\mathbf{k}\). We denote \(\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}\), \(\mathcal{C}_{\mathbf{A}}^{\prime\mathfrak{q}}\) and \(\mathcal{C}_{\mathbf{A}}^{\prime\prime\mathfrak{q}}\) the corresponding AJS categories; and similarly over the field \(\mathbf{k}\). We point out that the results regarding simple modules in the following subsections only hold for the AJS categories over the field \(\mathbf{k}\), cf. SS5.2, and the results valid over \(\mathbf{A}\) obviously hold over \(\mathbf{k}\).
In this section we construct and study different Verma modules for \(U_{\mathfrak{q}}\) using the Lusztig isomorphisms of SS4.1. To do this, we mimic the ideas from [1], where the authors use the Lusztig automorphisms [34]. Although our arguments are almost identical to those of [1], it is worthwhile to carry them out thoroughly again, as we have morphisms between possibly different algebras parameterized by \(\mathcal{X}\) and permuted by the action of the Weyl groupoid.
### \(w\)-Verma modules
Let \(w:w^{-*}\mathfrak{q}\rightarrow\mathfrak{q}\) be a morphism in \(\mathcal{W}\) with \(w=1^{\mathfrak{q}}\sigma_{i_{k}}\cdots\sigma_{i_{1}}\) a reduced expression. We consider the Lusztig isomorphism
\[T_{w}=T_{i_{k}}\cdots T_{i_{1}}:U_{w^{-*}\mathfrak{q}}\longrightarrow U_{ \mathfrak{q}}. \tag{6.1}\]
Thus, the triangular decomposition of \(U_{w^{-*}\mathfrak{q}}\) induces a new triangular decomposition on \(U_{\mathfrak{q}}\). Explicitly,
\[T_{w}(U_{w^{-*}\mathfrak{q}}^{-})\otimes U_{\mathfrak{q}}^{0}\otimes T_{w}(U_ {w^{-*}\mathfrak{q}}^{+})\longrightarrow U_{\mathfrak{q}} \tag{6.2}\]
since \(T_{w}(U_{w^{-*}\mathfrak{q}}^{0})=U_{\mathfrak{q}}^{0}\).
Given \(\mu\in\mathbb{Z}^{\mathbb{I}}\), we consider \(\mathbf{A}^{\mu}\) as a \(U^{0}_{\mathfrak{q}}\,T_{w}(U^{+}_{w^{-*}\mathfrak{q}})\)-module with action
\[su\cdot|\mu\rangle=\varepsilon(u)\,\pi(\widetilde{\mu}(s))\,|\mu\rangle\quad\forall s\in U^{0}_{\mathfrak{q}},\,u\in T_{w}(U^{+}_{w^{-*}\mathfrak{q}}) \tag{6.3}\]
and imitating [1, SS4.3], we introduce the module
\[Z^{w}_{\mathbf{A}}(\mu)=U_{\mathfrak{q}}\otimes_{U^{0}_{\mathfrak{q}}T_{w}(U^{+}_{w^{-*}\mathfrak{q}})}\mathbf{A}^{\mu} \tag{6.4}\]
which belongs to \(\mathcal{C}^{\mathfrak{q}}_{\mathbf{A}}\). We call it a _\(w\)-Verma module_. Of course, \(Z^{1^{\mathfrak{q}}}_{\mathbf{A}}(\mu)=Z_{\mathbf{A}}(\mu)\). We notice that \(Z^{w}_{\mathbf{A}}(\mu)\) does not depend on the reduced expression of \(w\). In fact, if \(\tilde{T}_{w}\) is the Lusztig isomorphism associated to another reduced expression of \(w\), then \(T_{w}(U^{\pm}_{w^{-*}\mathfrak{q}})=\tilde{T}_{w}(U^{\pm}_{w^{-*}\mathfrak{q}})\) by (4.8). Hence the decomposition (6.2) does not depend on the reduced expression. Moreover, (6.3) defines the same module for both expressions, and _a posteriori_ isomorphic Verma modules.
**Lemma 6.1**.: _Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and \(w\in\,\raise 1.0pt\hbox{${}^{\mathfrak{q}}$}\mathcal{W}\). Then_
\[\mathrm{ch}Z^{w}_{\mathbf{A}}(\mu)=e^{\mu-(w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^{\mathfrak{q}})}\,\mathrm{ch}U^{-}_{\mathfrak{q}}.\]
Proof.: By the definition, we see that \(\mathrm{ch}\,Z^{w}_{\mathbf{A}}(\mu)=e^{\mu}\,\mathrm{ch}\,T_{w}(U^{-}_{w^{-*}\mathfrak{q}})\). On the other hand, \(T_{w}(U^{-}_{w^{-*}\mathfrak{q}})\simeq(U^{-}_{w^{-*}\mathfrak{q}})[w]\) as \(\mathbb{Z}^{\mathbb{I}}\)-graded objects by (4.9). Let us write \(\Delta^{\mathfrak{q}}_{+}=R_{1}\cup R_{2}\) with
\[R_{1}=\left\{\beta\in\Delta^{\mathfrak{q}}_{+}\,:\,w^{-1}\beta\in\Delta^{w^{-*}\mathfrak{q}}_{-}\right\}\quad\text{and}\quad R_{2}=\left\{\beta\in\Delta^{\mathfrak{q}}_{+}\,:\,w^{-1}\beta\in\Delta^{w^{-*}\mathfrak{q}}_{+}\right\}.\]
Therefore
\[\begin{split}\operatorname{ch}T_{w}(U^{-}_{w^{-*}\mathfrak{q}})&=\prod_{\gamma\in\Delta^{w^{-*}\mathfrak{q}}_{+}}\frac{1-e^{-b^{w^{-*}\mathfrak{q}}(\gamma)\,w\gamma}}{1-e^{-w\gamma}}=\prod_{\beta\in R_{2}}\frac{1-e^{-b^{\mathfrak{q}}(\beta)\beta}}{1-e^{-\beta}}\,\prod_{\beta\in R_{1}}\frac{1-e^{b^{\mathfrak{q}}(\beta)\beta}}{1-e^{\beta}}\\ &=e^{\sum_{\beta\in R_{1}}(b^{\mathfrak{q}}(\beta)-1)\beta}\prod_{\beta\in\Delta^{\mathfrak{q}}_{+}}\frac{1-e^{-b^{\mathfrak{q}}(\beta)\beta}}{1-e^{-\beta}}=e^{-(w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^{\mathfrak{q}})}\operatorname{ch}U^{-}_{\mathfrak{q}},\end{split} \tag{6.5}\]
where the first equality follows from (4.9) and the PBW basis (4.15), the second one uses that \(b^{w^{-*}\mathfrak{q}}(\gamma)=b^{\mathfrak{q}}(w\gamma)\) for all \(\gamma\in\Delta^{w^{-*}\mathfrak{q}}_{+}\), and the last one follows from (3.12). The lemma follows. 

Since \(\mu\langle w\rangle=\mu+w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^{\mathfrak{q}}\), Lemma 6.1 implies that
\[\operatorname{ch}Z^{w}_{\mathbf{A}}(\mu\langle w\rangle)=e^{\mu}\,\operatorname{ch}U^{-}_{\mathfrak{q}}=\operatorname{ch}Z_{\mathbf{A}}(\mu)\quad\forall w\in{}^{\mathfrak{q}}\mathcal{W}. \tag{6.6}\]
**Proposition 6.2**.: _Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and \(x,w\in{}^{\mathfrak{q}}\mathcal{W}\). Then_
\[\operatorname{Hom}_{\mathcal{C}^{\mathfrak{q}}_{\mathbf{A}}}\big{(}Z^{x}_{\mathbf{A}}(\mu\langle x\rangle),Z^{w}_{\mathbf{A}}(\mu\langle w\rangle)\big{)}\simeq\mathbf{A}\simeq\operatorname{Hom}_{\mathcal{C}^{\mathfrak{q}}_{\mathbf{A}}}\big{(}Z^{w}_{\mathbf{A}}(\mu\langle w\rangle),Z^{x}_{\mathbf{A}}(\mu\langle x\rangle)\big{)},\]
_i.e. both \(\operatorname{Hom}\) spaces are free \(\mathbf{A}\)-modules of rank one._
Proof.: Let us abbreviate \(M^{x}=Z^{x}_{\mathbf{A}}(\mu\langle x\rangle)\). We have that \((M^{x})_{\mu\langle x\rangle}\) is \(\mathbf{A}\)-free of rank \(1\) and \(\mu\langle x\rangle+x\beta\) is not a weight for any \(\beta\in\Delta_{+}^{x^{-\mathbf{s}}\mathfrak{q}}\) because \(\operatorname{ch}M^{x}=e^{\mu\langle x\rangle}\operatorname{ch}T_{x}(U^{-}_{x ^{-\mathbf{s}}\mathfrak{q}})\). By (6.6), these claims are also true for \(M^{w}\). Thus \((M^{w})_{\mu\langle x\rangle}=v\mathbf{A}\) and \(T_{x}(U^{+}_{x^{-\mathbf{s}}\mathfrak{q}})v=0\). Therefore, there is a morphism \(M^{x}\to M^{w}\), induced by \(|\mu\langle x\rangle\rangle\mapsto v\) and any other morphism has to be a multiple of it. This shows the first isomorphism. The second one follows similarly by using (5.15).
**Corollary 6.3**.: _Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and \(w\in{}^{\mathfrak{q}}\mathcal{W}\). Then \(Z^{w}_{\mathbf{A}}(\mu\langle w\rangle)\) and \(Z_{\mathbf{A}}(\mu)\) belong to the same block._
Proof.: It follows as [1, Lemma 6.11] using the above proposition.
### Twisted simple modules
Let \(w\in{}^{\mathfrak{q}}\mathcal{W}\). The new triangular decomposition (6.2) on \(U_{\mathfrak{q}}\) satisfies (5.1)-(5.5) with respect to the \(w\)-twisted \(\mathbb{Z}^{\mathbb{I}}\)-grading on \(U_{\mathfrak{q}}\) and the partial order \(\leq^{w}\); (5.5) holds thanks to (4.11). Similarly to [1, §4.3], this observation ensures that the \(w\)-Verma modules satisfy most of the properties of the usual Verma modules. For instance, we can construct the \(w\)-Verma modules via induction functors, they have simple heads, and the projective modules admit \(Z^{w}\)-filtrations.
Therefore, in the case of the field \(\mathbf{k}\), the \(w\)-Verma module \(Z^{w}_{\mathbf{k}}(\mu)\) has a unique simple quotient which we denote \(L^{w}_{\mathbf{k}}(\mu)\) for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\). As the simple modules in \(\mathcal{C}^{\mathfrak{q}}_{\mathbf{k}}\) are determined by their highest-weights, \(w\) induces a bijection in \(\mathbb{Z}^{\mathbb{I}}\), \(\mu\leftrightarrow\mu_{w}\), such that
\[L^{w}_{\mathbf{k}}(\mu)\simeq L_{\mathbf{k}}(\mu_{w}). \tag{6.7}\]
Notice that if \(w_{0}\in{}^{\mathfrak{q}}\mathcal{W}\) is the longest element, then \(\mu_{w_{0}}\) is the _lowest-weight_ of \(L_{\mathbf{k}}(\mu)\) by (4.12); _i.e._\((U^{-})_{\nu}\cdot L_{\mathbf{k}}(\mu)_{\mu_{w_{0}}}=0\) for all \(\nu<0\).
We next give more information about the \(w\)-Verma modules over the field \(\mathbf{k}\).
**Lemma 6.4** ([1, Lemma 4.8]).: _Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and \(w\in{}^{\mathfrak{q}}\mathcal{W}\). Then the socle of \(Z^{w}_{\mathbf{k}}(\mu)\) is a simple module in \(\mathcal{C}^{\mathfrak{q}}_{\mathbf{k}}\). Furthermore, the element \(T_{w}(F^{w^{-*}\mathfrak{q}}_{top})|\mu\rangle\) generates the socle and spans the homogeneous component of weight \(\mu-w(\beta^{w^{-*}\mathfrak{q}}_{top})\)._
Proof.: The socle of \(T_{w}(U^{-}_{w^{-*}\mathfrak{q}})\) as a module over itself is spanned by \(T_{w}(F^{w^{-*}\mathfrak{q}}_{top})\), whose degree is \(-w(\beta^{w^{-*}\mathfrak{q}}_{top})\). Then any simple submodule of \(Z^{w}_{\mathbf{k}}(\mu)\) in \(\mathcal{C}^{\mathfrak{q}}_{\mathbf{k}}\) must contain \(T_{w}(F^{w^{-*}\mathfrak{q}}_{top})|\mu\rangle\) and the lemma follows.
Morphisms as in the lemma below exist by Proposition 6.2. Recall that \(w_{0}\) denotes the longest element in \({}^{\mathfrak{q}}\mathcal{W}\).
**Lemma 6.5** ([1, Lemma 4.9]).: _Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\),_
\[\Phi:Z_{\mathbf{k}}(\mu)\longrightarrow Z^{w_{0}}_{\mathbf{k}}(\mu\langle w_{ 0}\rangle)\quad\text{and}\quad\Phi^{\prime}:Z^{w_{0}}_{\mathbf{k}}(\mu\langle w _{0}\rangle)\longrightarrow Z_{\mathbf{k}}(\mu)\]
_be non-zero morphisms in \(\mathcal{C}^{\mathfrak{q}}_{\mathbf{k}}\). Then_
\[L_{\mathbf{k}}(\mu)\simeq\operatorname{soc}Z^{w_{0}}_{\mathbf{k}}(\mu\langle w _{0}\rangle)=\operatorname{Im}\Phi\quad\text{and}\quad L^{w_{0}}_{\mathbf{k}}( \mu\langle w_{0}\rangle)\simeq\operatorname{soc}Z_{\mathbf{k}}(\mu)= \operatorname{Im}\Phi^{\prime}.\]
Proof.: As \(\Phi^{\prime}\) is graded, \(\Phi^{\prime}\) sends \(|\mu\langle w_{0}\rangle\rangle\) to the generator of the socle of \(Z_{\mathbf{k}}(\mu)\) by Lemma 6.4. This shows the second isomorphism, and the first one follows in a similar way.
**Example 6.6**.: \(L_{\mathbf{k}}^{w_{0}}(\mu\langle w_{0}\rangle)=\mathbf{k}^{\mu\langle w_{0}\rangle}\) is one-dimensional if and only if
\[\widehat{\pi\mu\langle w_{0}\rangle}(K_{i}L_{i}^{-1})=1\quad\forall i\in \mathbb{I}. \tag{6.8}\]
Indeed, this follows as Example 5.3 using (6.3) and (6.4).
### Highest weight theory
The proofs of the next results run as in [1].
**Lemma 6.7** ([1, Lemma 4.10]).: _For all \(\mu\in\mathbb{Z}^{\mathbb{I}}\), we have that \(Z_{\mathbf{A}}^{w_{0}}(\mu\langle w_{0}\rangle)\simeq Z_{\mathbf{A}}(\mu)^{\tau}\) and \(Z_{\mathbf{A}}(\mu+\beta_{top}^{\mathfrak{q}})\simeq Z_{\mathbf{A}}^{w_{0}}( \mu)^{\tau}\). _
**Proposition 6.8** ([1, Propositions 4.11 and 4.12]).: _Let \(\lambda,\mu\in\mathbb{Z}^{\mathbb{I}}\). Then_
\[\operatorname{Ext}_{\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}}^{n}\left(Z_{ \mathbf{A}}(\lambda),Z_{\mathbf{A}}^{w_{0}}(\mu)\right)=\begin{cases}\mathbf{ A},&\text{if $n=0$ and $\mu=\lambda\langle w_{0}\rangle$}\\ 0,&\text{otherwise}.\end{cases}\]
A by-product of the above is that the category \(\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\) satisfies all the axioms of the definition of a highest weight category in [14, §3.2] except for axiom (2), since \(\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\) has infinitely many simple objects.
**Theorem 6.9**.: \(\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\) _is a highest weight category with infinitely many simple modules \(L_{\mathbf{k}}(\lambda)\), \(\lambda\in\mathbb{Z}^{\mathbb{I}}\). The standard and costandard modules are \(Z_{\mathbf{k}}(\lambda)\) and \(Z_{\mathbf{k}}(\lambda)^{\tau}\), \(\lambda\in\mathbb{Z}^{\mathbb{I}}\). _
Another interesting consequence is the so-called BGG Reciprocity [1, Proposition 4.15]. Let \(P_{\mathbf{k}}(\lambda)\in\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\) be the projective cover of \(L_{\mathbf{k}}(\lambda)\), \(\lambda\in\mathbb{Z}^{\mathbb{I}}\). Recall \(P_{\mathbf{k}}(\lambda)\) admits a \(Z\)-filtration [1, Lemma 2.16]. Given \(\mu\in\mathbb{Z}^{\mathbb{I}}\), we denote \([P_{\mathbf{k}}(\lambda):Z_{\mathbf{k}}(\mu)]\) the number of subquotients isomorphic to \(Z_{\mathbf{k}}(\mu)\) in a \(Z\)-filtration of \(P_{\mathbf{k}}(\lambda)\).
**Theorem 6.10** (BGG Reciprocity).: _Let \(\lambda,\mu\in\mathbb{Z}^{\mathbb{I}}\). Then_
\[[P_{\mathbf{k}}(\lambda):Z_{\mathbf{k}}(\mu)]=[Z_{\mathbf{k}}(\mu):L_{\mathbf{ k}}(\lambda)].\]
Proof.: We know that \([Z_{\mathbf{k}}(\mu):L_{\mathbf{k}}(\lambda)]=\dim\operatorname{Hom}_{\mathcal{C}_{ \mathbf{k}}^{\mathfrak{q}}}(P_{\mathbf{k}}(\lambda),Z_{\mathbf{k}}(\mu))\)[1, 4.15 (2)]. Using Proposition 6.8 we can deduce that \([P_{\mathbf{k}}(\lambda):Z_{\mathbf{k}}(\mu)]=\dim\operatorname{Hom}_{\mathcal{ C}_{\mathbf{k}}^{\mathfrak{q}}}\left(P_{\mathbf{k}}(\lambda),Z_{\mathbf{k}}^{ w_{0}}(\lambda\langle w_{0}\rangle)\right)\) which is equal to \([Z_{\mathbf{k}}^{w_{0}}(\lambda\langle w_{0}\rangle):L_{\mathbf{k}}(\lambda)]\). Since \(\operatorname{ch}Z_{\mathbf{k}}(\lambda)=\operatorname{ch}Z_{\mathbf{k}}^{w_{ 0}}(\lambda\langle w_{0}\rangle)\), Lemma 5.2 implies the equality in the statement.
## 7. Morphisms between twisted Verma modules
We keep the notation of the previous section. Here we construct generators of the Hom spaces between twisted Verma modules. Recall they are \(\mathbf{A}\)-free of rank one by Proposition 6.2. We will proceed in an inductive fashion starting from morphisms between a sort of parabolic Verma modules.
### Parabolic Verma modules
We fix \(i\in\mathbb{I}\) and write \(\sigma_{i}=1^{\mathfrak{q}}\sigma_{i}\). We also fix \(\mu\in\mathbb{Z}^{\mathbb{I}}\). To shorten notation, we write \(\mu^{\prime}=\mu\langle\sigma_{i}\rangle=\mu-(b^{\mathfrak{q}}(\alpha_{i})-1) \alpha_{i}\).
Recall \(P_{\mathfrak{q}}(\alpha_{i})\) from (4.18)-(4.19). By construction \(P_{\mathfrak{q}}(\alpha_{i})\) has a triangular decomposition satisfying (5.1)-(5.5). We denote \({}_{\alpha_{i}}\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}\) the corresponding AJS category. The _parabolic Verma module_ and the _parabolic \(\sigma_{i}\)-Verma module_ are
\[\Psi_{\mathbf{A}}(\mu)=P_{\mathfrak{q}}(\alpha_{i})\otimes_{U_{\mathfrak{q}}^{0}U_{\mathfrak{q}}^{+}}\mathbf{A}^{\mu}\quad\text{and}\quad\Psi_{\mathbf{A}}^{\prime}(\mu^{\prime})=P_{\mathfrak{q}}(\alpha_{i})\otimes_{U_{\mathfrak{q}}^{0}T_{i}(U_{\sigma_{i}^{*}\mathfrak{q}}^{+})}\mathbf{A}^{\mu^{\prime}}, \tag{7.1}\]
respectively. These are objects of \({}_{\alpha_{i}}\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}\).
Clearly, the elements \(F_{i}^{t}|\mu\rangle\), \(0\leqslant t<b^{\mathfrak{q}}(\alpha_{i})\), form a basis of \(\Psi_{\mathbf{A}}(\mu)\). By (4.20), the action of \(P_{\mathfrak{q}}(\alpha_{i})\) is given by
\[E_{i}\cdot F_{i}^{t}|\mu\rangle=\pi\tilde{\mu}[\alpha_{i};t]\,F_{i}^{t-1}|\mu\rangle, \tag{7.2}\]
\(E_{j}\cdot F_{i}^{t}|\mu\rangle=0\) for all \(j\in\mathbb{I}\backslash\{i\}\) and \(F_{i}\cdot F_{i}^{t}|\mu\rangle=F_{i}^{t+1}|\mu\rangle\). The weight of \(F_{i}^{t}|\mu\rangle\) is \(\mu-t\alpha_{i}\).
In turn, the elements \(E_{i}^{t}|\mu^{\prime}\rangle\), \(0\leqslant t<b^{\mathfrak{q}}(\alpha_{i})\), form a basis of \(\Psi_{A}^{\prime}(\mu^{\prime})\). By (4.20),
\[F_{i}\cdot E_{i}^{t}|\mu^{\prime}\rangle=\pi\widetilde{\mu}^{\prime}[t;\alpha_ {i}]\,E_{i}^{t-1}|\mu^{\prime}\rangle. \tag{7.3}\]
These are vectors of weights \(\mu^{\prime}+t\alpha_{i}=\mu-(b^{\mathfrak{q}}(\alpha_{i})-1-t)\alpha_{i}\), respectively. This implies that \(E_{j}\cdot E_{i}^{t}|\mu^{\prime}\rangle=0\) for all \(j\in\mathbb{I}\backslash\{i\}\). Of course, \(E_{i}\cdot E_{i}^{t}|\mu^{\prime}\rangle=E_{i}^{t+1}|\mu^{\prime}\rangle\).
Therefore there exists in \({}_{\alpha_{i}}\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}\) a morphism
\[f_{i}:\Psi_{\mathbf{A}}(\mu)\longrightarrow\Psi_{\mathbf{A}}^{\prime}(\mu^{ \prime}) \tag{7.4}\]
such that \(f_{i}(|\mu\rangle)=E_{i}^{b^{\mathfrak{q}}(\alpha_{i})-1}|\mu^{\prime}\rangle\). Indeed, this is the morphism induced by the fact that \(E_{j}\cdot E_{i}^{b^{\mathfrak{q}}(\alpha_{i})-1}|\mu^{\prime}\rangle=0\) for all \(j\in\mathbb{I}\). Moreover, any morphism from \(\Psi_{\mathbf{A}}(\mu)\) to \(\Psi_{\mathbf{A}}^{\prime}(\mu^{\prime})\) is an \(\mathbf{A}\)-multiple of \(f_{i}\) since the weight spaces are of rank one over \(\mathbf{A}\). We can compute this morphism explicitly as follows:
\[f_{i}(F_{i}^{n}|\mu\rangle)=F_{i}^{n}\cdot E_{i}^{b^{\mathfrak{q}}(\alpha_{i} )-1}|\mu^{\prime}\rangle=\prod_{t=1}^{n}\pi\widetilde{\mu^{\prime}}([b^{ \mathfrak{q}}(\alpha_{i})-t;\alpha_{i}])\,E_{i}^{b^{\mathfrak{q}}(\alpha_{i} )-1-n}|\mu^{\prime}\rangle \tag{7.5}\]
for all \(0\leqslant n<b^{\mathfrak{q}}(\alpha_{i})\).
Analogously, there exists
\[f_{i}^{\prime}:\Psi_{\mathbf{A}}^{\prime}(\mu^{\prime})\longrightarrow\Psi_{ \mathbf{A}}(\mu) \tag{7.6}\]
given by
\[f_{i}^{\prime}(E_{i}^{n}|\mu^{\prime}\rangle)=E_{i}^{n}\cdot F_{i}^{b^{ \mathfrak{q}}(\alpha_{i})-1}|\mu\rangle=\prod_{t=1}^{n}\pi\widetilde{\mu}([ \alpha_{i};b^{\mathfrak{q}}(\alpha_{i})-t])\,F_{i}^{b^{\mathfrak{q}}(\alpha_{ i})-1-n}|\mu\rangle. \tag{7.7}\]
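For later use we note the following quick computation, which is immediate from (7.5) and (7.7) evaluated at the generator \(|\mu\rangle\):
\[f_{i}^{\prime}\big(f_{i}(|\mu\rangle)\big)=f_{i}^{\prime}\big(E_{i}^{b^{\mathfrak{q}}(\alpha_{i})-1}|\mu^{\prime}\rangle\big)=\prod_{t=1}^{b^{\mathfrak{q}}(\alpha_{i})-1}\pi\widetilde{\mu}([\alpha_{i};t])\,|\mu\rangle.\]
Hence \(f_{i}^{\prime}f_{i}\) is multiplication by this scalar. In particular, if each factor \(\pi\widetilde{\mu}([\alpha_{i};t])\) is a unit, then \(f_{i}^{\prime}\) is surjective, hence bijective between free \(\mathbf{A}\)-modules of the same finite rank, and \(f_{i}\) is an isomorphism as well; this gives another way to see the next lemma.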
**Lemma 7.1**.: _If \(\pi\widetilde{\mu}([\alpha_{i};t])\) is a unit in \(\mathbf{A}\) for all \(1\leqslant t\leqslant b^{\mathfrak{q}}(\alpha_{i})-1\), then \(f_{i}\) and \(f_{i}^{\prime}\) are isomorphisms._
Proof.: The claim for \(f_{i}^{\prime}\) follows directly from (7.7) by the hypothesis. By the formula (7.5), \(f_{i}\) is an isomorphism if \(\pi\widetilde{\mu^{\prime}}[b^{\mathfrak{q}}(\alpha_{i})-t;\alpha_{i}]\) is a unit for all \(1\leqslant t\leqslant b^{\mathfrak{q}}(\alpha_{i})-1\). We
have that \(q_{ii}^{-t-1}\pi\widetilde{\mu^{\prime}}(K_{i}L_{i}^{-1})=q_{ii}^{1-t}\pi \tilde{\mu}(K_{i}L_{i}^{-1})\) and hence (4.22) implies that
\[\pi\widetilde{\mu^{\prime}}[b^{\mathfrak{q}}(\alpha_{i})-t;\alpha_{i}]=(b^{\mathfrak{q}}(\alpha_{i})-t)_{q_{ii}^{-1}}\pi\widetilde{\mu^{\prime}}(L_{i})\left(1-q_{ii}^{1-t}\pi\tilde{\mu}(K_{i}L_{i}^{-1})\right). \tag{7.8}\]
Since \((b^{\mathfrak{q}}(\alpha_{i})-t)_{q_{ii}^{-1}}\) and \(\pi\widetilde{\mu^{\prime}}(L_{i})\) are units, \(f_{i}\) is an isomorphism by the hypothesis and (4.21).
We next restrict ourselves to the case of the field \(\mathbf{k}\). To simplify notation, we write
\[t_{i}=t_{\alpha_{i}}^{\pi}(\mu),\]
recall Definition 4.7. By Lemma 7.1, \(f_{i}\) and \(f_{i}^{\prime}\) are isomorphisms in \({}_{\alpha_{i}}\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\) if \(t_{i}=0\).
**Lemma 7.2**.: _Suppose \(t_{i}\neq 0\). Then_
\[\operatorname{Ker}f_{i}=\operatorname{Im}f_{i}^{\prime}=\langle\,F_{i}^{n}|\mu\rangle\mid n\geqslant t_{i}\rangle\quad\text{and}\quad\operatorname{Im}f_{i}=\operatorname{Ker}f_{i}^{\prime}=\langle E_{i}^{n}|\mu^{\prime}\rangle\mid n\geqslant b^{\mathfrak{q}}(\alpha_{i})-t_{i}\rangle.\]
Proof.: Using (7.5), (7.8) and (4.22), we see that \(\operatorname{Ker}f_{i}=\langle\,F_{i}^{n}|\mu\rangle\mid n\geqslant t_{i}\rangle\) and \(\operatorname{Im}f_{i}=\langle E_{i}^{n}|\mu^{\prime}\rangle\mid n\geqslant b^{\mathfrak{q}}(\alpha_{i})-t_{i}\rangle\).
For \(f_{i}^{\prime}\), we first observe that \(b^{\mathfrak{q}}(\alpha_{i})-t_{i}\) is the minimum natural number \(s_{i}\) such that \(q_{ii}^{1+s_{i}}\pi\tilde{\mu}(K_{i}L_{i}^{-1})-1=0\). Indeed, \(q_{ii}^{1+s_{i}}\pi\widetilde{\mu}(K_{i}L_{i}^{-1})-1=0=q_{ii}^{1-t_{i}}\pi\tilde{\mu}(K_{i}L_{i}^{-1})-1\) implies \(q_{ii}^{t_{i}+s_{i}}=1\), recall (4.21). Hence \(b^{\mathfrak{q}}(\alpha_{i})=\operatorname{ord}q_{ii}\leqslant t_{i}+s_{i}\). On the other hand, \(\pi\tilde{\mu}([\alpha_{i};b^{\mathfrak{q}}(\alpha_{i})-s_{i}])=0\) and hence \(t_{i}\leqslant b^{\mathfrak{q}}(\alpha_{i})-s_{i}\). Therefore \(b^{\mathfrak{q}}(\alpha_{i})=t_{i}+s_{i}\).
Finally, the equalities for \(\operatorname{Im}f_{i}^{\prime}\) and \(\operatorname{Ker}f_{i}^{\prime}\) follow from (7.7) since \(\pi\widetilde{\mu}([\alpha_{i};b^{\mathfrak{q}}(\alpha_{i})-t])=(b^{\mathfrak{q}}(\alpha_{i})-t)_{q_{ii}}\pi\widetilde{\mu}(L_{i})(q_{ii}^{1+t}\pi\widetilde{\mu}(K_{i}L_{i}^{-1})-1)\).
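In particular, over the field \(\mathbf{k}\) the above description gives the dimension count
\[\dim_{\mathbf{k}}\operatorname{Ker}f_{i}=b^{\mathfrak{q}}(\alpha_{i})-t_{i}\quad\text{and}\quad\dim_{\mathbf{k}}\operatorname{Im}f_{i}=t_{i},\]
consistent with \(\dim_{\mathbf{k}}\Psi_{\mathbf{k}}(\mu)=\dim_{\mathbf{k}}\Psi^{\prime}_{\mathbf{k}}(\mu^{\prime})=b^{\mathfrak{q}}(\alpha_{i})\).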
As a consequence of the above lemma we see that \(F_{i}^{t_{i}}|\mu\rangle\) is a highest-weight vector, _i.e._\(E_{j}\cdot F_{i}^{t_{i}}|\mu\rangle=0\) for all \(j\in\mathbb{I}\), since the weights of \(\operatorname{Ker}f_{i}\) are \(\mu-n\alpha_{i}\) with \(n\geqslant t_{i}\). Therefore there exists a morphism
\[g_{i}:\Psi_{\mathbf{k}}(\mu-t_{i}\alpha_{i})\longrightarrow\Psi_{\mathbf{k}}( \mu) \tag{7.9}\]
in \({}_{\alpha_{i}}\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\) such that \(|\mu-t_{i}\alpha_{i}\rangle\mapsto F_{i}^{t_{i}}|\mu\rangle\). Clearly, \(\operatorname{Im}g_{i}=\operatorname{Ker}f_{i}\).
**Lemma 7.3**.: _There exists a long exact sequence_
\[\cdots\Psi_{\mathbf{k}}(\mu-(b^{\mathfrak{q}}(\alpha_{i})+t_{i})\alpha_{i})\longrightarrow\Psi_{\mathbf{k}}(\mu-b^{\mathfrak{q}}(\alpha_{i})\alpha_{i})\longrightarrow\Psi_{\mathbf{k}}(\mu-t_{i}\alpha_{i})\stackrel{{ g_{i}}}{{\longrightarrow}}\Psi_{\mathbf{k}}(\mu)\longrightarrow\cdots\]
Proof.: The kernel of \(g_{i}\) is generated by \(F_{i}^{b^{\mathfrak{q}}(\alpha_{i})-t_{i}}|\mu-t_{i}\alpha_{i}\rangle\) since \(g_{i}(F_{i}^{n}|\mu-t_{i}\alpha_{i}\rangle)=F_{i}^{n+t_{i}}|\mu\rangle\). Hence this generator is a highest-weight vector. Therefore we have a morphism \(\Psi_{\mathbf{k}}(\mu-b^{\mathfrak{q}}(\alpha_{i})\alpha_{i})\longrightarrow \Psi_{\mathbf{k}}(\mu-t_{i}\alpha_{i})\) and its kernel is generated by \(F_{i}^{t_{i}}|\mu-b^{\mathfrak{q}}(\alpha_{i})\alpha_{i}\rangle\). We can repeat the arguments with \(\mu-b^{\mathfrak{q}}(\alpha_{i})\alpha_{i}\) instead of \(\mu\) in order to construct the desired long exact sequence.
**Remark 7.4**.: In a similar way, we can see that \(T_{i}(E_{j})\cdot E_{i}^{b^{\mathfrak{q}}(\alpha_{i})-t_{i}}|\mu^{\prime}\rangle=0\) for all \(j\in\mathbb{I}\). Therefore there exists a morphism \(g_{i}^{\prime}:\Psi_{\mathbf{k}}^{\prime}(\mu^{\prime}+(b^{\mathfrak{q}}(\alpha_{i})-t_{i})\alpha_{i})\longrightarrow\Psi_{\mathbf{k}}^{\prime}(\mu^{\prime})\) in \({}_{\alpha_{i}}\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\) such that \(|\mu^{\prime}+(b^{\mathfrak{q}}(\alpha_{i})-t_{i})\alpha_{i}\rangle\mapsto E_{i}^{b^{\mathfrak{q}}(\alpha_{i})-t_{i}}|\mu^{\prime}\rangle\) and \(\operatorname{Im}g_{i}^{\prime}=\operatorname{Ker}f_{i}^{\prime}\).
### Morphisms between Verma modules twisted by a simple reflection
Here we lift the morphisms of the above subsection to morphisms between actual Verma modules. We continue with the same fixed elements \(i\in\mathbb{I}\) and \(\mu\in\mathbb{Z}^{\mathbb{I}}\). Recall \(t_{i}=t_{\alpha_{i}}^{\pi}(\mu)\).
We can construct the Verma modules inducing from the parabolic ones. Namely, we have the next isomorphisms in \(\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}\) :
\[Z_{\mathbf{A}}(\mu) \simeq U_{\mathfrak{q}}\otimes_{P_{\mathfrak{q}}(\alpha_{i})}(P_{\mathfrak{q}}(\alpha_{i})\otimes_{U_{\mathfrak{q}}^{0}U_{\mathfrak{q}}^{+}}\mathbf{A}^{\mu})\simeq U_{\mathfrak{q}}\otimes_{P_{\mathfrak{q}}(\alpha_{i})}\Psi_{\mathbf{A}}(\mu)\quad\text{and} \tag{7.10}\] \[Z_{\mathbf{A}}^{\sigma_{i}}(\mu) \simeq U_{\mathfrak{q}}\otimes_{P_{\mathfrak{q}}(\alpha_{i})}(P_{\mathfrak{q}}(\alpha_{i})\otimes_{U_{\mathfrak{q}}^{0}T_{i}(U_{\sigma_{i}^{*}\mathfrak{q}}^{+})}\mathbf{A}^{\mu})\simeq U_{\mathfrak{q}}\otimes_{P_{\mathfrak{q}}(\alpha_{i})}\Psi_{\mathbf{A}}^{\prime}(\mu). \tag{7.11}\]
These isomorphisms allow us to lift \(f_{i}\) and \(f_{i}^{\prime}\) to morphisms in \(\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}\).
**Lemma 7.5**.: _The morphisms_
\[\varphi=1\otimes f_{i}:Z_{\mathbf{A}}(\mu)\longrightarrow Z_{\mathbf{A}}^{ \sigma_{i}}(\mu\langle\sigma_{i}\rangle)\quad\text{and}\quad\varphi^{\prime}=1 \otimes f_{i}^{\prime}:Z_{\mathbf{A}}^{\sigma_{i}}(\mu\langle\sigma_{i} \rangle)\longrightarrow Z_{\mathbf{A}}(\mu) \tag{7.12}\]
_are generators of the respective Hom spaces as \(\mathbf{A}\)-modules. Also,_
\[\operatorname{Ker}\varphi\simeq U_{\mathfrak{q}}\otimes_{P_{\mathfrak{q}}( \alpha_{i})}\operatorname{Ker}f_{i}\quad\text{and}\quad\operatorname{Ker} \varphi^{\prime}\simeq U_{\mathfrak{q}}\otimes_{P_{\mathfrak{q}}(\alpha_{i})} \operatorname{Ker}f_{i}^{\prime}. \tag{7.13}\]
Proof.: On the space of weight \(\mu\), \(\varphi\) is an isomorphism by construction. Then \(\varphi\) is a generator of \(\operatorname{Hom}_{\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}}(Z_{\mathbf{A}}( \mu),Z_{\mathbf{A}}^{\sigma_{i}}(\mu\langle\sigma_{i}\rangle))\) by Proposition 6.2. By the PBW basis (4.15), \(U_{\mathfrak{q}}\) is free over \(P_{\mathfrak{q}}(\alpha_{i})\) and hence \(\operatorname{Ker}\varphi\simeq U_{\mathfrak{q}}\otimes_{P_{\mathfrak{q}}( \alpha_{i})}\operatorname{Ker}f_{i}\). The proof for \(\varphi^{\prime}\) is analogous.
From the above considerations we can immediately extend Lemmas 7.1-7.3 to \(\varphi\) and \(\varphi^{\prime}\), cf. [1, Lemmas 5.8 and 5.9].
**Lemma 7.6**.: _Assume that \(\pi\tilde{\mu}([\alpha_{i};t])\) is a unit in \(\mathbf{A}\) for all \(1\leq t\leq b^{\mathfrak{q}}(\alpha_{i})-1\). Then \(\varphi\) and \(\varphi^{\prime}\) are isomorphisms in \(\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}\). _
**Lemma 7.7**.: _If \(t_{i}=0\), then \(L_{\mathbf{k}}(\mu)\simeq L_{\mathbf{k}}^{\sigma_{i}}(\mu\langle\sigma_{i} \rangle)\) in \(\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\). _
**Lemma 7.8**.: _Suppose \(t_{i}\neq 0\). Then_
\[\operatorname{Ker}\varphi=\operatorname{Im}\!\varphi^{\prime}=U_{\mathfrak{q} }\cdot F_{i}^{t_{i}}|\mu\rangle\quad\text{and}\quad\operatorname{Im}\! \varphi=\operatorname{Ker}\varphi^{\prime}=U_{\mathfrak{q}}\cdot E_{i}^{b^{ \mathfrak{q}}(\alpha_{i})-t_{i}}|\mu\rangle.\]
__
Moreover, if \(t_{i}\neq 0\), we have that
\[\operatorname{ch}\operatorname{Ker}\varphi=e^{\mu}\left(e^{-t_{i}\alpha_{i}}+ \cdots+e^{(1-b^{\mathfrak{q}}(\alpha_{i}))\alpha_{i}}\right)\prod_{\gamma \in\Delta_{+}^{\mathfrak{q}}\setminus\{\alpha_{i}\}}\frac{1-e^{-b^{\mathfrak{q} }(\gamma)\gamma}}{1-e^{-\gamma}}. \tag{7.14}\]
This follows from the PBW basis (4.15) and (7.13).
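Over the field \(\mathbf{k}\), counting monomials in (7.14) yields
\[\dim_{\mathbf{k}}\operatorname{Ker}\varphi=\big(b^{\mathfrak{q}}(\alpha_{i})-t_{i}\big)\prod_{\gamma\in\Delta_{+}^{\mathfrak{q}}\setminus\{\alpha_{i}\}}b^{\mathfrak{q}}(\gamma);\]
the analogous count for (7.20) below is used in the proof of Corollary 8.7.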
The morphism \(\psi\) below is induced by \(g_{i}\) of (7.9).
**Lemma 7.9**.: _Suppose \(t_{i}\neq 0\). Then \(F_{i}^{t_{i}}|\mu\rangle\) is a highest-weight vector of \(Z_{\mathbf{k}}(\mu)\) and hence there exists a morphism_
\[\psi:Z_{\mathbf{k}}(\mu-t_{i}\alpha_{i})\longrightarrow Z_{\mathbf{k}}(\mu)\]
_in \(\mathcal{C}^{q}_{\mathbf{k}}\) such that \(|\mu-t_{i}\alpha_{i}\rangle\mapsto F_{i}^{t_{i}}|\mu\rangle\). Moreover, \(\mathrm{Im}\psi=\mathrm{Ker}\,\varphi\) and there is a long exact sequence_
\[\cdots Z_{\mathbf{k}}(\mu-(b^{q}(\alpha_{i})+t_{i})\alpha_{i})\longrightarrow Z _{\mathbf{k}}(\mu-b^{q}(\alpha_{i})\alpha_{i})\longrightarrow Z_{\mathbf{k}}( \mu-t_{i}\alpha_{i})\stackrel{{\psi}}{{\longrightarrow}}Z_{ \mathbf{k}}(\mu)\longrightarrow\cdots\]
**Remark 7.10**.: There is a morphism \(\psi^{\prime}:Z_{\mathbf{k}}^{\sigma_{i}}(\mu-(t_{i}-1)\alpha_{i}) \longrightarrow Z_{\mathbf{k}}^{\sigma_{i}}(\mu\langle\sigma_{i}\rangle)\) in \(\mathcal{C}^{q}_{\mathbf{k}}\), induced by \(g_{i}^{\prime}\) of Remark 7.4, such that \(|\mu-(t_{i}-1)\alpha_{i}\rangle\mapsto E_{i}^{b^{q}(\alpha_{i})-t_{i}}|\mu \langle\sigma_{i}\rangle\rangle\) and \(\mathrm{Im}\psi^{\prime}=\mathrm{Ker}\,\varphi^{\prime}\).
From the long exact sequence, we see that
\[\mathrm{ch}(\mathrm{Ker}\,\varphi)=\sum_{\ell\geq 0}\mathrm{ch}Z_{\mathbf{k}}(\mu-(\ell b^{q}(\alpha_{i})+t_{i})\alpha_{i})-\sum_{\ell\geq 1}\mathrm{ch}Z_{\mathbf{k}}(\mu-\ell b^{q}(\alpha_{i})\alpha_{i}).\]
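As a consistency check, write \(x=e^{-\alpha_{i}}\) and \(b=b^{\mathfrak{q}}(\alpha_{i})\). Since \(\operatorname{ch}Z_{\mathbf{k}}(\nu)=e^{\nu}\operatorname{ch}U^{-}_{\mathfrak{q}}\) and the \(\alpha_{i}\)-factor of \(\operatorname{ch}U^{-}_{\mathfrak{q}}\) is \(1+x+\cdots+x^{b-1}\), the identity
\[\Big(\sum_{\ell\geq 0}x^{\ell b+t_{i}}-\sum_{\ell\geq 1}x^{\ell b}\Big)\big(1+x+\cdots+x^{b-1}\big)=\frac{x^{t_{i}}-x^{b}}{1-x}=x^{t_{i}}+x^{t_{i}+1}+\cdots+x^{b-1}\]
recovers (7.14).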
As another consequence we obtain the next isomorphisms between simple modules.
**Proposition 7.11**.: _Suppose \(t_{i}\neq 0\). In \(\mathcal{C}^{q}_{\mathbf{k}}\), it holds that_
\[L_{\mathbf{k}}(\mu)\simeq L_{\mathbf{k}}^{\sigma_{i}}(\mu-(t_{i}-1)\alpha_{i} )\quad\text{and}\quad L_{\mathbf{k}}(\mu-t_{i}\alpha_{i})\simeq L_{\mathbf{k }}^{\sigma_{i}}(\mu\langle\sigma_{i}\rangle).\]
Proof.: Since \(\mathrm{Im}\varphi=\mathrm{Im}\psi^{\prime}\) and \(\mathrm{Im}\psi=\mathrm{Im}\varphi^{\prime}\), the respective domains must have isomorphic heads, which gives the isomorphisms in the statement.
**Remark 7.12**.: Maps similar to \(\varphi\) were constructed in [26, Lemma 5.8]. Those maps are just linear morphism between Verma modules in different categories. By considering the \(\sigma_{i}\)-Verma module, we obtain morphisms between objects in the same category.
### Twisting the categories
Let \(w:\mathfrak{q}\longrightarrow w^{*}\mathfrak{q}\) be a morphism in \(\mathcal{W}\) and \(w=\sigma_{i_{k}}\cdots\sigma_{i_{1}}1^{q}\) a reduced expression. Fix the Lusztig isomorphism
\[T_{w}=T_{i_{1}}\cdots T_{i_{k}}:U_{\mathfrak{q}}\longrightarrow U_{w^{*} \mathfrak{q}}.\]
Recall the \(U_{\mathfrak{q}}^{0}\)-algebra \(\mathbf{A}[w]\) from (4.23) with structural map
\[\pi[w]=\pi\circ T_{w}^{-1}|_{U_{\mathfrak{q}}^{0}}:U_{\mathfrak{q}}^{0} \longrightarrow\mathbf{A}. \tag{7.15}\]
This does not depend on the presentation of \(w\) by (4.10). Also, \(\mathbf{A}[w][x]=\mathbf{A}[xw]\) for \(x:w^{*}\mathfrak{q}\longrightarrow x^{*}(w^{*}\mathfrak{q})=(wx)^{*}\mathfrak{q}\).
We have an equivalence of categories \({}^{w}F_{\mathbf{A}}^{\mathfrak{q}}:\mathcal{C}^{\mathfrak{q}}_{\mathbf{A}}\longrightarrow\mathcal{C}^{w^{*}\mathfrak{q}}_{\mathbf{A}[w]}\) given as follows: if \(M\in\mathcal{C}^{\mathfrak{q}}_{\mathbf{A}}\), then \({}^{w}F_{\mathbf{A}}^{\mathfrak{q}}(M)=M[w]\) is an object of \(\mathcal{C}^{w^{*}\mathfrak{q}}_{\mathbf{A}[w]}\) with the action of \(U_{w^{*}\mathfrak{q}}\) twisted by \(T_{w}^{-1}\), that is
\[u\cdot_{T_{w}^{-1}}m=T_{w}^{-1}(u)m\quad\forall m\in M[w],\,u\in U_{w^{*} \mathfrak{q}}.\]
Indeed \(M[w]\) satisfies (5.8), since
\[(U_{w^{*}\mathfrak{q}})_{\alpha}\cdot_{T_{w}^{-1}}M[w]_{\mu}\stackrel{{ \eqref{eq:w^{-1}}}}{{=}}(U_{\mathfrak{q}})_{w^{-1}\alpha}\,M_{w^{-1}\mu} \subset M_{w^{-1}\alpha+w^{-1}\mu}=M_{w^{-1}(\alpha+\mu)}=M[w]_{\alpha+\mu}.\]
It also satisfies (5.9), since for \(m\in M[w]_{\mu}=M_{w^{-1}\mu}\) and \(s\in U_{w^{*}\mathfrak{q}}^{0}\) we have that
\[s\cdot_{T_{w}^{-1}}m=T_{w}^{-1}(s)m=m(\pi\circ\widetilde{w^{-1}(\mu)}\circ T_{ w}^{-1})(s)\stackrel{{\eqref{eq:w^{-1}}}}{{=}}m(\pi\circ T_{w}^{-1} \circ\widetilde{\mu})(s)=m\pi[w](\widetilde{\mu}(s))\]
Clearly, \({}^{w}F^{\mathfrak{q}}_{\mathbf{A}}\) depends on the reduced expression of \(w\). Its inverse \({}^{w}G^{\mathfrak{q}}_{\mathbf{A}}:\mathcal{C}^{w^{\bullet}\mathfrak{q}}_{ \mathbf{A}[w]}\longrightarrow\mathcal{C}^{\mathfrak{q}}_{\mathbf{A}}\) is given by \({}^{w}G^{\mathfrak{q}}_{\mathbf{A}}(M)=M[w^{-1}]\) with the action of \(U_{\mathfrak{q}}\) twisted by \(T_{w}\). When no confusion can arise, we will write simply \(M[w]\) and \(M[w^{-1}]\) instead of \({}^{w}F^{\mathfrak{q}}_{\mathbf{A}}(M)\) and \({}^{w}G^{\mathfrak{q}}_{\mathbf{A}}(M)\), respectively.
The Verma modules of both categories are related as follows.
**Lemma 7.13**.: _Let \(x\in{}^{\mathfrak{q}}\mathcal{W}\), \(w\in\mathcal{W}^{\mathfrak{q}}\) and \(\mu\in\mathbb{Z}^{\mathbb{I}}\). Then_
\[Z^{x}_{\mathbf{A}}(\mu)[w]\simeq Z^{wx}_{\mathbf{A}[w]}(w\mu) \tag{7.16}\]
_in the category \(\mathcal{C}^{w^{\bullet}\mathfrak{q}}_{\mathbf{A}[w]}\). Moreover, for the field \(\mathbf{k}\), it holds that_
\[L^{x}_{\mathbf{k}}(\mu)[w]\simeq L^{wx}_{\mathbf{k}[w]}(w\mu). \tag{7.17}\]
Proof.: We can repeat the arguments in [1, 4.4(2)] but we must be thorough with the categories where we are considering the Verma modules.
We first observe that \(Z^{x}_{\mathbf{A}}(\mu)\) has no weights of the form \(\mu+x\beta\) with \(\beta\in\Delta^{x^{-\bullet}\mathfrak{q}}_{+}\) because \(\operatorname{ch}Z^{x}_{\mathbf{A}}(\mu)=e^{\mu}\operatorname{ch}T_{x}(U^{-}_{x^{-\bullet}\mathfrak{q}})\). Therefore \(Z^{x}_{\mathbf{A}}(\mu)[w]\) has no weights of the form \(w\mu+wx\beta\) with \(\beta\in\Delta^{x^{-\bullet}\mathfrak{q}}_{+}\). Also, the space of weight \(w\mu\) of \(Z^{x}_{\mathbf{A}}(\mu)[w]\) is \(\mathbf{A}\)-spanned by \(v_{1}=|\mu\rangle_{\mathbf{A}}\). In particular, we have that \(T_{wx}(U^{+}_{x^{-\bullet}\mathfrak{q}})\) annihilates \(v_{1}\).
On the other hand, we are thinking of \(Z^{wx}_{\mathbf{A}[w]}(w\mu)\) as an object in \(\mathcal{C}^{w^{\bullet}\mathfrak{q}}_{\mathbf{A}[w]}\). Thus, it is constructed using the triangular decomposition of \(U_{w^{\bullet}\mathfrak{q}}\) induced by \(T_{wx}:U_{x^{-\bullet}\mathfrak{q}}\to U_{w^{\bullet}\mathfrak{q}}\). This means that its generator \(v_{2}=|w\mu\rangle_{\mathbf{A}[w]}\) is annihilated by \(T_{wx}(U^{+}_{x^{-\bullet}\mathfrak{q}})\). Therefore there exists a morphism \(f:Z^{wx}_{\mathbf{A}[w]}(w\mu)\longrightarrow Z^{x}_{\mathbf{A}}(\mu)[w]\) given by \(v_{2}\mapsto v_{1}\).
In a similar fashion, we get a morphism \(Z^{x}_{\mathbf{A}}(\mu)\longrightarrow Z^{wx}_{\mathbf{A}[w]}(w\mu)[w^{-1}]\) in \(\mathcal{C}^{\mathfrak{q}}_{\mathbf{A}}\) such that \(v_{1}\mapsto v_{2}\). Then, we can transform it into a morphism \(g:Z^{x}_{\mathbf{A}}(\mu)[w]\longrightarrow Z^{wx}_{\mathbf{A}[w]}(w\mu)\) in \(\mathcal{C}^{w^{\bullet}\mathfrak{q}}_{\mathbf{A}[w]}\) such that \(v_{1}\mapsto v_{2}\). Clearly, \(g\circ f=\operatorname{id}\) and \(g[w^{-1}]\circ f[w^{-1}]=\operatorname{id}\). This proves the isomorphism between the Verma modules. Therefore, in the case of the field \(\mathbf{k}\), we deduce that their heads are also isomorphic.
### Morphisms between twisted Verma modules
Here we extend the results of §7.2 to any morphism in the Weyl groupoid using the functors of §7.3 and following the ideas of [1, §5.10].
We fix \(w\in{}^{\mathfrak{q}}\mathcal{W}\) and \(\beta\in\Delta^{\mathfrak{q}}\) such that \(\beta=w\alpha_{i}\) for fixed \(\alpha_{i}\in\Pi^{w^{-\bullet}\mathfrak{q}}\) and \(i\in\mathbb{I}\). We also fix \(\mu\in\mathbb{Z}^{\mathbb{I}}\). We set \(t_{\beta}=t_{\beta}^{\pi}(\mu)\), recall Definition 4.7. We will use the functor \({}^{w}F^{w^{-\bullet}\mathfrak{q}}_{\mathbf{A}[w^{-1}]}:\mathcal{C}^{w^{-\bullet}\mathfrak{q}}_{\mathbf{A}[w^{-1}]}\longrightarrow\mathcal{C}^{\mathfrak{q}}_{\mathbf{A}}\). For abbreviation, we will denote by \(M[w]\) and \(f[w]\) the images of objects and morphisms under this functor.
Using \({}^{w}F^{w^{-\bullet}\mathfrak{q}}_{\mathbf{A}[w^{-1}]}\), Lemma 7.13 implies that
\[\begin{split} Z_{\mathbf{A}[w^{-1}]}(w^{-1}\mu)[w]\simeq Z^{w}_{ \mathbf{A}}(\mu)\quad\text{and}\\ Z^{\sigma_{i}}_{\mathbf{A}[w^{-1}]}((w^{-1}\mu)^{\prime})[w]\simeq Z ^{w\sigma_{i}}_{\mathbf{A}}(\mu-(b^{\mathfrak{q}}(\beta)-1)\beta)\end{split} \tag{7.18}\]
for \((w^{-1}\mu)^{\prime}=(w^{-1}\mu)\langle\sigma_{i}\rangle=w^{-1}\mu-(b^{w^{- \bullet}\mathfrak{q}}(\alpha_{i})-1)\alpha_{i}\); for the first isomorphism take \(x=1^{w^{-\bullet}\mathfrak{q}}\) and for the second one \(x=\sigma_{i}^{(\sigma_{i}w)^{-\bullet}\mathfrak{q}}\).
On the other hand, let \(\varphi\) and \(\varphi^{\prime}\) be the morphisms in the category \(\mathcal{C}_{\mathbf{A}[w^{-1}]}^{w^{-\bullet}\mathfrak{q}}\) between \(Z_{\mathbf{A}[w^{-1}]}(w^{-1}\mu)\) and \(Z_{\mathbf{A}[w^{-1}]}^{\sigma_{i}}((w^{-1}\mu)^{\prime})\) given in (7.12). We apply to them the functor \({}^{w}F_{\mathbf{A}[w^{-1}]}^{w^{-\bullet}\mathfrak{q}}\) and obtain the morphisms
\[\begin{split}\varphi[w]:Z_{\mathbf{A}}^{w}(\mu)\longrightarrow Z _{\mathbf{A}}^{w\sigma_{i}}(\mu-(b^{\mathfrak{q}}(\beta)-1)\beta)\quad\text{ and}\\ \varphi^{\prime}[w]:Z_{\mathbf{A}}^{w\sigma_{i}}(\mu-(b^{ \mathfrak{q}}(\beta)-1)\beta)\longrightarrow Z_{\mathbf{A}}^{w}(\mu)\quad \text{in }\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}.\end{split} \tag{7.19}\]
We are ready to extend the results from §7.2.
**Lemma 7.14**.: _If \(\pi\tilde{\mu}([\beta;t])\) is a unit for all \(1\leq t\leq b^{\mathfrak{q}}(\beta)-1\), then \(\varphi[w]\) and \(\varphi^{\prime}[w]\) are isomorphisms in \(\mathcal{C}_{\mathbf{A}}^{\mathfrak{q}}\)._
Proof.: By Lemma 7.6, \(\varphi\) and \(\varphi^{\prime}\) are isomorphisms if \(\pi[w^{-1}]\widetilde{w^{-1}\mu}([\alpha_{i};t])\) is a unit for all \(1\leq t\leq b^{w^{-\bullet}\mathfrak{q}}(\alpha_{i})-1=b^{\mathfrak{q}}( \beta)-1\). As \(\pi[w^{-1}]\widetilde{w^{-1}\mu}([\alpha_{i};t])=\pi\tilde{\mu}([\beta;t])\) by (4.25), the lemma follows from the hypothesis.
In the case of the field \(\mathbf{k}\), we immediately deduce an isomorphism between the heads of the Verma modules.
**Lemma 7.15**.: _If \(t_{\beta}=0\), then \(L_{\mathbf{k}}^{w}(\mu)\simeq L_{\mathbf{k}}^{w\sigma_{i}}(\mu-(b^{\mathfrak{ q}}(\beta)-1)\beta)\) in \(\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\). _
Suppose now \(t_{\beta}\neq 0\). Since \(t_{\alpha_{i}}^{\pi[w^{-1}]}(w^{-1}\mu)=t_{\beta}\) by (4.24), we can consider the morphisms \(\psi\) and \(\psi^{\prime}\) associated to \(Z_{\mathbf{k}[w^{-1}]}(w^{-1}\mu)\) and \(Z_{\mathbf{k}[w^{-1}]}^{\sigma_{i}}((w^{-1}\mu)^{\prime})\) in the category \(\mathcal{C}_{\mathbf{k}[w^{-1}]}^{w^{-\bullet}\mathfrak{q}}\), given by Lemma 7.9 and Remark 7.10, respectively. By applying the functor \({}^{w}F_{\mathbf{k}[w^{-1}]}^{w^{-\bullet}\mathfrak{q}}\), we obtain the following.
**Lemma 7.16**.: _Suppose \(t_{\beta}\neq 0\). Then the morphisms_
\[\psi[w]:Z_{\mathbf{k}}^{w}(\mu-t_{\beta}\beta) \longrightarrow Z_{\mathbf{k}}^{w}(\mu)\quad\text{and}\] \[\psi^{\prime}[w]:Z_{\mathbf{k}}^{w\sigma_{i}}(\mu-(t_{\beta}-1) \beta)\longrightarrow Z_{\mathbf{k}}^{w\sigma_{i}}(\mu-(b^{\mathfrak{q}}( \beta)-1)\beta)\]
_in \(\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\) satisfy that_
\[\operatorname{Ker}\varphi[w]=\operatorname{Im}\!\varphi^{\prime}[w]= \operatorname{Im}\!\psi[w]\quad\text{and}\quad\operatorname{Im}\!\varphi[w]= \operatorname{Ker}\varphi^{\prime}[w]=\operatorname{Im}\!\psi^{\prime}[w].\]
_We also have a long exact sequence_
\[\cdots Z_{\mathbf{k}}^{w}(\mu-(b^{\mathfrak{q}}(\beta)+t_{\beta})\beta) \longrightarrow Z_{\mathbf{k}}^{w}(\mu-b^{\mathfrak{q}}(\beta)\beta) \longrightarrow Z_{\mathbf{k}}^{w}(\mu-t_{\beta}\beta)\longrightarrow Z_{ \mathbf{k}}^{w}(\mu)\longrightarrow\cdots\]
Similar to Proposition 7.11, we deduce an isomorphism between the heads of the \(w\)-Verma modules above.
**Proposition 7.17**.: _Suppose \(t_{\beta}\neq 0\). In \(\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\), it holds that_
\[L_{\mathbf{k}}^{w}(\mu)\simeq L_{\mathbf{k}}^{w\sigma_{i}}(\mu-(t_{\beta}-1) \beta)\quad\text{and}\quad L_{\mathbf{k}}^{w}(\mu-t_{\beta}\beta)\simeq L_{ \mathbf{k}}^{w\sigma_{i}}(\mu-(b^{\mathfrak{q}}(\beta)-1)\beta).\]
**Remark 7.18**.: Using Lemma 7.15 and Proposition 7.17 iteratively, we can calculate \(\mu_{w}\in\mathbb{Z}^{\mathbb{I}}\) such that \(L^{w}_{\mathbf{k}}(\mu)=L_{\mathbf{k}}(\mu_{w})\), recall (6.7).
Let us make some comments about the kernel of \(\varphi[\![w]\!]\) in the case of the field \(\mathbf{k}\). First, using (7.14) and \(\operatorname{Ker}(\varphi[\![w]\!])=\operatorname{Ker}(\varphi)[\![w]\!]\), we have that
\[\begin{split}\operatorname{ch}\operatorname{Ker}(\varphi[\![w]\!])=e^{\mu}\left(e^{-t_{\beta}\beta}+\cdots+e^{(1-b^{q}(\beta))\beta}\right)&\prod_{\gamma\in\Delta^{q}_{+}\setminus\{\beta\}:w^{-1}\gamma\in\Delta^{w-\bullet_{q}}_{+}}\left(1+e^{-\gamma}+\cdots+e^{(1-b^{q}(\gamma))\gamma}\right)\times\\ &\prod_{\gamma\in\Delta^{q}_{+}\setminus\{\beta\}:w^{-1}\gamma\in\Delta^{w-\bullet_{q}}_{-}}\left(1+e^{\gamma}+\cdots+e^{(b^{q}(\gamma)-1)\gamma}\right).\end{split} \tag{7.20}\]
Then, if \(\beta=w\alpha_{i}\in\Delta^{q}_{+}\), it follows that
\[\mu-(w(\varrho^{w-\bullet_{q}})-\varrho^{q})\text{ is not a weight of }\operatorname{Ker}(\varphi[\![w]\!]) \tag{7.21}\]
since \(\mu-(w(\varrho^{w-\bullet_{q}})-\varrho^{q})=\mu+\sum_{\gamma\in\Delta^{q}_{+ }\ :\ w^{-1}\gamma\in\Delta^{w-\bullet_{q}}_{-}}(b^{q}(\gamma)-1)\gamma\) by (3.12). Lastly, we claim that
\[T_{w}(F_{i})^{t_{\beta}}|\mu\rangle_{\mathbf{k}}\text{ is a }U_{q}\text{- generator of }\operatorname{Ker}(\varphi[\![w]\!]) \tag{7.22}\]
where \(T_{w}:U_{w^{-\bullet_{q}}}\longrightarrow U_{q}\) is a Lusztig isomorphism associated to \(w\). In fact, \(\psi(|w^{-1}\mu-t_{\beta}\alpha_{i}\rangle_{\mathbf{k}[w^{-1}]})=F_{i}^{t_{ \beta}}|w^{-1}\mu\rangle_{\mathbf{k}[w^{-1}]}\) by Lemma 7.9. From the proof of Lemma 7.13, we see that \(\psi[\![w]\!](|\mu-t_{\beta}\beta\rangle_{\mathbf{k}})=g(F_{i}^{t_{\beta}}|w^ {-1}\mu\rangle_{\mathbf{k}[w^{-1}]})=g(T_{w}(F_{i}^{t_{\beta}})\cdot_{T_{w}^{- 1}}|w^{-1}\mu\rangle_{\mathbf{k}[w^{-1}]})=T_{w}(F_{i}^{t_{\beta}})|\mu\rangle _{\mathbf{k}}\).
#### 7.4.1. A generator
We end this subsection by constructing a generator of the Hom spaces between twisted Verma modules as in Proposition 6.2. We follow the strategy in [1, §5.13].
We fix \(x:x^{-\bullet}\mathfrak{q}\rightarrow\mathfrak{q}\) and a reduced expression \(x^{-1}w=1^{x^{-\bullet}\mathfrak{q}}\sigma_{i_{1}}\cdots\sigma_{i_{r}}\). We set
\[x_{s}=1^{\mathfrak{q}}x\sigma_{i_{1}}\cdots\sigma_{i_{s-1}}\]
for \(1\leqslant s\leqslant r+1\). Then
\[\mu\langle x_{s+1}\rangle=\mu\langle x_{s}\rangle-(b^{\mathfrak{q}}(x_{s} \alpha_{i_{s}})-1)x_{s}\alpha_{i_{s}}\]
for all \(1\leqslant s\leqslant r\) by (3.13). Notice \(x_{1}=x\) and \(x_{r+1}=w\). We set \(\varphi_{s}=\varphi[x_{s}]\) the morphism in \(\mathcal{C}^{\mathfrak{q}}_{\mathbf{A}}\) given by (7.19) for \(\mu\langle x_{s}\rangle\). Explicitly,
\[\varphi_{s}:Z^{x_{s}}_{\mathbf{A}}(\mu\langle x_{s}\rangle)\longrightarrow Z ^{x_{s+1}}_{\mathbf{A}}(\mu\langle x_{s+1}\rangle)\]
for all \(1\leqslant s\leqslant r\). We get a morphism
\[\varphi_{r}\cdots\varphi_{1}:Z^{x}_{\mathbf{A}}(\mu\langle x\rangle) \longrightarrow Z^{w}_{\mathbf{A}}(\mu\langle w\rangle).\]
**Proposition 7.19**.: _Suppose that \(\pi\widetilde{\mu\langle x_{s}\rangle}([x_{s}\alpha_{i_{s}};t])\) is a unit for all \(1\leq t\leq b^{\mathfrak{q}}(x_{s}\alpha_{i_{s}})-1\) or \(x_{s}\alpha_{i_{s}}\in\Delta_{+}^{\mathfrak{q}}\), for all \(1\leq s\leq r\). Then \(\varphi_{r}\cdots\varphi_{1}\) induces an \(\mathbf{A}\)-isomorphism between the spaces of weight \(\mu\) and therefore \(\varphi_{r}\cdots\varphi_{1}\) is an \(\mathbf{A}\)-generator of the corresponding \(\operatorname{Hom}\) space._
Proof.: The spaces of weight \(\mu\) are free of rank \(1\) over \(\mathbf{A}\) by (6.6) and then the first assertion implies the second one. Also, it is enough to prove it for each \(s\) and assuming that \(\mathbf{A}\) is a field. Thus, under the first supposition, \(\varphi_{s}\) is an isomorphism by Lemma 7.14. Under the second one, \(\mu\) is not a weight of \(\operatorname{Ker}\varphi_{s}\) by (7.21) as \(\mu=\mu\langle x_{s}\rangle-(x_{s}(\varrho^{x_{s}^{-\mathfrak{s}}\mathfrak{q} })-\varrho^{\mathfrak{q}})\) and hence \(\varphi_{s}\) is an isomorphism on the spaces of weight \(\mu\).
We will use the following corollary with \(x=1^{\mathfrak{q}}\) and \(w=w_{0}\in{}^{\mathfrak{q}}\mathcal{W}\) the longest element to prove the linkage principle in the next section.
**Corollary 7.20**.: _If \(\ell(w)=\ell(x)+\ell(x^{-1}w)\), then \(\varphi_{r}\cdots\varphi_{1}\) is an \(\mathbf{A}\)-generator of the corresponding \(\operatorname{Hom}\) space._
Proof.: We have that \(x_{s}\alpha_{i_{s}}\in\Delta_{+}^{\mathfrak{q}}\) by [25, Lemma 8 (\(i\))] for all \(1\leq s\leq r\).
## 8. The linkage principle
We are ready to prove our main results. We follow here the ideas in [1, §6]. We keep the notation of the above sections and restrict ourselves to the case of the field \(\mathbf{k}\). Recall Definition 4.8.
**Definition 8.1**.: Let \(\beta\in\Delta_{+}^{\mathfrak{q}}\) and \(\mu\in\mathbb{Z}^{\mathbb{I}}\). We set
\[\beta\downarrow\mu=\mu-n_{\beta}^{\pi}(\mu)\,\beta.\]
We say that \(\lambda\in\mathbb{Z}^{\mathbb{I}}\) is strongly linked to \(\mu\) if and only if there exist \(\beta_{1},...,\beta_{r}\in\Delta_{+}^{\mathfrak{q}}\) such that \(\lambda=\beta_{r}\downarrow\cdots\beta_{1}\downarrow\mu\). We denote
\[{}^{\downarrow}\mu=\left\{\lambda\in\mathbb{Z}^{\mathbb{I}}\mid\lambda \,\text{is strongly linked to}\,\mu\,\text{and}\,\mu-\beta_{top}^{\mathfrak{q}} \leq\lambda\leq\mu\right\}.\]
Lastly, being linked is the smallest equivalence relation in \(\mathbb{Z}^{\mathbb{I}}\) such that \(\lambda\) and \(\mu\) are linked if \(\lambda\) is strongly linked to \(\mu\) or _vice versa_. We denote \([\mu]_{\mathrm{link}}\) the equivalence class of \(\mu\in\mathbb{Z}^{\mathbb{I}}\).
**Theorem 8.2**.: _If \(L_{\mathbf{k}}(\lambda)\) is a composition factor of \(Z_{\mathbf{k}}(\mu)\), then \(\lambda=\mu\) or \(L_{\mathbf{k}}(\lambda)\) is a composition factor of \(Z_{\mathbf{k}}(\beta\downarrow\mu)\) for some \(\beta\in\Delta_{+}^{\mathfrak{q}}\). Moreover, \(\lambda\in{}^{\downarrow}\mu\)._
Proof.: We use the notation of Figure 4. Then \(w_{0}=w_{n+1}\) and \(\mu\langle w_{n+1}\rangle=\mu-\beta_{top}^{\mathfrak{q}}\). Therefore \(\Phi=\varphi_{n}\cdots\varphi_{1}:Z_{\mathbf{k}}(\mu)\longrightarrow Z_{ \mathbf{k}}^{w_{0}}(\mu-\beta_{top}^{\mathfrak{q}})\) is non-zero by Corollary 7.20 and hence \(L_{\mathbf{k}}(\mu)\simeq\operatorname{Im}\!\Phi\simeq\operatorname{soc}Z_{ \mathbf{k}}^{w_{0}}(\mu-\beta_{top}^{\mathfrak{q}})\) by Lemma 6.5.
Now, suppose \(\lambda\neq\mu\). Then \(L_{\mathbf{k}}(\lambda)\) is a composition factor of \(\operatorname{Ker}\Phi\) and hence of \(\operatorname{Ker}\varphi_{s}\) for some \(s\). By Lemma 7.16, \(\operatorname{Ker}\varphi_{s}\) is a homomorphic image of \(Z_{\mathbf{k}}^{w_{s}}(\mu\langle w_{s}\rangle-t_{s}\beta_{s})\).
Therefore \(L_{\mathbf{k}}(\lambda)\) is also a composition factor of \(Z_{\mathbf{k}}^{w_{s}}(\mu\langle w_{s}\rangle-t_{s}\beta_{s})\). By Lemma 4.9, we have that \(t_{s}=n_{\beta_{s}}^{\pi}(\mu)\) and hence
\[\mu\langle w_{s}\rangle-t_{s}\beta_{s}=(\mu-t_{s}\beta_{s})\langle w_{s} \rangle=(\beta_{s}\downarrow\mu)\langle w_{s}\rangle.\]
Thus, \(\mathrm{ch}Z_{\mathbf{k}}^{w_{s}}(\mu\langle w_{s}\rangle-t_{s}\beta_{s})= \mathrm{ch}Z_{\mathbf{k}}(\beta_{s}\downarrow\mu)\) by (6.6), and hence \(L_{\mathbf{k}}(\lambda)\) is also a composition factor of \(Z_{\mathbf{k}}(\beta_{s}\downarrow\mu)\). This shows the first part of the statement. For the second one, we repeat the reasoning with \(\beta_{s}\downarrow\mu\) instead of \(\mu\), and so on. This procedure will end after a finite number of steps since \(\beta\downarrow\mu\leq\mu\) and \(\mu-\beta_{top}^{\mathfrak{q}}\leq\lambda<\mu\). Hence, there exist \(\beta_{s_{1}}\),..., \(\beta_{s_{r}}\) such that \(\lambda=\beta_{s_{r}}\downarrow\cdots\beta_{s_{1}}\downarrow\mu\) as desired.
Apart from the following particular case, it is not necessarily true that \(L_{\mathbf{k}}(\lambda)\) is a composition factor of \(Z_{\mathbf{k}}(\mu)\) if \(\lambda\in{}^{\downarrow}\mu\), see Example 8.8.
**Lemma 8.3**.: _If \(\lambda=\beta\downarrow\mu\), then \(L_{\mathbf{k}}(\lambda)\) is a composition factor of \(Z_{\mathbf{k}}(\mu)\)._
Proof.: We can assume we are in the situation of Figure 4. That is, \(\beta=\beta_{s}=w_{s}\alpha_{i_{s}}\) and \(\lambda=\beta_{s}\downarrow\mu=\mu-t_{s}\beta_{s}\). Thus, we have a projection from \(Z_{\mathbf{k}}^{w_{s}}(\mu\langle w_{s}\rangle)\) to \(\operatorname{Ker}\varphi_{s}\). Notice that \(\lambda\) is a weight of \(\operatorname{Ker}\varphi_{s}\) by (7.20). On the other hand, \(\tilde{\varphi}_{s-1}\cdots\tilde{\varphi}_{1}\) induces a \(\mathbf{k}\)-isomorphism between the spaces of weight \(\lambda\), which are one-dimensional, by Proposition 7.19. Hence \(\psi\tilde{\varphi}_{s-1}\cdots\tilde{\varphi}_{1}\) is not zero on the space of weight \(\lambda\) because \(\operatorname{Im}\psi=\operatorname{Ker}\varphi_{s}\). This implies that \(L_{\mathbf{k}}(\lambda)\) is a composition factor of \(\operatorname{Ker}\varphi_{s}\) and therefore also of \(Z_{\mathbf{k}}^{w_{s}}(\mu\langle w_{s}\rangle)\). As \(\mathrm{ch}Z_{\mathbf{k}}^{w_{s}}(\mu\langle w_{s}\rangle)=\mathrm{ch}Z_{\mathbf{k}}(\mu)\), the lemma follows.
In general, we can assert that the strongly linked weights belong to the same block.
**Corollary 8.4**.: _Let \(\lambda,\mu\in\mathbb{Z}^{\mathbb{I}}\). Then \(\lambda\) and \(\mu\) are linked if and only if \(L_{\mathbf{k}}(\lambda)\) and \(L_{\mathbf{k}}(\mu)\) belong to the same block._
Proof.: We first prove that being linked implies belonging to the same block. To this end, it is enough to consider the case \(\lambda=\beta\downarrow\mu\) for some \(\beta\in\Delta_{+}^{\mathfrak{q}}\), which follows from Lemma 8.3.
For the converse, as above, it is enough to consider the existence of a non-trivial extension of the form \(0\longrightarrow L_{\mathbf{k}}(\lambda)\longrightarrow M\longrightarrow L_{\mathbf{k}}(\mu)\longrightarrow 0\). Therefore \(M\) is a quotient of \(Z_{\mathbf{k}}(\mu)\) and hence \(\lambda\in{}^{\downarrow}\mu\) by Theorem 8.2.
### Typical weights
For \(\beta\in\Delta_{+}^{\mathfrak{q}}\) and \(\mu\in\mathbb{Z}^{\mathbb{I}}\), we introduce
\[\begin{split}\mathfrak{P}_{\mathbf{k}}^{\mathfrak{q}}(\beta,\mu) =\prod_{1\leq t<b^{\mathfrak{q}}(\beta)}\left(q_{\beta}^{t}-\rho^{\mathfrak{q} }(\beta)\,\pi\widetilde{\mu}(K_{\beta}L_{\beta}^{-1})\right)\quad\text{and} \\ \mathfrak{P}_{\mathbf{k}}^{\mathfrak{q}}(\mu)=\prod_{\beta\in \Delta_{+}^{\mathfrak{q}}}\mathfrak{P}_{\mathbf{k}}^{\mathfrak{q}}(\beta,\mu) \end{split} \tag{8.1}\]
A weight \(\mu\) is called _typical_ if \(\mathfrak{P}_{\mathbf{k}}^{\mathfrak{q}}(\mu)\neq 0\). Otherwise, it is called _atypical_ and the number of positive roots \(\beta\) for which \(\mathfrak{P}_{\mathbf{k}}^{\mathfrak{q}}(\beta,\mu)=0\) is its _degree of atypicality_; if it is \(\ell\) we say that \(\mu\) is \(\ell\)-atypical. This terminology is borrowed from [27], see also [39, 41].
**Corollary 8.5** ([1, §6.3]).: _Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\). The following are equivalent:_
1. \(\mu\) _is typical._
2. \(Z_{\mathbf{k}}(\mu)=L_{\mathbf{k}}(\mu)\) _is simple._
3. \(Z_{\mathbf{k}}(\mu)=L_{\mathbf{k}}(\mu)\) _is projective._
Proof.: If \(\mu\) is typical, then \(L_{\mathbf{k}}(\mu)\) is the unique composition factor of \(Z_{\mathbf{k}}(\mu)\) by Theorem 8.2. Since \([Z_{\mathbf{k}}(\mu):L_{\mathbf{k}}(\mu)]=1\), it follows that \(Z_{\mathbf{k}}(\mu)\) is simple. Conversely, if \(\mathfrak{P}_{\mathbf{k}}^{\mathfrak{q}}(\mu)=0\), then \(\operatorname{Ker}\varphi_{s}\neq 0\) for some \(s\), cf. Figure 4. Since the morphism \(\varphi_{n}\cdots\varphi_{1}:Z_{\mathbf{k}}(\mu)\longrightarrow Z_{\mathbf{k}}^{w_{0}}(\mu-\beta_{top}^{\mathfrak{q}})\) is non-trivial, \(\operatorname{Ker}\varphi_{s}\neq Z_{\mathbf{k}}(\mu)\) and hence \(Z_{\mathbf{k}}(\mu)\) is not simple. This proves that (1) is equivalent to (2).
The equivalence between (2) and (3) is a consequence of Theorem 6.10.
**Remark 8.6**.: The equivalence between (1) and (2) was proved before in [26, Proposition 5.16]; see also [41, Remark 6.25].
### \(1\)-atypical weights
For weights with degree of atypicality \(1\) we can compute the character of the associated simple module similarly to [1, §6.4].
**Corollary 8.7**.: _Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\) be a \(1\)-atypical weight with \(\mathfrak{P}_{\mathbf{k}}^{\mathfrak{q}}(\beta,\mu)=0\) for certain \(\beta\in\Delta_{+}^{\mathfrak{q}}\). Then_
\[\operatorname{ch}L_{\mathbf{k}}(\mu)=e^{\mu}\quad\frac{1-e^{-n_{\beta}^{\pi}( \mu)\beta}}{1-e^{-\beta}}\prod_{\gamma\in\Delta_{+}^{\mathfrak{q}}\setminus \{\beta\}}\frac{1-e^{-b^{\mathfrak{q}}(\gamma)\gamma}}{1-e^{-\gamma}}.\]
_Moreover, there exists an exact sequence_
\[0\longrightarrow L_{\mathbf{k}}(\beta\downarrow\mu)\longrightarrow Z_{ \mathbf{k}}(\mu)\longrightarrow L_{\mathbf{k}}(\mu)\longrightarrow 0.\]
Proof.: We keep the notation of Figure 4: \(\beta=\beta_{s}=w_{s}\alpha_{i_{s}}\), \(\lambda=\beta_{s}\downarrow\mu=\mu-t_{s}\beta_{s}\) and \(\Phi=\varphi_{n}\cdots\varphi_{1}\). As \(\mu\) is \(1\)-atypical, the morphisms \(\varphi_{\ell}\) are isomorphisms for all \(\ell\neq s\) by Lemma 7.14 and hence \(\operatorname{Im}\!\varphi_{s}\simeq\operatorname{Im}\!\Phi\simeq L_{\mathbf{k}}(\mu)\), recall Lemma 6.5. Thus, the character formula is a consequence of (7.20).
For the existence of the exact sequence, we claim that \(L_{\mathbf{k}}(\lambda)\simeq\operatorname{Ker}\varphi_{s}\). Indeed, in the proof of Lemma 8.3, we saw that \(L_{\mathbf{k}}(\lambda)\) is a composition factor of \(\operatorname{Ker}\varphi_{s}\). Thus, \(\dim L_{\mathbf{k}}(\lambda)\leq\dim\operatorname{Ker}\varphi_{s}=(b^{q}( \beta)-n_{\beta}^{\pi}(\mu))\prod_{\gamma\in\Delta^{\bullet}_{+}\setminus\{ \beta\}}b^{q}(\gamma)\). On the other hand, \(n_{\beta}^{\pi}(\lambda)=b^{q}(\beta)-n_{\beta}^{\pi}(\mu)\) and hence \(\dim\operatorname{Ker}\tilde{\varphi}_{s}=n_{\beta}^{\pi}(\mu)\prod_{\gamma \in\Delta^{\bullet}_{+}\setminus\{\beta\}}b^{q}(\gamma)\) by (7.20). Let \(\tilde{\Phi}=\tilde{\varphi}_{n}\cdots\tilde{\varphi}_{1}\). Then \(L_{\mathbf{k}}(\lambda)\simeq\operatorname{Im}\!\tilde{\Phi}\) by Lemma 6.5 and therefore \(\dim L_{\mathbf{k}}(\lambda)=\dim Z_{\mathbf{k}}(\lambda)-\dim\operatorname{ Ker}\tilde{\Phi}\geq\dim Z_{\mathbf{k}}(\lambda)-\dim\operatorname{Ker}\tilde{ \varphi}_{s}=(b^{q}(\beta)-n_{\beta}^{\pi}(\mu))\prod_{\gamma\in\Delta^{\bullet }_{+}\setminus\{\beta\}}b^{q}(\gamma)\). This implies our claim and the corollary is proved.
**Example 8.8**.: Let \(\mathfrak{q}\) be as in Example 3.2. Its positive roots are \(\alpha_{1}\), \(\beta=\alpha_{1}+\alpha_{2}\) and \(\alpha_{2}\), recall Example 3.3. Let \(\pi:U_{\mathfrak{q}}^{0}\longrightarrow\Bbbk\) be an algebra map. For \(\mu=0\), we have that
\[\mathfrak{P}_{\Bbbk}^{\mathfrak{q}}(0)=\big{(}-1+\pi(K_{1}L_{1}^{-1})\big{)} \,\prod_{t=1}^{N-1}\big{(}q^{t}-\pi(K_{\beta}L_{\beta}^{-1})\big{)}\,\big{(}- 1+\pi(K_{2}L_{2}^{-1})\big{)}.\]
Suppose \(\pi(K_{\beta}L_{\beta}^{-1})=q^{t}\) for some \(1\leq t\leq N-1\) and \(\pi(K_{1}L_{1}^{-1})\neq 1\neq\pi(K_{2}L_{2}^{-1})\). Then \(\mu=0\) is \(1\)-atypical, \(\beta\downarrow 0=-t\beta\) and hence there is an exact sequence of the form
\[0\longrightarrow L_{\Bbbk}(-t\beta)\longrightarrow Z_{\Bbbk}(0) \longrightarrow L_{\Bbbk}(0)\longrightarrow 0.\]
Moreover, \(\operatorname{ch}\!L_{\Bbbk}(0)=(1+e^{-\alpha_{1}})\,\big{(}1+e^{-\beta}+ \cdots+e^{(1-t)\beta}\big{)}\,(1+e^{-\alpha_{2}})\).
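Counting monomials in these characters (the displayed formulas give \(b^{\mathfrak{q}}(\alpha_{1})=b^{\mathfrak{q}}(\alpha_{2})=2\) and \(b^{\mathfrak{q}}(\beta)=N\)), we obtain
\[\dim L_{\Bbbk}(0)=4t\quad\text{and}\quad\dim L_{\Bbbk}(-t\beta)=\dim Z_{\Bbbk}(0)-\dim L_{\Bbbk}(0)=4(N-t).\]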
We observe now that \(q_{\beta}^{N-t}-\rho^{\mathfrak{q}}(\beta)\,\pi\widehat{(-t\beta)}(K_{\beta} L_{\beta}^{-1})=0\) and hence
\[\beta\downarrow\beta\downarrow 0=\beta\downarrow-t\beta=-t\beta-(N-t)\beta=-N \beta=-\beta_{top}^{\mathfrak{q}}.\]
Therefore \(-\beta_{top}^{\mathfrak{q}}\in{}^{\downarrow}0\) but \(L_{\Bbbk}(-\beta_{top}^{\mathfrak{q}})\) is not a composition factor of \(Z_{\Bbbk}(0)\).
### The linkage principle as a dot action
In this subsection, we assume that \(\mathfrak{q}\) is of standard type [3], this means the bundles of matrices \(\{C^{\mathfrak{p}}\}_{\mathfrak{p}\in\mathcal{X}}\) and roots \(\{\Delta^{\mathfrak{p}}\}_{\mathfrak{p}\in\mathcal{X}}\) are constant. We will see that the operation \(\downarrow\) can be carried out as the action of a group when \(\pi:U_{\mathfrak{q}}^{0}\longrightarrow\mathbf{k}\) satisfies \(\pi(K_{i})=\pi(L_{i})=1\) for all \(i\in\mathbb{I}\), _e.g._\(\pi=\varepsilon\) the counit.
Let us introduce some notation. For \(i\in\mathbb{I}\), we define the group homomorphism
\[\langle\alpha_{i}^{\vee},-\rangle:\mathbb{Z}^{\mathbb{I}} \longrightarrow\mathbb{Z}\quad\text{by}\quad\langle\alpha_{i}^{\vee},\alpha_{ j}\rangle=c_{ij}^{\mathfrak{q}}\quad\forall j\in\mathbb{I}.\]
Therefore
\[\sigma_{i}(\mu)=\sigma_{i}^{\mathfrak{p}}(\mu)=\mu-\langle\alpha_{i}^{\vee},\mu\rangle\,\alpha_{i}\]
for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and \(\mathfrak{p}\in\mathcal{X}\), as the bundle of Cartan matrices is constant.
In the next definition we think of the morphisms in the Weyl groupoid just as \(\mathbb{Z}\)-automorphisms of \(\mathbb{Z}^{\mathbb{I}}\).
**Definition 8.9**.: Let \(\beta=w\alpha_{i}\in\Delta^{\mathfrak{q}}\) with \(w\in{}^{\mathfrak{q}}\mathcal{W}\) and \(\alpha_{i}\in\Pi^{w^{-\mathfrak{q}}\mathfrak{q}}\). We define \(s_{\beta}\in\operatorname{Aut}_{\mathbb{Z}}(\mathbb{Z}^{\mathbb{I}})\) and the group homomorphism \(\langle\beta^{\vee},-\rangle:\mathbb{Z}^{\mathbb{I}}\longrightarrow\mathbb{Z}\) as follows
\[s_{\beta}=w\,\sigma_{i}\,w^{-1}\quad\text{and}\quad\langle\beta^{\vee},\mu \rangle=\langle\alpha_{i}^{\vee},w^{-1}\mu\rangle\quad\forall\mu\in\mathbb{Z} ^{\mathbb{I}}.\]
Of course, \(s_{\beta}\) is defined for all roots thanks to (3.8). This definition and the next lemma are in [6, §3.2] for Cartan roots. The proof runs essentially as in _loc. cit._
**Lemma 8.10**.: _Let \(\beta\in\Delta^{\mathfrak{q}}\). Then \(s_{\beta}\) and \(\langle\beta^{\vee},-\rangle\) are well-defined, that is, they do not depend on \(w\) and \(\alpha_{i}\). Moreover, \(s_{\beta}(\beta)=-\beta\) and_
\[s_{\beta}(\mu)=\mu-\langle\beta^{\vee},\mu\rangle\,\beta\quad\forall\mu\in \mathbb{Z}^{\mathbb{I}}.\]
Proof.: Assume \(\beta=w\alpha_{i}\) for certain \(w\in{}^{\mathfrak{q}}\mathcal{W}\) and \(\alpha_{i}\in\Pi^{w^{-\mathfrak{q}}\mathfrak{q}}\). Then
\[s_{\beta}(\mu)=w\,\sigma_{i}(w^{-1}\mu)=w(w^{-1}\mu-\langle\alpha_{i}^{\vee},w ^{-1}\mu\rangle\,\alpha_{i})=\mu-\langle\beta^{\vee},\mu\rangle\,\beta.\]
This implies that \(s_{\beta}\) is a reflection in \(\operatorname{End}_{\mathbb{Q}}(\mathbb{Q}^{\theta})\) in the sense of [15, Chapitre V §2.2]. Also, \(s_{\beta}(\Delta^{\mathfrak{q}})=\Delta^{\mathfrak{q}}\) as we are assuming \(\mathfrak{q}\) is of standard type. Therefore \(s_{\beta}\) is well-defined, and hence so is \(\langle\beta^{\vee},-\rangle\), by [15, Chapitre VI §1, Lemme 1]. This proves the lemma.
We recall [11, Definition 2.6]: \(i\in\mathbb{I}\) is a Cartan vertex of \(\mathfrak{p}\in\mathcal{X}\) if \(\mathfrak{p}(\alpha_{i},\alpha_{i})^{c_{ij}^{\mathfrak{p}}}=\mathfrak{p}(\alpha_{i},\alpha_{j})\mathfrak{p}(\alpha_{j},\alpha_{i})\) for all \(j\in\mathbb{I}\). The set of Cartan roots of \(\mathfrak{q}\) is
\[\Delta^{\mathfrak{q}}_{\operatorname{car}}=\left\{w(\alpha_{i})\mid w: \mathfrak{p}\rightarrow\mathfrak{q}\text{ and }i\text{ is a Cartan vertex of }\mathfrak{p}\right\}.\]
We introduce a Cartan-type Weyl group
\[\mathcal{W}^{\mathfrak{q}}_{\operatorname{car}}=\langle s_{\beta}\mid\beta \in\Delta^{\mathfrak{q}}_{\operatorname{car}}\rangle\subset\operatorname{ Aut}_{\mathbb{Z}}(\mathbb{Z}^{\mathbb{I}}),\]
and its affine extension
\[\mathcal{W}^{\mathfrak{q}}_{\operatorname{aff}}=\mathcal{W}^{\mathfrak{q}}_{ \operatorname{car}}\ltimes\mathbb{Z}^{\mathbb{I}}.\]
For \(m\in\mathbb{Z}\), we denote \(s_{\beta,m}=s_{\beta}\ltimes mb^{\mathfrak{q}}(\beta)\beta\in\mathcal{W}^{ \mathfrak{q}}_{\operatorname{aff}}\) and
\[\mathcal{W}^{\mathfrak{q}}_{\operatorname{link}}=\langle s_{\beta,m}\mid \beta\in\Delta^{\mathfrak{q}}_{\operatorname{car}},m\in\mathbb{Z}\rangle \subset\mathcal{W}^{\mathfrak{q}}_{\operatorname{aff}}.\]
Finally, we define the dot action of \(\mathcal{W}^{\mathfrak{q}}_{\operatorname{aff}}\) on \(\mathbb{Z}^{\mathbb{I}}\) as
\[(w\gamma)\bullet\mu=w(\mu+\gamma-\varrho^{\mathfrak{q}})+\varrho^{\mathfrak{ q}}\]
for all \(w\in\mathcal{W}^{\mathfrak{q}}_{\operatorname{car}}\) and \(\gamma,\mu\in\mathbb{Z}^{\mathbb{I}}\).
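Explicitly, since \(s_{\beta}(\beta)=-\beta\) by Lemma 8.10, for the generators of \(\mathcal{W}^{\mathfrak{q}}_{\operatorname{link}}\) the dot action reads
\[s_{\beta,m}\bullet\mu=s_{\beta}\big(\mu+mb^{\mathfrak{q}}(\beta)\beta-\varrho^{\mathfrak{q}}\big)+\varrho^{\mathfrak{q}}=s_{\beta}\bullet\mu-mb^{\mathfrak{q}}(\beta)\beta,\qquad\beta\in\Delta^{\mathfrak{q}}_{\operatorname{car}},\,m\in\mathbb{Z},\,\mu\in\mathbb{Z}^{\mathbb{I}}.\]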
**Lemma 8.11**.: _Assume that \(\pi:U^{0}_{\mathfrak{q}}\longrightarrow\mathbf{k}\) satisfies \(\pi(K_{j})=\pi(L_{j})=1\) for all \(j\in\mathbb{I}\). Let \(\beta\in\Delta^{\mathfrak{q}}_{+}\) be a Cartan root and \(\mu\in\mathbb{Z}^{\mathbb{I}}\). Then there exists \(m\in\mathbb{Z}\) such that_
\[\beta\downarrow\mu=s_{\beta,m}\bullet\mu.\]
Proof.: Let \(w:\mathfrak{p}\to\mathfrak{q}\) and \(\alpha_{i}\in\Pi^{\mathfrak{p}}\) with \(i\in\mathbb{I}\) a Cartan vertex such that \(\beta=w\alpha_{i}\). For abbreviation, we set \(n=n_{\beta}^{\pi}(\mu)\) and \(b=b^{\mathfrak{q}}(\beta)=\operatorname{ord}q_{\beta}\). If \(1\leq n\leq b-1\), then
\[q_{\beta}^{n}= \rho^{\mathfrak{q}}(\beta)\pi\tilde{\mu}(K_{\beta}L_{\beta}^{-1})\] \[= \mathfrak{p}(\alpha_{i},\alpha_{i})\mathfrak{p}(\alpha_{i},w^{-1 }(\mu\langle w\rangle))\mathfrak{p}(w^{-1}(\mu\langle w\rangle),\alpha_{i})\] \[= \mathfrak{p}(\alpha_{i},\alpha_{i})\mathfrak{p}(\alpha_{i}, \alpha_{i})^{\langle\alpha_{i}^{\vee},w^{-1}(\mu\langle w\rangle)\rangle}\] \[= q_{\beta}^{\langle\beta^{\vee},\mu\langle w\rangle\rangle+1};\]
the second equality follows from (4.26) and the assumption on \(\pi\); the third one holds because \(i\) is a Cartan vertex. Therefore \(n\equiv\langle\beta^{\vee},\mu\langle w\rangle\rangle+1\mod b\). If \(n=0\), then \(q_{\beta}^{t}\neq q_{\beta}^{\langle\beta^{\vee},\mu\langle w\rangle\rangle+1}\) for all \(1\leq t\leq b-1\) and hence \(1=q_{\beta}^{\langle\beta^{\vee},\mu\langle w\rangle\rangle+1}\) because \(b\) is the order of \(q_{\beta}\). In both cases there exists \(k\in\mathbb{Z}\) such that
\[n+kb=\langle\beta^{\vee},\mu\langle w\rangle\rangle+1.\]
We claim that \(m=1-k\) has the desired property. Indeed, we notice that \(\mathfrak{p}(\gamma,\gamma)=\sigma_{i}^{*}\mathfrak{p}(\gamma,\gamma)\) for all \(\gamma\in\mathbb{Z}^{\mathbb{I}}\) because \(i\) is a Cartan vertex, and then \(\varrho^{\mathfrak{p}}=\varrho^{\sigma_{i}^{*}\mathfrak{p}}\) as \(\Delta^{\mathfrak{p}}=\Delta^{\sigma_{i}^{*}\mathfrak{p}}\). Hence \(\sigma_{i}(\varrho^{\mathfrak{p}})-\varrho^{\mathfrak{p}}=\sigma_{i}(\varrho^ {\sigma_{i}^{*}\mathfrak{p}})-\varrho^{\mathfrak{p}}=-(b-1)\alpha_{i}=- \langle\alpha_{i}^{\vee},\varrho^{\mathfrak{p}}\rangle\alpha_{i}\). Therefore \(\langle\beta^{\vee},w\varrho^{\mathfrak{p}}\rangle=\langle\alpha_{i}^{\vee},\varrho^{\mathfrak{p}}\rangle=b-1\). We use this equality in the next computation:
\[s_{\beta}\bullet\mu=s_{\beta}(\mu-\varrho^{\mathfrak{q}})+ \varrho^{\mathfrak{q}}= \mu-\varrho^{\mathfrak{q}}-\langle\beta^{\vee},\mu-\varrho^{ \mathfrak{q}}\rangle\beta+\varrho^{\mathfrak{q}}\] \[= \mu-\langle\beta^{\vee},\mu-\varrho^{\mathfrak{q}}\rangle\beta- \langle\beta^{\vee},w\varrho^{\mathfrak{p}}\rangle\beta-\beta+b\beta\] \[= \mu-(\langle\beta^{\vee},\mu\langle w\rangle\rangle+1)\beta+b\beta\] \[= \mu-(n+kb)\beta+b\beta\] \[= \beta\downarrow\mu+mb\beta.\]
Since \(s_{\beta}(\beta)=-\beta\), the lemma follows.
The family of standard type matrices is arranged into three subfamilies according to [3]: Cartan, super and the remainder. We next analyze them separately.
#### 8.3.1. Cartan type
This is the case in which all \(i\in\mathbb{I}\) are Cartan vertices and therefore all roots are Cartan roots. Its Weyl groupoid turns out to be just the Weyl group of the Cartan matrix \(C=C^{\mathfrak{q}}\), and hence it coincides with \(\mathcal{W}_{\mathrm{car}}^{\mathfrak{q}}\). The indecomposable matrices of Cartan type are listed in [3, §4].
The previous results immediately imply the following.
**Corollary 8.12**.: _Assume that \(\mathfrak{q}\) is of Cartan type and \(\pi:U_{\mathfrak{q}}^{0}\longrightarrow\mathbf{k}\) satisfies \(\pi(K_{j})=\pi(L_{j})=1\) for all \(j\in\mathbb{I}\). Then \([\mu]_{\mathrm{link}}\subset\mathcal{W}_{\mathrm{link}}^{\mathfrak{q}}\bullet\mu\) for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\). _
The natural example of a Cartan type matrix is \(\mathfrak{q}=(q^{d_{i}c_{ij}})_{i,j\in\mathbb{I}}\) as in Example 3.1. In particular, if \(\operatorname{ord}q\) is an odd prime (not \(3\) if \(\mathfrak{g}\) has a component of type \(G_{2}\)) and \(\pi(K_{i})=\pi(L_{i})=1\), then the objects in the category \(\mathcal{C}_{\mathbf{k}}^{\mathfrak{q}}\) associated to \(u_{q}(\mathfrak{q})\) turn out to be \(u_{q}(\mathfrak{q})\)-modules of type \(1\) in the sense of Lusztig, cf. [1, §2.4].
Let \(\delta^{\mathfrak{q}}=\frac{1}{2}\sum_{\beta\in\Delta^{\mathfrak{q}}_{+}}\beta\) be the semi-sum of the positive roots of the Lie algebra associated to \(C\). Under the assumption of the next lemma we can replace \(\varrho^{\mathfrak{q}}\) with \(\delta^{\mathfrak{q}}\) in the dot action; this holds, for instance, when \(\mathfrak{q}\) is as in the above paragraph, but not if \(\operatorname{ord}q\) is even. In this way, we recover the usual dot action of the affine Weyl group.
**Lemma 8.13**.: _Suppose \(b=b^{\mathfrak{q}}(\gamma)\) is constant for all \(\gamma\in\Delta^{\mathfrak{q}}_{+}\). Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and \(\beta\in\Delta^{\mathfrak{q}}_{+}\). Then \(s_{\beta}\bullet\mu=s_{\beta}(\mu+\delta^{\mathfrak{q}})-\delta^{\mathfrak{q}} +b\langle\beta^{\vee},\delta^{\mathfrak{q}}\rangle\beta\)._
Proof.: From the proof of Lemma 8.11, we see that
\[s_{\beta}\bullet\mu=\mu-\langle\beta^{\vee},\mu-\varrho^{\mathfrak{q}} \rangle\beta=\mu-\langle\beta^{\vee},\mu+\delta^{\mathfrak{q}}\rangle\beta+ b\langle\beta^{\vee},\delta^{\mathfrak{q}}\rangle\beta\]
as we wanted.
#### 8.3.2. Super type
These are the matrices \(\mathfrak{q}\) whose root systems are isomorphic to the root systems of finite-dimensional contragredient Lie superalgebras in characteristic \(0\) [3, 8]. The indecomposable matrices of super type are listed in [3, §5]. An element of \(\Delta^{\mathfrak{q}}_{\operatorname{odd}}:=\Delta^{\mathfrak{q}}\backslash \Delta^{\mathfrak{q}}_{\operatorname{car}}\) is called an _odd root_. We can see by inspection of [3, §5] that \(\mathfrak{q}(\alpha_{i},\alpha_{i})=-1\) for every simple odd root \(\alpha_{i}\). Then \(q_{\beta}=-1\) for all \(\beta\in\Delta^{\mathfrak{q}}_{\operatorname{odd}}\) and hence \(\beta\downarrow\mu=\mu\) or \(\mu-\beta\). For odd roots, we cannot always realize \(\downarrow\) as a dot action, see the example below. Instead we find that the classes \([\mu]_{\operatorname{link}}\) behave as in the representation theory of Lie superalgebras, see for instance [17, 37]. Let \(\mathbb{Z}\Delta^{\mathfrak{q}}_{\operatorname{odd}}\) be the \(\mathbb{Z}\)-span of the odd roots in \(\mathbb{Z}^{\mathbb{I}}\).
**Corollary 8.14**.: _Assume that \(\mathfrak{q}\) is of super type and \(\pi:U^{0}_{\mathfrak{q}}\longrightarrow\mathbf{k}\) satisfies \(\pi(K_{j})=\pi(L_{j})=1\) for all \(j\in\mathbb{I}\). Then \([\mu]_{\operatorname{link}}\subset\mathcal{W}^{\mathfrak{q}}_{\operatorname{ link}}\bullet(\mu+\mathbb{Z}\Delta^{\mathfrak{q}}_{\operatorname{odd}})\) for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\)._
Proof.: The set of Cartan roots is invariant under \(s_{\beta}\) for all \(\beta\in\Delta^{\mathfrak{q}}_{\operatorname{car}}\) by [6, Lemma 3.6], and hence so is \(\Delta^{\mathfrak{q}}_{\operatorname{odd}}\). Then the corollary is a direct consequence of Lemma 8.11 and Theorem 8.2.
Let \(\delta^{\mathfrak{q}}=\delta^{\mathfrak{q}}_{\operatorname{car}}-\delta^{ \mathfrak{q}}_{\operatorname{odd}}\) with \(\delta^{\mathfrak{q}}_{\operatorname{car}}=\frac{1}{2}\sum_{\beta\in\Delta^{ \mathfrak{q}}_{+,\operatorname{car}}}\beta\) the semi-sum of the positive Cartan roots and \(\delta^{\mathfrak{q}}_{\operatorname{odd}}=\frac{1}{2}\sum_{\beta\in\Delta^ {\mathfrak{q}}_{+,\operatorname{odd}}}\beta\). The following is analogous to Lemma 8.13.
**Lemma 8.15**.: _Suppose \(b=b^{\mathfrak{q}}(\gamma)\) is constant for all \(\gamma\in\Delta^{\mathfrak{q}}_{+,\operatorname{car}}\). Let \(\mu\in\mathbb{Z}^{\mathbb{I}}\) and \(\beta\in\Delta^{\mathfrak{q}}_{+,\operatorname{car}}\). Then \(s_{\beta}\bullet(\mu)=s_{\beta}(\mu+\delta^{\mathfrak{q}})-\delta^{\mathfrak{ q}}+b\langle\beta^{\vee},\delta^{\mathfrak{q}}_{\operatorname{car}}\rangle\beta\). _
**Example 8.16**.: Let \(\mathfrak{q}\) and \(\mathfrak{p}\) be as in Example 3.2 and Example 3.4, respectively. Recall their root system in Example 3.5. Then \(\pm\alpha_{1}\) and \(\pm\alpha_{2}\) are odd roots and \(\pm(\alpha_{1}+\alpha_{2})=\sigma^{\mathfrak{p}}_{1}(\alpha_{2})\) is a Cartan root because \(2\) is Cartan vertex of \(\mathfrak{p}\).
Let \(\mu=\mu_{1}\alpha_{1}+\mu_{2}\alpha_{2}\in\mathbb{Z}^{2}\). Then
\[s_{\alpha_{1}}\bullet\mu=s_{\alpha_{1}}(\mu-\varrho^{\mathfrak{q}})+\varrho^{ \mathfrak{q}}=\mu-(2\mu_{1}-\mu_{2}-1)\alpha_{1},\]
and
\[n^{\pi}_{\alpha_{1}}(\mu)=\begin{cases}1&\text{if $q^{\mu_{2}}=1$},\\ 0&\text{otherwise}.\end{cases}\]
Thus, \(\alpha_{1}\downarrow\mu\neq s_{\alpha_{1}}\bullet\mu+mb^{\mathfrak{q}}(\alpha_{1})\alpha_{1}\) for all \(m\in\mathbb{Z}\) whenever \(\mu_{2}\in 2\mathbb{Z}\) and \(q^{\mu_{2}}\neq 1\), as \(b^{\mathfrak{q}}(\alpha_{1})=2\).
#### 8.3.3.
Besides the matrices of Cartan and super type, there exist an infinite family of indecomposable matrices and one \(2\times 2\)-matrix whose root systems are constant [3, §6]. For a matrix \(\mathfrak{q}\) in this class and \(\beta\in\Delta^{\mathfrak{q}}\backslash\Delta^{\mathfrak{q}}_{\operatorname{car}}\), the order of \(q_{\beta}\) belongs to \(\{2,3,4\}\). Corollary 8.14 also holds in this case, replacing \(\Delta^{\mathfrak{q}}_{\operatorname{odd}}\) with \(\Delta^{\mathfrak{q}}\backslash\Delta^{\mathfrak{q}}_{\operatorname{car}}\).
### Proof of Theorem 1.1 and Corollary 1.2
**Corollary 8.17**.: _Let \(u_{\mathfrak{q}}\) be a small quantum group in the sense of Definition 4.1. Mutatis mutandis, all the results of this section hold for \(u_{\mathfrak{q}}\) instead of \(U_{\mathfrak{q}}\)._
Proof.: By definition there is a projection \(U_{\mathfrak{q}}\longrightarrow u_{\mathfrak{q}}\) preserving the triangular decomposition as in §5.6, and hence all the results of this section can be restated for \(u_{\mathfrak{q}}\) thanks to (5.16).
In particular, the above corollary applies to small quantum groups as in Figure 2, recall Example 4.3, and hence Theorem 1.1 and Corollary 1.2 follow. Alternatively, it is not difficult to see that the Lusztig isomorphisms descend to isomorphisms between these small quantum groups, and that \(\widetilde{\mu}\) induces an algebra automorphism of \(u_{\mathfrak{q}}^{0}\) for all \(\mu\in\mathbb{Z}^{\mathbb{I}}\). Thus, one could repeat the entire treatment of Sections 6 and 7 for \(u_{\mathfrak{q}}\), giving a direct proof of Theorem 1.1 and Corollary 1.2.
|
2309.05118 | Thermodynamic Limits of Electronic Systems | We review thermodynamic limits and scaling limits of electronic structure
models for condensed matter. We discuss several mathematical ways to implement
these limits in three models of increasing chemical complexity and mathematical
difficulty: (1) Thomas-Fermi like models; (2) Hartree-Fock like models; and (3)
Kohn-Sham density functional theory models. | David Gontier, Jianfeng Lu, Christoph Ortner | 2023-09-10T19:38:00Z | http://arxiv.org/abs/2309.05118v1 | # Thermodynamic Limits of Electronic Systems
###### Abstract
We review thermodynamic limits and scaling limits of electronic structure models for condensed matter. We discuss several mathematical ways to implement these limits in three models of increasing chemical complexity and mathematical difficulty: (1) Thomas-Fermi like models; (2) Hartree-Fock like models; and (3) Kohn-Sham density functional theory models.
## 1 Introduction
The goal of thermodynamic limits, as introduced in the 1960's [39, 23], is to obtain mathematical models for infinite systems of particles. The overarching strategy is to study systems with a finite number of particles (which can be described efficiently by well-posed mathematical models), to let the number of particles go to infinity while filling the space, and to pass to the limit in the governing equations in order to obtain a limit model. The purpose of the present chapter is to review results of this kind in the context of electronic structure models in condensed matter.
Two prototypical applications of thermodynamic limits are (1) to justify models of the _energy per unit cell_ of a homogeneous crystal (infinite periodic system); (2) to obtain models for the formation energy of a crystalline defect without artefacts due to the boundary conditions. In this chapter, we review different models and mathematical methods to treat both of these scenarios. Extensive references will be provided throughout the chapter.
In both cases, one would like to describe an infinite system of electrons in a potential generated by an infinite collection of nuclei at positions \(\mathscr{R}\subset\mathbb{R}^{3}\). In most studies, \(\mathscr{R}\) is a periodic
lattice (we write \(\mathscr{R}=\mathbb{L}\) in this case), or a perturbation of it, describing for instance a crystal with a defect, or a deformed crystal (there are some studies for amorphous solids, in which case \(\mathscr{R}\) is a random set [11]). In order to highlight the main ideas of thermodynamic limit, we restrict ourselves to the simple case of a periodic crystal with one nucleus of charge 1 per unit cell. We denote by \(m_{\rm a}\) the charge density of a single nucleus, which we take to be smooth to avoid some technical details: \(m_{\rm a}\in C^{\infty}(\mathbb{R}^{3})\) with compact support and \(\int m_{\rm a}=1\). The total nuclear density of the crystal is then given by
\[m^{\rm nuc}({\bf r}):=\sum_{{\bf R}\in\mathscr{R}}m_{\rm a}({\bf r}-{\bf R}). \tag{1}\]
In order to approximate this infinite distribution of charges, we consider a sequence of finite systems that _converges_ to the infinite one: we choose \(\mathscr{R}_{N}\subset\mathscr{R}\) a finite subset of size \(|\mathscr{R}_{N}|=N\), and study the finite electronic problem with \(N\) electrons, in the external potential generated by \(N\) nuclei arranged along \(\mathscr{R}_{N}\). The total nuclear density is
\[m_{N}^{\rm nuc}({\bf r}):=\sum_{{\bf R}\in\mathscr{R}_{N}}m_{\rm a}({\bf r}-{ \bf R}). \tag{2}\]
Given this finite distribution of charges, \(m_{N}^{\rm nuc}\), one formulates a variational problem to equilibrate the electrons,
\[I(\mathscr{R}_{N}):=\inf_{\gamma_{N}}E(m_{N}^{\rm nuc};\gamma_{N}),\]
where the infimum is taken over all admissible states \(\gamma_{N}\) representing a system of \(N\) electrons, and \(E\) describes the energy of finite electronic systems. Usually, \(\gamma_{N}\) represents the electron density or the density matrix. One then aims to make various statements about the limits of the energy \(I(\mathscr{R}_{N})\) and of the optimal electronic state \(\gamma_{N}^{0}\). Examples of important questions in this context include:
* _Does the sequence \(N^{-1}I(\mathscr{R}_{N})\) converge to some limit \(W(\mathscr{R})\) as \(N\to\infty\) and \(\mathscr{R}_{N}\to\mathscr{R}\)?_ In this case, \(W(\mathscr{R})\) would correspond to an average energy per electron or energy per unit volume.
* _Does the sequence of minimisers \(\gamma_{N}^{0}\) have a limit \(\gamma^{0}\) as \(N\to\infty\)?_ The limiting object would describe an infinite sea of electrons in a crystal. Which equations are satisfied by the limiting object \(\gamma^{0}\)?
* If \(\mathscr{R}\approx\mathscr{R}^{\prime}\), then _does the energy difference \(\delta I_{N}:=I(\mathscr{R}_{N})-I(\mathscr{R}_{N}^{\prime})\) have a limit \(\delta I\)?_ If \(\mathscr{R}\) describes a crystal and \(\mathscr{R}^{\prime}\) the same crystal with a local defect, then \(\delta I\) is the defect formation energy.
In the following sections, we focus on the case where the energy \(E\) is given by one of the following three models: the Thomas-Fermi-von Weizsacker model in Section 2, the (reduced) Hartree-Fock model in Section 3, as well as Kohn-Sham density functional theory models, in Section 4.
In what follows, the energy of an \(N\)-electron state \(\gamma_{N}\) is denoted by \(E(\gamma_{N})\). The infimum of this energy is \(I_{N}=\inf_{\gamma_{N}}E(\gamma_{N})\), and represents the ground state energy of an \(N\)-electron
system. The energy per unit electron is \(W_{N}=N^{-1}I_{N}\). For the orbital-free TFW model the \(N\)-electron state is given by the electron density \(\rho_{N}\).
**Remark 1.1**.: Throughout this review we are technically misapplying the term "thermodynamic limit", but we do so in a way that is consistent with its usage in the analysis literature. In a strict sense, the thermodynamic limit was introduced to describe many-particle systems in a limit where boundary effects can be neglected, and to employ the law of large numbers, large deviation theory and ergodic theory as a transition from microscopic states to macroscopic states (variational principles, PDEs etc). A key goal of this framework was to model the situation when the corresponding thermodynamic functions (pressure, free energy, susceptibility, magnetisation, etc) can have singularities which appear at the critical value of the intensive parameter (temperature, chemical potential, etc). We refer to [39, 23] for detailed treatments of the subject. The connection between the present review and the classical usage of the term "thermodynamic limit" is the study of the _many-particle limit_ in which boundary and domain size effects can be ignored, however there is no (genuine) statistical mechanics aspect.
## 2 The Thomas-Fermi-von Weizsacker Model
Thomas-Fermi models describe electronic structure purely in terms of the electron density and electrostatic potential, and can therefore be interpreted as a system of two nonlinear PDEs. In this setting there is a mature theory and general results on the structure of the model and in particular the thermodynamic limit. The original Thomas-Fermi model, while attractive due to its simplicity, does not allow for the existence of molecules [31]. We will therefore focus on the Thomas-Fermi-von Weizsacker (TFW) model [42]. Our presentation is primarily based on the monograph [14], but incorporates also more recent results [35, 3].
### TFW Model for a Cluster
We consider \(N\) nuclei at locations \(\mathscr{R}_{N}\) and with total charge \(m_{N}^{\text{nuc}}\), see Eq. (2). The non-dimensionalised TFW energy, parametrised by \(\mathscr{R}_{N}\) as a functional of the electron density \(\rho\), is given by
\[E^{\text{TFW}}(\mathscr{R}_{N},\rho):=\underbrace{\int_{\mathbb{R}^{3}}\big{(}c _{\text{W}}|\nabla\sqrt{\rho}|^{2}+c_{\text{TF}}\,\rho^{5/3}\big{)}}_{\text{ kinetic energy}}\underbrace{+\tfrac{1}{2}D(\rho-m_{N}^{\text{nuc}},\rho-m_{N}^{\text{nuc}})}_{ \text{Coulomb interaction}}, \tag{3}\]
where we defined the Coulomb quadratic form
\[D(f,g):=\iint_{(\mathbb{R}^{3})^{2}}\frac{f(\mathbf{r})g(\mathbf{r}^{\prime}) }{|\mathbf{r}-\mathbf{r}^{\prime}|}\,d\mathbf{r}\,d\mathbf{r}^{\prime}. \tag{4}\]
The first two terms of (3) represent the kinetic energy, and the third term is the Coulomb energy. This term can further be split into
\[\tfrac{1}{2}D(\rho-m_{N}^{\text{nuc}},\rho-m_{N}^{\text{nuc}})=\tfrac{1}{2}D( \rho,\rho)-D(\rho,m_{N}^{\text{nuc}})+\tfrac{1}{2}D(m_{N}^{\text{nuc}},m_{N}^ {\text{nuc}}).\]
The first term is the direct term, or Hartree term, and describes the mean-field self-interaction of the electrons. The second is the electron-nuclei Coulomb interaction and the last term is the nuclei-nuclei one. Since we fixed the lattice \(\mathscr{R}\) beforehand, the last term is constant, and does not play a role in the minimisation problem. In addition, \(c_{\mathrm{W}}\) and \(c_{\mathrm{TF}}\) are positive physical constants that are irrelevant from a mathematical perspective; hence, for the sake of notational convenience, we set them to \(c_{\mathrm{W}}=c_{\mathrm{TF}}=1\).
The charge-neutral electronic ground state is obtained by solving
\[I^{\mathrm{TFW}}(\mathscr{R}_{N}):=\inf\big{\{}E^{\mathrm{TFW}}(\mathscr{R}_{N },\rho),\quad\rho\geq 0,\,\int_{\mathbb{R}^{3}}\rho=N,\,\sqrt{\rho}\in H^{1}( \mathbb{R}^{3})\big{\}}. \tag{5}\]
A direct computation shows that \(\rho\mapsto\int_{\mathbb{R}^{3}}|\nabla\sqrt{\rho}|^{2}\) is convex, which is a key ingredient to obtain the following result (see [2] for the proof).
**Proposition 2.1**.: _There exists a unique minimiser \(\rho_{N}\) of (5). In addition, \(\rho_{N}>0\)._
It can then be readily checked, at least formally, that the minimiser satisfies the Euler-Lagrange equation
\[-\frac{\Delta\sqrt{\rho_{N}}}{\sqrt{\rho_{N}}}+\tfrac{5}{3}\rho_{N}^{2/3}-(m_{N}^{\mathrm{nuc}}-\rho_{N})\ast\frac{1}{|\cdot|}=\theta_{N},\]
for some Lagrange multiplier \(\theta_{N}\in\mathbb{R}\) associated with the charge neutrality constraint \(\int_{\mathbb{R}^{3}}\rho=N\). It now becomes convenient to make the transformation \(\rho_{N}=u_{N}^{2}\), where we may again assume that \(u_{N}>0\), and to introduce the total electrostatic potential
\[V_{N}^{\mathrm{tot}}:=(m_{N}^{\mathrm{nuc}}-\rho_{N})\ast\frac{1}{|\cdot|}+\theta_{N}\]
to obtain the Euler-Lagrange system
\[-\Delta u_{N}+\tfrac{5}{3}u_{N}^{7/3}-V_{N}^{\mathrm{tot}}u_{N}=0, \tag{6a}\] \[-\Delta V_{N}^{\mathrm{tot}}=4\pi\big{(}m_{N}^{\mathrm{nuc}}-u_{N }^{2}\big{)}. \tag{6b}\]
We have absorbed the Lagrange multiplier \(\theta_{N}\) into the electrostatic potential \(V_{N}^{\mathrm{tot}}\), shifting it by a constant, which in particular implies that we need not have \(V_{N}^{\mathrm{tot}}(\mathbf{r})\to 0\) as \(|\mathbf{r}|\to\infty\). See [2, 14, 30] for the details of this argument.
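To make the structure of the coupled system (6) concrete, the following minimal sketch discretises a single smeared nucleus in a periodic box, solves (6b) by FFT and relaxes (6a) by a damped, preconditioned residual iteration with the charge constraint re-imposed at each step. This is our own illustration rather than the method used in the cited works; the grid size, Gaussian smearing, damping and preconditioner are arbitrary assumptions, and convergence of such a crude iteration is not guaranteed.

```python
import numpy as np

# Hedged sketch: periodic box, one Gaussian nucleus, c_W = c_TF = 1 as in the text.
# Grid size, smearing width, damping and iteration count are illustrative choices.
L, n = 10.0, 32
x = np.arange(n) * (L / n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
dV = (L / n) ** 3

r2 = (X - L / 2) ** 2 + (Y - L / 2) ** 2 + (Z - L / 2) ** 2
m_nuc = np.exp(-2.0 * r2)
m_nuc /= m_nuc.sum() * dV                     # smeared nuclear charge with integral 1

def coulomb_potential(rho):
    """Solve -Delta V = 4*pi*(m_nuc - rho) with periodic boundary conditions by FFT."""
    src_hat = np.fft.fftn(m_nuc - rho)
    V_hat = np.zeros_like(src_hat)
    V_hat[K2 > 0] = 4.0 * np.pi * src_hat[K2 > 0] / K2[K2 > 0]
    return np.fft.ifftn(V_hat).real

def smooth(res):
    """(1 - Delta)^{-1} preconditioner taming the high-frequency part of the residual."""
    return np.fft.ifftn(np.fft.fftn(res) / (1.0 + K2)).real

u = np.full((n, n, n), 1.0 / np.sqrt(L**3))   # uniform start with total charge 1
for _ in range(500):
    V = coulomb_potential(u**2)
    lap_u = np.fft.ifftn(-K2 * np.fft.fftn(u)).real          # Delta u
    res = -lap_u + (5.0 / 3.0) * u ** (7.0 / 3.0) - V * u    # residual of (6a)
    u = np.abs(u - 0.5 * smooth(res))                        # damped step, keep u >= 0
    u /= np.sqrt((u**2).sum() * dV)                          # re-impose  int rho = 1
rho = u**2
```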
In the remainder of our treatment of the TFW model we review results establishing the convergence of the electron ground state \((u_{N},V_{N}^{\mathrm{tot}})\) as \(N\to\infty\), as the nuclei configuration \(\mathscr{R}_{N}\) grows. To establish this limit, a convenient function space setting is provided by the spaces (we denote by \(B_{R}(\mathbf{r}):=\{\mathbf{r}^{\prime}\in\mathbb{R}^{3},\ |\mathbf{r}^{ \prime}-\mathbf{r}|<R\}\))
\[H_{\mathrm{unif}}^{k}:=\big{\{}v\in H_{\mathrm{loc}}^{k}(\mathbb{R}^{3})\,\big{|}\,\sup_{\mathbf{r}\in\mathbb{R}^{3}}\|v\|_{H^{k}(B_{1}(\mathbf{r}))}<\infty\big{\}}.\]
### Thermodynamic Limit Model
To pass to the thermodynamic limit \(N\to\infty\) we begin with an _infinite_ collection of (smeared) nuclei at positions \(\mathscr{R}\subset\mathbb{R}^{3}\). Here, \(\mathscr{R}\) need not be a periodic lattice. Since the energy
of an infinite system is not well-defined, the associated electronic ground state cannot be immediately characterised by an analogue of the variational problem (5). However, the nonlinear PDE representation (6) has a straightforward generalisation. Indeed, let \(m^{\rm nuc}({\bf r}):=\sum_{{\bf R}\in\mathscr{R}}m_{\rm a}({\bf r}-{\bf R})\), then it is natural to suppose that the electronic ground state for the nuclei arrangement \(\mathscr{R}\) is given by \(\rho=u^{2}\), where \((u,V^{\rm tot})\) solves
\[-\Delta u+\tfrac{5}{3}u^{7/3}-V^{\rm tot}u=0, \tag{7a}\] \[-\Delta V^{\rm tot}=4\pi\big{(}m^{\rm nuc}-u^{2}\big{)}. \tag{7b}\]
To justify this model we will establish its well-posedness and show that it indeed arises as the thermodynamic limit of (5) (or, equivalently, (6)).
To that end, we need to impose restrictions on the configuration \(\mathscr{R}\). We assume that \(\mathscr{R}\) describes roughly uniformly distributed matter, and in particular contains no clusters with arbitrary high densities, and no holes of arbitrary large volume. Precisely, we require that there exist constants \(c_{1,2},C_{1,2}>0\) such that
\[\forall\,{\bf r}\in\mathbb{R}^{3},\ R>0,\qquad c_{1}R^{3}-c_{2}\leq\#\big{(} \mathscr{R}\cap B_{R}({\bf r})\big{)}\leq C_{1}R^{3}+C_{2}. \tag{8}\]
This condition is equivalent to (H1) and (H2) in [14]. One of the main results of [14] is the well-posedness of (7).
**Theorem 2.2** (Well-Posedness [14, Thm 6.10]).: _Under the condition (8), there exists a unique pair \((u,V^{\rm tot})\in H^{4}_{\rm unif}\times H^{2}_{\rm unif}\), with \(u\geq 0\), solving (7). Moreover, \(\inf u>0\)._
The majority of the monograph [14] is devoted to the proof of Theorem 2.2. Let us recall some key ideas: A crucial observation is that the linear operator
\[L_{N}\varphi:=-\Delta\varphi+\big{(}\tfrac{5}{3}u_{N}^{4/3}-V_{N}^{\rm tot} \big{)}\varphi,\]
which is a kind of linearisation of (6a), is non-negative. This already hints at the existence of a strong stability property. Indeed, adapting this observation (see _e.g._ the proof of [14, Lemma 5.3]), the following result is shown in [35, Thm. 3.1], closely following variants of the same result in [14, Sec. 5.3] and [3].
**Lemma 2.3** (Stability and Uniqueness).: _[_35_, Thm. 3.1]_ _Let \(\mathscr{R},\mathscr{R}_{*}\subset\mathbb{R}^{3}\) satisfy (8), let \(m^{\rm nuc},m_{*}^{\rm nuc}\) be the associated nuclear charge densities, and suppose that \((u,V^{\rm tot}),(u_{*},V_{*}^{\rm tot})\in H^{4}_{\rm unif}\times H^{2}_{\rm unif}\) are corresponding solutions to (7) with \(\inf u>0\) and \(\inf u_{*}>0\). Then, there exist constants \(C\geq 0\) and \(\alpha>0\), depending only on \(m_{\rm a}\) and on the constants in (8), such that_
\[|u({\bf r})-u_{*}({\bf r})|+|V^{\rm tot}({\bf r})-V_{*}^{\rm tot}({\bf r})| \leq C\bigg{(}\int_{\mathbb{R}^{3}}\mathrm{e}^{-\alpha|{\bf r}-{\bf z}|}\big{|} m^{\rm nuc}-m_{*}^{\rm nuc}\big{|}^{2}({\bf z})\,d{\bf z}\bigg{)}^{1/2}. \tag{9}\]
Lemma 2.3 immediately implies uniqueness of solutions to (7), but it is much stronger in that it also provides a pointwise stability that quantifies the dependence of the local electronic structure on the far-field. We will return to this result in SS 2.3.
To establish existence of solutions, we use a thermodynamic limit argument. At the same time, this also justifies the model (7). To that end, we specify a sequence of clusters approximating \(\mathscr{R}\): let \(\mathscr{R}_{N}\subset\mathscr{R}\) and \(r_{N}\uparrow\infty,c>0\) such that
\[B_{r_{N}}(\mathbf{0})\cap\mathscr{R}\subset\mathscr{R}_{N}\subset B_{r_{N}+c}( \mathbf{0})\cap\mathscr{R}. \tag{10}\]
For each \(N\), Proposition 2.1 yields existence and uniqueness of an electronic ground state \((u_{N},V_{N}^{\mathrm{tot}})\) solving (6).
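For illustration, the finite clusters \(\mathscr{R}_{N}\) in (10) can be generated by intersecting the infinite configuration with balls of growing radius; the following small sketch (our own, with an arbitrary cubic lattice and radius) does exactly this.

```python
import numpy as np

def cluster(B, r):
    """Lattice points of B Z^3 inside the ball B_r(0), i.e. a finite cluster R_N as in (10)."""
    m = int(np.ceil(r / np.linalg.norm(B, -2)))          # enough integer shells to cover B_r
    g = np.arange(-m, m + 1)
    Z = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
    pts = Z @ np.asarray(B, dtype=float).T
    return pts[np.linalg.norm(pts, axis=1) <= r]

R_N = cluster(np.eye(3), 6.0)      # simple cubic lattice, radius 6; N = len(R_N) nuclei
```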
The stability result stated in Lemma 2.3 already hints at the possibility of uniform _a priori_ estimates on the solutions \((u_{N},V_{N}^{\mathrm{tot}})\), and indeed one can prove that
\[\|u_{N}\|_{H^{4}_{\mathrm{unif}}}+\|V_{N}^{\mathrm{tot}}\|_{H^{2}_{\mathrm{unif }}}\leq C, \tag{11}\]
where \(C\) depends only on \(m_{\mathrm{a}}\) and on the constants in (8), see [14, Prop. 3.5] for the (involved and technical) details. A key technical step estimating the Lagrange multiplier, which we have hidden, is due to [40]. A summary of the proof, providing also quantitative estimates can be found in [35, Prop. 6.1].
With the _a priori_ estimate (11) in hand, we may extract a subsequence \((u_{N_{j}},V_{N_{j}}^{\mathrm{tot}})\rightharpoonup(u,V^{\mathrm{tot}})\) weakly in \(H^{4}_{\mathrm{loc}}\times H^{2}_{\mathrm{loc}}\) (we say \(f_{j}\rightharpoonup f\) weakly in \(H^{k}_{\mathrm{loc}}\) if \(f_{j}|_{D}\rightharpoonup f|_{D}\) weakly in \(H^{k}(D)\) for all bounded domains \(D\)) and it is straightforward to deduce that the limit satisfies the PDE (7). Since the limit is unique, it follows that the entire sequence converges. We obtain the following result.
**Theorem 2.4** (Convergence).: _Let \(\mathscr{R}_{N},\mathscr{R}\) satisfy (8) and (10), and let \((u_{N},V_{N}^{\mathrm{tot}})\in H^{4}_{\mathrm{unif}}\times H^{2}_{\mathrm{unif}}\) and \((u,V^{\mathrm{tot}})\in H^{4}_{\mathrm{unif}}\times H^{2}_{\mathrm{unif}}\) be the corresponding solutions of (6) and (7) respectively. Then_
\[(u_{N},V_{N}^{\mathrm{tot}})\rightharpoonup(u,V^{\mathrm{tot}})\qquad\text{weakly in }H^{4}_{\mathrm{loc}}\times H^{2}_{\mathrm{loc}}.\]
_In particular, the convergence is locally uniform._
### Discussion
We conclude this section with a series of further remarks about possible extensions and consequences of the results.
_(1) Convergence rates:_ In Lemma 2.3 the conditions on the second solution \(u_{*}\) can be weakened. This allows one to prove that "well inside" the approximate domain \(\mathscr{R}_{N}\), the solutions \((u_{N},V_{N}^{\mathrm{tot}})\) and \((u,V^{\mathrm{tot}})\) are exponentially close. Specifically, in [35, Proposition 4.1] it is shown that there are constants \(C>0\) and \(\alpha>0\) independent of \(N\in\mathbb{N}\) and \(\mathbf{r}\in B_{r_{N}}\) so that
\[\forall N\in\mathbb{N},\ \forall\mathbf{r}\in B_{r_{N}},\quad\big{|}u_{N}( \mathbf{r})-u(\mathbf{r})\big{|}+\big{|}V_{N}^{\mathrm{tot}}(\mathbf{r})-V^{ \mathrm{tot}}(\mathbf{r})\big{|}\leq C\mathrm{e}^{-\alpha\operatorname{dist}( \mathbf{r},\partial B_{r_{N}}(0))}.\]
Choosing \(r_{N}^{\prime}\ll r_{N}\) this readily translates into the convergence rate
\[\|u_{N}-u\|_{L^{\infty}(B_{r_{N}^{\prime}})}+\|V_{N}^{\mathrm{tot}}-V^{ \mathrm{tot}}\|_{L^{\infty}(B_{r_{N}^{\prime}})}\leq C\mathrm{e}^{-\alpha(r_{ N}-r_{N}^{\prime})}.\]
The same argument also shows that boundary effects decay exponentially into the bulk of the cluster and justifies the common usage of buffer regions in electronic structure calculations.
_(2) Surfaces and sheets:_ Our assumption (8) expressly disallows configurations with large sections of vacuum, in particular surfaces and 2D materials. Indeed, not only the mathematics but also the underlying physics changes in such situations. We refer to [3, 32, 6] for related results that go beyond this limitation.
_(3) The Dirac correction:_ The Thomas-Fermi-Dirac-von Weizsacker model adds an additional correction term to the energy functional,
\[E^{\rm TFDW}(\mathscr{R}_{N},\rho)=\int_{\mathbb{R}^{3}}\big{(}|\nabla\sqrt{ \rho}|^{2}+\rho^{5/3}\big{)}+\tfrac{1}{2}D(\rho-m_{N}^{\rm nuc},\rho-m_{N}^{ \rm nuc})\underbrace{-c\int_{\mathbb{R}^{3}}\rho^{4/3}}_{\text{Dirac exchange}},\]
where the additional term can be interpreted as a model for the exchange energy of the electrons. The additional challenge is that \(E^{\rm TFDW}\) is no longer convex. We are unaware of an in-depth treatment of this model, but refer to [14, Sec 3.6.3] for a discussion of possible avenues and [20] for results on the related Cauchy-Born scaling limit for this model.
_(5) Further orbital-free DFT models:_ Most orbital-free DFT models used in practical materials computations have more complicated functional forms for the kinetic energy and the exchange-correlation energy, see e.g. [43, 44]. It is shown in [4] that the Wang-Teter kinetic energy [43] is not bounded from below, and thus the thermodynamic limit is ill-posed. The more complicated density-dependent orbital-free kinetic-energy functionals, such as the Wang-Govind-Carter functional [44], are yet to be mathematically understood.
_(6) Charge screening:_ The stability result Lemma 2.3 clearly shows that the interaction in the TFW model is exponentially localised, _despite_ the presence of the long-range Coulomb interaction. This can be interpreted as a very general screening result. Consider two configurations \(\mathscr{R},\mathscr{R}_{*}\) satisfying (8), and which coincide outside a large ball of radius \(r>0\): \(\mathscr{R}\setminus B_{r}=\mathscr{R}_{*}\setminus B_{r}\). Let \((u,V^{\rm tot}),(u_{*},V_{*}^{\rm tot})\) be the corresponding solutions. Then Lemma 2.3 implies
\[|u({\bf r})-u_{*}({\bf r})|+|V^{\rm tot}({\bf r})-V_{*}^{\rm tot}({\bf r})| \leq C{\rm e}^{-\alpha|{\bf r}|}. \tag{12}\]
For instance, if \(\mathscr{R}_{*}\) contains more atoms than \(\mathscr{R}\), one could expect that the potentials satisfy \(V_{*}^{\rm tot}({\bf r})\approx V^{\rm tot}({\bf r})+Q/|{\bf r}|\) as \({\bf r}\to\infty\), where \(Q\) would be the extra effective charge. However, according to (12), this is not the case: the extra charge is completely screened. This is a very general fact in TFW theory, see [9, 35] for details.
One can also take into account the relaxation of the configuration \(\mathscr{R}_{*}\) due to the presence of the defect. In this case, instead of an exponential decay, we obtain (see [17])
\[\big{|}u({\bf r})-u_{*}({\bf r})\big{|}+\big{|}V^{\rm tot}({\bf r})-V_{*}^{ \rm tot}({\bf r})\big{|}\lesssim|{\bf r}|^{-2},\]
and, since \(|{\bf r}|^{-2}=o(|{\bf r}|^{-1})\), we deduce that charges are screened, see [9] and [35, Thm. 4.1] for the details.
### Scaling limit
A question related to but distinct from the thermodynamic limit arises when considering the derivation of a continuum model for elastic material response from an underlying electronic
structure model. Such scaling limits for the TFW model were first studied in [5], but the following discussion builds on the results of [35]. Specifically, the stability and locality estimate of Lemma 2.3 yields stronger and quantitative results. In addition, for the sake of consistency with the KS-DFT case in Section 4, we consider a periodic instead of an infinite-domain setting.
_Spatial decomposition of energy:_ In preparation, we first mention another useful consequence of the stability and locality estimate of Lemma 2.3, which is also of independent interest. Let \(\mathscr{R}\subset\mathbb{R}^{3}\) be a finite configuration of nuclei, or an infinite configuration satisfying (8), and let \((u,V^{\rm tot})\) be the associated solution to (6) (respectively (7)); we then define the energy density
\[\mathcal{E}(\mathscr{R};\cdot\,)=|\nabla u|^{2}+u^{10/3}+\frac{1}{8\pi}|\nabla V ^{\rm tot}|^{2}.\]
If \(\mathscr{R}\) is finite then one may readily check [35, Eq. 4.18] that \(I^{\rm TFW}(\mathscr{R})=\int_{\mathbb{R}^{3}}\mathcal{E}(\mathscr{R}; \mathbf{r})\,d\mathbf{r}.\) We may therefore think of
\[I^{\rm TFW}(\mathscr{R},\Omega):=\int_{\Omega}\mathcal{E}(\mathscr{R}; \mathbf{r})\,d\mathbf{r}\]
as the energy stored in a compact sub-domain \(\Omega\subset\mathbb{R}^{3}\). This intuition is further supported by the following result, proven in [35, Proof of Thm. 4.2], which is closely related to (9): there exist constants \(C,\alpha>0\) such that
\[\forall\mathbf{R}\in\mathscr{R},\ \forall\mathbf{r}\in\mathbb{R}^{3},\qquad \left|\frac{\partial\mathcal{E}(\mathscr{R};\mathbf{r})}{\partial\mathbf{R}} \right|\leq Ce^{-\alpha|\mathbf{r}-\mathbf{R}|}. \tag{13}\]
In [35, Sec. 4.4] this observation is used to demonstrate exponential locality of interatomic forces in the TFW model. In the following section we use it for an easy derivation of the Cauchy-Born scaling limit.
_The Cauchy-Born scaling limit:_ Consider a periodic arrangement \(\mathscr{R}=B\mathbb{Z}^{3}\) of nuclei, where \(B\in\mathbb{R}^{3\times 3}\) is a non-singular matrix, and let \((u,V^{\rm tot})\) describe the corresponding TFW ground state. Then, uniqueness of solutions to (7) implies that they must share the same periodicity. In particular, we can define the Cauchy-Born energy function, which represents the energy stored in \(\Omega_{B}:=B[0,1)^{3}\), that is
\[W^{\rm cb}(B):=I^{\rm TFW}(B\mathbb{Z}^{3},\Omega_{B}):=\int_{\Omega_{B}} \mathcal{E}(\mathscr{R};\mathbf{r})\,d\mathbf{r}.\]
A deformed configuration of the crystal is described by a continuum deformation field \(Y(\mathbf{x})=\mathbf{x}+U(\mathbf{x})\) where \(U\) is smooth and \(\mathscr{R}\)-periodic. We assume that \(U\) is chosen such that \(Y\) is bijective, _i.e._ a proper deformation.
Given a parameter \(\epsilon>0\) describing the inverse length-scale over which the deformation varies we define a deformed crystalline configuration by
\[\mathscr{R}^{\epsilon}:=\big{\{}Y_{\epsilon}(\mathbf{x}):=\epsilon^{-1}Y\big{(} \epsilon\mathbf{x}\big{)}\,|\,\mathbf{x}\in B\mathbb{Z}^{3}\big{\}}.\]
This definition encodes the _Cauchy-Born hypothesis_ that nuclei in a crystal follow the continuum deformation field. An example of such an atomistic configuration shadowing a continuum field is given in Figure 1(left). It must be emphasised that this is a simplifying assumption that is only approximately valid in specific deformation regimes and for simple crystals; see [22, 36] for in-depth discussions.
In the following we concern ourselves with the _scaling limit_ \(\epsilon\to 0\) of the stored elastic energy per unit undeformed volume. To that end, let \(\Omega_{\epsilon}:=\epsilon^{-1}\Omega\); then \(\mathscr{R}^{\epsilon}\) has periodic cell \(\Omega_{\epsilon}\), or alternatively \(Y_{\epsilon}(\Omega_{\epsilon})\). That is, we may write the elastic energy per unit volume as
\[E^{\epsilon}:=|\Omega_{\epsilon}|^{-1}I^{\mathrm{TFW}}(\mathscr{R}^{\epsilon},\Omega_{\epsilon})=\frac{1}{|\Omega_{\epsilon}|}\int_{\Omega_{\epsilon}}\mathcal{E}(\mathscr{R}^{\epsilon},\mathbf{r})\,d\mathbf{r}=\frac{\epsilon^{3}}{\det B}\int_{Y_{\epsilon}(\Omega_{\epsilon})}\mathcal{E}(\mathscr{R}^{\epsilon},\mathbf{r})\,d\mathbf{r}.\]
We remark that the electronic coordinates \(\mathbf{r}\) belong to the deformed space _i.e._ it is natural to write \(\mathbf{r}=Y_{\epsilon}(\mathbf{x})\).
The key observation now is that the locality estimate (9) on the electronic structure and its extension to the energy density (13) suggest that to predict the value of \(\mathcal{E}(\mathscr{R}^{\epsilon},\mathbf{r})\) for a non-uniform deformation varying at the macroscopic scale, it is not required to be aware of the global configuration \(\mathscr{R}^{\epsilon}\) but it is sufficient to know the _local deformation_ near \(\mathbf{r}\). This is illustrated by Figure 1.
To make this precise, let \(\mathbf{n}\in B\mathbb{Z}^{3}\), \(t_{\mathbf{n}}=Y_{\epsilon}(\mathbf{n})\) and \(F_{\mathbf{n}}=\nabla Y_{\epsilon}(\mathbf{n})\), then
\[Y_{\epsilon}(\mathbf{x})=t_{\mathbf{n}}+F_{\mathbf{n}}(\mathbf{x}-\mathbf{n}) +O(\epsilon),\]
for \(\mathbf{x}\) in any bounded neighbourhood of \(\mathbf{n}\). A Taylor expansion of \(\mathcal{E}\) with respect to \(\mathscr{R}^{\epsilon}\), employing (13), implies that for \(\mathbf{r}\in Y_{\epsilon}(\mathbf{n}+\Omega)\) we have
\[\mathcal{E}(\mathscr{R}^{\epsilon},\mathbf{r})=\mathcal{E}\big{(}F_{\mathbf{n}}\mathbb{Z}^{3},\mathbf{r}\big{)}+O(\epsilon).\]
Figure 1: Illustration of the scaling limit. Left: a deformation of a homogeneous crystal varying slowly relative to the scale of atoms; Center: the blow-up of a small section is nearly homogeneous; Right: The near-homogeneous section can be approximately represented by a single unit cell.
After integrating over one cell in the undeformed crystal, which in deformed coordinates becomes \(Y_{\epsilon}(\mathbf{n}+\Omega_{B})\), and making further elementary approximations, we obtain
\[\int_{Y_{\epsilon}(\mathbf{n}+\Omega_{B})}\mathcal{E}(\mathscr{R}^{\epsilon},\mathbf{r})\,d\mathbf{r}=W^{\mathrm{cb}}(F_{\mathbf{n}})+O(\epsilon).\]
Finally, after summing over all such unit cells, using the scale-invariance of the deformation gradient one obtains the following convergence result. The second order error \(O(\epsilon^{2})\) is obtained by a more careful exploitation of the point symmetry in simple crystal lattices.
**Theorem 2.5**.: _Let \(Y\in C^{4}_{\mathrm{per}}(\Omega)\), then_
\[E^{\epsilon}(Y)=\int_{\Omega}W^{\mathrm{cb}}(\nabla Y(\mathbf{x}))\,d\mathbf{ x}+O(\epsilon^{2})\qquad\text{as }\epsilon\to 0.\]
The presentation of this section follows unpublished notes. Related results using different techniques were first presented in [5, Theorem 5, case (i)]. The publication [5] also considers several generalisations, including domains with boundaries and alternative scaling regimes.
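In practical terms, Theorem 2.5 is what licenses replacing the atomistic energy by the continuum functional \(\int_{\Omega}W^{\mathrm{cb}}(\nabla Y)\). The sketch below is a hypothetical illustration (not code from [5] or [35]): it assumes a callable `W_cb`, which in an actual computation would be tabulated from unit-cell electronic structure calculations, and approximates the continuum integral by a midpoint rule with finite-difference deformation gradients.

```python
import numpy as np

def grad_Y(Y, xs, h=1e-5):
    """Central finite-difference Jacobian of the deformation Y at the points xs (shape (m, 3))."""
    F = np.empty((len(xs), 3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        F[:, :, j] = (Y(xs + e) - Y(xs - e)) / (2.0 * h)
    return F

def continuum_energy(Y, W_cb, n=16):
    """Midpoint-rule approximation of  int_Omega W_cb(grad Y(x)) dx  on Omega = [0, 1)^3."""
    t = (np.arange(n) + 0.5) / n
    xs = np.stack(np.meshgrid(t, t, t, indexing="ij"), axis=-1).reshape(-1, 3)
    return sum(W_cb(F) for F in grad_Y(Y, xs)) / len(xs)

# Illustrative inputs: a small periodic perturbation of the identity and a toy W_cb
# (a placeholder energy density, NOT the TFW Cauchy-Born function W^cb).
Y = lambda x: x + 0.01 * np.sin(2.0 * np.pi * x)
W_cb = lambda F: 0.5 * np.sum((F.T @ F - np.eye(3)) ** 2)
print(continuum_energy(Y, W_cb))
```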
## 3 The reduced Hartree-Fock model
We now focus on the reduced Hartree-Fock (rHF) model. In this model, a fermionic system with \(N\) electrons is described by a one-body density matrix \(\gamma\), which is a self-adjoint operator on \(L^{2}(\mathbb{R}^{3})\) satisfying the Pauli principle \(0\leq\gamma\leq 1\), and with trace \(N\).
Together with the spectral theorem, this implies that \(\gamma\) is of the form
\[\gamma=\sum_{i=1}^{\infty}n_{i}|\phi_{i}\rangle\langle\phi_{i}|,\quad\text{ with}\quad 1\geq n_{1}\geq n_{2}\geq\cdots\geq 0,\quad\text{and}\quad \operatorname{Tr}\left(\gamma\right)=\sum_{i=1}^{\infty}n_{i}=N.\]
Here, the functions \((\phi_{i})_{i}\) form an orthonormal basis of eigenfunctions in \(L^{2}(\mathbb{R}^{3})\) and are called the _orbitals_, and the numbers \(0\leq n_{i}\leq 1\) are the _occupation numbers_. To such a one-body density matrix, we can associate its density \(\rho_{\gamma}(\mathbf{r}):=\gamma(\mathbf{r},\mathbf{r})=\sum_{i}n_{i}|\phi_{i}|^{2}(\mathbf{r})\).
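As a toy illustration of these objects (the grid size, volume element and occupations below are arbitrary assumptions), one can represent the orbitals on a real-space grid and form the density \(\rho_{\gamma}\), checking that its integral recovers \(\operatorname{Tr}(\gamma)=\sum_{i}n_{i}\):

```python
import numpy as np

n_grid, dV = 1000, 1e-3                       # illustrative grid size and volume element
rng = np.random.default_rng(0)
phi = np.linalg.qr(rng.standard_normal((n_grid, 5)))[0] / np.sqrt(dV)  # 5 orthonormal orbitals
occ = np.array([1.0, 1.0, 0.7, 0.3, 0.0])     # occupation numbers in [0, 1]

rho = (occ * np.abs(phi) ** 2).sum(axis=1)    # rho_gamma(r) = sum_i n_i |phi_i(r)|^2
print(rho.sum() * dV, occ.sum())              # both equal Tr(gamma) = sum_i n_i
```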
In the potential generated by the nuclei at \(\mathscr{R}_{N}\subset\mathbb{R}^{3}\), the rHF energy of a state \(\gamma\) is
\[E^{\mathrm{rHF}}(\mathscr{R}_{N},\gamma):=\frac{1}{2}\mathrm{Tr}\left(-\Delta \gamma\right)+\frac{1}{2}D(\rho_{\gamma}-m_{N}^{\mathrm{nuc}},\rho_{\gamma}-m _{N}^{\mathrm{nuc}}), \tag{14}\]
where \(m_{N}^{\mathrm{nuc}}\) is the total nuclear density defined in (2). Compared with (3), we see that the kinetic energy part \(\int|\nabla\sqrt{\rho}|^{2}+\rho^{5/3}\) has been replaced by
\[\frac{1}{2}\mathrm{Tr}\left(-\Delta\gamma\right):=\frac{1}{2}\sum_{i=1}^{ \infty}n_{i}\|\nabla\phi_{i}\|_{L^{2}(\mathbb{R}^{3})}^{2}.\]
In particular, this model is no longer a function of the density \(\rho\), but of the one-body density matrix \(\gamma\). The energy of the configuration \(\mathscr{R}_{N}\) is given by the minimisation problem
\[I^{\mathrm{rHF}}(\mathscr{R}_{N}):=\inf\left\{E^{\mathrm{rHF}}(\mathscr{R}_{N },\gamma),\ 0\leq\gamma=\gamma^{*}\leq 1,\ \operatorname{Tr}\left(\gamma\right)=N\right\}. \tag{15}\]
Closely related to the rHF model is the Hartree-Fock (HF) model, in which the exchange term is included. This term is a correction to the direct Hartree energy, and is due to the fermionic nature of the particles. The HF model reads
\[E^{\rm HF}(\mathscr{R}_{N},\gamma):=E^{\rm rHF}(\mathscr{R}_{N}, \gamma)-\frac{1}{2}\iint_{(\mathbb{R}^{3})^{2}}\frac{|\gamma({\bf r},{\bf r}^{ \prime})|^{2}}{|{\bf r}-{\bf r}^{\prime}|}d{\bf r}d{\bf r}^{\prime}. \tag{16}\]
Since the HF model is not convex in \(\gamma\), only partial results are available for it, and most of the following facts hold only for the rHF model.
In the thermodynamic limit, we consider a regular periodic lattice \(\mathbb{L}\subset\mathbb{R}^{3}\), and a sequence of arrangements \(\mathscr{R}_{N}\subset\mathbb{L}\) with \(|\mathscr{R}_{N}|=N\), and satisfying (10). We want to study the energy per unit cell \(N^{-1}I^{\rm rHF}(\mathscr{R}_{N})\) as \(N\) goes to infinity.
The finite electron model (15) was introduced and studied by Solovej [41], where the existence of an optimiser \(\gamma_{N}^{0}\) is established. The thermodynamic limit was later studied by Catto, Le Bris and Lions in a series of papers. In [13], the authors announced their results, later proved in [15] (for the models presented here) and [16] (for the pure-state version of these problems, _i.e._ when \(\gamma\) is further constrained to be a rank-\(N\) projector). They prove the thermodynamic limit for the rHF model:
\[\lim_{N\to\infty}\frac{1}{N}I^{\rm rHF}(\mathscr{R}_{N})=W^{\rm rHF}_{\rm per}+\frac{\mathfrak{m}}{2}, \tag{17}\]
where \(W^{\rm rHF}_{\rm per}\) can be characterised by a periodic minimisation problem, which we describe in the next section, and \(\mathfrak{m}\) is the Madelung constant, see (18) below. They conjectured that a similar result should hold for the HF model. Finally, they proved that the limiting problem \(W^{\rm rHF}_{\rm per}\) (and its HF counterpart \(W^{\rm HF}_{\rm per}\)) is indeed well-posed. We discuss this point in the next section.
### The periodic model
In order to write the limiting periodic model, as introduced by Catto, Le Bris and Lions, we define the set of periodic one-body density matrices (recall that in our simple setting, we expect one electron per unit cell)
\[\mathcal{P}_{\rm per}:=\{0\leq\gamma=\gamma^{*}\leq 1,\ \forall\ell\in\mathbb{L}, \ \tau_{\ell}\gamma=\gamma\tau_{\ell},\ \underline{\rm Tr}\,\gamma=1\}\,,\]
where \(\tau_{\ell}\) is the translation operator \(\tau_{\ell}f({\bf x}):=f({\bf x}-\ell)\). Such a periodic density matrix has a \(\mathbb{L}\)-periodic density \(\rho_{\gamma}({\bf r}):=\gamma({\bf r},{\bf r})\). Its trace per unit cell \(\underline{\rm Tr}\,\gamma\) is defined by
\[\underline{\rm Tr}\,\gamma:={\rm Tr}\ (\mathds{1}_{\Omega}\gamma\mathds{1}_{ \Omega})=\int_{\Omega}\rho_{\gamma},\]
where \(\Omega\) is a unit cell associated to the lattice \(\mathscr{R}=\mathbb{L}\); in particular, \(\rho_{\gamma}\in L^{1}_{\rm per}(\Omega)\). We let \(G\) be the \(\mathbb{L}\)-periodic Coulomb kernel, solution to
\[-\Delta G=4\pi\left(\sum_{{\bf R}\in\mathbb{L}}\delta_{{\bf R}}-|\Omega|^{-1}\right),\quad\text{and}\quad\int_{\Omega}G=0,\]
and we introduce the periodic Coulomb quadratic form \(D_{\rm per}(\cdot,\cdot)\) defined for periodic functions by (compare with (4))
\[D_{\rm per}(f,g):=\iint_{(\Omega)^{2}}f({\bf r})g({\bf r}^{\prime})G({\bf r}-{ \bf r}^{\prime})d{\bf r}d{\bf r}^{\prime}.\]
The Madelung constant appearing in (17) is defined to be
\[{\mathfrak{m}}:=\lim_{{\bf r}\to{\bf 0}}\left(G({\bf r})-\frac{1}{|{\bf r}|} \right). \tag{18}\]
Note that since \(F({\bf r}):=G({\bf r})-\frac{1}{|{\bf r}|}\) satisfies \(\Delta F=4\pi|\Omega|^{-1}\) in a neighbourhood of the origin, the function \(F\) is smooth there, hence has a well-defined value at \({\bf r}={\bf 0}\). The Madelung constant quantifies the mismatch between the full space Coulomb kernel \(|{\bf r}|^{-1}\) and the periodic one \(G({\bf r})\) (which is _a priori_ only defined up to a constant).
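In reciprocal space, \(G\) has Fourier coefficients \(4\pi/|\mathbf{k}|^{2}\) on the nonzero reciprocal lattice vectors (the \(\mathbf{k}=\mathbf{0}\) coefficient vanishes since \(\int_{\Omega}G=0\)), so \(D_{\rm per}\) is conveniently evaluated by FFT. The following sketch is our own illustration for a cubic cell of side \(L\); the discretisation is an assumption and the routine is not taken from the cited references.

```python
import numpy as np

def D_per(f, g, L):
    """Periodic Coulomb quadratic form D_per(f, g) on a cubic cell of side L, sampled on a grid.
    Uses D_per(f, g) = (4*pi/|Omega|) * sum_{k != 0} conj(fhat(k)) * ghat(k) / |k|^2."""
    n = f.shape[0]
    dV = (L / n) ** 3
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    K2 = KX**2 + KY**2 + KZ**2
    fh = np.fft.fftn(f) * dV                   # approximates int_Omega f(r) exp(-i k.r) dr
    gh = np.fft.fftn(g) * dV
    w = np.zeros_like(K2)
    w[K2 > 0] = 4.0 * np.pi / K2[K2 > 0]       # dropping k = 0 matches int_Omega G = 0
    return float(np.real(np.sum(np.conj(fh) * gh * w)) / L**3)
```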
With these notations, the limit \(W^{\rm rHF}_{\rm per}\) for the perfect crystal is defined as the minimisation problem [15]
\[W^{\rm rHF}_{\rm per}:=\inf\left\{E^{\rm rHF}_{\rm per}({\mathbb{L}},\gamma), \ \gamma\in{\cal P}_{\rm per}\right\}, \tag{19}\]
where the energy per unit cell \(E^{\rm rHF}_{\rm per}\) is
\[E^{\rm rHF}_{\rm per}({\mathbb{L}},\gamma):=\frac{1}{2}\underline{\rm Tr} \,(-\Delta\gamma)+\frac{1}{2}D_{\rm per}(\rho_{\gamma}-m^{\rm nuc}_{\rm per}, \rho_{\gamma}-m^{\rm nuc}_{\rm per}), \tag{20}\]
and where \(m^{\rm nuc}_{\rm per}({\bf x}):=\sum_{\ell\in{\mathbb{L}}}m_{\rm a}({\bf x}-\ell)\) is the periodic nuclear density. Comparing this expression with (14), we see that all terms have been "normalised" to take into account the periodicity of the infinite system.
The fact that \(W^{\rm rHF}_{\rm per}\) is a well-posed problem was proved by Catto, Le Bris and Lions in [15]. Later in [7], Cances, Deleurence and Lewin proved that the minimiser \(\gamma\) satisfies the Euler-Lagrange equations
\[\gamma={\mathds{1}}\left(H_{\gamma}\leq\varepsilon_{F}\right),\quad H_{\gamma }:=-\frac{1}{2}\Delta+(\rho_{\gamma}-m^{\rm nuc}_{\rm per})*G.\]
Here, \(\varepsilon_{F}\in{\mathbb{R}}\) is the _Fermi level_. The operator \(H_{\gamma}\) is the mean-field one-body Hamiltonian of the crystal, which is a self-adjoint operator that commutes with \({\mathbb{L}}\)-translations. Its spectral properties are well understood thanks to the Bloch transform [37, Chapter XIII], and its spectrum is composed of bands and gaps. When \(\varepsilon_{F}\) is in a gap, the crystal is an insulator, while when \(\varepsilon_{F}\) is in a band, it is a metal.
### Supercell methods, and periodic thermodynamic limit
Once the periodic problem (19) has been written and justified, it is possible to understand its properties from other approaches. In [7] (see also [18]), Cances, Deleurence and Lewin proved that this problem can also be obtained through another thermodynamic limit procedure. Their idea was to start directly with a periodic problem on the large supercell \(\Omega_{L}:=L\Omega\) with \(N=L^{3}\) electrons, and
take the limit \(L\to\infty\). In other words, instead of working with one-body density matrices \(\gamma\) acting on the whole space \(L^{2}(\mathbb{R}^{3})\), they looked at one-body density matrices acting on the _supercell_ space \(L^{2}_{\rm per}(\Omega_{L})\). We therefore define
\[\mathcal{P}^{L}_{\rm per}:=\left\{\gamma\text{ acting on }L^{2}_{\rm per}(\Omega_{L}), \ 0\leq\gamma=\gamma^{*}\leq 1,\ \operatorname{Tr}_{L}\gamma=L^{3}\right\},\]
where we set for simplicity \(\operatorname{Tr}_{L}:=\operatorname{Tr}_{\mathcal{S}(L^{2}_{\rm per}( \Omega_{L}))}\). A one-body operator \(\gamma\in\mathcal{P}^{L}_{\rm per}\) has an \(L\mathbb{L}\)-periodic density \(\rho_{\gamma}\in L^{1}_{\rm loc}\). We also define the \(L\mathbb{L}\)-periodic Coulomb kernel as \(G_{L}(\mathbf{x}):=L^{-1}G(L^{-1}\mathbf{x})\), and the \(L\)-periodic Coulomb quadratic form defined for \(L\mathbb{L}\)-periodic functions by
\[D_{L}(f,g):=\iint_{(\Omega_{L})^{2}}f(\mathbf{r})g(\mathbf{r}^{\prime})G_{L}(\mathbf{r}-\mathbf{r}^{\prime})d\mathbf{r}d\mathbf{r}^{\prime}.\]
The supercell model is given by a periodic minimisation problem of the form
\[I^{\rm rHF}_{{\rm per},L}(\mathscr{R}^{L}):=\inf\left\{E^{\rm rHF}_{{\rm per}, L}(\mathscr{R}^{L},\gamma),\ \gamma\in\mathcal{P}^{L}_{\rm per}\right\}, \tag{21}\]
with the supercell energy
\[E^{\rm rHF}_{{\rm per},L}(\mathscr{R}^{L},\gamma):=\frac{1}{2}\text{Tr}_{L} \left(-\Delta_{L}\gamma\right)+\frac{1}{2}D_{L}(\rho_{\gamma}-m_{L}^{\rm nuc}, \rho_{\gamma}-m_{L}^{\rm nuc}). \tag{22}\]
Here, \(\mathscr{R}^{L}\) is an \(L\mathbb{L}\)-periodic arrangement (for instance \(\mathscr{R}^{L}=\mathbb{L}\), or a deformation of it, see below), and \(m_{L}^{\rm nuc}\) is the nuclear density \(m_{L}^{\rm nuc}:=\sum_{\mathbf{R}\in\mathscr{R}^{L}}m_{\rm a}(\mathbf{x}-\mathbf{R})\). In the case \(\mathscr{R}^{L}=\mathbb{L}\), there are \(L^{3}\) nuclei and electrons per supercell.
Even in the perfect crystal case, that is when \(\mathscr{R}^{L}=\mathbb{L}\), the problems (19)-(20) and (21)-(22) differ. In (19), the minimisation is performed over \(\gamma\) acting on the whole space \(L^{2}(\mathbb{R}^{3})\), while in (21), it is performed over \(\gamma\) acting on the supercell space \(L^{2}_{\rm per}(\Omega_{L})\). These two types of operators cannot be compared, and it is not obvious _a priori_ that there is a link between the two problems. Still, both operators give \(\mathbb{L}\)-periodic densities, which can be compared. This important fact allows one to prove the convergence [7]
\[\lim_{L\to\infty}\frac{1}{L^{3}}I^{\rm rHF}_{{\rm per},L}(\mathbb{L})=W^{\rm rHF }_{\rm per}. \tag{23}\]
The result was later refined in [25], where the authors proved that, in the insulating case (see [10] for the metallic case), the convergence is exponential, in the sense that there exist constants \(C\in\mathbb{R}\) and \(\alpha>0\) such that
\[\left|W^{\rm rHF}_{\rm per}-\frac{1}{L^{3}}I^{\rm rHF}_{{\rm per},L}(\mathbb{L})\right|+\|\rho^{0}_{\rm per}-\rho^{0}_{{\rm per},L}\|_{L^{\infty}}\leq C\mathrm{e}^{-\alpha L}, \tag{24}\]
where \(\rho^{0}_{\rm per}\) and \(\rho^{0}_{{\rm per},L}\) are the electronic densities of the periodic and supercell minimisers respectively, seen here as \(\mathbb{L}\)-periodic functions. This exponential convergence comes from the analyticity of the Bloch representation.
This means that the full space problem \(W^{\rm rHF}_{\rm per}\) can be well-approximated by the supercell model \(I^{\rm rHF}_{{\rm per},L}\). This latter problem can be studied efficiently from a numerical point of view, thanks to the Bloch transform. As noticed in [25], the supercell model corresponds exactly to a uniform discretisation of the Brillouin zone, as described in a famous paper by Monkhorst [34]. For metallic systems, the exact rate of convergence is unknown in the general case (see [10] for details).
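The correspondence just mentioned means that an \(L\times L\times L\) supercell calculation samples the Brillouin zone on a uniform \(L\times L\times L\) grid. A minimal sketch of such a grid in reduced coordinates (our own illustration, written for a cubic cell) is:

```python
import numpy as np

def kpoint_grid(L):
    """Uniform L x L x L sampling of the Brillouin zone in reduced coordinates in [-1/2, 1/2).
    This Gamma-centred grid is the sampling implicitly performed by an L x L x L supercell;
    the shifted Monkhorst-Pack grid is a common variant."""
    frac = np.fft.fftfreq(L)                   # {0, 1/L, ...} folded into [-1/2, 1/2)
    return np.stack(np.meshgrid(frac, frac, frac, indexing="ij"), -1).reshape(-1, 3)

kpts = kpoint_grid(4)                          # 64 k-points, each with weight 1/64
```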
**Remark 3.1**.: When studying supercell methods for non convex problems, symmetry breaking may happen (see _e.g._[38, 27]). In this case, the density of the \(L\mathbb{L}\)-periodic problem may not be \(\mathbb{L}\)-periodic, and the periodic problem may not be the limit of supercell models.
To sum up, the energy per unit cell \(W^{\mathrm{rHF}}_{\mathrm{per}}\) is the limit of two different sequences, namely
\[W^{\mathrm{rHF}}_{\mathrm{per}}=\lim_{N\to\infty}\left(\frac{1}{N}I^{\mathrm{ rHF}}(\mathscr{R}_{N})\right)-\frac{\mathfrak{m}}{2},\quad\text{and}\quad W^{ \mathrm{rHF}}_{\mathrm{per}}=\lim_{L\to\infty}\left(\frac{1}{L^{3}}I^{\mathrm{ rHF}}_{\mathrm{per},L}(\mathbb{L})\right).\]
In the first limit, the crystal is seen as the limit of finite systems. This is the correct physical limit, as a real crystal is indeed always finite. However, we expect the convergence to be slow, due to boundary effects. On the other hand, the second limit has no real physical meaning, but gives an exponential rate of convergence in the insulating case.
### Local defects in crystals, in the reduced Hartree-Fock model
We now discuss how to define the energy of a defect inside a crystal. We would like to define this energy as the difference between the energy of a crystal with a defect, and the energy of the crystal without the defect. Unfortunately, these two quantities are infinite. Also, the model with defect does not have an underlying periodicity, hence there is no notion of _energy per unit cell_ in this case. One way to define the energy of a defect is through a thermodynamic limit procedure.
Let \(\mathscr{R}:=\mathbb{L}\) be the arrangement of nuclei for the perfect crystal, and let \(\mathscr{R}_{*}\) be the one for the crystal with (local) defect, that is such that \(\mathscr{R}\) and \(\mathscr{R}_{*}\) coincide outside a ball of radius \(r>0\). The nuclear charge of the defect is therefore
\[\nu:=m_{*}^{\mathrm{nuc}}-m^{\mathrm{nuc}}=\sum_{\mathbf{R}\in\mathscr{R}_{*} \setminus\mathscr{R}}m_{\mathrm{a}}(\cdot-\mathbf{R})-\sum_{\mathbf{R}\in \mathscr{R}\setminus\mathscr{R}_{*}}m_{\mathrm{a}}(\cdot-\mathbf{R}).\]
For \(L\in\mathbb{N}^{*}\), we can consider \(\mathscr{R}_{*}^{L}\), the \(L\mathbb{L}\)-periodic arrangement which equals \(\mathscr{R}_{*}\) on \(\Omega_{L}\) (note that \(\mathscr{R}^{L}=\mathscr{R}=\mathbb{L}\)). In [7], the authors consider the supercell energy of the defect \(\nu\), defined by
\[J_{L}(\nu):=I^{\mathrm{rHF}}_{\mathrm{per},L}(\mathscr{R}_{*}^{L})-I^{\mathrm{ rHF}}_{\mathrm{per},L}(\mathbb{L}).\]
Here there is a slight complication: since we do not know _a priori_ how many electrons should be in the system, we should not fix the number of electrons, but rather the Fermi level (grand canonical ensemble). We do not comment on this point to keep this presentation simple, and refer to [7] for details.
Although the two quantities \(I^{\mathrm{rHF}}_{\mathrm{per},L}(\mathscr{R}_{*}^{L})\) and \(I^{\mathrm{rHF}}_{\mathrm{per},L}(\mathbb{L})\) diverge to infinity with rate \(O(L^{3})\), their difference stays finite in the limit, and we can define the energy of the defect as
\[J_{\infty}(\nu):=\lim_{L\to\infty}J_{L}(\nu).\]
In [26], the authors prove that the corresponding rate of convergence is \(O(L^{-1})\). This slow rate of convergence is due to the spurious interaction between the defect and its periodic
images, a fact predicted in [29, 33]. This makes the supercell method quite a poor numerical method in this case.
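Given this \(O(L^{-1})\) behaviour, a common practical remedy is to compute \(J_{L}(\nu)\) for several supercell sizes and extrapolate in \(1/L\). The sketch below is a hypothetical illustration: `supercell_defect_energy` is a placeholder for an actual rHF/DFT supercell solver, and the sizes and fitting order are arbitrary choices.

```python
import numpy as np

def extrapolate_defect_energy(sizes, J_values, order=1):
    """Least-squares fit of J_L ~ J_inf + c_1/L + ... + c_order/L^order; returns J_inf."""
    A = np.vander(1.0 / np.asarray(sizes, dtype=float), order + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(J_values, dtype=float), rcond=None)
    return coeffs[0]

# Hypothetical usage (supercell_defect_energy is a placeholder, not a real routine):
# J_values = [supercell_defect_energy(L) for L in (4, 6, 8, 12)]
# J_inf = extrapolate_defect_energy((4, 6, 8, 12), J_values, order=2)
```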
It turns out that the limit \(J_{\infty}(\nu)\) can be characterised as a minimisation problem on a set of "defect" operators. It is unclear whether this last problem could be tackled directly with efficient numerical methods (see also [8]).
## 4 Scaling limit for Kohn-Sham DFT
In this section, we discuss Kohn-Sham models. For a finite system with \(N\) electrons described by a one-body density matrix \(\gamma\), the Kohn-Sham density functional takes the form
\[\mathcal{E}^{\mathrm{KS}}\big{(}\mathscr{R}_{N},\gamma\big{)}:=\frac{1}{2} \mathrm{Tr}\,(-\Delta\gamma)+\frac{1}{2}D(\rho_{\gamma}-m_{N}^{\mathrm{nuc}}, \rho_{\gamma}-m_{N}^{\mathrm{nuc}})+E_{\mathrm{xc}}[\rho_{\gamma}], \tag{25}\]
where \(m_{N}^{\mathrm{nuc}}\) is the total nuclear density defined in (2). Compared with the reduced Hartree-Fock model (14), the Kohn-Sham model includes the exchange-correlation energy \(E_{\mathrm{xc}}[\rho_{\gamma}]\), where we have adopted the notation of an LDA or GGA type functional, so that it can be written explicitly in terms of \(\rho_{\gamma}\) as
\[\mathrm{(LDA)}\qquad E_{\mathrm{xc}}[\rho_{\gamma}]=\int_{\mathbb{ R}^{3}}\epsilon_{\mathrm{xc}}(\rho_{\gamma}(\mathbf{r}))d\mathbf{r},\qquad \mathrm{or}\] \[\mathrm{(GGA)}\qquad E_{\mathrm{xc}}[\rho_{\gamma}]=\int_{ \mathbb{R}^{3}}\epsilon_{\mathrm{xc}}\big{(}\rho_{\gamma}(\mathbf{r}),\big{|} \nabla\sqrt{\rho_{\gamma}(\mathbf{r})}\big{|}^{2}\big{)}d\mathbf{r}.\]
Even the simplest exchange-correlation functionals used in practice have complicated expressions, and hence will not be given explicitly here. Most of them are non-convex in \(\rho\), as is, for instance, the Dirac exchange term \(E_{\mathrm{x}}^{\mathrm{Dirac}}[\rho]=-C_{D}\int\rho(\mathbf{x})^{4/3}d\mathbf{x}\). Thus, even the existence of a minimiser for the Kohn-Sham DFT problem becomes a difficult question. The existence of minimisers for LDA functionals has been proved in [28, 1, 24], while for GGA type functionals it remains open, with only preliminary results available (see the case of \(N=1\) in [1]).
We will henceforth assume LDA type exchange-correlation functionals. The variation of the functional (25) gives rise to the Kohn-Sham equations for \(\gamma=\sum_{i=1}^{N}|\psi_{i}\rangle\langle\psi_{i}|\), where the Kohn-Sham orbitals \(\psi_{i}\) are solutions to
\[H^{\mathrm{KS}}[\rho_{\gamma}]\psi_{i}=E_{i}\psi_{i}\quad\mathrm{where}\quad H ^{\mathrm{KS}}[\rho]:=-\frac{1}{2}\Delta+V_{\mathrm{H}}[\rho]+V_{\mathrm{xc} }[\rho] \tag{26}\]
with \(\rho_{\gamma}\) the density associated with \(\gamma\) and the Hartree and exchange-correlation potentials respectively given by
\[V_{\mathrm{H}}[\rho]=(\rho-m_{N}^{\mathrm{nuc}})*\frac{1}{|\cdot|}\quad\mathrm{ and}\quad V_{\mathrm{xc}}[\rho]:=\epsilon_{\mathrm{xc}}^{\prime}(\rho(\cdot)).\]
The Kohn-Sham equations (26) form a set of nonlinear eigenvalue problems, as the effective Hamiltonian operator \(H^{\mathrm{KS}}[\rho_{\gamma}]\) depends on the solution \(\gamma\). We remark that in general there is no guarantee that the Kohn-Sham orbitals of the minimisers of (25) correspond to the
lowest \(N\) eigenvalues of the self-consistent Hamiltonian, though in practice this is often assumed and known as the _Aufbau principle_.
Due to the non-convexity and hence possible symmetry breaking, see Remark 3.1, the thermodynamic limit of Kohn-Sham DFT with exchange-correlation functionals is very challenging and not much progress has been made.
To understand the behaviour of electronic structure in materials, we take a typical starting point of modelling in materials science: the periodic Kohn-Sham model with supercell \(\Omega\). This can be formulated in terms of the density matrix, similarly to the periodic reduced Hartree-Fock model discussed in Section 3.1. A periodic Kohn-Sham energy is of the form
\[E^{\rm KS}_{\rm per}(\mathbb{L},\gamma)=E^{\rm rHF}_{\rm per}(\mathbb{L}, \gamma)+E^{\rm xc}_{\rm per}[\rho_{\gamma}],\]
where the rHF energy \(E^{\rm rHF}_{\rm per}\) was defined in (20). This is the rHF model with the addition of a periodic exchange-correlation energy. One could follow the lines of Section 3.1 to study the thermodynamic limit. Alternatively, one can use the following reformulation, introduced in [21], which we now describe.
The self-consistent Kohn-Sham eigenvalue problem (26) can be reformulated as a fixed point equation for the density
\[\rho({\bf r})=\mathcal{F}^{\rm KS}[\rho]({\bf r}):=\left[\frac{1}{2\pi{\rm i} }\oint_{\mathscr{C}}\bigl{(}\lambda-H^{\rm KS}[\rho]\bigr{)}^{-1}d\lambda \right]({\bf r},{\bf r}), \tag{27}\]
where \(\mathscr{C}\) is a contour in the resolvent set separating the first \(N\) eigenvalues of \(H^{\rm KS}\) from the rest of the spectrum (assuming a spectral gap). The right hand side of (27) denotes the diagonal of the kernel of the density matrix viewed as an integral operator.
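The fixed point formulation (27) directly suggests a self-consistent field (SCF) iteration with mixing. The following schematic, finite-dimensional sketch is our own illustration and is not taken from [21]: the Hamiltonian is a plain matrix, the spectral projector onto the lowest \(N\) eigenvalues replaces the contour integral, and the Dirac exchange stands in for the exchange-correlation potential. The kinetic matrix `T`, the callable `v_hartree`, the background density `m_nuc` and the mixing parameter are assumed inputs.

```python
import numpy as np

C_D = 0.75 * (3.0 / np.pi) ** (1.0 / 3.0)        # Dirac exchange constant

def density_from_hamiltonian(H, N, dV):
    """Diagonal of the spectral projector onto the lowest N eigenvalues of H,
    i.e. the contour integral in (27) evaluated by diagonalisation."""
    _, U = np.linalg.eigh(H)
    occ = U[:, :N]                               # lowest-N eigenvectors (= orbitals * sqrt(dV))
    return (occ**2).sum(axis=1) / dV

def scf(T, v_hartree, m_nuc, N, dV, alpha=0.3, tol=1e-8, maxiter=200):
    """Damped fixed-point iteration rho <- (1 - alpha) * rho + alpha * F_KS[rho]."""
    rho = np.full(T.shape[0], N / (T.shape[0] * dV))          # uniform initial guess
    for _ in range(maxiter):
        V = v_hartree(rho - m_nuc) - (4.0 / 3.0) * C_D * rho ** (1.0 / 3.0)
        rho_new = density_from_hamiltonian(T + np.diag(V), N, dV)
        if np.max(np.abs(rho_new - rho)) < tol:
            return rho_new
        rho = (1.0 - alpha) * rho + alpha * rho_new
    return rho
```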
### Periodic Kohn-Sham DFT model
For a periodic system with Bravais lattice \(\mathbb{L}\), we can write a similar equation. We introduce the periodic Kohn-Sham Hamiltonian associated with some \(\mathbb{L}\)-periodic density \(\rho\), given by
\[H^{\rm KS}_{\rm per}[\rho]=-\frac{1}{2}\Delta+V_{\rm H,per}[\rho]+V_{\rm xc}[ \rho],\]
where the periodic Hartree potential solves
\[-\Delta V_{\rm H,per}[\rho]=4\pi(\rho-m^{\rm nuc})\]
with periodic boundary conditions, where \(m^{\rm nuc}\) is understood as a background charge density given by the nuclei (to be specified below). As the potential is periodic, the Bloch-Floquet theory applies to the Hamiltonian. In particular, the spectrum of \(H^{\rm KS}_{\rm per}[\rho]\) has a band structure. For each \({\bf k}\in\Omega^{*}\), the first Brillouin zone, the Bloch waves solve the eigenvalue problem
\[\Biggl{(}\frac{1}{2}\bigl{(}-{\rm i}\nabla+{\bf k}\bigr{)}^{2}+V_{\rm H,per}[ \rho]+V_{\rm xc}[\rho]\Biggr{)}u_{n,{\bf k}}({\bf x})=E_{n,{\bf k}}u_{n,{\bf k }}({\bf x}),\quad\forall n=1,2,\ldots,\]
with periodic boundary condition on \(\Omega\). The spectrum is given by
\[\sigma(H^{\rm KS}_{\rm per}[\rho])=\bigcup_{n}\bigcup_{{\bf k}\in\Omega^{*}}E_{n,{ \bf k}}.\]
This is known as the band structure, see Figure 2 for an illustration.
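For a fixed periodic potential, each Bloch eigenvalue problem above is a small matrix diagonalisation in a plane-wave basis, and sweeping \({\bf k}\) over the Brillouin zone traces out the bands. A minimal 1D sketch is given below; the cosine potential, its strength and the plane-wave cutoff are illustrative assumptions.

```python
import numpy as np

a_lat = 2.0 * np.pi        # lattice constant of a 1D toy crystal (illustrative)
V0 = 0.5                   # strength of the periodic potential V(x) = V0 cos(2*pi*x/a_lat)
m_cut = 8                  # plane-wave cutoff: G = 2*pi*m/a_lat with |m| <= m_cut

ms = np.arange(-m_cut, m_cut + 1)
G = 2.0 * np.pi * ms / a_lat

def bands(k, n_bands=5):
    """Lowest eigenvalues E_{n,k} of 1/2 (-i d/dx + k)^2 + V(x) in the plane-wave basis."""
    H = np.diag(0.5 * (k + G) ** 2)
    # V0*cos(2*pi*x/a_lat) couples plane waves whose indices differ by 1, with amplitude V0/2
    for i, mi in enumerate(ms):
        for j, mj in enumerate(ms):
            if abs(mi - mj) == 1:
                H[i, j] += V0 / 2.0
    return np.linalg.eigvalsh(H)[:n_bands]

ks = np.linspace(-np.pi / a_lat, np.pi / a_lat, 51)   # first Brillouin zone
band_structure = np.array([bands(k) for k in ks])
print("gap between bands 1 and 2:", band_structure[:, 1].min() - band_structure[:, 0].max())
```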
If the first \(N\) bands are occupied and there exists a gap between the occupied and unoccupied spectrum (in physical terms, the system is an insulator), the Kohn-Sham map can be generalised to the periodic setting as
\[{\cal F}^{\rm KS}_{\rm per}[\rho]({\bf x}):=\Bigg{[}\frac{1}{2\pi{\rm i}}\oint_{ \mathscr{C}}\bigl{(}\lambda-H^{\rm KS}_{\rm per}[\rho]\bigr{)}^{-1}d\lambda \Bigg{]}({\bf x},{\bf x}), \tag{28}\]
where the contour \(\mathscr{C}\) lies in the resolvent set and separates the occupied and unoccupied spectra. For the periodic Kohn-Sham model, we thus recast the problem as a fixed point equation
\[\rho={\cal F}^{\rm KS}_{\rm per}[\rho]. \tag{29}\]
Using the electron density \(\rho\) as the basic variable is more convenient in studying the scaling limit than the Kohn-Sham orbitals (Bloch waves), which will be discussed next.
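Recasting the problem as the fixed point equation (29) suggests the standard self-consistent field (SCF) iteration: evaluate the Kohn-Sham map at the current density and mix the output back in. A schematic damped-iteration loop is sketched below; the map `ks_map` is a placeholder for any concrete discretisation (for example the one sketched after Eq. (27)), and the mixing parameter is an assumption.

```python
import numpy as np

def scf_fixed_point(ks_map, rho0, mixing=0.3, tol=1e-8, max_iter=200):
    """Solve rho = ks_map(rho) by damped (linear-mixing) fixed-point iteration.

    ks_map : callable returning the Kohn-Sham density for a given input density
    rho0   : initial guess for the density (1D numpy array)
    """
    rho = rho0.copy()
    for it in range(max_iter):
        rho_out = ks_map(rho)
        residual = np.linalg.norm(rho_out - rho)
        if residual < tol:
            return rho, it
        rho = (1.0 - mixing) * rho + mixing * rho_out   # simple linear mixing
    raise RuntimeError("SCF iteration did not converge")

# usage sketch: rho_star, n_iter = scf_fixed_point(my_ks_map, rho_guess)
```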
Figure 2: Schematic band structure of crystalline silicon, along various lines connecting high-symmetry points in the first Brillouin zone \(\Omega^{*}\). The first 4 bands are occupied and separated by a band gap from the higher bands.
### Scaling limit for the periodic model
Starting from the fixed point equation (29), valid for instance for a periodic configuration \(\mathbb{L}\), we would like to find other solutions when the crystal is deformed.
We consider the electronic structure of an elastically deformed system in the scaling limit where the lattice parameter goes to \(0\). To set up the atomic configuration, we assume a lattice structure for the undeformed system. The atoms are located at \(\varepsilon\mathbb{L}\), where \(\mathbb{L}\) is a Bravais lattice, and the lattice parameter \(\varepsilon\) will serve as the scaling parameter in the limit, which can be understood as the ratio of the lattice parameter and the characteristic length scale of the system.
For simplicity, we assume that \(\Omega\) coincides with the unit cell of \(\mathbb{L}\). Thus, in \(\Omega\), atoms are located at \(\Omega\cap\varepsilon\mathbb{L}\). The system consists of \(\varepsilon^{-3}\) atoms and correspondingly \(N\varepsilon^{-3}\) electrons where \(N\) is the number of valence electrons per atom.
Fix a smooth function \(u:\mathbb{R}^{3}\to\mathbb{R}^{3}\) of the form \(u(\mathbf{x})=B\mathbf{x}+u_{\mathrm{per}}(\mathbf{x})\), where \(B\) is a \(3\times 3\) matrix and \(u_{\mathrm{per}}\) is periodic with respect to \(\Omega\). The deformed atom locations are
\[\mathbf{Y}_{i}^{\varepsilon}=\mathbf{X}_{i}^{\varepsilon}+u(\mathbf{X}_{i}^{ \varepsilon}),\qquad\mathbf{X}_{i}^{\varepsilon}\in\varepsilon\mathbb{L}. \tag{30}\]
Correspondingly the background charge distribution is given by
\[m^{\mathrm{nuc},\varepsilon}(\mathbf{y})=\sum_{\mathbf{X}_{i}\in\mathbb{L}}m_ {\mathrm{a}}^{\varepsilon}(\mathbf{y}-\mathbf{Y}_{i}^{\varepsilon}), \tag{31}\]
where \(m_{\mathrm{a}}^{\varepsilon}\) is the rescaled version of the charge contribution from each individual atom (recall that the lattice parameter is scaled to \(\varepsilon\)):
\[m_{\mathrm{a}}^{\varepsilon}=\varepsilon^{-3}m_{\mathrm{a}}(\cdot/\varepsilon). \tag{32}\]
As the lattice parameter is scaled to be \(\varepsilon\), the Kohn-Sham Hamiltonian needs to be rescaled correspondingly as
\[H_{\varepsilon,u}^{\mathrm{KS}}[\rho]=-\frac{\varepsilon^{2}}{2}\Delta+V_{ \mathrm{H}}^{\varepsilon}[\rho]+V_{\mathrm{xc}}[\rho], \tag{33}\]
where the Hartree potential \(V_{\mathrm{H}}^{\varepsilon}\) solves
\[-\Delta V_{\mathrm{H}}^{\varepsilon}=4\pi\varepsilon(\rho-m^{\mathrm{nuc}, \varepsilon}). \tag{34}\]
Thus the electron density of the deformed system is determined by the fixed point of the Kohn-Sham map
\[\rho(\mathbf{r})=\mathcal{F}_{\varepsilon,u}^{\mathrm{KS}}[\rho](\mathbf{r}):=\left[\frac{1}{2\pi\mathrm{i}}\oint_{\mathscr{C}}\bigl{(}\lambda-H_{\varepsilon,u}^{\mathrm{KS}}[\rho]\bigr{)}^{-1}d\lambda\right]\!(\mathbf{r},\mathbf{r}). \tag{35}\]
In order to make sense of the Kohn-Sham map defined in (27) and (35), we require a gap between the occupied and unoccupied spectrum, and thus we make the following assumption for the undeformed system. The gap of the effective Hamiltonian of the perturbed system follows from a perturbation argument.
**Assumption 4.1** (Insulating undeformed system).: There exists a \(\Omega\)-periodic \(\rho_{0}\in C^{\infty}(\mathbb{R}^{3})\), that is positive and uniformly bounded away from zero, such that
* The spectrum of the Hamiltonian \(H^{\rm KS}_{\varepsilon=1,0}[\rho_{0}]\) has a positive gap between the occupied and unoccupied spectra.
* \(\rho_{0}\) is a fixed point of the Kohn-Sham map: \[\rho_{0}({\bf r})=\mathcal{F}^{\rm KS}_{\varepsilon=1,0}[\rho_{0}]({\bf r})= \Bigg{[}\frac{1}{2\pi{\rm i}}\oint_{\mathscr{C}}\bigl{(}\lambda-H^{\rm KS}_{ \varepsilon=1,0}[\rho_{0}]\bigr{)}^{-1}d\lambda\Bigg{]}({\bf r},{\bf r}),\] (36) where \(\mathscr{C}\) is a contour in the resolvent set enclosing the occupied spectrum.
### Cauchy-Born rule for electronic structure
The question of the scaling limit is to characterise the electron density, as a solution to the Kohn-Sham equation (35), when the deformation is elastic in the sense that the deformation gradient is not too large. This is motivated by the Cauchy-Born rule for passing from atomistic models to elastic models, where the analogous question for DFT is to pass from electronic structure models to continuum elastic models. The scaling limit for Thomas-Fermi-von Weizsacker model was studied by Blanc, Le Bris and Lions in [5], see Section 2.4.
For the Kohn-Sham type models, the scaling limit was proved by E and Lu in [19, 21] under the stability conditions on the level of linear response of the undeformed system. In order to state the stability assumptions, let us introduce the linearised Kohn-Sham map for the undeformed system
\[(\mathcal{L}_{0}w)=\Bigg{[}\frac{\delta\mathcal{F}^{\rm KS}_{\varepsilon=1,0}[ \rho_{0}]}{\delta\rho}(w)\Bigg{]}. \tag{37}\]
It has been established [21] that \(\mathcal{L}_{0}\) is a bounded linear operator on the space \(\mathcal{X}_{n}:=\dot{H}^{-1}_{\rm per}(n\Omega)\cap H^{2}_{\rm per}(n\Omega)\) for every \(n\in\mathbb{N}\), where \(H^{2}_{\rm per}(n\Omega)\) stands for the periodic Sobolev space with square integrable second derivatives and \(\dot{H}^{-1}_{\rm per}(n\Omega)\) is the homogeneous Sobolev space with index \(-1\) on the domain \(n\Omega\).
**Assumption 4.2** (Stability of charge density wave response).: For every \(n\in\mathbb{N}\), the operator \(I-\mathcal{L}_{0}\) is uniformly invertible as an operator on \(\mathcal{X}_{n}\).
Physically the stability assumption states that the undeformed crystal is stable with respect to spontaneous charge density wave perturbation at every scale. In particular, this prevents the possibility of symmetry breaking as \(\varepsilon\to 0\).
We also define the macroscopic permittivity tensor for the undeformed crystal as
\[\mathsf{E}_{0}=\frac{1}{2}(\mathsf{A}_{0}+\mathsf{A}_{0}^{*})+\frac{1}{4\pi} \mathsf{I}, \tag{38}\]
where the \(3\times 3\) matrix \(\mathsf{A}_{0}\) is given by
\[\mathsf{A}_{0,\alpha\beta}:=-2\Re\sum_{i}^{\rm occ}\sum_{a}^{\rm unocc}\int_{ \Omega^{*}}\frac{\langle u_{a,{\bf k}},i\partial_{\mathsf{k}_{\alpha}}u_{i,{ \bf k}}\rangle\langle u_{a,{\bf k}},i\partial_{\mathsf{k}_{\beta}}u_{i,{\bf k }}\rangle^{*}}{E_{i,{\bf k}}-E_{a,{\bf k}}}\frac{d{\bf k}}{|\Omega^{*}|}- \bigl{\langle}g_{\alpha},\delta_{\rho}V_{\rm eff}(I-\mathcal{L}_{0})^{-1}g_{ \beta}\bigr{\rangle} \tag{39}\]
where
\[g_{\alpha}(\mathbf{r}):=2\Re\sum_{i}^{\text{occ}}\sum_{a}^{\text{unocc}}\int_{ \Omega^{*}}\frac{\langle u_{a,\mathbf{k}},i\partial_{\mathbf{k}_{\alpha}}u_{i, \mathbf{k}}\rangle}{E_{i,\mathbf{k}}-E_{a,\mathbf{k}}}u_{i,\mathbf{k}}^{*}( \mathbf{r})u_{a,\mathbf{k}}(\mathbf{r})\frac{d\mathbf{k}}{|\Omega^{*}|}, \tag{40}\]
and \(\delta_{\rho}V_{\text{eff}}\) is the linearisation of the effective potential operator: \(V_{\text{eff}}[\rho]=V_{\text{H}}[\rho]+V_{\text{xc}}[\rho]\) at \(\rho_{0}\) for the undeformed crystal. The dielectric permittivity for the reduced Hartree-Fock theory has been studied in [12].
**Assumption 4.3** (Stability of dielectric response).: The macroscopic permittivity tensor for the undeformed crystal \(\mathsf{E}_{0}\) is positive definite.
The main result of [21] establishes the Cauchy-Born rule for the electronic structure.
**Theorem 4.1**.: _[_21_, Thm. 5.1]_ _Under Assumptions 4.1, 4.2 and 4.3, if the deformation gradient is sufficiently small, then there exists \(\rho_{u}^{\varepsilon}\) satisfying the Kohn-Sham fixed point equation (35), and furthermore, \(\rho_{u}^{\varepsilon}\) can be locally approximated by the Cauchy-Born rule:_
\[\|\rho_{u}^{\varepsilon}-\varepsilon^{-3}\rho_{\text{CB}}(\mathbf{r}/ \varepsilon;\nabla u(\mathbf{r}))\|_{L^{\infty}}\lesssim\varepsilon^{1/2}\| \rho_{u}^{\varepsilon}\|_{L^{\infty}}, \tag{41}\]
_where \(\rho_{\text{CB}}(\cdot;A)\) is the electron density of a homogeneous deformed system with \(u(\mathbf{x})=A\mathbf{x}\) (which is well-defined provided \(|A|\) is not too large)._
The main technical ingredient of the proof of the theorem is a two-scale analysis of the linearised Kohn-Sham map. As part of the analysis, the effective potential and the macroscopic dielectric response of the deformed crystal can also be characterised; we refer the reader to [21] for details.
|
2309.04206 | Variable order porous media equations: Application on modeling the
S&P500 and Bitcoin price return | This article reveals a specific category of solutions for the $1+1$ Variable
Order (VO) nonlinear fractional Fokker-Planck equations. These solutions are
formulated using VO $q$-Gaussian functions, granting them significant
versatility in their application to various real-world systems, such as
financial economy areas spanning from conventional stock markets to
cryptocurrencies. The VO $q$-Gaussian functions provide a more robust
expression for the distribution function of price returns in real-world
systems. Additionally, we analyzed the temporal evolution of the anomalous
characteristic exponents derived from our study, which are associated with the
long-range memory in time series data and autocorrelation patterns. | Yaoyue Tang, Fatemeh Gharari, Karina Arias-Calluari, Fernando Alonso-Marroquin, M. N. Najafi | 2023-09-08T08:32:32Z | http://arxiv.org/abs/2309.04206v1 | # Variable order porous media equations: Application on modeling the S&P500 and Bitcoin price return
###### Abstract
This article reveals a specific category of solutions for the \(1+1\) Variable Order (VO) nonlinear fractional Fokker-Planck equations. These solutions are formulated using VO \(q\)-Gaussian functions, granting them significant versatility in their application to various real-world systems, such as financial economy areas spanning from conventional stock markets to cryptocurrencies. The VO \(q\)-Gaussian functions provide a more robust expression for the distribution function of price returns in real-world systems. Additionally, we analyzed the temporal evolution of the anomalous characteristic exponents derived from our study, which are associated with the long-range memory in time series data and autocorrelation patterns.
Non-linear Fokker-Planck equations, Variable order fractional derivatives, \(q\)-Gaussian distribution. PACS: 05.40.-a, 45.70.Cc, 11.25.Hf, 05.45.Df
## I Introduction
Anomalous diffusion has manifested itself in various fields of science, such as physics [1; 2; 3], chemistry [4; 5], biology [6; 7], and socioeconomic systems such as stock markets [8; 9]. Although it was proposed for transport and wave propagation paradigms [10; 11; 12], now its relation with other phenomena is well-established, including but not limited to fractals and percolation in porous media [13; 14], cell nucleus, plasma membrane and cytoplasm in biology [15]. Anomalous diffusion is manifested in the process where the total displacement of the random walker scales with time exhibiting a fractional exponent, as a result of the correlations in the stochastic process [16; 17]. Under specific assumptions, a fractional version of the Fokker-Planck equation (FPE) can be employed to describe the time evolution of these systems' probability density function (PDF). Non-local fractional derivatives are relevant when dealing with the Levy process, for example [18; 19]. Intuitively, a non-local operator requires information from a whole interval when operating on a function, in contrast to local operators that only need information from a single point in their immediate vicinity [9; 20]; for a comprehensive review, see Ref. [21] and the references therein. This fractionalization can occur in the _time_ and the _space_ derivatives of FPE. While the linear fractional FPE is suitable for describing a wide range of systems with anomalous diffusion, its non-linear version has been implemented in more diverse domains, including biological systems [22], thermostatistics [23; 24; 25; 26; 27], and stock markets [8; 9; 28]. It was also shown that the PDF of the detrended price return of the S&P500 index is governed by the porous media equation (PME)-- which is a non-linear FPE-- through a curve fitting analysis of the PDFs after collapsing self-similar \(q\)-Gaussian functions [8; 9; 29].
Despite the relative success of the \(q\)-Gaussian distributions in explaining the time evolution of the PDF of many stochastic systems, some studies show that the numerically estimated exponents exhibit slow time dependence. The stock market is an example where the PDF of price return does not follow a constant order (CO) non-linear FPE, or at least has a limited validity [30], as the price returns of stock market indexes exhibit characteristic exponents that depend on time [31], aligned with the central limit theorem (CLT). More specifically, the stochastic fluctuations in the price return of the S&P500 index can be modeled using superdiffusive self-similar \(q\)-Gaussian functions and the anomalous diffusion exponent \(\alpha\), which has initial values (\(\alpha>2\), \(q>1\)), and then slowly converge to \(\alpha\to 2,q\to 1\) corresponding to the Gaussian (normal) distribution as required by CLT (the same happens to the diffusion coefficient \(D\)) [31]. The diffusion process in a porous medium is another example, where if the medium structure or external field changes over time, the CO fractional diffusion is not applicable [32; 33]. This characteristic poses fundamental challenges and introduces the need to reconsider the governing equation, considering the time dependence of these exponents. To address this limitation, using variable order (VO) fractional diffusion equations has been proposed [30; 32; 33]. As it has been shown in several studies that the inclusion of VO fractional derivatives can provide a more accurate representation of the underlying dynamics in many systems [34; 35; 36; 37; 38].
This paper considers the VO fractional porous media equation (FPME) with local fractional derivative operators, where all the exponents can change slowly with
time. The formalism is kept as general as possible to include a general time dependence of the exponents. By proposing a separable form for the solutions, we identify an important class of solutions that yield the ordinary \(q\)-Gaussian solution in the static (CO) limit. These solutions are not self-similar (SS), but the self-similarity is retrieved once we take the CO limit. In the second part of the paper, we relate these solutions to the PDF of price return in the traditional stock markets and the cryptocurrency. We assess the VO \(q\)-Gaussian function and inspect how this system approaches the normal diffusion counterparts over extended periods.
The paper is organized as follows: the constant order fractional diffusion process (CO) will be presented in the following section. Section III is devoted to the time-dependent (VO) exponents and their solutions with and without drift. The application to the stock markets is studied in Section IV.
## II Constant order (CO) fractional diffusion process
The anomalous diffusion is a diffusion process with a non-linear relationship between the mean squared displacement and time with an anomalous diffusion exponent \(\alpha\). For any \(d\)-dimensional space, it is characterized by the following scaling relation
\[R(t)\equiv\sqrt{\left<r^{2}(t)\right>}\propto t^{H}, \tag{1}\]
where \(r(t)\) is the end-to-end distance at time \(t\), \(\left<...\right>\) is the ensemble average and \(H=1/\alpha\) is the corresponding Hurst exponent. For a normal diffusion \(\alpha=2\), while for the super- (sub-) diffusion \(\alpha<2\) (\(\alpha>2\)). The anomalous diffusion can be due to time correlations, as well as the fractal structure of the _space_. An important primitive example of anomalous diffusion was given by Havlin and Ben-Avraham (1997) for random walks on fractal objects, the PDF of which is given by
\[P(x,t)\propto R(t)^{-d_{f}}\exp\left[-c\left(\frac{x}{R(t)}\right)^{\frac{ \alpha}{\alpha-1}}\right], \tag{2}\]
where \(d_{f}\) is the fractal dimension of the space in which the random walker is doing an exploration process. Other types of distributions with the same scaling relation between \(x\) and \(R(t)\) are proposed to describe anomalous diffusion processes in different physical systems, which are special solutions of the modified Fokker-Planck equations (FPEs). These modifications of the FPE may include the fractionalization of the space as well as the time derivative operators, and non-linearization depending on the system that the FPE is going to describe. Various fractional diffusion equations have been introduced, each of which has its own advantages and weaknesses [2; 3; 4]. A fractionalization of a derivative can be either local or non-local depending on the (temporal and spatial) nature of the system [40]. Examples are the Schneider and Wyss time-derivative fractionalization [41], the O'Shaughnessy and Procaccia space-derivative fractionalization [42], the Giona and Roman space-time-derivative fractionalization [43], and more general cases [44].
An important feature in Eq. 2 is related to its scaling behavior. This equation suggests that for a \(d\)-dimensional system, \(R=|\mathbf{x}|\) (where \(\mathbf{x}\) shows the position of a random walker) scales with a general function of time \(\phi(t)\), so that
\[\mathbf{x}\rightarrow\lambda\mathbf{x}\,\ \phi(t)\rightarrow\lambda\phi(t)\,\ P \rightarrow\lambda^{-d}P\, \tag{3}\]
(see Appendix A for more details). Here \(\phi(t)\) is a time-dependent function characterizing the anomalous diffusion. This suggests the following scaling solution
\[P(\mathbf{x},t)=\frac{1}{\phi(t)^{d}}F\left[\frac{\mathbf{x}}{\phi(t)}\right]. \tag{4}\]
The nature of anomalous diffusion is directly calculated using
\[R^{2}=\left<r(t)^{2}\right>\propto\int\mathrm{d}^{d}\mathbf{x}|\mathbf{x}|^{2 }F\left[\frac{\mathbf{x}}{\phi(t)}\right]\propto\phi(t)^{2}. \tag{5}\]
The scaling properties of the time series are associated with the form of \(\phi(t)\). In fact, for the solution of Eq. 1 we have \(\phi(t)=\phi_{\mathrm{SS}}(t)\) where the index "SS" points out the self-similarity law, given by [8; 9]:
\[\phi_{\mathrm{SS}}(t)\propto t^{1/\alpha}. \tag{6}\]
Combining Eq. 5 and Eq. 6, one reaches Eq. 1.
An important well-known example is the fractional Brownian motion, in which the PDF follows a Gaussian distribution with a self-similar structure. For a good review see Appendix B and [45; 46]. The Levy-stable distribution is another example of a self-similar system that has vast applications in stochastic processes, including the stock markets. The heavy-tail behavior observed in stock market price fluctuations has been a cornerstone for many scientists in supporting the use of Levy-stable distributions for modeling the stochastic behavior of the price return [47; 48; 49; 50; 51; 52].
There are, however, some pieces of evidence indicating that Levy-stable distributions are not sufficient to describe the stylized facts of the price return. Recent observations of PDF of price returns for the S&P500 have shown that they follow more general distributions [8]. The Levy-stable distribution provides only an estimation of the stock market fluctuations at low frequencies where the correlations can be neglected. However, correlations during the first minutes on the price fluctuations were observed at high frequencies, making the Levy regime no
longer applicable. Additionally, the characteristic exponents used to model the power-law tails in the PDF of price returns during the first minutes lie outside the Levy regime [9]. This divergence highlights a common occurrence in the modeling of complex systems, which can be attributed to the nonlinear nature of the governing physical phenomena.
In nonlinear systems, the principles of homogeneity and superposition do not hold. These systems exhibit a distinctive property known as _non-extensivity_, meaning that their corresponding entropy is not additive. A notable example of non-extensive systems is observed in the porous media equation (PME), which possesses broad applications in stochastic processes, including the analysis of stock markets. More accurate models can be created by introducing a fractional version of the PME. This extension offers a powerful theoretical framework with the potential to effectively describe a wide range of stochastic systems. The local (Katugampola) fractional PME reads [9]
\[\frac{\partial^{\xi}}{\partial t^{\xi}}P(x,t)=D\frac{\partial^{2}}{\partial x ^{2}}P(x,t)^{\nu}, \tag{7}\]
where \(\xi\), \(D\), and \(\nu\equiv 2-q\) are the constant parameters to be found by fitting to the time series under investigation. \(D\) is the diffusion coefficient in the limit \(q,\,\xi\to 1\), where the normal diffusion is retrieved. Eq. 7 admits solutions in terms of \(q\)-Gaussian and generalized \(q\)-Gaussian functions [8; 53], which form an important class of functions with a wide range of applications [8; 54], shown as
\[P(x,t)=\frac{1}{C_{q}\phi_{\text{SS}}(t)}e_{q}\left[\left(\frac{x}{\phi_{ \text{SS}}(t)}\right)^{2}\right], \tag{8}\]
where \(e_{q}(x)\equiv\left(1-(1-q)x^{2}\right)^{\frac{1}{1-q}}\) is a generalization of the exponential function, and \(C_{q}\) is a \(q\)-dependent normalization factor. The self-similar time part reads:
\[\phi_{\text{SS}}(t)\equiv(D^{\prime}t)^{1/\alpha}. \tag{9}\]
where the parameter \(\alpha=\frac{3-q}{\xi}\) is the anomalous diffusion exponent associated with the self-similarity of the time series and \(D^{\prime}\equiv(D/\xi)^{1/\xi}\) is the modified diffusion parameter to be estimated using the real data analysis. The evolution equation of the price return's PDF can be constructed based on the \(q\)-Gaussian fitting. Originally conceived for studying fluid propagation in porous media, PME has significantly broadened its scope over time. Now PME is used to investigate any diffusion process where the diffusion coefficient depends on the state variable, the most important of which is the stock market with \(q\)-Gaussian PDF [8; 55]. In our previous studies, we investigated the fractional PME with local and non-local fractional derivatives, focusing solely on its solutions for describing the PDF of S&P500 market index. Considering both cases, the results obtained from the non-local derivatives were found to be more accurate [53].
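For later comparison, the constant order solution (8)-(9) is straightforward to evaluate numerically. The sketch below implements the \(q\)-exponential and the self-similar width, normalising the profile by quadrature rather than through an explicit constant; the parameter values are illustrative assumptions.

```python
import numpy as np

def q_exponential(u, q):
    """e_q(u) = [1 + (1-q) u]^(1/(1-q)); reduces to exp(u) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(u)
    return np.maximum(1.0 + (1.0 - q) * u, 0.0) ** (1.0 / (1.0 - q))

def co_q_gaussian(x, t, q=1.5, xi=0.75, D=0.3):
    """Self-similar CO solution: P proportional to e_q[-(x/phi_ss)^2] with
    phi_ss = (D' t)^(1/alpha), alpha = (3-q)/xi and D' = (D/xi)^(1/xi).
    The profile is normalised numerically (parameter values are illustrative)."""
    alpha = (3.0 - q) / xi
    phi = ((D / xi) ** (1.0 / xi) * t) ** (1.0 / alpha)
    shape = q_exponential(-(x / phi) ** 2, q)
    return shape / np.trapz(shape, x)

x = np.linspace(-80.0, 80.0, 8001)
P = co_q_gaussian(x, t=10.0)
print("normalisation:", np.trapz(P, x))
print("width sqrt(<x^2>):", np.sqrt(np.trapz(x ** 2 * P, x)))
```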
Despite the fact that a generalized form of \(q\)-Gaussian PDFs better describes the PDFs of S&P500 data at any time, the exponents have been shown to vary over time [31]. Solving Eq. 7 with "time-dependent exponents" introduces a contradiction because the time dependence of the exponents should have been considered in the governing equation from the outset. Such governing equations with time-dependent exponents are referred to as "variable order" (VO) governing equations. The very important question we should answer is: _Which is the generalized non-linear fractional Fokker-Planck equation that governs the probability density function (PDF) with variable orders?_
Variable exponents are observed in complex diffusion processes [56; 57; 58; 59; 60]. The diffusion properties of homogeneous media are usually modelled by constant order (CO) time-fractional diffusion processes, for example, see [61]. However in complex media where heterogeneous regions are present the CO fractional dynamic models are not robust over long time scales. Additionally, when considering diffusion processes in porous media where the medium structure or external field changes with time, the use of CO fractional dynamic models may not yield satisfactory results [32; 33]. In such cases, the variable order (VO) time-fractional model emerges as a more suitable approach for describing space-dependent anomalous diffusion processes [62]. Previous works on VO diffusion models have made substantial contributions to the modeling and analysis of complex systems [56; 57; 58; 59; 60]. Building upon these works, this paper aims to generalize the PME by incorporating variable exponents. The investigation focuses on systematically exploring the problem with variable exponents and solving a time variable order porous media equation (VO-PME). The VO-PME reads
\[\frac{\partial^{\xi(t)}}{\partial t^{\xi(t)}}P(x,t)=D(t)\frac{\partial^{2}}{ \partial x^{2}}P^{\nu(t)}(x,t), \tag{10}\]
where \(\xi(t)\), \(D(t)\), and \(\nu(t)\equiv 2-q(t)\) now vary with time. Some properties of the CO equations cannot be extrapolated to the VO counterpart. For example, While one may be tempted to derive the effective time-dependent Hurst exponent as \(H(t)\equiv\frac{\xi(t)}{3-q(t)}\), it is crucial to exercise caution when utilizing this expression. The reason is that the definition of the Hurst exponent is based on the autocorrelation function, which has not been explicitly obtained for the VO-PME in this study. Therefore, the aforementioned expression should be interpreted with care. In the following sections, we find an important class of solutions for the VO-PME.
## III A local variable order non-linear time diffusion equation
In this section, we consider a time-dependent VO-PME as follows:
\[\frac{\partial^{\xi(t)}}{\partial t^{\xi(t)}}P(x,t)=D(t)\frac{\partial^{2}}{\partial x^{2}}P^{\nu(t)}(x,t), \tag{11}\]
where \(\xi(t)\) and \(\nu(t)=2-q(t)\) are VO exponents and \(D(t)\) is a slowly varying time-dependent diffusion coefficient. In this equation the time derivative is fractionalized using a Katugampola derivative, see Appendix C for the details. Note that in the limit where \(\xi,\nu,D\) are constant, the solution given by Eq. 8 is retrieved, i.e. the \(q\)-Gaussian distribution. In analogy with Eq. 4 (\(d=1\)), we consider the factorized solution \(P(x,t)=\frac{1}{\phi(t)}F(\frac{x}{\phi(t)})\). This approach enables us to use the method of separation of variables, where \(\phi(t)\) satisfies a time-fractional equation. By inserting Eq. (4) into Eq. (11), we find that (also see Appendix C)
\[-\frac{\phi^{\nu(s)}(s)}{D(s)}\frac{\partial^{\xi(s)}\phi(s)}{\partial s^{\xi (s)}}=\left(\frac{d}{dz}[zF]\right)^{-1}\frac{d^{2}}{dz^{2}}F^{\nu(s)}, \tag{12}\]
where we change the variable \((x,t)\rightarrow(z\equiv\frac{x}{\phi(t)},s\equiv t)\). To simplify the calculations, we assume that the function \(q(s)\) is a slow-varying function so that the derivatives of \(q(s)\) with respect to \(s\) can be neglected as a first-order approximation. Thus, the right-hand side is a sole function of \(z\), while the left-hand side is a sole function of \(s\). Then we find
\[\begin{cases}&\frac{\phi^{\nu(s)}(s)}{D(s)}\frac{\partial^{\xi(s)}\phi(s)}{ \partial s^{\xi(s)}}=k\qquad\text{(I)}\\ &\\ \left(\frac{d}{dz}[zF]\right)^{-1}\frac{d^{2}}{dz^{2}}F^{\nu(s)}=-k\ \ \text{(II)} \end{cases} \tag{13}\]
where \(k\) is a real number, which serves as a free parameter. Using the properties of VO-K derivative (K stands for Katugampola, see Appendix C) we find that
\[\frac{\partial^{\xi(s)}\phi(s)}{\partial s^{\xi(s)}}=\frac{1}{s^{\xi(s)-1}} \phi^{\prime}(s) \tag{14}\]
where (here and throughout of the paper) \(f^{\prime}(s)\) shows the first derivative of \(f(s)\) with respect to the argument \(s\). Therefore, Eq. 13-(I) leads to:
\[\phi^{\nu(s)}(s)\phi^{\prime}(s)=kD_{s}s^{\xi(s)-1}. \tag{15}\]
To solve this equation, taking a similar approach from Eq. 6, we assume:
\[\phi(s)=\phi_{0}s^{1/\tilde{\alpha}(s)}, \tag{16}\]
where \(\tilde{\alpha}(s)\) is a new slowly varying VO exponent, and \(\phi_{0}\) is a constant. Note that in the constant order case \(\tilde{\alpha}(s)\) is identical to \(\alpha\). Substituting this into Eq. 15 we find (\(\nu_{s}\equiv\nu(s)\), \(\xi_{s}\equiv\xi(s)\), \(\tilde{\alpha}_{s}\equiv\tilde{\alpha}(s)\) and \(D_{s}\equiv D(s)\))
\[\phi_{0}^{\nu_{s}+1}s^{\frac{\nu_{s}+1}{\tilde{\alpha}_{s}}}\frac{d}{ds}\left( \frac{\ln s}{\tilde{\alpha}_{s}}\right)=kD_{s}s^{\xi_{s}-1}, \tag{17}\]
or in terms of a new variable \(y\equiv(\nu_{s}+1)\frac{\ln s}{\tilde{\alpha}(s)}\) (so that \(e^{y}=\left(\frac{\phi(s)}{\phi_{0}}\right)^{\nu_{s}+1}\)) we have:
\[(\nu_{s}+1)e^{y}\frac{d}{ds}\left(\frac{y}{\nu_{s}+1}\right)=k\tilde{D}_{s}s^{ \xi_{s}-1}, \tag{18}\]
where,
\[\tilde{D}_{s}\equiv\frac{(\nu_{s}+1)}{\phi_{0}^{\nu_{s}+1}}D_{s}. \tag{19}\]
When \(\nu_{s}\) is a smooth function of \(s\), by considering that \(\nu_{s}=2-q_{s}\), we can ignore its first derivative, so one can easily cast the equation to the form:
\[\frac{d}{ds}e^{y}=k\tilde{D}_{s}s^{\xi_{s}-1}, \tag{20}\]
the solution with initial condition \(y_{0}\equiv y(s_{0})\) is:
\[e^{y}=e^{y_{0}}+kG(s,s_{0}). \tag{21}\]
In Eq. 21 we define:
\[G(s,s_{0})\equiv\int_{s_{0}}^{s}\tilde{D}_{t}t^{\xi_{t}-1}dt\equiv\int_{s_{0} }^{s}g(t)dt, \tag{22}\]
where,
\[g(t)\equiv\frac{D_{t}(\nu_{t}+1)}{\phi_{0}^{\nu_{t}+1}}t^{\xi_{t}-1}. \tag{23}\]
Equation 21 can be written in the following form:
\[\phi(s)=\phi_{0}\left[k_{1}+kG(s,s_{0})\right]^{\frac{1}{\nu_{s}+1}}. \tag{24}\]
where \(k_{1}\equiv\left(\frac{\phi(s_{0})}{\phi_{0}}\right)^{\nu_{s_{0}}+1}\). Note that, for the solution to be real, we should always have the condition:
\[G(s,s_{0})\geq-\frac{k_{1}}{k}. \tag{25}\]
By choosing
\[k>0, \tag{26}\]
we see that this condition is satisfied given that \(k_{1}\geq 0\). Note that \(k_{1}\) is a location parameter, and it is related to the initial condition, i.e. it sets the initial width of the PDF. When \(k_{1}=0\), the PDF is initially the Dirac delta function, and the solution of the equation will correspond to the Green function. In applications where the initial condition is not given, we can set \(k_{1}\) to zero. The constant \(k\) is a scale parameter, so that without loss of generality we can set \(k=1\). From Equations 16 and 24, the exponent \(\tilde{\alpha}(s)\) is obtained as:
\[\frac{1}{\tilde{\alpha}_{s}}=\frac{\ln\left[k_{1}+kG(s,s_{0})\right]}{(\nu_{s}+ 1)\ln s}. \tag{27}\]
The initial time \(s_{0}\) can be set to zero. One can easily demonstrate that, when considering fractional constant exponents, \(\nu_{s}=\nu\), \(\xi_{s}=\xi\), Eq. 24 coincides with the result of ordinary PME, that is
\[\tilde{\alpha}_{s}\rightarrow\alpha,\ \phi(s)\rightarrow\phi_{\rm ss}(s) \tag{28}\]
which is given in Eq. 9. Specifically, in the limit \(D_{s}\to D={\rm const.},\nu_{s}\to 1\), \(\xi_{s}\to 1\) and \(k_{1}=0\), one retrieves the normal diffusion (ND), for which
\[\phi(s)\rightarrow\phi_{\rm SS}^{(ND)}\equiv as^{\alpha_{\rm ND}},\ \tilde{\alpha}_{s}\rightarrow\alpha_{\rm ND}=\frac{1}{2} \tag{29}\]
where \(a=k\sqrt{2D}\).
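The time part (24) only requires the quadrature \(G(s,s_{0})\) of Eq. (22), so it is easy to evaluate numerically once \(q(t)\), \(\xi(t)\) and \(D(t)\) are prescribed. The following sketch does this for illustrative, slowly varying exponent profiles (assumed forms, not fitted values), and reports the effective exponent \(1/\tilde{\alpha}(t)=\ln(\phi/\phi_{0})/\ln t\) of Eq. (16).

```python
import numpy as np

# illustrative slowly varying orders (assumed forms, not fitted values)
q_of = lambda t: 1.0 + 0.5 * np.exp(-3e-4 * t)     # q(t) -> 1 at large times
xi_of = lambda t: 1.0 + 0.3 * np.exp(-3e-4 * t)    # xi(t) -> 1 at large times
D_of = lambda t: 0.3 + 0.0 * t                     # constant diffusion coefficient

def phi_variable_order(t, t0=1e-3, phi0=1.0, k=1.0, k1=0.0, n_quad=20000):
    """phi(t) = phi0 [k1 + k G(t,t0)]^(1/(nu(t)+1)), with
    G(t,t0) = int_{t0}^{t} D(s) (nu(s)+1) / phi0^(nu(s)+1) * s^(xi(s)-1) ds   (Eqs. 22-24)."""
    s = np.linspace(t0, t, n_quad)
    nu = 2.0 - q_of(s)
    g = D_of(s) * (nu + 1.0) / phi0 ** (nu + 1.0) * s ** (xi_of(s) - 1.0)
    G = np.trapz(g, s)
    nu_t = 2.0 - q_of(t)
    return phi0 * (k1 + k * G) ** (1.0 / (nu_t + 1.0))

for t in (10, 100, 1000, 10000):
    phi = phi_variable_order(float(t))
    print(f"t = {t:6d}   phi(t) = {phi:9.4f}   1/alpha_tilde = {np.log(phi) / np.log(t):6.3f}")
```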
In the next step, we find the solution of \(F\). From now on we set \(k=1\) and \(k_{1}=0\), bearing in mind that the formulas can be generalized by considering other values of these parameters. We recall Eq. (13)-II,
\[\frac{d}{dz}[zF]=-\frac{d^{2}}{dz^{2}}F^{\nu_{s}}. \tag{30}\]
Let us consider the following trial special solution as a standard form:
\[F(z,s)=(c+\eta_{s}z^{2})^{\frac{1}{\nu_{s}-1}} \tag{31}\]
where \(c\) is a constant, and \(\eta_{s}\) is a pure function of \(s\) to be determined. Before going into the details, let us comment on the real-positivity of this solution. To guarantee this, one has to impose:
\[c+\eta_{s}z^{2}\geq 0. \tag{32}\]
This inequality is satisfied only when
\[\eta_{s}\geq 0, \tag{33}\]
and at the same time \(c\geq 0\); we arbitrarily set \(c=1\) (which can be done using a normalization). For the trial solution Eq. 31, one has
\[\frac{d}{dz}[F^{\nu_{s}}]=\frac{2\nu_{s}\eta_{s}}{\nu_{s}-1}zF. \tag{34}\]
After incorporating this expression into Eq. (30), we obtain
\[\eta_{s}=\frac{1-\nu_{s}}{2\nu_{s}}, \tag{35}\]
which completes the solution. In addition, Eq. 33 has to be satisfied, which requires the following inequality
\[\frac{q_{s}-1}{2-q_{s}}\geq 0. \tag{36}\]
Noting that
\[\frac{q_{s}-1}{2-q_{s}}\begin{cases}\geq 0&\text{if }1\leq q_{s}<2\\ <0&\text{if }q_{s}<1\text{ or }q_{s}>2,\end{cases} \tag{37}\]
we find a physically relevant interval where Eq. 32 is fulfilled: \(1\leq q<2\). Outside of this range, the solutions become imaginary. Finding another set of solutions is beyond the scope of the present paper, as the study cases of price return in the stock market are restricted to the interval \(1<q\leq 2\); therefore, the first branch in Eq. 37 is applicable. Note that for \(k\neq 1\) we should satisfy \(\frac{q_{s}-1}{k(2-q_{s})}\geq 0\), which is automatically satisfied for \(1\leq q<2\) given Eq. 26.
Altogether, we realize that \(F(z)\) and \(\phi(t)\) are positive functions, and eventually:
\[P(x,t)\propto\frac{A_{q}(t)}{\phi(t)}\left[1+\eta_{t}\left(\frac{x}{\phi(t)} \right)^{2}\right]^{\frac{1}{\nu_{t}-1}}, \tag{38}\]
where \(A_{q}(t)\) is a normalization factor (independent of \(x\)). Now defining \(\eta_{q_{t}}=\frac{\eta_{t}}{q_{t}-1}=\frac{1}{2(2-q_{t})}\), we find that
\[P(x,t|x_{0},t_{0})=\frac{A_{q}(t)}{\phi(t)}e_{q_{t}}\left[-\eta_{q_{t}}\left( \frac{x}{\phi(t)}\right)^{2}\right],\]
where
\[e_{q}[x]\equiv[1+(1-q)x]^{\frac{1}{1-q}}, \tag{39}\]
is a \(q\)-Gaussian function. To make the notation more abstract, we define:
\[\Phi(t)\equiv\frac{\phi(t)}{\sqrt{\eta_{q_{t}}}}. \tag{40}\]
Using Eq. 24 we find the explicit form
\[\Phi(t)=\sqrt{2(2-q_{t})}\left(\int_{t_{0}}^{t}g(s)ds\right)^{\frac{1}{3-q_{t }}}. \tag{41}\]
Then, \(P(x,t)\) is defined as:
\[P(x,t)=\frac{1}{C_{q}(t)\Phi(t)}e_{q_{t}}\left[-\left(\frac{x}{\Phi(t)} \right)^{2}\right], \tag{42}\]
where,
\[C_{q}(t)=\sqrt{\eta_{q_{t}}}A_{q}^{-1}(t)=\frac{\sqrt{\pi}\Gamma\left(\frac{ 3-q_{t}}{2(q_{t}-1)}\right)}{\sqrt{(q_{t}-1)}\Gamma\left(\frac{1}{q_{t}-1} \right)} \tag{43}\]
and \(\Gamma(.)\) is the Gamma function. Note that the standard deviation is:
\[\sqrt{\langle x^{2}\rangle}=\Phi(t). \tag{44}\]
If we represent
\[\Phi(t)\equiv\Phi_{0}t^{\frac{1}{\nu_{t}}}, \tag{45}\]
then using Eq. 16 one finds
\[\frac{t^{1/\tilde{\alpha}_{t}}}{\sqrt{\eta_{q_{t}}}}=Ct^{1/\alpha_{t}}\to\frac{1} {\alpha_{t}}\ln t=\frac{1}{\tilde{\alpha}_{t}}\ln t-\frac{1}{2}\ln\left(\eta_{q _{t}}\right)-\ln C. \tag{46}\]
where \(C\equiv\frac{\Phi_{0}}{\phi_{0}}\). Using Eq. 46, we can calculate \(\alpha_{t}\) if \(\tilde{\alpha}_{t}\) and \(\eta_{t}\) are provided, and vice versa. In practical situations, one calculates \(\alpha_{t}\) using Eq. 45, and \(\tilde{\alpha}_{t}\) can then be obtained from Eq. 46. A similar formulation for the case with drift is presented in Appendix D.
In the rest of this section, we provide some results for the VO \(q\)-Gaussians for various functions \(q(t)\), \(\xi(t)\) and \(D(t)\). Figure 1 displays the behavior of Eq. (38) and compares it with the solutions of the normal diffusion and porous media processes. Figure 1-a shows the solution of the diffusion equation (Fick's second law), i.e. Eq. (7) for \(q=1\) and \(\xi=1\) with \(D=0.3\); Figures 1-b and 1-c show \(q=1\) and \(\Phi=\sqrt{4Dt}\), respectively. Figure 1-d exhibits the solution \(P(x,t)\) with respect to time and space for a PME (Eq. 7), applicable for \(1<q<3\); for this example we use \(q=1.5\), \(\xi=3/4\), and \(D=0.3\), and Figures 1-e and 1-f show the constant \(q=1.5\) and \(\Phi=(Dt)^{1/\alpha}\) with \(\alpha=\frac{3-q}{\xi}\), respectively. Finally, Figure 1-g represents the VO \(q\)-Gaussian of Eq. (38), applicable for \(1<q\leq 2\), with \(k=1\), \(k_{1}=0\), \(q_{0}=1.7\), \(\xi_{0}=1.3\), and \(D_{0}=1.3\). The corresponding \(q(t)=(q_{0}-1)e^{-at}+1\), with \(a=0.0003\), and \(\Phi(t)\) (Eq. (41)) are shown in Figures 1-h and 1-i, respectively.
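The full VO solution (42) can be evaluated directly from Eqs. (41) and (43) once the orders are prescribed. The sketch below uses the profile \(q(t)=(q_{0}-1)e^{-at}+1\) and the \(q_{0}\), \(\xi_{0}\), \(D_{0}\), \(a\) values quoted above for Figure 1g-i; keeping \(\xi\) and \(D\) frozen at \(\xi_{0}\) and \(D_{0}\), and setting \(\phi_{0}=1\), \(k=1\), \(k_{1}=0\), are simplifying assumptions made only for this illustration.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

q0, xi0, D0, a = 1.7, 1.3, 1.3, 3e-4                   # values quoted for Figure 1g-i
q_of = lambda t: (q0 - 1.0) * np.exp(-a * t) + 1.0
# assumption: xi(t) and D(t) are frozen at xi0 and D0, and phi0 = 1, k = 1, k1 = 0
g = lambda s: D0 * (3.0 - q_of(s)) * s ** (xi0 - 1.0)  # Eq. (23) with nu = 2 - q

def Phi(t, t0=1e-6):
    q = q_of(t)
    G, _ = quad(g, t0, t)
    return np.sqrt(2.0 * (2.0 - q)) * G ** (1.0 / (3.0 - q))          # Eq. (41)

def C_q(q):
    return (np.sqrt(np.pi) * gamma((3.0 - q) / (2.0 * (q - 1.0)))
            / (np.sqrt(q - 1.0) * gamma(1.0 / (q - 1.0))))            # Eq. (43)

def P(x, t):
    q, ph = q_of(t), Phi(t)
    e_q = (1.0 + (q - 1.0) * (x / ph) ** 2) ** (1.0 / (1.0 - q))      # e_q[-(x/Phi)^2]
    return e_q / (C_q(q) * ph)                                        # Eq. (42)

for t in (10.0, 100.0, 1000.0):
    ph = Phi(t)
    x = np.linspace(-40.0 * ph, 40.0 * ph, 20001)
    print(f"t = {t:7.1f}  q(t) = {q_of(t):.3f}  Phi(t) = {ph:.2f}  "
          f"int P dx = {np.trapz(P(x, t), x):.4f}")
```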
## IV Application to Stock Markets and Cryptocurrency
In this section, we apply the VO \(q\)-Gaussian diffusion model to describe the evolution of PDF of stock market price return. Our empirical investigation focuses on two prominent market indices: the S&P500 stock market index and the Bitcoin cryptocurrency. The S&P500 dataset encompasses the period from January \(2^{\text{nd}}\), 2018, to August \(9^{\text{th}}\), 2022, with a frequency of one minute. For the Bitcoin currency, we analyze data spanning from January \(1^{\text{st}}\), 2021, to May \(9^{\text{th}}\), 2022, with data points collected at ten-minute intervals. Prior to conducting our price return analysis, we undertake a pre-processing step to remove time frames characterized by trading amounts falling below 0.10 USD. These instances predominantly occur approximately one hour before the stock market closes, specifically observed in the context of the S&P500.
The price return is defined as [31]
\[X(t)=I(t_{0}+t)-I(t_{0}), \tag{47}\]
where \(I(t)\) is the index at time \(t\), and \(t_{0}\) is some reference time. By decomposing the price return \(X(t)\) into a deterministic component \(\bar{X}(t)\) and a stationary fluctuating component \(x(t)\), we have
\[X(t)=\bar{X}(t)+x(t). \tag{48}\]
The trend \(\bar{X}(t)\) was obtained by calculating the moving average of the index over a specific time window \(t_{w}\) in the S&P500 and Bitcoin datasets, following the methodology described in [40]. For the S&P500 price return, a three-month optimal time window was used, while a one-week optimal time window was employed for the Bitcoin dataset. These specific time windows were carefully chosen to ensure that the fluctuations around the trend show stationary behavior. To confirm the validity of the observed stationary behavior, we experimented with different window sizes for detrending and ultimately selected the one that allowed the PDF to show the closest convergence to a Gaussian distribution for large times. The PDFs were calculated for \(x(t)\) of S&P500 and Bitcoin in the time range \(t\in[1,\ 47000]\) min. By an error-minimization process, we found that the VO \(q\)-Gaussian diffusion described in Eq. (42) provides the closest match to the observed time-dependent evolution of PDF\((x,t)\) for both S&P500 and Bitcoin.
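One possible reading of this pre-processing pipeline (moving-average detrending followed by estimating PDF\((x,t)\) from the detrended fluctuations at each lag \(t\)) is sketched below. The window length, the lag values, and the synthetic random-walk input standing in for the market data are placeholders; the \(q\)-Gaussian fit itself would then be performed on each estimated PDF by a standard least-squares routine.

```python
import numpy as np

def detrend(index, window):
    """Split the index I(t) into a moving-average trend and fluctuations around it."""
    trend = np.convolve(index, np.ones(window) / window, mode="same")
    return trend, index - trend

def empirical_pdf(fluct, lag, bins=201):
    """Histogram estimate of PDF(x, t=lag) from increments of the detrended signal."""
    increments = fluct[lag:] - fluct[:-lag]
    hist, edges = np.histogram(increments, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, hist

# usage sketch with a synthetic random-walk "index" standing in for real market data
rng = np.random.default_rng(0)
index = np.cumsum(rng.standard_normal(200_000))
trend, fluct = detrend(index, window=5_000)
for lag in (1, 10, 100, 1000):
    centres, pdf = empirical_pdf(fluct, lag)
    width = np.sqrt(np.sum(centres ** 2 * pdf) * (centres[1] - centres[0]))
    print(f"lag {lag:5d}: empirical width of PDF(x, t=lag) = {width:.3f}")
```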
Figure 2 presents the results of the calibration process, displaying the functions \(q(t)\), \(\Phi(t)\), and \(P_{\text{max}}(t)\) obtained by curve fitting the PDFs derived from the datasets to Eq. (42). We observed that the PDFs of S&P500 and Bitcoin converge to a Gaussian distribution function as time \(t\) increases, and this convergence is positively correlated with the chosen optimal time window \(t_{w}\). Subfigures 2-a and 2-d present the convergence of \(q\) towards 1 as \(t\) approaches \(t_{w}\), indicating a Gaussian distribution function with \(\sigma^{2}=1/2\) and \(\mu=0\). The convergence to the normal distribution is expected given the fact that for large enough times, the conditions for the central limit theorem are satisfied. We also observe that \(q(t)\) for S&P500 and Bitcoin can be effectively modeled using the following relationship
\[q(t)=(q_{0}-1)\left(1-\frac{1}{\pi}\arctan(at)\right)+1.\]
where \(q_{0}\) represents the initial value of \(q\) at \(t_{0}\) and \(a\) is a parameter obtained through fitting. The fitting parameters for the S&P500 are \(q_{0}=1.414\) and \(a=1.28\times 10^{-4}\), while for Bitcoin, the fitted values are \(q_{0}=1.514\) and \(a=2.0\times 10^{-3}\). The black curves in subfigures 2-a and 2-d represent the results of this fitting process. It is notable that the Bitcoin time series exhibits a closer fit to this relationship compared to the S&P500 time series. Specifically, for S&P500, the fitted values of \(q(t)\) remain constant \(q(t)=1.4\) for approximately the first three days of trading, followed by a rapid convergence to \(q(t)=1\). Conversely, for Bitcoin, the transition takes place over a shorter duration of around 16 hours, with \(q(t)\) stabilizing at \(q(t)=1\) for larger values of \(t\). These findings highlight the varying dynamics and characteristics between the S&P500 and Bitcoin markets, with
Bitcoin demonstrating a more pronounced adherence to the modeled relationship for \(q(t)\). Subfigures 2-b and 2-e present the parameter \(\Phi(t)\) obtained from the \(q\)-Gaussian fitting for S&P500 and Bitcoin respectively. \(\Phi(t)\) showcases a distinct slope and remains constant at large times. These slopes correspond to the anomalous diffusion, where \(\Phi(t)\propto t^{1/\alpha(t)}\) (Eq. 6). The average slopes obtained for S&P500 and Bitcoin are \(1/\alpha=0.54\)
and \(1/\alpha=0.60\) respectively, while the local slopes depend on time. Moreover, as time \(t\) increases, a constant value for \(\Phi(t)\) becomes evident. This behavior is a consequence of the detrending process applied during the analysis.
An important feature of the VO diffusion is that the anomalous diffusion cannot always be derived from the temporal evolution of the peak of the PDF, as has been done in previous analyses of the S&P500 index [63; 8]. Figures 2-c and 2-f show the height of the PDF of price return (\(P_{\rm max}(t)\)) for S&P500 and Bitcoin respectively. This term is obtained as the value of the PDF at \(x=0\). For both stock markets, \(P_{\rm max}(t)\) exhibits a slope initially and remains constant over a large time. A linear fitting is conducted and the slopes obtained are \(-0.532\) and \(-0.512\) for S&P500 and Bitcoin. This slope is consistent with the anomalous diffusion exponents for the S&P500 but not for the Bitcoin data. In the latter case the exponent obtained from \(\Phi(t)\) is \(\alpha=1.86\), which does not correspond to the exponent \(1.95\) obtained from \(P_{\rm max}\). This apparent discrepancy can be understood in the light of Eq. 42. There one can derive the relationship \(P_{\rm max}(x=0,t)=\frac{1}{C_{q}(t)\Phi(t)}\), where \(C_{q}(t)\) is a time-dependent parameter associated with \(q\) as shown in Eq. 43. It is noteworthy that the absolute values of the slopes in \(\Phi(t)\) and \(P_{\rm max}(t)\) show better agreement for S&P500 than for Bitcoin. This disparity can be attributed to the distinction in the time-dependent term \(q(t)\) for both markets. In the case of S&P500, the value of \(q(t)\) remains approximately constant for a longer period than for Bitcoin. As a result, \(C_{q}(t)\) closely approximates a constant value, leading to a relationship of
Figure 2: Results of fitting VO q-Gaussian to the PDFs for S&P500 and Bitcoin. (a) Fitting parameter \(q(t)\) for S&P500, which presents a constant of \(1.5\) initially and slowly converges to \(1\). (b) Fitting parameter \(\Phi(t)\) for S&P500, with a slope of \(1/\alpha=0.538\). (c) The peak of PDF (\(P_{max}(t)\)) for S&P500 with a slope of \(-0.532\). (d) Fitting parameter \(q(t)\) for Bitcoin, initially at \(1.5\) and converges to \(1\) faster than S&P500. (e) Fitting parameter \(\Phi(t)\) for Bitcoin, with a slope of \(1/\alpha=0.598\). (f) The peak of PDF (\(P_{max}(t)\)) for Bitcoin with a slope of \(-0.512\).
\(P_{max}(t)\propto 1/\Phi(t)\), which aligns with the results. Whilst for Bitcoin, where \(q(t)\) varies with time, the effect of \(C_{q}(t)\) becomes more significant.
We now realize that neither \(P_{\text{max}}(t)\) nor \(\Phi(t)\) obtained from curve fitting are reliable methods to derive the exponent of the anomalous diffusion in VO diffusion processes. In fact, Figures 2-b and 2-e present \(\Phi(t)\) calculated as a fitting parameter of the PDF to the \(q\)-Gaussian distribution; the results for \(\Phi(t)\) show some fluctuations, and the fitting of the slope is not perfect. These fluctuations and deviations arise due to errors in the fitting process, as fitting to the \(q\)-Gaussian distribution may not always be perfect. Therefore, to get rid of systematic fluctuations, it is more reliable to use the variance formula (Eq. 44) to estimate \(\Phi(t)\) and the associated exponent \(\alpha_{t}\), as we do in the rest of this section.
The power law relation obtained using Eq. 44 is presented in Figure 3-a. An adjustment was performed on the time series of both S&P500 and Bitcoin by rescaling the price return using the data frequency (\(T\)). Specifically, for the S&P500, \(T=1\) minute, while for Bitcoin, \(T=10\) minutes. The slopes of \(\Phi(t)\) were computed for both S&P500 and Bitcoin. The results reveal that \(\Phi(t)\) for the S&P500 price return has a slope of 0.50(4), whereas for Bitcoin the slope is 0.47(3). This shows that the slope of \(\Phi(t)\) is related to the type of diffusion process. These exponents, however, are averages over time, and the local slopes show different behaviors, i.e. they change over time, which requires a "variable order" analysis.
We calculate the local slope of \(\Phi(t)\) to determine \(1/\alpha_{t}\) based on the aforementioned relation 45, and the results are shown in Figure 3-b. The light-blue curve is the local slope of S&P500, indicating superdiffusion at small time scales (\(t\)) with \(1/\alpha(t)>0.5\), transitioning towards normal diffusion at large \(t\) with \(1/\alpha(t)\to 0.5\). On the other hand, the dark blue curve in Figure 3-b represents the local slope of the Bitcoin time series, displaying subdiffusion at small times \(t\) with \(1/\alpha(t)<0.5\), and gradually converging towards normal diffusion. Upon comparing the two curves, we observe that the S&P500 slope exhibits a slow decrease in \(1/\alpha\) at small \(t\), while for Bitcoin, it remains relatively constant at approximately 0.46. As time progresses, the S&P500 local slope converges to 0.5 faster than Bitcoin, indicating a quicker convergence to Gaussian diffusion. In contrast, the local slope of Bitcoin fluctuates between 0.46 and 0.5, gradually converging to normal diffusion over an extended time period. The difference in the rate of convergence to a normal diffusion process between the S&P500 and Bitcoin (Figure 3-b) can be attributed to several key factors. Firstly, the high volatility of Bitcoin contributes to its distinct diffusion characteristics. Additionally, the high decentralization of Bitcoin allows it to be influenced by investor sentiment, which further impacts its diffusion dynamics. Lastly, the regulatory framework surrounding cryptocurrency, or rather its absence within traditional regulatory frameworks, adds to the unique behaviour observed in Bitcoin.
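The variance-based estimate of \(\Phi(t)\) (Eq. 44) and the local exponent \(1/\alpha(t)\) as its logarithmic slope can be sketched as follows; the synthetic Brownian-like input is a placeholder for the detrended market fluctuations, and the set of lags is an arbitrary choice.

```python
import numpy as np

def phi_from_variance(fluct, lags):
    """Phi(t) = sqrt(<x^2>) of the detrended increments at each lag t (Eq. 44)."""
    return np.array([np.sqrt(np.mean((fluct[l:] - fluct[:-l]) ** 2)) for l in lags])

def local_inverse_alpha(lags, phi):
    """Local slope 1/alpha(t) = d ln Phi / d ln t, via finite differences on a log grid."""
    return np.gradient(np.log(phi), np.log(lags))

lags = np.unique(np.logspace(0, 4, 40).astype(int))
# placeholder input: Brownian-like fluctuations (real data: detrended price return)
rng = np.random.default_rng(1)
fluct = np.cumsum(rng.standard_normal(300_000))
phi = phi_from_variance(fluct, lags)
inv_alpha = local_inverse_alpha(lags, phi)
print("local 1/alpha(t) at the largest lags:", np.round(inv_alpha[-5:], 3))   # ~0.5 expected here
```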
A numerical estimation of the value of \(1/\tilde{\alpha}_{t}\) was made using Eq. 46, which is shown in Fig. 3-c. Using this function, one is able to calculate \(G(t,t_{0})\) using Eq. 27, and \(g(t)\) as its time derivative (Eq. 22). Here we applied an approximation in the derivative to the Bitcoin time series to eliminate the negative values. As the final step, we are able to estimate the variable order exponents using Eq. 22. More precisely, the parameter \(D_{t}t^{\xi_{t}-1}\) is calculated for both time series and plotted in Figure 3-d. Apart from the stochastic fluctuations, we observe that this parameter decreases with time in the small time regime, showing that the diffusion process becomes slower over time and then becomes constant for a period of time. The decrease of the diffusion coefficient with time is consistent with previous analyses of the S&P500 data [31]. Note that the PDF of the detrended data becomes constant when the time reaches the detrended time window. Thus, the diffusion coefficient should also become zero when the time reaches the time window used to detrend the time series. The explanation of this trend of the diffusion coefficient is elusive, but there is no reason to expect that the diffusion coefficient remains constant in VO diffusion processes.
## V Discussion
We have proposed a variable-order differential equation to describe nonlinear fractional diffusion processes with anomalous diffusion, applicable for the regime \(1\leq q<2\). In the variable-order equation, the analytical solution is self-similar in the broad sense, with \(x\sim\phi(t)\) as the scaling variable, which is given in terms of the \(q-\)Gaussian distribution. In the constant-order equation, the scaling variable reduces to \(x\sim t^{H}\), where \(H=1/\alpha\) is the Hurst exponent and \(\alpha\) defines whether the diffusion process is either normal or super/sub diffusive. The variable-order exponents \(\alpha(t)\) and \(q(t)\) are related to time correlations and non-linear diffusion. The time-dependency of the exponents allows them to comply with the Central Limit Theorem, which requires \(\alpha(t)\to 2\) and \(q(t)\to 1\) as \(t\to\infty\). Typically, alpha is related to the Hurst exponent (\(H=1/\alpha\)) of the time series associated with the diffusion process. This can be discussed in light of the fractional Brownian motion (fBm), which is a self-similar Gaussian stochastic process characterized by stationary power-law correlated increments. In the fBm, the Hurst exponent provides a crucial link between the diffusion coefficient \(\alpha\) through the relation \(H=1/\alpha\)[9]. This exponent measures the long-range memory in time series data and can also be associated with autocorrelation patterns. Specifically, for \(0<H<0.5\) (or \(\alpha>2\)) the time series exhibits anti-correlated behavior. In the case of \(H=0.5\) (or \(\alpha=2\)), the time series is uncorrelated. Finally, for \(0.5<H<1\) (or \(\alpha<2\)), the time series displays positively correlated behaviour [64]. Moreover, in positive long-range correlated series, the autocorrelation function
follows a power-decay pattern described by \(C(s)\sim s^{-\gamma}\) where \(\gamma=2-2H\)[64].
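This relation between the Hurst exponent and the autocorrelation decay can be checked directly on synthetic fractional Gaussian noise, generated here exactly from its covariance via a Cholesky factorisation (a minimal sketch; the value of \(H\) and the short sample length are illustrative assumptions, and the "exact" column is the fGn autocovariance rather than its asymptotic power law).

```python
import numpy as np

def fgn(n, H, rng):
    """Fractional Gaussian noise of length n with Hurst exponent H, sampled exactly
    from its covariance (Cholesky construction; fine for small n only)."""
    k = np.arange(n)
    acov = 0.5 * (np.abs(k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H) - 2.0 * np.abs(k) ** (2 * H))
    cov = acov[np.abs(k[:, None] - k[None, :])]
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return chol @ rng.standard_normal(n)

H = 0.75                                  # positively correlated increments
rng = np.random.default_rng(2)
x = fgn(2000, H, rng)
for s in (1, 2, 4, 8, 16):
    sample = np.corrcoef(x[:-s], x[s:])[0, 1]
    exact = 0.5 * ((s + 1) ** (2 * H) + (s - 1) ** (2 * H) - 2.0 * s ** (2 * H))
    print(f"lag {s:3d}: sample C(s) = {sample:+.3f},  exact fGn autocovariance = {exact:+.3f}")
```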
The dynamics of the market ecosystem suggest that traders' behavior can influence autocorrelation patterns. In the stock market, traders employ various strategies, including positive feedback strategies, where traders buy after price increases and sell when prices decline, and negative feedback strategies that follow the 'buy-low sell-high' approach [65]. Sentana and Wadhwani [66] explored the connection between volatility, returns autocorrelation, and trading strategies using a GARCH model. They conducted empirical investigations using the Dow Jones index data and discovered that positive traders can lead to negative autocorrelation, while negative traders can result in positive autocorrelation. Furthermore, they found that in an index comprising numerous securities with different trading frequencies, positive cross-autocorrelation emerges, contributing to positive autocorrelation within the index. This finding aligns with our study, specifically in the context of the S&P500 market index.
In our analysis, the variable order exponents are observed in both stock markets and cryptocurrencies. In the S&P500 market \(\alpha(t)<2\) while gently increasing to the value \(\alpha(t)=2\) for large times. This behavior is consistent with the ecology of this market, consisting of short to moderate-time investors that lead to short-time correlations. The behavior is different in the Bitcoin index where \(\alpha(t)\) oscillates slightly above 2, indicating anticorrelation in the time series. This behavior is consistent with the peculiar ecology of the cryptocurrencies mainly dominated by speculators who are frequently changing their strategy to maximize utilities. For large times, the time series of price returns become uncorrelated, leading to the expectation that \(\alpha\) converges to 2 in this limit.
The underlying mechanism behind the values of the exponent \(q(t)\) may be attributed to the interaction between the equities of the stock market. This can be understood from the Langevin equation of the nonlinear FPE (Eq. 7), which is converted, using the property of the Katugampola derivative \(d^{\xi}/dt^{\xi}=t^{1-\xi}d/dt\), to [67]
\[\frac{\partial P(x,t)}{\partial t}=Dt^{\xi-1}\frac{\partial^{2}P(x,t)^{2-q}}{ \partial x^{2}}, \tag{49}\]
The corresponding Langevin equation describing the stochastic dynamics of the price return \(X(t)\) has been derived by Bourland [68]
\[X(t+dt)=X(t)+\eta(t)(Dt^{1-\xi}P(X(t),t)^{1-q})^{1/2}dt, \tag{50}\]
where \(\eta(t)\) is the white noise signal. Let us consider the case of an idealized gas of particles experiencing Brownian-like motion. For \(q=1\) and \(\xi=1\) the stochastic dynamics correspond to the classical random walk that describes ideal gases. For denser situations, \(q\) may take values larger than one, indicating that the random walk of each particle is influenced by the local density of
Figure 3: (a) \(\Phi(t)\) calculated for both S&P500 and Bitcoin price return based on the second moment, both present clear slopes. (b) \(1/\alpha(t)\) calculated as the localized slope of \(\Phi(t)\). Both S&P500 and Bitcoin present a convergence to 0.5, yet at a different pace. S&P500 converges to 0.5 faster than Bitcoin. (c) \(1/\tilde{\alpha}(t)\) calculated following Eq. 46 for S&P500 and Bitcoin. (d) Parameter \(D_{t}t^{\xi-1}\) calculated for both time series based on the relationship in Eq. 23.
the particles around its location. This physical picture can be extrapolated to financial markets as follows: assuming that \(X(t)\) is the index of a particular stock, the dependence of the fluctuations on its probability density function may be attributed to the interaction between different equities in the stock market. In fact, the S&P500 index is calculated based on the 500 largest companies in the US, and the performance of these companies cannot be assumed to be independent. The exponent \(q\approx 1.4\) is observed for price returns calculated within \(t=10^{3}\) minutes (roughly 3 trading days). For larger times \(q\) converges to 1, since the Central Limit Theorem (CLT) requires the PDF of the price return to converge to a Gaussian distribution when \(t\to\infty\). In the case of Bitcoin, the exponent \(q\) is larger (\(q\approx 1.5\)) and it converges to \(q=1\) faster (on the order of hours), which may be directly related to the transactions between different cryptocurrencies. In both cases, the CLT is guaranteed since the standard deviation of the \(q-\)Gaussian distribution is finite for \(q<5/3\). Further investigation of these peculiar dynamic features with microscopic models, such as agent-based models or order-book models, would shed some light on the underlying mechanisms of the time evolution of these exponents.
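As a quick numerical sanity check of the last statement, the sketch below (illustrative only; it assumes the standard \(q\)-Gaussian parametrization \(P(x)\propto[1-(1-q)\beta x^{2}]_{+}^{1/(1-q)}\) with \(\beta=1\), which is not a value fitted in this work) integrates the \(q\)-Gaussian numerically and compares the result with the closed-form second moment \(1/(\beta(5-3q))\), which is finite precisely for \(q<5/3\).

```python
import numpy as np
from scipy.integrate import quad

def q_gaussian(x, q, beta=1.0):
    # Unnormalized q-Gaussian for 1 < q < 3: [1 - (1 - q) beta x^2]^{1/(1-q)}.
    return (1.0 + (q - 1.0) * beta * x * x) ** (-1.0 / (q - 1.0))

beta = 1.0
for q in (1.2, 1.4, 1.5):
    Z, _ = quad(q_gaussian, -np.inf, np.inf, args=(q, beta))
    m2, _ = quad(lambda x: x * x * q_gaussian(x, q, beta), -np.inf, np.inf)
    closed_form = 1.0 / (beta * (5.0 - 3.0 * q))  # finite only for q < 5/3
    print(f"q = {q:.1f}: numerical variance = {m2 / Z:.4f}, closed form = {closed_form:.4f}")
```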
## VI Authors' contribution
Y.T. and F. G. contributed as first authors of this paper; F.G. wrote the first draft of the paper.
## Appendix A dimensional analysis
In this appendix, we describe the scaling properties of the PDFs. From the dimensional analysis of the normalization condition, one realizes that the dimension of any PDF \(P(\mathbf{x})\) is \([P(\mathbf{x})]=[\mathbf{x}]^{-d}\), where \([Q]\) denotes the _space_ dimension of the quantity \(Q\). To see this more explicitly, we apply the scaling transformation \(\mathbf{x}\to\mathbf{y}\equiv\lambda\mathbf{x}\), and using the conservation of probability \(P(\mathbf{x})d^{d}x=P(\mathbf{y})d^{d}y\), one finds:
\[P(\lambda x)=\lambda^{-d}P(x). \tag{10}\]
For a stochastic process, where \(P\) changes with time we use the root mean displacement relation \(r\equiv|\mathbf{x}|\):
\[\left\langle r^{2}\right\rangle\propto\phi(t)^{2} \tag{11}\]
to predict the form of the PDF, where \(\phi(t)\) is a function of time the form of which determines the anomalous diffusion nature. In this case, the equation 10 generalizes to:
\[P(\lambda\mathbf{x},\lambda\phi(t))=\lambda^{-d}P(\mathbf{x},\phi(t)), \tag{12}\]
which is realized using the normality condition:
\[\begin{split}\int_{-\infty}^{\infty}& P(\mathbf{x}, \phi(t))d^{d}x=\lambda^{d}\int_{-\infty}^{\infty}P(\lambda\mathbf{x},\lambda \phi(t))d^{d}x=1\\ &\to P(\lambda\mathbf{x},\lambda\phi(t))=\lambda^{-d}P( \mathbf{x},\phi(t)).\end{split} \tag{13}\]
A solution of this equation is a factorized form as follows:
\[P(\mathbf{x},t)=\frac{1}{\phi(t)^{d}}F\left[\phi(t)^{-1}\mathbf{x}\right], \tag{14}\]
where the function \(F\) is a well-behaved one to be determined using the governing equation. Note that \(F\) is invariant under \(\mathbf{x}\to\lambda\mathbf{x},\phi(t)\to\lambda\phi(t)\), and also:
\[\left\langle r^{2}\right\rangle=\frac{\int\mathrm{d}^{d}x\,r^{2}F\left[\frac{\mathbf{x}}{\phi(t)}\right]}{\int\mathrm{d}^{d}x\,F\left[\frac{\mathbf{x}}{\phi(t)}\right]}=\left[\frac{\int\mathrm{d}^{d}z\,z^{2}F\left[\mathbf{z}\right]}{\int\mathrm{d}^{d}z\,F\left[\mathbf{z}\right]}\right]\phi(t)^{2}, \tag{15}\]
where \(\mathbf{z}\equiv\frac{\mathbf{x}}{\phi(t)}\). The scaling properties of the time series are associated with the form of \(\phi(t)\). In fact, for the solution of the Eq. 7 we have \(\phi(t)=\phi_{\mathrm{SS}}(t)\) where the index "SS" points out the self-similarity law, given by [8; 9]:
\[\phi_{\mathrm{SS}}(t)\propto t^{1/\alpha}\equiv t^{H}. \tag{16}\]
where \(\alpha\) is a self-similarity exponent, and \(H\) is the Hurst exponent. Combining Eq. 5 and Eq. 16, one reaches the Eq. 1.
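As a small symbolic check of the factorized ansatz above (in \(d=1\), and with a Gaussian scaling function \(F\) chosen purely for illustration), the following sympy sketch confirms that the normalization is preserved and that the second moment scales as \(\phi(t)^{2}\).

```python
import sympy as sp

x = sp.Symbol("x", real=True)
phi = sp.Symbol("phi", positive=True)   # plays the role of phi(t) at a fixed time t

# One-dimensional scaling ansatz P(x, t) = F(x/phi)/phi with a Gaussian scaling function F.
P = sp.exp(-(x / phi) ** 2 / 2) / (sp.sqrt(2 * sp.pi) * phi)

print(sp.simplify(sp.integrate(P, (x, -sp.oo, sp.oo))))         # 1: normalization is preserved
print(sp.simplify(sp.integrate(x**2 * P, (x, -sp.oo, sp.oo))))  # phi**2: <r^2> proportional to phi(t)^2
```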
## Appendix B Relationship between FBM, time fractional FPEs and SDEs
This short appendix is devoted to some scaling properties of fractional Brownian motion (fBm), focusing in particular on its Langevin equation and the corresponding Fokker-Planck equation (FPE). Our primary objective is to explore the relationship between the time-fractional generalized Langevin and FPE equations for fBm. This example sheds light on the essential difference between the driving processes of Bm, as an example of semi-martingales, and fBm, as a representative of non-semi-martingales. The complexity of the latter case arises mainly from the presence of correlations, resulting in sub- or super-diffusion. This also shows the consequences of the presence of non-local effects [9].
The classical FPE establishes a relationship between the SDE driven by Bm and its associated partial differential equation (PDE). The fBm \(B^{H}:=\{B^{H}(t),t\geqslant 0\}\) is a family of stochastic processes indexed by the Hurst index \(H\in(0,1)\). For each \(H\), the process \(B^{H}\) is defined as a weighted moving average of a Bm process:
\[B^{H}(t)=\int_{0}^{t}(t-\tau)^{H-1/2}dB(\tau), \tag{17}\]
where \(B:=\{B(t),t\geq 0\}\) is the Brownian motion and \(H\) is the Hurst parameter. The fBm with variance \(\sigma(t)\) is the solution of the following fractional SDE [69]:
\[\frac{d^{\xi}X}{d^{\xi}t}=\sigma(t)\eta(t) \tag{12}\]
where \(\eta(t)=dB/dt\) is the normalized white noise, \(1/2<\xi=H+1/2\leq 1.5\) and \(d^{\xi}/dt^{\xi}\) is the Riemann-Liouville fractional operator that is defined as
\[\frac{d^{\xi}f}{dt^{\xi}}:=\frac{1}{\Gamma(\xi)}\int_{0}^{t}{(t-\tau)^{\xi-1}f (\tau)d\tau} \tag{13}\]
Further, the fractional FPE of the fBm process is given by [69]
\[\frac{d^{2H}P_{X}(x,t)}{dt^{2H}}=\frac{\Gamma(2H)}{2\Gamma^{2}(H+1/2)}\sigma(t )\frac{\partial^{2}P_{X}(x,t)}{\partial x^{2}} \tag{14}\]
There are two main differences between the fBm and the fractional q-Gaussian process investigated in this paper: first, the FPE of the fBm is linear and its solution is expressed in terms of Gaussian distribution, while the latter is nonlinear and the solution is given in terms of the q-Gaussian distribution. Second, the fractional FPE of the fBm involves non-local fractional derivatives while the fractional q-Gaussian process involves local (Katugampola) fractional operators. Yet in both cases, one can recover the Bm by either taking \(H=1/2\) in the fBm or \(q=1\) and \(\alpha=1\) in the fractional q-Gaussian diffusion.
On the other hand, discussing the relationship between the fractional FPE (or the governing equation) and the SDE of the associated time series is equivalent to discussing the relation between the auto-correlation of the time series and the fractionalization scheme in the governing equation. The key point is the scaling property of the governing equation describing a self-similar time series. If \(\{X(t)\}_{t\in\mathbb{Z}}\) denotes a self-similar time series with the property
\[\sqrt{\left\langle X^{2}\right\rangle}\propto t^{H} \tag{15}\]
Then the corresponding governing equation should have the same symmetry. It means the invariance of the PDF (of the governing equation) up to a scaling factor, i.e.
\[P(X,t)=t^{-H}F(X/t^{H}), \tag{16}\]
where \(F\) is a function to be fixed by the governing equation. fBms are described by Eq. (16) when \(F\) is a Gaussian,
\[P\left(X,t\right)\propto t^{-H}\exp\left[-\frac{1}{2}\left(\frac{X}{t^{H}} \right)^{2}\right]. \tag{17}\]
In fact, the governing equation for a self-similar time series should include fractional operators, or one should use a space- or time-dependent diffusion coefficient with power-law dependence. The symmetry of the system can easily be found in the corresponding PDF of the time series, and also by calculating the second moment of \(X\), where \(X\) denotes the self-similar time series. This means the fractionalization exponent is manifest in the PDF.
Consider a general \(H\)-self-similar time series \(\{X(t)\}_{t\in\mathbb{Z}}\) defined by the relation \(\{X(ct)\}_{t\in\mathbb{Z}}\equiv\{c^{H}X(t)\}_{t\in\mathbb{Z}}\), where \(c>0\), and \(H\) is the Hurst exponent. If this process has stationary increments \(Y_{n}=X(n)-X(n-1)\), then the auto-correlation \(\gamma_{Y}(k)\equiv\langle Y_{k}Y_{0}\rangle-\langle Y_{k}\rangle\langle Y_{0}\rangle\) behaves like \(k^{2d-1}\) as \(k\rightarrow\infty\), where \(d=H-\frac{1}{2}\), and \(0<d<1/2\), ensuring that \(\sum_{k=-\infty}^{\infty}\gamma_{Y}(k)=\infty\). From a spectral domain perspective, the spectral density of \(\{Y_{n}\}\) behaves as \(\omega^{-2d}\) as the frequency \(\omega\to 0\). This relates self-similar time series to fractionalization; it applies to the price return time series, which becomes stationary after normalizing the detrended data set (see Eq. 47) [9]. Let us consider the special case \(\Phi(t)\sim t^{1/\alpha_{t}}\) in Eq. 44. Following these facts, we suggest a relation between the Hurst exponent \(H_{t}=1/\alpha_{t}\) given in Eq. (11) and the self-similar function of Eq. 42 for the VO-nonlinear case.
## Appendix C Some details of VO calculations for PME
In this appendix, we introduce and inspect a VO extension of the Katugampola fractional operator, denoted as VO-K in this paper. We outline the definition of this operator and discuss some of its key properties. For a more comprehensive understanding of the Katugampola fractional operator and its underlying principles, we recommend referring to [67].
A VO-K fractional operator, which is used to construct the VO-PME in Sec. III is defined as
\[\mathcal{D}^{\alpha(t)}f(t)=\lim_{\epsilon\to 0}\frac{f(te^{\epsilon t^{-\alpha(t)}})-f(t)}{\epsilon}, \tag{18}\]
for \(t>0\) and \(\alpha(t)\in(0,1]\). If \(0\leq\alpha(t)<1\), the VO-K operator generalizes the classical calculus properties of polynomials. Furthermore, if \(\alpha(t)=1\), the definition is equivalent to the classical definition of the first-order derivative of the function \(f\). When \(\alpha(t)\in(n,n+1]\) (for some \(n\in\mathbb{N}\), and \(f\) is an \(n-\)differentiable at \(t>0\)), the above definition generalizes to
\[\mathcal{D}^{\alpha(t)}f(t)=\lim_{\epsilon\to 0}\frac{f^{(n)}(te^{\epsilon t^{n-\alpha(t)}})-f^{(n)}(t)}{\epsilon}.\]
If \(f\) is \((n+1)-\)differentiable at \(t>0\), then we have
\[\mathcal{D}^{\alpha(t)}f(t)=t^{n+1-\alpha(t)}f^{(n+1)}(t). \tag{19}\]
The properties of the VO-K derivatives are a simple ex
tension of the ordinary derivatives:
\[\mathcal{D}^{\alpha(t)}[af+bg] =a\mathcal{D}^{\alpha(t)}(f)+b\mathcal{D}^{\alpha(t)}(g),\ \ \forall a,b\in\mathbb{R},\] \[\mathcal{D}^{\alpha(t)}[C] =0,\ \ C\in\mathbb{R},\] \[\mathcal{D}^{\alpha(t)}[fg] =f\mathcal{D}^{\alpha(t)}(g)+g\mathcal{D}^{\alpha(t)}(f),\] \[\mathcal{D}^{\alpha(t)}[f/g] =\frac{g\mathcal{D}^{\alpha(t)}(f)-f\mathcal{D}^{\alpha(t)}(g)}{g ^{2}},\] \[\mathcal{D}^{\alpha(t)}(fog)(t) =f^{\prime}(g(t))\mathcal{D}^{\alpha(t)}g(t),\]
where \(fog(t)\equiv f(g(t))\). The proof of the properties of the variable-order Katugampola fractional operator (VO-K) is analogous to the proof presented in [67]; in our case, we simply replace the constant order parameter \(\alpha\) with the variable order parameter \(\alpha(t)\).
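For exponents in the first-order range \(0<\alpha(t)\leq 1\), where the operator acts as \(t^{1-\alpha(t)}\,d/dt\) at each fixed \(t\), these rules can be verified mechanically. The sympy sketch below (illustrative only; \(\alpha\) is frozen at a given time and treated as a positive symbol) checks the constant, product, and quotient rules.

```python
import sympy as sp

t = sp.Symbol("t", positive=True)
alpha = sp.Symbol("alpha", positive=True)   # plays the role of alpha(t) frozen at a fixed t
f = sp.Function("f")(t)
g = sp.Function("g")(t)

def vo_k(expr):
    # Local form of the operator for exponents in (0, 1]: D^alpha h = t^(1 - alpha) * h'(t).
    return t ** (1 - alpha) * sp.diff(expr, t)

print(sp.simplify(vo_k(sp.Integer(7))))                                # 0  (constants)
print(sp.simplify(vo_k(f * g) - (f * vo_k(g) + g * vo_k(f))))          # 0  (product rule)
print(sp.simplify(vo_k(f / g) - (g * vo_k(f) - f * vo_k(g)) / g**2))   # 0  (quotient rule)
```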
Throughout this paper, the notations \(\mathcal{D}^{\alpha(t)}\) and \(\frac{\partial^{\alpha(t)}}{\partial t^{\alpha(t)}}\) are used with the same meaning. The fractional Katugampola calculation for the VO-PME is performed as in Sec. III: starting from Eq. 4 and Eq. (11), and applying the properties of the VO-K derivative, we get
\[\frac{\partial^{\xi(t)}}{\partial t^{\xi(t)}} \left(\frac{1}{\phi(t)}F\left(\frac{x}{\phi(t)}\right)\right)=F \left(\frac{x}{\phi(t)}\right)\left(\frac{-\mathcal{D}^{\xi(t)}\phi(t)}{\phi^ {2}(t)}\right)\] \[+\frac{1}{\phi(t)}\left(F^{{}^{\prime}}\left(\frac{x}{\phi(t)} \right)\mathcal{D}^{\xi(t)}\left(\frac{x}{\phi(t)}\right)\right)\] \[=F\left(z\right)\left(\frac{-\mathcal{D}^{\xi(t)}\phi(t)}{\phi^{ 2}(t)}\right)+zF^{{}^{\prime}}\left(z\right)\left(\frac{-\mathcal{D}^{\xi(t)} \phi(t)}{\phi^{2}(t)}\right)\] \[=\frac{-\mathcal{D}^{\xi(t)}\phi(t)}{\phi^{2}(t)}\frac{d}{dz}[zF( z)]. \tag{100}\]
Then, we get the following two equations:
\[\begin{split}&\partial_{x}^{2}P^{\nu(t)}(x,t)=\frac{1}{\phi^{\nu(t)+2}}\frac{d^{2}}{dz^{2}}F^{\nu(t)},\\ &\frac{\partial^{\xi(t)}P(x,t)}{\partial t^{\xi(t)}}=\frac{-1}{ \phi^{2}(t)}\frac{\partial^{\xi(t)}\phi}{\partial t^{\xi(t)}}\left[F+z\frac{ d}{dz}F\right],\end{split} \tag{101}\]
so that,
\[\frac{-1}{\phi^{2}(t)}\frac{\partial^{\xi(t)}\phi}{\partial t^{\xi(t)}}\frac{ d}{dz}[zF]=\frac{D(t)}{\phi^{\nu(t)+2}}\frac{d^{2}}{dz^{2}}F^{\nu(t)}. \tag{102}\]
## Appendix D The local VO fractional non-linear time diffusion equation with drift
Drift is often an unavoidable part of stochastic systems and should be analyzed in detail for every case study in order to control its effects. Although the equations can be written for a general drift term, the situation becomes much simpler when the drift depends only on time, as is the case for many physical systems of interest. In this case, the governing equation is:
\[\frac{\partial^{\xi(t)}}{\partial t^{\xi(t)}}P(x,t)=-a(t)\frac{\partial P(x,t) }{\partial x}+D(t)\frac{\partial^{2}P^{\nu(t)}(x,t)}{\partial x^{2}}, \tag{103}\]
Through the change of variable \(\tau=t^{\xi(t)}\) and the VO Katugampola (VO-K) fractional derivative (see Appendix C), provided \(h(t)=\xi(t)\ln t\) has an inverse function, we have:
\[\partial_{\tau}P(x,t)=-a_{1}(\tau)\partial_{x}P(x,t)+D_{1}(\tau)\partial_{x}^{ 2}P^{\nu_{1}(\tau)}(x,t),\]
where \(a_{1}(\tau)=a(t(\tau))(t\xi^{\prime}(t)\ln t+\xi(t))^{-1}\), \(D_{1}(\tau)=D(t(\tau))(t\xi^{\prime}(t)\ln t+\xi(t))^{-1}\), and \(\nu_{1}(\tau)=\nu(t(\tau))\). By using the change of variable \((s,y)=(\tau,x-x_{0}-f(\tau))\), where \(f(\tau)=\int_{0}^{\tau}a_{1}(\tau^{\prime})d\tau^{\prime}\), and using the fact that \(\frac{\partial y}{\partial\tau}=-a_{1}(\tau)\) and \(\partial_{\tau}+a_{1}(\tau)\partial_{x}=\partial_{s}\), one finds that the governing equation \(P(y,\tau)\) is:
\[\partial_{\tau}P(y,\tau)=D(\tau)\partial_{y}^{2}P^{\nu(\tau)}(y,\tau),\]
for which the solution is (\(x_{0}\equiv 0\) and \(k\equiv 1\) and \(k_{1}\equiv 0\)):
\[P(y,\tau|y_{0},\tau_{0})=\frac{A_{q}(t)}{\left(\int_{\tau_{0}}^{ \tau}g(s)ds\right)^{\frac{1}{\nu(\tau)+1}}}\] \[\times\left(c+\frac{(\nu(\tau)-1)}{2\nu(\tau)}\frac{y^{2}}{ \left(\int_{\tau_{0}}^{\tau}g(s)ds\right)^{\frac{2}{\nu(\tau)+1}}}\right)^{ \frac{1}{\nu(\tau)-1}}, \tag{104}\]
where \(g(s)=D(s)(1+\nu(s)).\) Let us equate the \(P(y,\tau)\):
\[P(x,t)=\frac{\partial y}{\partial x}P(y,\tau(t)).\]
Then, we obtain that:
\[\begin{split}& P(x,t|_{0},t_{0})=\frac{A_{q}(t)}{\left(\int_{t_{0}}^ {t}g(s)ds\right)^{\frac{1}{\nu(t)+1}}}\\ &\times\left(c+\frac{(\nu(t)-1)}{2\nu(t)}\frac{(x-f(t))^{2}}{ \left(\int_{t_{0}}^{t}g(s)ds\right)^{\frac{2}{\nu(t)+1}}}\right)^{\frac{1}{ \nu(t)-1}},\end{split} \tag{105}\]
where \(A_{q}\) is a normalization factor, and \(c\) and \(k\) are constants. Eq. (105) is a VO \(q\)-Gaussian solution with a drift.
|
2301.00642 | Special polynomials and new real-rootedness results | In this paper, we exhibit new monotonicity properties of roots of families of
orthogonal polynomials $P_n^{(z)}(x)$ depending polynomially on a parameter
(Laguerre and Gegenbauer). We show that $P_n^{(z)}(x)$ are realrooted in $z$
for $x$ in the support of orthogonality. As an application we show
realrootedness in $x$ and interlacing properties of $\partial_z^kP_n^{(z)}(x)$
for $k\leq n$ and $z \geq 0$, establishing a dual approach to orthogonality. | Aurelien Xavier Gribinski | 2022-12-30T15:07:32Z | http://arxiv.org/abs/2301.00642v2 | # Special polynomials and new real-rootedness results
###### Abstract.
In this paper, we show that for some orthogonal polynomials \(P_{n}^{(z)}(x)\) showing up in physics, namely Laguerre and Gegenbauer, \(P_{n}^{(z)}(x)\) are realrooted in \(z\) for \(x\) in the support of orthogonality. As an application, we show realrootedness in \(x\) and interlacing properties of \(\partial_{z}^{k}P_{n}^{(z)}(x)\) for \(k\leq n\) and \(z>0\).
1. Introduction
2. General strategy
3. Laguerre polynomials
4. Gegenbauer polynomials
5. Applications to realrootedness in \(x\)
## 1. Introduction
Orthogonal polynomials like generalized Laguerre and Gegenbauer polynomials have long been studied, and show up in all fields of mathematics and physics. However, little has been said about the properties of such polynomials when we vary the underlying parameter (see [1]). We study families of generalized orthogonal polynomials \(P_{n}^{(z)}(x)\) depending on a parameter \(z\) from a new angle, that is, as bivariate polynomials \(P_{n}(x,z)\). We fix the usual variable \(x\) and consider them instead as polynomials in \(z\). We show that they are real-rooted in \(z\) for \(x\) in the support of the underlying measure of orthogonality, and that their roots are monotonic in \(x\). Furthermore, we show that when we differentiate these orthogonal polynomials with respect to the parameter at \(z>0\) and consider them as polynomials in \(x\), then they are realrooted in \(x\). Such polynomials (derivatives with respect to the parameter) seem to have many nice properties similar to the corresponding orthogonal polynomials from which they are derived, and yet they have never been studied.
## 2. General strategy
Consider \(P_{n}^{(z)}(x)\), a family of polynomials depending on a parameter \(z\). We want to show that they are real-rooted in \(z\) for a fixed \(x\) in a given interval.
The strategy is as follows
* We check that for \(x\) at one of the extreme point of the interval it is real-rooted.
* We show that locally around this extreme point the roots in \(z\) are all monotonous.
* We show that there can't be a shared zero in \(z\) for \(\partial_{x}P_{n}(x,z)\) and \(P_{n}(x,z)\) or equivalently for \(P_{n-1}(x,z)\) and \(P_{n}(x,z)\).
* The roots in \(z\) are therefore monotonous when they are well-defined.
* We show that the roots in \(z\) of \(P_{n}(x,z)\) and simultaneously \(P_{n-1}(x,z)\) and \(\partial_{x}P_{n}(x,z)\) interlace as long as they are well-defined.
* We extend the local properties by exhibiting an ODE to which all roots in \(z\) are solutions and show that there has to be explosion at the other extreme point of the interval, all roots evolving monotonously along the way.
First we derive a way to locally prove real-rootedness if it is known at one of the extreme points of the interval.
**Lemma 2.1** (Local existence of the roots and smoothness).: _Take \(a\in\mathbb{R}\). Assume \(P(x,z)\) is a bivariate polynomial such that \(P(a,z)\) is realrooted in \(z\) of degree \(j\) and has only simple roots in \(z\) (call them \(z_{i}(a)\) for \(i=1,\dots,j\)). Assume that \(P(x,z)\) has degree at most \(j\) in \(z\) for all \(x\). Then for \(x\) in a neighborhood of \(a\), \(P(x,z)\) is also realrooted in \(z\) of degree \(j\) with simple roots \(z_{i}(x)\). Furthermore, the roots \(z_{i}(x)\) are \(C^{\infty}\) functions of \(x\)._
Proof.: Consider the equation \(P(x,z)=0\) around the points \(\big{(}a,z_{i}(a)\big{)}\). We have \(\partial_{z}P(x,z)_{|_{x=a,z=z_{i}(a)}}\neq 0\) as the roots are simple (a simple root cannot be a root of the derivative in \(z\)). Then, using the implicit function theorem, we can find in the neighborhood of each point a \(C^{\infty}\) function \(z_{i}(x)\) which is the only solution to the equation \(P(x,z)=0\) on this neighborhood. Therefore we have found \(j\) roots, and this is the maximal number of roots for a fixed \(x\), as the polynomial is of degree at most \(j\) in \(z\).
## 3. Laguerre polynomials
Let \(L_{n}^{(z)}(x)\) be the Laguerre polynomials with complex parameter \(z\). It is a polynomial in \(\mathbb{R}[x,z]\).
**Theorem 3.1**.: _For fixed \(x_{0}\in[0,+\infty[\), \(L_{n}^{(z)}(x_{0})\) is a real-rooted polynomial in \(z\) of degree \(n\). Furthermore, its roots in \(z\) are strictly increasing to \(+\infty\) when \(x_{0}\) moves along \([0,+\infty[\)._
Proof.: Let's recall the hypergeometric confluent definition
\[L_{n}^{(z)}(x) =\binom{n+z}{n}M(-n,z+1,x)\] \[=\prod_{l=1}^{n}\frac{z+n+1-l}{l}\sum_{k=0}^{n}\frac{(-1)^{k}n!} {(n-k)!\prod_{l=0}^{k-1}(z+1+l)}x^{k}\] \[=\sum_{k=0}^{n}\frac{(-1)^{k}\prod_{l=1}^{n}(z+l)}{(n-k)!\prod_{l =1}^{k}(z+l)}x^{k}\] \[=\sum_{k=0}^{n}\frac{(-1)^{k}\prod_{j=k+1}^{n}(z+j)}{(n-k)!}x^{k}\] \[=\frac{1}{n!}z^{n}+\Big{(}\frac{\sum_{j=1}^{n}j}{n!}-\frac{x}{(n- 1)!}\Big{)}z^{n-1}+R_{n-2}(z,x)\]
where \(R_{n-2}(z,x)\) is of degree lower than \(n-2\) in \(z\). We will henceforth write \(L_{n}(x,z)\) as it is clearly a bivariate polynomial of degree \(n\) in \(z\) with front coefficient \(\frac{1}{n!}\). Let's furthermore decompose \(L_{n}(x,z)\) using a priori complex roots \(\lambda_{i}(x)\):
\[L_{n}(x,z)=\frac{1}{n!}\prod_{i=1}^{n}\big{(}z-\lambda_{i}(x)\big{)}\]
where we order the roots by decreasing modulus: \(|\lambda_{1}(x)|\geq|\lambda_{2}(x)|\geq...\geq|\lambda_{n}(x)|\).
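Before starting the proofs, here is a quick numerical illustration of Theorem 3.1 (it plays no role in the arguments below). It uses sympy's `assoc_laguerre` with a symbolic parameter; the degree \(n=5\) and the sample points \(x_{0}\) are chosen only for illustration. For each \(x_{0}>0\) all \(n\) roots in \(z\) come out real, and they increase with \(x_{0}\); at \(x_{0}=0\) they are exactly \(-1,\dots,-n\), consistent with \(L_{n}(0,z)=\prod_{l=1}^{n}(z+l)/n!\).

```python
import sympy as sp

z = sp.Symbol("z")
n = 5

# Theorem 3.1, numerically: L_n^{(z)}(x0) as a degree-n polynomial in z, for a few x0 > 0.
# (At x0 = 0 the roots are exactly -1, ..., -n.)
for x0 in (sp.Rational(1, 2), sp.Integer(1), sp.Integer(3), sp.Integer(6)):
    poly = sp.Poly(sp.expand(sp.assoc_laguerre(n, z, x0)), z)
    roots = poly.nroots()
    n_real = sum(1 for r in roots if abs(sp.im(r)) < 1e-9)
    print(f"x0 = {x0}: degree {poly.degree()}, {n_real} real roots in z:",
          sorted(round(float(sp.re(r)), 4) for r in roots))
```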
**Lemma 3.2** (Local realrootedness).: \(L_{n}(x,z)\) _is real rooted of degree \(n\) in \(z\) with simple roots in a neighborhood of \(x=0\)._
Proof.: We have
\[L_{n}(0,z)=\frac{\prod_{l=1}^{n}(z+l)}{n!}\]
so that we can apply Lemma 2.1 with \(a=0\).
**Lemma 3.3** (Local increasing property).: _The roots of \(L_{n}(x,z)\) in \(z\) are all strictly increasing when \(x\) is, in a neighborhood of \(x=0\)._
Proof.: To prove this, we need some information on the derivatives with respect to \(x\) of the roots in the neighborhood of \(0\), which we know are going to be real by the previous lemma.
**Lemma 3.4**.: \(\frac{d^{l}\lambda_{i}(x)}{dx^{l}}_{|_{x=0}}=0\) _for \(1\leq l<i\), and \(\frac{d^{i}\lambda_{i}(x)}{dx^{i}}_{|_{x=0}}>0\)._
Proof.: We get, for all \(1\leq l\leq n\),
\[\partial_{x}^{l}L_{n}(x,z)_{|_{x=0}}=l!(-1)^{l}\frac{\prod_{j=l+1}^{n}(z+j)}{( n-l)!}\]
Notice that \(\lambda_{i}(0)=-i\) for \(i=1...n\), so we see that for \(l\leq i-1\),
\[\partial_{x}^{l}L_{n}\big{(}0,\lambda_{i}(0)\big{)}=l!(-1)^{l}\frac{\prod_{j= l+1}^{n}(\lambda_{i}(0)+j)}{(n-l)!}=0\]
And
\[\partial_{x}^{i}L_{n}\big{(}0,\lambda_{i}(0)\big{)}=i!(-1)^{i}\frac{\prod_{j= i+1}^{n}(\lambda_{i}(0)+j)}{(n-i)!}=i!(-1)^{i}\]
On the other hand,
\[\partial_{z}L_{n}\big{(}0,\lambda_{i}(0)\big{)}=\frac{1}{n!}\prod_{l=1,l\neq i }^{n}(-i+l)=\frac{i!(-1)^{i-1}(n-i)!}{n!}\]
Now we have \(L_{n}\big{(}x,\lambda_{i}(x)\big{)}=0\) for all \(i=1...n\), by definition, so differentiating with respect to \(x\), we get
\[\frac{d\lambda_{i}(x)}{dx}=-\frac{\partial_{x}L_{n}}{\partial_{z}L_{n}}\big{(} x,\lambda_{i}(x)\big{)}\]
Note that the denominator is nonzero as the roots in \(z\) are simple at \(0\) ( so they won't be roots of the derivative in \(z\)). Using Leibniz's formula and induction on \(l\), we get for \(i>l\geq 1\),
\[\frac{d^{l}\lambda_{i}(x)}{dx^{l}}_{|_{x=0}}=0\]
And
\[\frac{d^{i}\lambda_{i}(x)}{dx^{i}}_{|_{x=0}}=-\frac{\partial_{x}^{i}L_{n}}{ \partial_{z}L_{n}}\big{(}0,\lambda_{i}(0)\big{)}=\frac{n!}{(n-i)!}>0\]
We conclude by a Taylor expansion around \(0\) as
\[\lambda_{i}(x)=-i+\frac{x^{i}}{i!}\frac{n!}{(n-i)!}+o(x^{i})\]
**Lemma 3.5** (Distinct roots, degree and derivative wise).: _Assume \(\lambda_{i}(x)\) is real for \(x\in]0,b_{i}[\), \(b_{i}>0\). Then \(\partial_{x}L_{n}(x,\lambda_{i}(x))\) can't be zero for \(x\in]0,b_{i}[\). Therefore it has a constant sign on this interval. Equivalently, \(L_{n-1}(x,\lambda_{i}(x))\) can't be zero either: that is we can't have a nontrivial shared real root for \(L_{n-1}(x,z)\) and \(L_{n}(x,z)\)._
Proof.: Traditional results on the simplicity of the roots can't be used because they are true only for \(z\geq 0\). By definition, \(L_{n}\big{(}x,\lambda_{i}(x)\big{)}=0\). Then the usual differential equation still holds
\[x\partial_{x}L_{n}(x,z)=nL_{n}(x,z)-(n+z)L_{n-1}(x,z) \tag{1}\]
Let's assume by contradiction that \(\partial_{x}L_{n}\big{(}x_{0},\lambda_{i}(x_{0})\big{)}=0\) for some \(i\) and \(x_{0}\in]0,b_{i}[\). As \(\partial_{x}L_{n}(x,\lambda_{i}(x))\) is nonzero in a neighborhood of \(x=0\), \(x>0\)(local monotonicity above), then we can assume \(x_{0}\) is the smallest \(x>0\) such that \(\partial_{x}L_{n}\big{(}x,\lambda_{i}(x)\big{)}=0\). Therefore as
\[\frac{d\lambda_{i}(x)}{dx}=-\frac{\partial_{x}L_{n}}{\partial_{z}L_{n}}\big{(} x,\lambda_{i}(x)\big{)}\]
we have that on \(]0,x_{0}]\), \(\lambda_{i}(x)\) is strictly increasing in \(x\). As \(\lambda_{i}(0)\geq-n=\lambda_{n}(0)\) for all \(i\), we get \(\big{(}n+\lambda_{i}(x_{0})\big{)}>0\), and the assumption is therefore equivalent to \(L_{n-1}\big{(}x_{0},\lambda_{i}(x_{0})\big{)}=0\) by Equation 1. But then, as the following recurrence relation is still valid
\[(n+1)L_{n+1}(x,z)=(2n+1+z-x)L_{n}(x,z)-(n+z)L_{n-1}(x,z)\]
we also get by induction \(L_{n+k}(x_{0},\lambda_{i}(x_{0}))=0\) for all \(k\in\mathbb{N}\). Using then the equality
\[\partial_{x}L_{n+k}(x,z)=-L_{n+k-1}(x,z+1)\]
we get that \(L_{n+k-1}(x_{0},\lambda_{i}(x_{0})+1)=0\) for all \(k\in\mathbb{N}\). Using induction, applying the previous recurrence relations successively, we then get that \(L_{n+k-1}(x_{0},\lambda_{i}(x_{0})+j)=0\) for all \(j\in\mathbb{N}\). For \(j\geq n\), the parameter \(\lambda_{i}(x_{0})+j\) is positive and the polynomials \(L_{n+k-1}(x,\lambda_{i}(x_{0})+j)\) are standard Laguerre polynomials. It would mean that successive Laguerre polynomials of parameter \(\lambda_{i}(x_{0})+j\) have the root \(x_{0}\) in common, so that their derivatives share this root too, which is absurd as the roots of Laguerre polynomials are simple by classical orthogonality. We conclude that \(L_{n-1}(x,\lambda_{i}(x))\) as well as \(\partial_{x}L_{n}\big{(}x,\lambda_{i}(x)\big{)}\) cannot vanish on \(]0,b_{i}[\), and therefore \(\partial_{x}L_{n}\big{(}x,\lambda_{i}(x)\big{)}\) has a constant sign for \(x\in]0,b_{i}[\).
**Corollary 3.6** (Extended monotonicity).: _Assume \(\lambda_{i}(x)\) is real for \(x\in]0,b_{i}[\), \(b_{i}>0\). Then it follows from the previous proof that for \(x\in]0,b_{i}[\)_
\[\frac{d\lambda_{i}(x)}{dx}>0\]
**Theorem 3.7** (Interlacing roots, degreewise, simple roots).: _Consider an interval \(I=[0,b[\) such that \(L_{n}(x,z)\) has real roots in \(z\) on \(I\), then the same will be true of \(L_{n-1}(x,z)\) and their roots interlace. Furthermore, the interlacing is strict for \(x>0\) and both polynomials have simple roots on \(I\)._
Proof.: Let's write
\[L_{n}(x,z)=\frac{1}{n!}\prod_{i=1}^{n}\big{(}z-\lambda_{i}^{n}(x)\big{)} \hskip 56.905512ptL_{n-1}(x,z)=\frac{1}{(n-1)!}\prod_{i=1}^{n-1}\big{(}z- \lambda_{i}^{n-1}(x)\big{)}\]
and show that all \(x\in I\), all \(i\leq n-1\):
\[\lambda_{i}^{n}(x)\geq\lambda_{i}^{n-1}(x)\geq\lambda_{i+1}^{n}(x)\]
with strict inequalities for \(x>0\). We first check the property locally, that is a neighborhood of \(0\).
\[\lambda_{i}^{n}(0)=-i \lambda_{i}^{n-1}(0)=-i \lambda_{i+1}^{n}(0)=-(i+1)\]
We have
\[\frac{d^{i}\lambda_{i}^{n}(x)}{dx^{i}}_{|_{x=0}} =\frac{n!}{(n-i)!}\] \[=\frac{n}{n-i}\frac{(n-1)!}{(n-1-i)!}\] \[=\frac{n}{n-i}\frac{d^{i}\lambda_{i}^{n-1}(x)}{dx^{i}}_{|_{x=0}}\]
As \(\frac{n}{n-i}>1\), we conclude that for all \(i\leq n-1\), \(\frac{d^{i}\lambda_{i}^{n}(x)}{dx^{i}}_{|_{x=0}}>\frac{d^{i}\lambda_{i}^{n-1}( x)}{dx^{i}}_{|_{x=0}}\).
We can then do a Taylor expansion around \(x=0\):
\[\lambda_{i}^{n}(x)=-i+\frac{x^{i}}{i!}\frac{d^{i}\lambda_{i}^{n}(x)}{dx^{i }}_{|_{x=0}}+o(x^{i})\quad\lambda_{i}^{n-1}(x)=-i+\frac{x^{i}}{i!} \frac{d^{i}\lambda_{i}^{n-1}(x)}{dx^{i}}_{|_{x=0}}+o(x^{i})\]
It is then clear that in a neighborhood of \(0\), for \(x>0\), \(\lambda_{i}^{n}(x)>\lambda_{i}^{n-1}(x)\).
As \(\lambda_{i}^{n-1}(0)-\lambda_{i+1}^{n}(0)=1\), we also get \(\lambda_{i}^{n-1}(x)>\lambda_{i+1}^{n}(x)\) in a neighborhood of \(0\). Now, as for all \(i\) the roots \(\lambda_{i}^{n}(x),\lambda_{i}^{n-1}(x),\lambda_{i+1}^{n}(x)\) are continuous functions of \(x\), if by contradiction such inequalities were to fail for some \(x\in I\), then there would exist \(x_{0}\) such that \(\lambda_{i}^{n}(x_{0})=\lambda_{i}^{n-1}(x_{0})\) or \(\lambda_{i}^{n-1}(x_{0})=\lambda_{i+1}^{n}(x_{0})\). But then this would mean that \(\lambda_{i}^{n-1}(x_{0})\) is a root of \(L_{n}(x_{0},z)\) and \(L_{n-1}(x_{0},z)\), which is impossible by Lemma 3.5. Therefore we conclude that the inequality
\[\lambda_{i}^{n}(x)>\lambda_{i}^{n-1}(x)>\lambda_{i+1}^{n}(x)\]
holds for all \(x\in I\) with \(x>0\) and \(i\leq n-1\).
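The interlacing can also be observed numerically. The sketch below (illustrative values of \(n\) and \(x_{0}\) only; it is not part of the proof) compares the roots in \(z\) of \(L_{n}\) and \(L_{n-1}\) at a fixed \(x_{0}>0\) and checks the strict interlacing pattern.

```python
import sympy as sp

z = sp.Symbol("z")
n, x0 = 6, sp.Rational(3, 2)   # illustrative choices; any fixed x0 > 0 works here

roots_n   = sorted(float(sp.re(r)) for r in sp.Poly(sp.assoc_laguerre(n, z, x0), z).nroots())
roots_nm1 = sorted(float(sp.re(r)) for r in sp.Poly(sp.assoc_laguerre(n - 1, z, x0), z).nroots())

print("roots of L_n   in z:", [round(r, 4) for r in roots_n])
print("roots of L_n-1 in z:", [round(r, 4) for r in roots_nm1])
# Strict interlacing (ascending order): each root of L_{n-1} lies between consecutive roots of L_n.
print(all(roots_n[i] < roots_nm1[i] < roots_n[i + 1] for i in range(n - 1)))
```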
**Theorem 3.8** (Interlacing roots, derivative).: _Consider an interval \(I=[0,b[\) such that \(L_{n}(x,z)\) has real roots in \(z\) on \(I\), then the same will be true of \(\partial_{x}L_{n}(x,z)\) and the roots of the two polynomials interlace and are simple._
Proof.: We bring ourselves back to a variant of the previous theorem by using the equality
\[\partial_{x}L_{n}(x,z)=-L_{n-1}(x,z+1)\]
That the roots are simple follows from Theorem 3.7. So it amounts to proving that \(L_{n-1}(x,z+1)\) and \(L_{n}(x,z)\) interlace. We want to show more precisely that for all \(x\in I\), with \(x>0\), and all \(i\leq n-1\),
\[\lambda_{i}^{n}(x)\geq\lambda_{i}^{n-1}(x)-1\geq\lambda_{i+1}^{n}(x)\]
with strict inequalities for \(x>0\). First we check these inequalities in a neighborhood of \(0\). The inequality \(\lambda_{i}^{n}(x)>\lambda_{i}^{n-1}(x)-1\) clearly holds in a neighborhood of \(0\) since \(\lambda_{i}^{n}(0)=\lambda_{i}^{n-1}(0)\). So the nontrivial one is the other one, \(\lambda_{i}^{n-1}(x)-1>\lambda_{i+1}^{n}(x)\) for \(x>0\). We have equality at the origin since \(\lambda_{i}(0)-1=\lambda_{i+1}(0)\). Then we look at the Taylor expansions around \(x=0\):
\[\lambda_{i}^{n-1}(x)-1=\lambda_{i+1}(0)+\frac{x^{i}}{i!}\frac{d^{i} \lambda_{i}^{n-1}(x)}{dx^{i}}_{|_{x=0}}+o(x^{i})\]
\[\lambda_{i+1}^{n}(x)=\lambda_{i+1}(0)+\frac{x^{i+1}}{(i+1)!}\frac{d^{i+1} \lambda_{i+1}^{n}(x)}{dx^{i+1}}_{|_{x=0}}+o(x^{i+1})\]
It is clear then that locally, for \(x>0\), \(\lambda_{i}^{n-1}(x)-1>\lambda_{i+1}^{n}(x)\), as \(x^{i+1}\ll x^{i}\). We extend the inequality to the whole interval \(I\) by noticing again that if the inequalities were no longer valid, then there would have to be some equality \(\lambda_{i}^{n}(x)=\lambda_{i}^{n-1}(x)-1\) or \(\lambda_{i}^{n-1}(x)-1=\lambda_{i+1}^{n}(x)\), which would mean \(\partial_{x}L_{n}(x,\lambda_{i}^{n-1}(x)-1)=0\) and, as \(L_{n}(x,\lambda_{i}^{n-1}(x)-1)=0\), we would again get a contradiction by Lemma 3.5.
**Lemma 3.9** (Global extension through ODE).: _The local property is in fact true over the whole interval: \(L_{n}(x,z)\) is real rooted in \(z\) with simple (distinct) roots for \(x\in[0,+\infty[\), and the roots are all increasing to \(+\infty\) when \(x\) goes to \(+\infty\)._
Proof.: Denote by \(F_{n}(x,z):=-\frac{\partial_{x}L_{n}}{\partial_{z}L_{n}}\big{(}x,z\big{)}\). Consider a rectangular domain \(D\) such that \(\partial_{z}L_{n}(x,z)\) is nonzero on the domain. \(F_{n}\) is continuous in \(x\) and \(z\) in the domain \(D\). Indeed, it is a rational fraction whose denominator is nonzero, and it is therefore \(C^{\infty}\) in both variables by composition. As \(L_{n}(0,z)\) is realrooted in \(z\) with simple roots, \(\partial_{z}L_{n}\big{(}0,\lambda_{i}(0)\big{)}\neq 0\) and by continuity we can find small rectangles \(D_{i}:=[0,\epsilon]\times[\lambda_{i}(0)-\delta,\lambda_{i}(0)+\delta]\) such that \(\partial_{z}L_{n}(x,z)\) is nonzero on \(D_{i}\). A strong version of Picard's theorem tells us that there is a maximal interval \(I_{i}^{max}=[0,\eta_{max}^{i}[\) (where \(\eta_{max}^{i}\in\mathbb{R}^{+}\)) on which the roots \(\lambda_{i}(x)\) (\(i=1,2...n\)) are the unique solutions of the initial value ODE
\[\frac{dz}{dx}=F_{n}(x,z),\qquad z(0)=-i\]
Note that on \(I_{i}^{max}\), \(\partial_{z}L_{n}\big{(}x,\lambda_{i}(x)\big{)}\neq 0\) (the denominator is nonzero, so that the differential equation is well defined).
Let's prove that \(I_{i}^{max}=[0,+\infty[\) (for all \(i\)) and that there is explosion at \(+\infty\) (roots going to infinity), the roots increasing constantly to \(+\infty\).
By Corollary 3.6, \(F_{n}(x,\lambda_{i}(x))>0\) on \(I_{i}^{max}\). According to Picard's theorem, we either have \(\lambda_{i}(x)\to_{x\to\eta_{max}^{i}}+\infty\) (explosion), or \(\eta_{max}^{i}\) is such that \(\lim_{x\to\eta_{max}^{i}}F_{n}(x,\lambda_{i}(x))\) is not well defined (we leave the domain of definition).
Now, explosion can't happen if \(\eta_{max}^{i}<+\infty\). Indeed, we have using the hypergeometric expansion above
\[\sum_{i=1}^{n}\lambda_{i}(x)=-n!\big{(}\frac{n(n+1)}{2}\frac{1}{n!}-\frac{x}{ (n-1)!}\big{)}=n\big{(}-\frac{n+1}{2}+x\big{)}\]
so the sum of roots is bounded above for \(x<\eta_{max}^{i}\) and there can be no explosion (necessarily to\(+\infty\) by monotonicity).
We can leave the domain of definition only if \(\lim_{x\to\eta_{max}^{i}}\partial_{z}L_{n}\big{(}x,\lambda_{i}(x)\big{)}=0\). If this is the case and if by contradiction \(\eta_{max}^{i}<+\infty\), \(\partial_{z}L_{n}(\eta_{max}^{i},z)\) would be of degree \(n-1\) in \(z\). Therefore it means that \(\lim_{x\to\eta_{max}^{i}}\lambda_{i}(x)=\mu\) where \(\mu\) is a root of \(\partial_{z}L_{n}(\eta_{max}^{i},z)\). But then it means that we can extend \(\lambda_{i}(x)\) by continuity at \(x=\eta_{max}^{i}\) with \(\lambda_{i}(\eta_{max}^{i})=\mu\). We check by continuity that \(L_{n}\big{(}\eta_{max}^{i},\lambda_{i}(\eta_{max}^{i})\big{)}=\partial_{z}L_{n}\big{(}\eta_{max}^{i},\lambda_{i}(\eta_{max}^{i})\big{)}=0\), so that in fact \(\lambda_{i}(\eta_{max}^{i})\) is a real double root in \(z\) of \(L_{n}\big{(}\eta_{max}^{i},z\big{)}\). Using Theorem 3.8, as there is a root of \(\partial_{x}L_{n}(x,z)\) between any two roots of \(L_{n}(x,z)\) in \(z\) by interlacing, it follows that necessarily \(\partial_{x}L_{n}\big{(}\eta_{max}^{i},\lambda_{i}(\eta_{max}^{i})\big{)}=0\). But this is impossible according to Lemma 3.5. Therefore, we have necessarily \(\eta_{max}^{i}=+\infty\) for all \(i=1...n\).
Furthermore, assume by contradiction that there is no explosion for some index \(i\) at \(+\infty\). As \(\lambda_{i}(x)\) is monotonous for \(x\in[0,+\infty[\), then we have necessarily that \(\lim_{x\to+\infty}\lambda_{i}(x)=\mu\) exists and is finite. By continuity we have \(L_{n}(x,\mu)\sim_{x\to\infty}(-1)^{n}x^{n}\), and \(\partial_{x}L_{n}(x,\mu)\sim_{x\to\infty}n(-1)^{n}x^{n-1}\), as well as \(\partial_{z}L_{n}(x,\mu)\sim_{x\to\infty}(-1)^{n-1}x^{n-1}\) using
\[L_{n}^{(z)}(x)=\sum_{k=0}^{n}\frac{(-1)^{k}\prod_{j=k+1}^{n}(z+j)}{(n-k)!}x^{k}\]
so that
\[\frac{d\lambda_{i}(x)}{dx}\to_{x\to+\infty}n\]
and clearly we would have \(\lambda_{i}(x)\to+\infty\), which is a contradiction.
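As a small cross-check of the trace identity used in the proof above, the following sketch (illustrative values only) compares the numerically computed sum of the roots in \(z\) with \(n\big{(}x-\frac{n+1}{2}\big{)}\).

```python
import sympy as sp

z = sp.Symbol("z")
n = 5

# Sum of the roots of L_n^{(z)}(x0) in z versus the closed form n*(x0 - (n+1)/2).
for x0 in (sp.Rational(1, 3), sp.Integer(1), sp.Integer(4), sp.Integer(10)):
    poly = sp.Poly(sp.assoc_laguerre(n, z, x0), z)
    root_sum = sum(sp.re(r) for r in poly.nroots())
    target = n * (x0 - sp.Rational(n + 1, 2))
    print(f"x0 = {x0}: sum of roots = {float(root_sum):.6f}, n*(x0-(n+1)/2) = {float(target):.6f}")
```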
## 4. Gegenbauer polynomials
Let \(G_{n}^{(z)}(x)\) be the Gegenbauer polynomial with complex parameter \(z\). It is a polynomial in \(\mathbb{R}[x,z]\).
**Theorem 4.1**.: _For fixed \(x_{0}\in[-1,1]\), \(G_{n}^{(z)}(x_{0})\) is a real-rooted polynomial in \(z\) of degree at most \(n\) (exactly \(n\) except for \(x_{0}=0\)). Furthermore, its roots in \(z\) are increasing for \(x\in[-1,0[\) and decreasing for \(x\in]0,1]\), with an explosion to infinity at \(0\)._
Proof.: As the Gegenbauer polynomials are even or odd in \(x\), that is \(G_{n}^{(z)}(-x)=(-1)^{n}G_{n}^{(z)}(x)\), it is enough to prove our statement for \(x\in[-1,0[\). We prove it in an incremental way moving \(x\) from \(-1\) to \(0\). Let's recall the hypergeometric expression:
\[G_{n}^{(z)}(x) =\prod_{l=0}^{n-1}(2z+l)\sum_{k=0}^{n}(-1)^{k}\frac{1}{k!(n-k)!} \frac{\prod_{i=0}^{k-1}(2z+n+i)}{\prod_{i=0}^{k-1}(z+1/2+i)2^{k}}(1-x)^{k}\] \[=\sum_{k=0}^{n}(-1)^{k}\frac{2^{n}\binom{n}{k}}{n!}\frac{\prod_{i =0}^{n+k-1}(z+i/2)}{\prod_{i=0}^{k-1}(z+(2i+1)/2)}(1-x)^{k}\] \[=\sum_{k=0}^{n}\frac{2^{n}\binom{n}{k}}{n!}\prod_{i=2k}^{n+k-1}(z+ i/2)\prod_{i=0}^{k-1}(z+i)(x-1)^{k}\]
We will henceforth write \(G_{n}(x,z)\) as it is clear from the previous expression that it is indeed a bivariate polynomial and not a rational fraction in \(z\).
We start dealing with the extreme boundary. We have
\[G_{n}(x,z)=(-1)^{n}G_{n}(-x,z)=\sum_{k=0}^{n}\frac{2^{n}\binom{n}{k}}{n!}(-1) ^{n+k}\prod_{i=2k}^{n+k-1}(z+i/2)\prod_{i=0}^{k-1}(z+i)(x+1)^{k} \tag{2}\]
So that \(G_{n}(-1,z)=(-1)^{n}\frac{2^{n}}{n!}\prod_{i=0}^{n-1}(z+i/2)\), which is clearly realrooted in \(z\) with simple roots. We can also check this property for \(x=0\):
\[G_{n}(0,z)=(-1)^{n/2}\frac{\Gamma(n/2+z)}{\Gamma(z)\Gamma(n/2+1)}=\frac{(-1)^{n/2}}{(n/2)!}\prod_{j=0}^{j=n/2-1}(z+j)\quad\text{when $n$ is even}\quad G_{n}(0,z)=0\quad\text{when $n$ is odd}\]
Also, each bivariate polynomial in the sum is of degree \(n\) in \(z\) so the sum is of degree at most \(n\) in \(z\). It is in fact of degree exactly \(n\) for \(x\neq 0\) by inspection of the coefficient of \(z^{n}\) in the sum, which is equal to
\[(-1)^{n}\frac{2^{n}}{n!}\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}(1+x)^{k}=(-1)^{n} \frac{2^{n}}{n!}\big{(}1-(1+x)\big{)}^{n}=(-1)^{n}\frac{2^{n}}{n!}(-x)^{n}= \frac{(2x)^{n}}{n!}\]
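As in the Laguerre case, the statement of Theorem 4.1 can be illustrated numerically before proving it. The sketch below (illustrative values of \(n\) and \(x_{0}\) only, using sympy's `gegenbauer` with a symbolic parameter) shows that all roots in \(z\) are real for \(x_{0}\in\,]-1,0[\), that the roots \(0,-1,-2,\dots\) stay fixed, and that the remaining roots increase as \(x_{0}\) approaches \(0\) from below.

```python
import sympy as sp

z = sp.Symbol("z")
n = 6

# Theorem 4.1, numerically: G_n^{(z)}(x0) as a polynomial in z for a few x0 in (-1, 0).
# (At x0 = -1 the roots are 0, -1/2, -1, ..., -(n-1)/2, as computed above.)
for x0 in (sp.Rational(-9, 10), sp.Rational(-1, 2), sp.Rational(-1, 5), sp.Rational(-1, 20)):
    poly = sp.Poly(sp.expand(sp.gegenbauer(n, z, x0)), z)
    roots = poly.nroots()
    n_real = sum(1 for r in roots if abs(sp.im(r)) < 1e-9)
    print(f"x0 = {x0}: degree {poly.degree()}, {n_real} real roots in z:",
          sorted(round(float(sp.re(r)), 3) for r in roots))
```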
Notice that we can write, if \(n\) is even,
\[G_{n}(x,z)=\Big{[}\prod_{j=0}^{n/2-1}(z+j)\Big{]}\tilde{G_{n}}(x,z)\]
and if \(n\) is odd,
\[G_{n}(x,z)=\Big{[}\prod_{j=0}^{(n-1)/2}(z+j)\Big{]}\tilde{G_{n}}(x,z)\]
So that in all cases for \(x\neq 0\), we can write
\[G_{n}(x,z) =\Big{[}\prod_{j=0}^{\lceil n/2\rceil-1}(z+j)\Big{]}\tilde{G_{n}}(x,z)\] \[=\Big{[}\prod_{j=0}^{\lceil n/2\rceil-1}(z-\mu_{j})\Big{]}\frac{(2 x)^{n}}{n!}\prod_{i=1}^{\lfloor n/2\rfloor}\big{(}z-\lambda_{i}(x)\big{)}\]
where \(\mu_{j}=-j\) and the \(\lambda_{i}(x)\) are a priori complex roots defined only for\(x\neq 0\). But it simplifies greatly for \(x=-1\):
\[\tilde{G_{n}}(x,z)_{|_{x=-1}}=\frac{2^{n}}{n!}(-1)^{n}\prod_{i=1}^{\lfloor n /2\rfloor}(z+1/2+i-1)\]
so that the \(\lambda_{i}(-1)=-1/2-(i-1)\), for \(i=1,2...\lfloor n/2\rfloor\), are all real and distinct. That is, roughly half of the roots in \(z\), depending on the parity of \(n\), remain constant as \(x\) varies. We can therefore investigate instead the evolution of the roots of \(\tilde{G_{n}}(x,z)\), to avoid carrying along roots that remain constant (and real). First, let's explain why these roots are indeed smooth and real for \(x\) close to \(-1\).
**Corollary 4.2** (Local realrootedness).: \(\tilde{G_{n}}(x,z)\) _is real rooted of degree \(\lfloor n/2\rfloor\) in \(z\) with simple roots in a neighborhood of \(x=-1\)._
Proof.: We have
\[\tilde{G_{n}}(x,z)_{|_{x=-1}}=\frac{2^{n}}{n!}(-1)^{n}\prod_{i=1}^{\lfloor n/2 \rfloor}(z+1/2+i-1)\]
which has \(\lfloor n/2\rfloor\) simple roots in \(z\), so we just have to apply Lemma 2.1 with \(a=-1\).
**Lemma 4.3** (Local increasing property).: _The roots of \(\tilde{G_{n}}(x,z)\) in \(z\) are all strictly increasing when \(x\) is, in a neighborhood of \(x=-1\)._
To prove this, we need some information on the derivatives with respect to \(x\) of the roots in the neighborhood of \(-1\).
**Lemma 4.4**.: _If we denote by \(\lambda_{i}(x)\) the roots of \(\tilde{G_{n}}(x,z)\) in decreasing order, then \(\frac{d^{l}\lambda_{i}(x)}{dx^{l}}_{|_{x=-1}}=0\) for \(1\leq l<i\), and \(\frac{d^{i}\lambda_{i}(x)}{dx^{i}}_{|_{x=-1}}>0\)._
Proof.: Using Equation 2 we get that
\[\partial_{x}^{l}G_{n}(x,z)_{|_{x=-1}}=l!\frac{2^{n}\binom{n}{l}}{n!}(-1)^{n+l} \prod_{j=2l}^{n+l-1}(z+j/2)\prod_{j=0}^{l-1}(z+j)\]
So that
\[\partial_{x}^{l}\tilde{G_{n}}(x,z)_{|_{x=-1}}=\frac{2^{n}\binom{n}{l}l!}{n!}( -1)^{n+l}\prod_{j=l}^{\lceil n/2\rceil-1}(z+1/2+j)\prod_{j=n+1}^{n+l-1}(z+j/2)\]
So we see that
\[\partial_{x}^{l}\tilde{G_{n}}(x,z)\big{(}-1,\lambda_{i}(-1)\big{)}=0\text{ for }i\geq l+1\]
and
\[(-1)^{n+i}\partial_{x}^{i}\tilde{G_{n}}\big{(}-1,\lambda_{i}(-1)\big{)}=\frac{2^{n}\binom{n}{i}i!}{n!}\prod_{j=i}^{\lceil n/2\rceil-1}\big{(}j-(i-1)\big{)}\prod_{j=n+1}^{n+i-1}\big{(}(j-1)/2-(i-1)\big{)}>0\]
as \(i\leq\lfloor n/2\rfloor\).
Now we have \(\tilde{G_{n}}\big{(}x,\lambda_{i}(x)\big{)}=0\) for all \(i\), by definition, so differentiating with respect to \(x\), we get:
\[\frac{d\lambda_{i}(x)}{dx}=-\frac{\partial_{x}\tilde{G_{n}}}{\partial_{z}\tilde{ G_{n}}}\big{(}x,\lambda_{i}(x)\big{)}\]
Note that the denominator is nonzero as the roots in \(z\) are simple at \(-1\) ( so they won't be roots of the derivative in \(z\)). Using Leibniz's formula and induction on \(l\), we get for \(i>l\geq 1\),
\[\frac{d^{l}\lambda_{i}(x)}{dx^{l}}_{|_{x=-1}}=0\]
And
\[\frac{d^{i}\lambda_{i}(x)}{dx^{i}}_{|_{x=-1}}=-\frac{\partial_{x}^{i}\tilde{G_ {n}}}{\partial_{z}\tilde{G_{n}}}\big{(}-1,\lambda_{i}(-1)\big{)}\]
Now, we have \(\tilde{G_{n}}(x,z)_{|_{x=-1}}=(-1)^{n}\frac{2^{n}}{n!}\prod_{j=0}^{\lceil n/2 \rceil-1}(z+1/2+j)\) so that
\[\partial_{z}\tilde{G_{n}}(x,z)\big{(}-1,\lambda_{i}(-1)\big{)}=(-1)^{n}\frac{ 2^{n}}{n!}\prod_{j=0,j\neq i-1}^{\lceil n/2\rceil-1}\big{(}j-(i-1)\big{)}=(-1) ^{n}\frac{2^{n}}{n!}(-1)^{i-1}\prod_{j=0}^{i-2}\big{(}(i-1)-j\big{)}\prod_{j= i}^{\lceil n/2\rceil-1}\big{(}j-(i-1)\big{)}\]
and it follows that \((-1)^{n+i-1}\partial_{z}\tilde{G_{n}}(x,z)\big{(}-1,\lambda_{i}(-1)\big{)}>0\). Therefore
\[\frac{d^{i}\lambda_{i}(x)}{dx^{i}}_{|_{x=-1}}>0\]
as claimed.
Then Lemma 4.3 follows easily by Taylor expansion of the roots in \(x\) around \(0\), as by some asymptotic expansion at \(x=-1\),
\[\lambda_{i}(x)=\lambda_{i}(-1)+\frac{(x+1)^{i}}{i!}\frac{d^{i}\lambda_{i}(x)} {dx^{i}}_{|_{x=-1}}+o((x+1)^{i})\]
We have proved the "initial condition" : now we need to look at the evolution of roots from a differential equation point of view. First, let's prove some intermediate results.
**Lemma 4.5** (Simple roots).: _Assume \(\lambda_{i}(x)\) is real for \(x\in]-1,b_{i}[\), \(b_{i}\leq 0\) ( such a \(b_{i}>-1\) exists according to the previous local existence). Then \(\partial_{x}\tilde{G_{n}}(x,\lambda_{i}(x))\) and a fortiori \(\partial_{x}G_{n}(x,\lambda_{i}(x))\) ( for \(i=1,2...\lfloor n/2\rfloor\)) can't be zero for \(x\in]-1,b_{i}[\). Therefore it has a constant sign on this interval. Equivalently, \(G_{n-1}(x,\lambda_{i}(x))\) can't be zero either: that is we can't have a nontrivial shared root for \(G_{n-1}(x,z)\) and \(G_{n}(x,z)\)._
Proof.: We have \(\partial_{x}G_{n}(x,\lambda_{i}(x))=\prod_{j=0}^{\lceil n/2\rceil-1}\big{(}\lambda_{i}(x)+j\big{)}\partial_{x}\tilde{G_{n}}\big{(}x,\lambda_{i}(x)\big{)}\). We cannot directly use the classical results on the monotonicity of the roots of Gegenbauer polynomials as the parameter varies, or on the simplicity of the roots in \(x\), because the parameter here is negative and orthogonality results don't apply. Let's assume by contradiction that \(\partial_{x}\tilde{G_{n}}(x_{0},\lambda_{i}(x_{0}))=0\), and therefore \(\partial_{x}G_{n}(x_{0},\lambda_{i}(x_{0}))=0\), for some \(i\) and \(x_{0}\in]-1,b_{i}[\). As \(\partial_{x}G_{n}(x,\lambda_{i}(x))\) is nonzero in a neighborhood of \(x=-1\), \(x>-1\) (by the local monotonicity), we can assume \(x_{0}\) is the smallest \(x>-1\) such that \(\partial_{x}G_{n}(x,\lambda_{i}(x))=0\). Therefore on \(]-1,x_{0}]\), \(\lambda_{i}(x)\) is strictly increasing in \(x\) because
\[\frac{d\lambda_{i}(x)}{dx}=-\frac{\partial_{x}\tilde{G_{n}}}{\partial_{z}\tilde {G_{n}}}\big{(}x,\lambda_{i}(x)\big{)}\]
As \(\lambda_{i}(-1)\geq-(n-1)/2\) for all \(i\), then \((n+2\lambda_{i}(x_{0})-1)>0\), and using the differential equation
\[(1-x_{0}^{2})\partial_{x}G_{n}(x_{0},\lambda_{i}(x_{0}))=-nxG_{n}(x_{0},\lambda _{i}(x_{0}))+(n+2\lambda_{i}(x_{0})-1)G_{n-1}(x,\lambda_{i}(x_{0}))\]
and the fact that \(G_{n}(x_{0},\lambda_{i}(x_{0}))=0\) (by definition), it would lead us to \(G_{n-1}(x_{0},\lambda_{i}(x_{0}))=0\). Then, using the recurrence relation (we have a fortiori \(2n+2\lambda_{i}(x_{0})>0\))
\[\frac{n+1}{2n+2\lambda_{i}(x_{0})}G_{n+1}(x_{0},\lambda_{i}(x_{0}))=xG_{n}(x, \lambda_{i}(x))-\frac{n+2\lambda_{i}(x_{0})-1}{2n+2\lambda_{i}(x_{0})}G_{n-1}( x_{0},\lambda_{i}(x_{0}))\]
we get successively by induction that \(G_{n+k}(x_{0},\lambda_{i}(x_{0}))=0\) for all \(k\in\mathbb{N}\), and using again the differential equation we get that \(\partial_{x}G_{n+k}(x_{0},\lambda_{i}(x_{0}))=0\) for all \(k\in\mathbb{N}\). But we have
\[\partial_{x}G_{n+k}(x_{0},\lambda_{i}(x_{0}))=2\lambda_{i}(x_{0})G_{n+k-1}(x_ {0},\lambda_{i}(x_{0})+1)\]
so that \(G_{n+k-1}(x_{0},\lambda_{i}(x_{0})+1)=0\) for all \(k\in\mathbb{N}\). Then it is easy to show by induction that \(G_{n+k-1}(x_{0},\lambda_{i}(x_{0})+j)=0\) for all \(j\in\mathbb{N}\), and for \(j\) larger than \((n-1)/2\), the parameter is positive, and we are brought back to classical Gegenbauer polynomials. This means that successive Gegenbauer polynomials with parameter \(\lambda_{i}(x_{0})+j\) have the root \(x_{0}\) in common, so that their derivatives share this root too, which is absurd as their roots are simple by orthogonality. We conclude that \(\partial_{x}G_{n}(x,\lambda_{i}(x))\) has a constant sign for all \(x\in]-1,b_{i}[\).
**Theorem 4.6** (Interlacing roots, degree).: _Consider an interval \(I=[-1,b[\) such that \(\tilde{G_{n}}(x,z)\) has simple real roots in \(z\) on \(I\), then the same will be true of \(\tilde{G}_{n-1}(x,z)\) and their roots interlace._
Proof.: Let's write
\[\tilde{G_{n}}(x,z)=\frac{(2x)^{n}}{n!}\prod_{i=1}^{\lfloor n/2\rfloor}\big{(} z-\lambda_{i}^{n}(x)\big{)}\hskip 28.452756pt\tilde{G_{n-1}}(x,z)=\frac{(2x)^{n- 1}}{(n-1)!}\prod_{i=1}^{\lfloor(n-1)/2\rfloor}\big{(}z-\lambda_{i}^{n-1}(x) \big{)}\]
and show that for all \(x\in I\), all \(i\leq\lfloor(n-1)/2\rfloor\), \(\lambda_{i}^{n}(x)>\lambda_{i}^{n-1}(x)>\lambda_{i+1}^{n}(x)\). We first check the property locally, that is a neighborhood of \(-1\), above \(-1\). We have
\[\frac{d^{i}\lambda_{i}^{n}(x)}{dx^{i}}_{\mid_{x=-1}}=-\frac{\partial _{x}^{i}\tilde{G_{n}}}{\partial_{z}\tilde{G_{n}}}\big{(}-1,\lambda_{i}(-1) \big{)} =-\frac{(-1)^{n+i\frac{2n}{n}\binom{n}{i}!}\prod_{j=i}^{\lceil n /2\rceil-1}\big{(}j-(i-1)\big{)}\prod_{j=n+1}^{n+i-1}\big{(}(j-1)/2-(i-1)\big{)} }{\frac{2^{n}}{n!}(-1)^{n+i-1}\prod_{j=0}^{i-2}\big{(}(i-1)-j\big{)}\prod_{j=i }^{\lceil n/2\rceil-1}\big{(}j-(i-1)\big{)}}\] \[=\frac{\binom{n}{i}i!\prod_{j=n+1}^{n+i-1}\big{(}(j-1)/2-(i-1) \big{)}}{\prod_{j=0}^{i-2}\big{(}(i-1)-j\big{)}}\] \[=\frac{n}{n-i}\frac{\big{(}(n+i-1)/2-(i-1)\big{)}}{\big{(}(n-1)/ 2-(i-1)\big{)}}\binom{n-1}{i}i!\frac{\prod_{j=n}^{n+i-2}\big{(}(j-1)/2-(i-1) \big{)}}{\prod_{j=0}^{i-2}\big{(}(i-1)-j\big{)}}\] \[=\frac{n}{n-i}\frac{\big{(}(n+i-1)/2-(i-1)\big{)}}{\big{(}(n-1)/ 2-(i-1)\big{)}}\frac{d^{i}\lambda_{i}^{n-1}(x)}{dx^{i}}_{\mid_{x=-1}}\]
As \(\frac{n}{n-i}\frac{\big{(}(n+i-1)/2-(i-1)\big{)}}{\big{(}(n-1)/2-(i-1)\big{)}}>1\), we conclude that for all \(i\), \(\frac{d^{i}\lambda_{i}^{n}(x)}{dx^{i}}_{\mid_{x=-1}}>\frac{d^{i}\lambda_{i}^{n- 1}(x)}{dx^{i}}_{\mid_{x=-1}}\).
As \(\lambda_{i}^{n}(-1)=\lambda_{i}^{n-1}(-1)=\lambda_{i}(-1)=-1/2-(i-1)\), we can do a Taylor expansion around \(x=-1\):
\[\lambda_{i}^{n}(x)=\lambda_{i}(-1)+\frac{(x+1)^{i}}{i!}\frac{d^{i}\lambda_{i}^{ n}(x)}{dx^{i}}_{\mid_{x=-1}}+o((x+1)^{i})\quad\lambda_{i}^{n-1}(x)=\lambda_{i}(-1)+ \frac{(x+1)^{i}}{i!}\frac{d^{i}\lambda_{i}^{n-1}(x)}{dx^{i}}_{\mid_{x=-1}}+o((x +1)^{i})\]
It is then clear that in a neighborhood of \(-1\) and above \(-1\), \(\lambda_{i}^{n}(x)>\lambda_{i}^{n-1}(x)\). As \(\lambda_{i}^{n-1}(-1)-\lambda_{i+1}^{n}(-1)=1\), we also get \(\lambda_{i}^{n-1}(x)>\lambda_{i+1}^{n}(x)\) in a neighborhood of \(-1\). Now, as for all \(i\) the roots \(\lambda_{i}^{n}(x),\lambda_{i}^{n-1}(x),\lambda_{i+1}^{n}(x)\) are continuous functions of \(x\), if by contradiction such inequalities were to fail for some \(x\in I\), then there would exist \(x_{0}\) such that \(\lambda_{i}^{n}(x_{0})=\lambda_{i}^{n-1}(x_{0})\) or \(\lambda_{i}^{n-1}(x_{0})=\lambda_{i+1}^{n}(x_{0})\). But then this would
mean that \(\lambda_{i}^{n-1}(x_{0})\) is a root of \(G_{n}(x_{0},z)\) and \(G_{n-1}(x_{0},z)\), which is impossible by Lemma 4.5. Therefore we conclude that the inequality
\[\lambda_{i}^{n}(x)>\lambda_{i}^{n-1}(x)>\lambda_{i+1}^{n}(x)\]
holds for all \(x\in I\) and \(i\leq\lfloor(n-1)/2\rfloor\). Notice that, depending on the parity of \(n\), the polynomial \(\tilde{G}_{n}(x,z)\) is either of the same degree as \(\tilde{G}_{n-1}(x,z)\) or of degree one higher.
**Theorem 4.7** (Interlacing roots, derivative).: _Consider an interval \(I=[-1,b[\) such that \(\tilde{G}_{n}(x,z)\) has simple real roots in \(z\) on \(I\), then the same will be true of \(\partial_{x}\tilde{G}_{n}(x,z)\) and the roots of the two polynomials interlace._
Proof.: We bring ourselves back to a variant of the previous theorem by using the equality
\[\partial_{x}G_{n}(x,z)=2zG_{n-1}(x,z+1)\]
As
\[\partial_{x}G_{n}(x,z)=\Big{[}\prod_{j=0}^{\lceil n/2\rceil-1}(z+j)\Big{]} \partial_{x}\tilde{G}_{n}(x,z)\quad G_{n-1}(x,z+1)=\Big{[}\prod_{j=0}^{\lceil( n-1)/2\rceil-1}(z+j+1)\Big{]}\tilde{G}_{n-1}(x,z+1)\]
We get
\[\partial_{x}\tilde{G}_{n}(x,z)=\frac{\prod_{j=0}^{\lceil(n-1)/2\rceil-1}(z+j+ 1)}{\prod_{j=0}^{\lceil n/2\rceil-1}(z+j)}2z\tilde{G}_{n-1}(x,z+1)\]
And
\[\partial_{x}\tilde{G}_{n}(x,z)=2(z+n/2)\tilde{G}_{n-1}(x,z+1)\quad\text{if $n$ is even}\quad\partial_{x}\tilde{G}_{n}(x,z)=2\tilde{G}_{n-1}(x,z+1)\quad \text{ if $n$ is odd}\]
So, as \(-n/2<\min_{i,x}\lambda_{i}(x)\), it amounts to proving that \(\tilde{G_{n-1}}(x,z+1)\) and \(\tilde{G_{n}}(x,z)\) interlace. We want to show that for all \(x\in I\), with \(x>-1\), and all \(i\leq\lfloor(n-1)/2\rfloor\), \(\lambda_{i}^{n}(x)>\lambda_{i}^{n-1}(x)-1>\lambda_{i+1}^{n}(x)\). First we check these inequalities in a neighborhood of \(-1\). The inequality \(\lambda_{i}^{n}(x)>\lambda_{i}^{n-1}(x)-1\) clearly holds in a neighborhood of \(-1\) since \(\lambda_{i}^{n}(-1)=\lambda_{i}^{n-1}(-1)\). So the nontrivial one is the other one, \(\lambda_{i}^{n-1}(x)-1>\lambda_{i+1}^{n}(x)\) for \(x>-1\). We have equality at \(x=-1\) since \(\lambda_{i}^{n}(-1)=\lambda_{i}^{n-1}(-1):=\lambda_{i}(-1)\) and \(\lambda_{i}(-1)-1=\lambda_{i+1}(-1)\). Then we look at the Taylor expansions around \(x=-1\):
\[\lambda_{i}^{n-1}(x)-1=\lambda_{i+1}(-1)+\frac{(x+1)^{i}}{i!}\frac{d^{i} \lambda_{i}^{n-1}(x)}{dx^{i}}_{\mid_{x=-1}}+o((x+1)^{i})\]
\[\lambda_{i+1}^{n}(x)=\lambda_{i+1}(-1)+\frac{(x+1)^{i+1}}{(i+1)!}\frac{d^{i+1 }\lambda_{i+1}^{n}(x)}{dx^{i+1}}_{\mid_{x=-1}}+o((x+1)^{i+1})\]
It is clear then that locally \(\lambda_{i}^{n-1}(x)-1>\lambda_{i+1}^{n}(x)\), as \((x+1)^{i+1}\ll(x+1)^{i}\). We extend the inequality to the whole interval \(I\) by noticing again that if the inequalities were no longer valid, then there would have to be some equality \(\lambda_{i}^{n}(x)=\lambda_{i}^{n-1}(x)-1\) or \(\lambda_{i}^{n-1}(x)-1=\lambda_{i+1}^{n}(x)\), which would mean \(\partial_{x}\tilde{G_{n}}(x,\lambda_{i}^{n-1}(x)-1)=0\) and, as \(\tilde{G_{n}}(x,\lambda_{i}^{n-1}(x)-1)=0\), we would again get a contradiction by Lemma 4.5.
**Lemma 4.8** (Global extension through ODE).: _The local property is in fact true over the whole interval: \(\tilde{G_{n}}(x,z)\) is real rooted in \(z\) with simple (distinct) roots for for \(x\in[-1,0[\), and they are all increasing to \(+\infty\) when \(x\) goes to zero._
Proof.: Denote by \(F_{n}(x,z):=-\frac{\partial_{x}\tilde{G_{n}}}{\partial_{z}\tilde{G_{n}}}\big{(}x,z\big{)}\). Consider a rectangular domain \(D\) such that \(\partial_{z}\tilde{G_{n}}(x,z)\) is nonzero on the domain. \(F_{n}\) is continuous in \(x\) and \(z\) in the domain \(D\). Indeed, it is a rational fraction whose denominator is nonzero, and it is therefore \(C^{\infty}\) in both variables by composition. As \(\tilde{G_{n}}(-1,z)\) is realrooted in \(z\) with simple roots, \(\partial_{z}\tilde{G_{n}}(-1,\lambda_{i}(-1))\neq 0\) and by continuity we can find small rectangles \(D_{i}:=[-1,-1+\epsilon]\times[\lambda_{i}(-1)-\delta,\lambda_{i}(-1)+\delta]\) such that \(\partial_{z}\tilde{G_{n}}(x,z)\) is nonzero on \(D_{i}\). A strong version of Picard's theorem tells us that there is a maximal interval \(I_{i}^{max}=[-1,\eta_{max}^{i}[\) on which the roots \(\lambda_{i}(x)\) (\(i=1,2...\lfloor n/2\rfloor\)) are the unique solutions of the initial value ODE
\[\frac{dz}{dx}=F_{n}(x,z),\qquad z(-1)=-1/2-(i-1)\]
Note that on \(I_{i}^{max}\), \(\partial_{z}\tilde{G_{n}}(x,\lambda_{i}(x))\neq 0\) (the denominator is nonzero, so that the differential equation is well defined). Let's prove that \(I_{i}^{max}=[-1,0[\) (for all \(i\)) and that there is explosion at \(0\) (roots going to infinity), the roots increasing constantly to \(+\infty\). The local Lemma 4.3 tell us that on a neighborhood of \(-1\), \(F_{n}(x,\lambda_{i}(x))>0\), and as by Lemma 4.5, the numerator is of constant sign and the denominator doesn't vanish, then \(F_{n}(x,\lambda_{i}(x))>0\) on \(I_{i}^{max}\).
According to Picard's theorem, we either have \(\lambda_{i}(x)\rightarrow_{x\rightarrow\eta_{max}^{i}}+\infty\) (explosion), or \(\eta_{max}^{i}\) is such that \(\lim F_{n}(x,\lambda_{i}(x))\) is not well defined (we leave the domain of definition).
Now, explosion can't happen if \(\eta_{max}^{i}<0\). Indeed, we have that
\[\sum_{i}\lambda_{i}(x)+\sum_{j}\mu_{j}=\frac{-P_{n-1}(x)}{\frac{(2x)^{n}}{n!}}\]
where \(P_{n-1}(x)\), the coefficient of \(z^{n-1}\) in the expansion of \(G_{n}(x,z)\), is a polynomial of degree at most \(n-1\) in \(x\). So the sum of roots is bounded for \(x\in[-1,\eta_{max}^{i}]\) with \(\eta_{max}^{i}<0\), and there can be no explosion (which would necessarily be to \(+\infty\) by monotonicity).
We can leave the domain of definition only if \(\lim_{x\rightarrow\eta_{max}^{i}}\partial_{z}\tilde{G_{n}}\big{(}x,\lambda_{i }(x)\big{)}=0\). If this is the case and if by contradiction \(\eta_{max}^{i}<0\), we have seen that \(\partial_{z}\tilde{G_{n}}(\eta_{max}^{i},z)\) would be of degree exactly \(\lfloor n/2\rfloor-1\) in \(z\). Therefore it means that \(\lim_{x\rightarrow\eta_{max}^{i}}\lambda_{i}(x)=\mu\) where \(\mu\) is a root of \(\partial_{z}\tilde{G_{n}}(\eta_{max}^{i},z)\). But then it means that we can extend by continuity \(\lambda_{i}(x)\) at \(x=\eta_{max}^{i}\) with \(\lambda_{i}(\eta_{max})=\mu\). We check by continuity that \(\tilde{G_{n}}\big{(}\eta_{max}^{i},\lambda_{i}(\eta_{max})\big{)}=\partial_{z} \tilde{G_{n}}\big{(}\eta_{max}^{i},\lambda_{i}(\eta_{max})\big{)}=0\) so that in fact \(\lambda_{i}(\eta_{max})\) is a real double root in \(z\) of \(\tilde{G_{n}}\big{(}\eta_{max}^{i},z\big{)}\). Using Lemma 4.7, as there is a root of \(\partial_{x}\tilde{G_{n}}(x,z)\) between any two roots of \(\tilde{G_{n}}(x,z)\) in \(z\) by interlacing, it follows that necessarily \(\partial_{x}\tilde{G_{n}}\big{(}\eta_{max}^{i},\lambda_{i}(\eta_{max})\big{)}=0\). But this is impossible according to Lemma 4.5. Therefore, we have necessarily \(\eta_{max}^{i}=0\) for all \(i=1...\lfloor n/2\rfloor\).
Furthermore, assume by contradiction that there is no explosion for some index \(i\) at \(0\). As \(\lambda_{i}(x)\) is monotonous for \(x\in]-1,0[\), we necessarily have that \(\lim_{x\to 0^{-}}\lambda_{i}(x)=\mu\) exists and is finite. By continuity we have \(\tilde{G_{n}}(0,\mu)=0\). Let's distinguish according to the parity of \(n\). If \(n\) is even, we have \(\tilde{G_{n}}(0,z)=\frac{G_{n}(0,z)}{\prod_{j=0}^{n/2-1}(z+j)}=\frac{(-1)^{n/2}}{(n/2)!}\) for all \(z\), which shows the contradiction right away. If \(n\) is odd, then \(\partial_{z}\tilde{G_{n}}\big{(}x,\lambda_{i}(x)\big{)}=xQ_{n}\big{(}x,\lambda_{i}(x)\big{)}\) where \(Q_{n}(0,z)=\frac{2(-1)^{\lfloor n/2\rfloor}}{((n-1)/2)!}\). For \(x\in[-1,0]\), \(xF_{n}(x,\lambda_{i}(x))\) is therefore bounded above as a continuous function on a compact set. It is always nonpositive, and the maximum can't be zero, because that would mean that for some \(x\in[-1,0]\), \(\partial_{x}\tilde{G_{n}}\big{(}x,\lambda_{i}(x)\big{)}=0\), which is impossible according to Lemma 4.5. Therefore it is always smaller than \(-K<0\).
\[\frac{d\lambda_{i}(x)}{dx}=\frac{1}{x}\,xF_{n}(x,\lambda_{i}(x))>\frac{-K}{x},\qquad\lambda_{i}(x)-\lambda_{i}(-1)>-K\log(|x|).\]
It would follow that \(\lambda_{i}(x)\rightarrow_{x\to 0}+\infty\) which is contrary to the assumptions.
We conclude that there is explosion for all \(i=1...\lfloor n/2\rfloor\).
## 5. Applications to real-rootedness in \(x\)
We start by recalling a well-known monotonicity result. In all the following we will consider \(z>0\).
**Lemma 5.1** (Monotonicity of the roots with respect to the parameter, from [1]).: _If \(x_{i}(z)\) are the roots of \(L_{n}^{(z)}(x)\) (Laguerre), then \(\frac{d}{dz}x_{i}(z)\geq 0\), \(i\leq n\). Also, the positive roots \(y_{i}\) of \(G_{n}^{(z)}(x)\) (Gegenbauer) are such that \(\frac{d}{dz}y_{i}(z)\geq 0\), and their symmetric negative counterparts are such that \(\frac{d}{dz}y_{i}(z)\leq 0\)._
**Lemma 5.2**.: _For a fixed \(z\), we have that the roots of \(\partial_{z}L_{n}(x,z)\) in \(x\) (of degree \(n-1\)) are real and interlace those of \(L_{n}(x,z)\)._
Proof.: \[L_{n}(x,z)=\sum_{k=0}^{n}\frac{(-1)^{k}\prod_{j=k+1}^{n}(z+j)}{(n-k)!}x^{k} \tag{3}\]
From this expression it follows that \(\partial_{z}L_{n}(x,z)\) is of degree \(n-1\) in \(x\).
Differentiating the identity \(L_{n}(x_{i}(z),z)=0\) with respect to \(z\), we get
\[\frac{d}{dz}x_{i}=-\frac{\partial_{z}L_{n}}{\partial_{x}L_{n}}(x_{i}(z),z),\qquad\partial_{z}L_{n}(x_{i}(z),z)=-\partial_{x}L_{n}(x_{i}(z),z)\,\frac{d}{dz}x_{i} \tag{4}\]
\(\partial_{x}L_{n}(x_{i}(z),z)\) changes sign when we increment \(i\) because \(L_{n}(x_{i}(z),z)=0\) so the derivative changes sign when we go from one root to the next. As \(\frac{d}{dz}x_{i}\geq 0\), we get that \(\partial_{z}L_{n}(x,z)\) changes sign \(n-1\) times and therefore we have \(n-1\) real zeros between the zeros of \(L_{n}\). Therefore all the zeros of \(\partial_{z}L_{n}(x,z)\) have been found and are interlacing with the zeros of \(L_{n}(x,z)\).
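As a quick numerical sanity check of Lemma 5.2, the sketch below builds the coefficients of the generalized Laguerre polynomial in the standard normalization \(L_{n}^{(z)}(x)\) of Lemma 5.1, differentiates them exactly in \(z\) through the digamma function, and tests the interlacing of the roots; the degree, the parameter value and all helper names are our own illustrative choices, and if \(L_{n}(x,z)\) carries a different \(z\)-dependent normalization the coefficients would have to be adapted accordingly.

```python
import numpy as np
from math import factorial
from scipy.special import binom, psi

# Illustrative (hypothetical) degree and parameter value
n, a = 6, 0.7

# Coefficients of L_n^{(a)}(x) = sum_k (-1)^k C(n+a, n-k) x^k / k!, highest degree first
k = np.arange(n, -1, -1)
c = (-1.0) ** k * binom(n + a, n - k) / np.array([factorial(int(j)) for j in k])
# Exact z-derivative of each coefficient: d/da C(n+a, n-k) = C(n+a, n-k) * (psi(n+a+1) - psi(a+k+1))
dc = c * (psi(n + a + 1) - psi(a + k + 1))

x = np.sort(np.roots(c).real)    # the n roots of L_n^{(a)} in x
xd = np.sort(np.roots(dc).real)  # the n-1 roots of the z-derivative (its leading coefficient is exactly 0)

# Interlacing as in Lemma 5.2: x_1 < xd_1 < x_2 < ... < x_n (should print True)
print(all(x[i] < xd[i] < x[i + 1] for i in range(n - 1)))
```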
**Theorem 5.3**.: \(\partial_{z}L_{n}(x,z)\) _and more generally \(\partial_{z}^{k}L_{n}(x,z)\) for all \(k\leq n\) are real-rooted in \(x\), and they form an interlacing family of decreasing degree, in the sense that \(\partial_{z}^{k+1}L_{n}(x,z)\) interlaces \(\partial_{z}^{k}L_{n}(x,z)\) and the roots are monotonically increasing in \(z\)._
Proof.: We show inductively the following property: \(\partial_{z}^{k+1}L_{n}(x,z)\) is real-rooted, the roots of \(\partial_{z}^{k+1}L_{n}(x,z)\) interlace the roots of \(\partial_{z}^{k}L_{n}(x,z)\) and are increasing in \(z\). We start with the initial condition. Using Lemma 5.2, we get the real-rootedness and interlacing property for \(\partial_{z}L_{n}(x,z)\). Now we need to prove that the roots \(\tilde{x}_{i}(z)\) (\(i=1...n-1\)) of \(\partial_{z}L_{n}(x,z)\) also share the monotonicity property.
\[\partial_{z}L_{n}(\tilde{x_{i}}(z),z)=0\]
which leads to
\[\frac{\partial_{z}L_{n}}{L_{n}}(\tilde{x_{i}}(z),z)=0\quad\frac{d(\frac{ \partial_{z}L_{n}}{L_{n}}(\tilde{x_{i}}(z),z))}{dz}=\partial_{x}\Big{(}\frac{ \partial_{z}L_{n}}{L_{n}}(\tilde{x_{i}}(z),z)\Big{)}\frac{d\tilde{x_{i}}}{dz }+\partial_{z}\Big{(}\frac{\partial_{z}L_{n}}{L_{n}}(\tilde{x_{i}}(z),z) \Big{)}=0 \tag{5}\]
We want to show that
\[\frac{d\tilde{x_{i}}}{dz}\geq 0\]
On the one hand,
\[\frac{\partial_{z}L_{n}}{L_{n}}(\tilde{x_{i}}(z),z)=\sum_{j=1}^{n}\frac{ \partial_{z}L_{n}}{\partial_{x}L_{n}}(x_{j},z)\frac{1}{\tilde{x_{i}}-x_{j}}\]
So that
\[\partial_{x}\Big{(}\frac{\partial_{z}L_{n}}{L_{n}}(\tilde{x_{i}}(z),z)\Big{)} =-\sum_{j=1}^{n}\frac{\partial_{z}L_{n}}{\partial_{x}L_{n}}(x_{j},z)\frac{1} {(\tilde{x_{i}}-x_{j})^{2}}=\sum_{j=1}^{n}\frac{dx_{j}}{dz}\frac{1}{(\tilde{ x_{i}}-x_{j})^{2}}\geq 0 \tag{6}\]
On the other hand, using the real-rootedness in \(z\), as \(\tilde{x_{i}}\in[0,+\infty[\) by the interlacing property, then
\[\partial_{z}\Big{(}\frac{\partial_{z}L_{n}}{L_{n}}(\tilde{x_{i}}(z),z)\Big{)} \leq 0\]
using the Laguerre inequality for real-rooted polynomials, stating that \(\partial_{zz}L_{n}\,L_{n}-(\partial_{z}L_{n})^{2}\leq 0\). We conclude by gathering the two inequalities.
The induction is proven using exactly the same method, given that
\[\partial_{z}\Big{(}\frac{\partial_{z}^{k+1}L_{n}}{\partial_{z}^{k}L_{n}}\Big{)}\leq 0\]
Because \(\partial_{z}^{k}L_{n}(x,z)\) is real-rooted in \(z\), as the derivative of a real-rooted polynomial, and \(x\) is in the appropriate interval.
**Lemma 5.4**.: \(0\) _is a root of \(\partial_{z}^{k}G_{n}(x,z)\) for all \(k\leq n\), when \(n\) is odd._
Proof.: It comes directly from the formula
\[G_{n}(x,z)=\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^{k}\frac{\Gamma(n-k+z)}{\Gamma (z)k!(n-2k)!}(2x)^{n-2k}\]
**Lemma 5.5**.: _For a fixed \(z>0\), we have that the roots of \(\partial_{z}G_{n}(x,z)\) in \(x\) (of degree \(n\)) are real and the positive ones interlace those of \(G_{n}(x,z)\) from below (that is, the largest root in modulus belongs to \(G_{n}(x,z)\))._
Proof.: \[G_{n}(x,z)=\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^{k}\frac{\Gamma(n-k+z)}{\Gamma(z)k!(n-2k)!}(2x)^{n-2k} \tag{7}\]
From this expression it follows that \(\partial_{z}G_{n}(x,z)\) is of degree \(n\) in \(x\), as the coefficient of \(x^{n}\) is \(2^{n}\frac{\Gamma(n+z)}{\Gamma(z)n!}\). Let's denote the roots of \(G_{n}(x,z)\) by \(y_{i}\), then by differentiating the equality \(G_{n}(x,z)=0\) with respect to \(z\) we get
\[\frac{d}{dz}y_{i}=-\frac{\partial_{z}G_{n}}{\partial_{x}G_{n}}(y_{i}(z),z),\qquad\partial_{z}G_{n}(y_{i}(z),z)=-\partial_{x}G_{n}(y_{i}(z),z)\,\frac{d}{dz}y_{i} \tag{8}\]
\(\partial_{x}G_{n}(y_{i}(z),z)\) changes sign when we increment \(i\) because \(G_{n}(y_{i}(z),z)=0\), so the derivative \(-\partial_{x}G_{n}(y_{i}(z),z)\) changes sign when we go from one root to the next (the roots are simple). Let's distinguish between the even and odd cases. In the even case, \(\frac{d}{dz}y_{i}\geq 0\) when \(i=1..n/2\), and the sign rule gives us \(n/2-1\) positive roots. Now there is still one root missing, and we can check that the sign of \(\partial_{z}G_{n}(y_{n/2}(z),z)\) is opposite to the sign of \(\partial_{z}G_{n}(0,z)\), and as \(\partial_{z}G_{n}(-y_{n/2}(z),z)\) has the same sign by symmetry there has to be a root between both, and actually two, one positive and one negative by symmetry. We get \(n\) roots overall. In the odd case, \(0\) is a root, and we have again two roots missing. But if \(y_{l}\) denotes the smallest positive root, then \(\partial_{z}G_{n}(y_{l}(z),z)\) has the same sign as \(\partial_{z}G_{n}(-y_{l}(z),z)\), and so as \(0\) is a root we need to have at least two other roots by an easy change-of-sign argument. Therefore all the zeros of \(\partial_{z}G_{n}(x,z)\) have been found and the positive ones are interlacing with the zeros of \(G_{n}(x,z)\).
**Theorem 5.6**.: \(\partial_{z}G_{n}(x,z)\) _and more generally \(\partial_{z}^{k}G_{n}(x,z)\) for all \(k\leq n\) are real-rooted in \(x\) for \(z>0\), and the positive roots are monotonically increasing in \(z\), and their symmetric negative counterparts monotonically decreasing in \(z\)._
Proof.: We have
\[G_{n}(-x,z)=(-1)^{n}G_{n}(x,z),\qquad\partial_{z}G_{n}(-x,z)=(-1)^{n}\partial_{z}G_{n}(x,z) \tag{9}\]
So that if we have a root \(\tilde{y}\) of \(\partial_{z}G_{n}(x,z)\), then its symmetric \(-\tilde{y}\) will also be a root of \(\partial_{z}G_{n}(x,z)\). In other terms, \(\partial_{z}G_{n}(x,z)\) and more generally, \(\partial_{z}^{k}G_{n}(x,z)\) have symmetric roots. We show inductively the following property: \(\partial_{z}^{k+1}G_{n}(x,z)\) is real-rooted, the positive roots of \(\partial_{z}^{k+1}G_{n}(x,z)\) interlace the positive roots of \(\partial_{z}^{k}G_{n}(x,z)\) and the positive roots are increasing with \(z\). We start with the initial condition. Using Lemma 5.5, we get the real-rootedness and interlacing property for \(\partial_{z}G_{n}(x,z)\). Now we need to prove that the positive roots \(\tilde{y}_{i}(z)\) of \(\partial_{z}G_{n}(x,z)\) also share the monotonicity property.
\[\partial_{z}G_{n}(\tilde{y}_{i}(z),z)=0\]
which leads to
\[\frac{\partial_{z}G_{n}}{G_{n}}(\tilde{y}_{i}(z),z)=0,\qquad\frac{d\big{(}\frac{\partial_{z}G_{n}}{G_{n}}(\tilde{y}_{i}(z),z)\big{)}}{dz}=\partial_{x}\Big{(}\frac{\partial_{z}G_{n}}{G_{n}}(\tilde{y}_{i}(z),z)\Big{)}\frac{d\tilde{y}_{i}}{dz}+\partial_{z}\Big{(}\frac{\partial_{z}G_{n}}{G_{n}}(\tilde{y}_{i}(z),z)\Big{)}=0 \tag{10}\]
We want to show that for the positive roots,
\[\frac{d\tilde{y}_{i}}{dz}\geq 0\]
On the one hand,
\[\frac{\partial_{z}G_{n}}{G_{n}}(\tilde{y}_{i}(z),z)=\sum_{j=1}^{n}\frac{ \partial_{z}G_{n}}{\partial_{x}G_{n}}(y_{j},z)\frac{1}{\tilde{y}_{i}-y_{j}}\]
so that
\[\partial_{x}\Big{(}\frac{\partial_{z}G_{n}}{G_{n}}(\tilde{y}_{i}(z),z)\Big{)}=-\sum_{j=1}^{n}\frac{\partial_{z}G_{n}}{\partial_{x}G_{n}}(y_{j},z)\frac{1}{(\tilde{y}_{i}-y_{j})^{2}}=\sum_{j=1}^{n}\frac{dy_{j}}{dz}\frac{1}{(\tilde{y}_{i}-y_{j})^{2}} \tag{11}\] \[=\sum_{j=1}^{\lfloor n/2\rfloor}\frac{dy_{j}}{dz}\Big{[}\frac{1}{(\tilde{y}_{i}-y_{j})^{2}}-\frac{1}{(\tilde{y}_{i}+y_{j})^{2}}\Big{]}=4\tilde{y}_{i}\sum_{j=1}^{\lfloor n/2\rfloor}\frac{\frac{dy_{j}}{dz}\,y_{j}}{(\tilde{y}_{i}-y_{j})^{2}(\tilde{y}_{i}+y_{j})^{2}} \tag{12}\]
As we have \(\tilde{y}_{i}>0\) and \(\frac{dy_{i}}{dz}\geq 0\) as well as \(y_{j}>0\) we get
\[\partial_{x}\Big{(}\frac{\partial_{z}G_{n}}{G_{n}}(\tilde{y}_{i}(z),z)\Big{)}\geq 0\]
On the other hand, using the real-rootedness in \(z\), as \(\tilde{y}_{i}\in[-1,1]\) by the interlacing property, then
\[\partial_{z}\Big{(}\frac{\partial_{z}G_{n}}{G_{n}}(\tilde{y}_{i}(z),z)\Big{)}\leq 0\]
using the Laguerre inequality for real-rooted polynomials, stating that \(\partial_{zz}G_{n}\,G_{n}-(\partial_{z}G_{n})^{2}\leq 0\). We conclude by gathering the two inequalities.
The induction is proven using exactly the same method, given that
\[\partial_{z}\Big{(}\frac{\partial_{z}^{k+1}G_{n}}{\partial_{z}^{k}G_{n}}\Big{)} \leq 0\]
Because \(\partial_{z}^{k}G_{n}(x,z)\) is real-rooted in \(z\), as the derivative of a real-rooted polynomial, and \(x\) is in the appropriate interval \(([-1,1])\).
2309.14001 | A discrete model for layered growth | In this work we present a discrete model that captures the fundamental
properties of additively manufactured solids in a minimal setting. The model is
based on simplified kinematics and allows for the onset of incompatible
deformations between discrete layers of an additively manufactured stack.
Thanks to the discrete nature of the model, we obtain an averaged formulation
of mechanical equilibrium for the growing stack, leading to closed-form
solutions that are both analytically simple and physically transparent. In
particular, we are able to explain the origin of residual stresses by the
accumulation of incompatible deformations between adjacent layers. At the same
time, we are able to formulate the technologically relevant inverse problem
that provides the deposition protocol required to produce a desired state of
internal stress in the manufactured stack. Another important aspect analyzed in
the work is the role played by an ideal ``glue'' between the layers, whose
presence is fundamental to prevent their sliding and whose mechanical behavior
can quantitatively influence the final stress distribution in the stack.
Although the model is an elementary approximation of additive manufacturing,
its simplicity makes it possible to highlight how the controls exerted during
deposition will have qualitative or quantitative effects on the final stress
state of the stack. This understanding is crucial in shedding light on the
complex mechanical behavior of additive manufactured solids. | Davide Renzi, Sonia Marfia, Giuseppe Tomassetti, Giuseppe Zurlo | 2023-09-25T10:07:53Z | http://arxiv.org/abs/2309.14001v1 | # A discrete model for layered growth
###### Abstract
In this work we present a discrete model that captures the fundamental properties of additively manufactured solids in a minimal setting. The model is based on simplified kinematics and allows for the onset of incompatible deformations between discrete layers of an additively manufactured stack. Thanks to the discrete nature of the model, we obtain an averaged formulation of mechanical equilibrium for the growing stack, leading to closed-form solutions that are both analytically simple and physically transparent. In particular, we are able to explain the origin of residual stresses by the accumulation of incompatible deformations between adjacent layers. At the same time, we are able to formulate the technologically relevant inverse problem that provides the deposition protocol required to produce a desired state of internal stress in the manufactured stack. Another important aspect analyzed in the work is the role played by an ideal "glue" between the layers, whose presence is fundamental to prevent their sliding and whose mechanical behavior can quantitatively influence the final stress distribution in the stack. Although the model is an elementary approximation of additive manufacturing, its simplicity makes it possible to highlight how the controls exerted during deposition will have qualitative or quantitative effects on the final stress state of the stack. This understanding is crucial in shedding light on the complex mechanical behavior of additive manufactured solids.
## 1 Introduction
The modeling of growth phenomena has been a scientific challenge for many years and continues to offer opportunities for development in various fields, such as biomechanics, construction science, and materials science. Growth refers to the change in mass of a body over time, and it is observed in many contexts. In biology, growth is fundamental to describing many aspects of life, from cell division to tumor growth in living beings. However, growth processes also appear in many physical phenomena, where new material is added to an evolving system, such as 3D printing, or where growth is associated with phase transition phenomena, such as crystal growth [10, 24, 25].
Growth phenomena can be classified into two categories: volumetric growth [32, 23, 14, 30, 20, 21, 19] and surface growth [31, 11, 13, 33, 4, 41, 42, 34]. In the former case, the addition or removal of matter takes places inside the bulk; in the latter, material is added or removed at the boundary of the body.
Both volumetric and surface growth can result in residual stress. This is not always an undesired outcome. In many biological contexts, residual stress plays an important role: it helps maintain the structural integrity and the mechanical stability of a tissue; in blood vessels, residual stress contributes to the maintenance of an optimal vessel diameter; residual stress can influence how tissues respond to injury and their subsequent healing. In many technological contexts residual stress is an important target, like for example in the process of hot shrinking of metals, in reinforcing concrete structures, or in prestressing glass layers to enhance their resistance to fracture.
Apart from these (nowadays classical) applications that regard residual stress as a beneficial feature, the last decade has seen a blooming of technologies aimed at embedding target residual stress patterns into "shape lifting" structures, an exciting avenue with an immense scenario of futuristic applications, such as 4D printing technologies [10], biomimetic tissues and more.
The mathematical description of residual stress relies ultimately on the concept of strain incompatibility, a measure of the level of "unfitness" of sub-parts of the body to be arranged together without creating voids or overlaps. In principle, this concept is purely geometrical, and, depending on the stiffness of the various parts of the body, the same level of incompatibility will ultimately result in different distributions of residual stress.
Since the boundary of the body is directly accessible, it is intuitive that surface growth offers more control over the accretion process, giving the possibility of fine tuning, and hence of programming in advance, the properties of the resulting body. Illustrative examples of how the choice of the accretion protocol affects the residual stresses may be found in [12] in the context of masonry materials, and in [42] in the context of 3D printing.
The understanding of the mechanism behind the accumulation of residual stress during surface growth was slowly achieved in stages over the span of the last 150 years. As early as 1883, G.H.Darwin (Charles' son) acknowledged the relevance of what he called "the historical element" in the horizontal thrust of a mass of sand [16], accounting for the fact that the final distribution of stress would depend on the way the sand had been layered in a container.
The first analytical treatment of the deposition-dependent stress distribution in massive bodies is apparently traceable to E.I.Rashba [29], who pioneered a prolific area of study of the residual stresses arising in surface growth problems. Some years later, Brown and Goodman [13] recognized that the stress state of a massive, self-gravitating body formed by layered accretion is not the same as it would be if the body had been first fabricated, and then endowed with mass. This is due to the fact that when a new layer is deposited, it deforms the pre-existing material before hardening.
The most recent developments in this vein have elucidated the geometric aspects of "non-Euclidean" surface growth [38, 37] and have established the non-locality, both in space and time, between the controls that are exerted on the growth surface and the ensuing state of incompatibility that is nailed in the body at the end of the process [42]. What emerges from these studies is that strain incompatibility can not be controlled locally, in the sense that even assuming that the "gluing" takes place immediately after deposition, the local value of strain incompatibility depends not only on the pre-strain imposed on the new block prior to deposition, but also on the elastic adjustment of the underlying body, which is required to reattain equilibrium, and thus implicitly, also on the whole accretion history.
Three-dimensional problems involving incompatibility are often challenging to solve analytically, unless symmetry assumptions are made [17, 18, 34, 39]. However, one should be careful when making such assumptions, as symmetric solutions may be unstable during growth [6, 7]. One might consider simplifying the problem by reducing the number of dimensions to one. However, doing so would eliminate the issue of incompatibility altogether, which is of course not an option. As a solution, [43] suggests retaining a trace of higher-dimensionality in the
problem, through a 1.5-dimensional model.
In the spirit of [42], in the present paper we propose an approach where the growth of a body _occurs primarily in one dimension_ but the impact of the other dimensions is considered through the integration of _extra state variables that account for stress and strain in the perpendicular directions_. Differently from previous studies, the problem is discrete _both in space and time_. Specifically, the growing body is a stack of elastic blocks glued sequentially, at discrete times, one on top of the other. It is quite intuitive that when two elastic blocks are pre-strained, and then glued, residual stresses may arise depending on the loads that were applied to the blocks prior to attachment, and on the type of glue that was used in the process. A fast glue would freeze the state of the bonding interface at the very instant of adhesion; other glues may allow for a partial or a total relaxation of stress before activating the bond. These intuitive considerations disclose the possibility of playing with the order and manner in which an elastic body is assembled by the juxtaposition of elementary "blocks", in order to create desired distributions of residual stress in the resulting patch-worked body. This is the main idea explored in this paper.
We introduce our setup in Section 2, where we formulate the equilibrium problem for a weightless stack of \(j\) blocks. Our key modeling assumptions are that each block undergoes a shear-less homogeneous strain, and that the difference between the horizontal strains of block \(j+1\) and block \(j\) be a prescribed _incompatibility_ \(\delta_{j}\). Since the average horizontal stress of the stack vanishes, the problem consists of one global equilibrium condition, \(j-1\) compatibility equations, and \(j\) constitutive equations, for a total of \(2j\) equations in the \(2j\) unknowns representing the horizontal stress and horizontal strain of the blocks. Of course, this way of accounting for equilibrium in the horizontal direction is a crude approximation. In a real system an inhomogeneous strain will occur, and therefore it should be understood that our analysis is only a first order approximation of a more refined model, to be developed elsewhere.
For this problem, we work out an analytical solution which shows that in the absence of external forces the horizontal stresses vanish only in the special case when the incompatibility vanishes as well. We then take into consideration external forces by extending the formulation to the case when each block is topped by a lumped mass, \(b_{i}\). In this case, the vertical stresses must be taken into account, and can be computed by means of \(j\) equilibrium conditions.
In Section 3, we shift our focus to the most fascinating issues in surface growth, specifically those where the incompatibilities \(\delta_{i}\) are not predefined, but instead dictated by the accretion rule. Starting from a stack of \(j\) heavy blocks in equilibrium with prescribed body forces \(b_{i}\), \(i=1,\ldots,j\) and incompatibilities \(\delta_{i}\), \(i=1,\ldots,j-1\), we position a new pre-stressed block atop the stack with a layer of glue between the two, and we subsequently release the block letting the whole system attain equilibrium. The resulting stack will have \(j+1\) blocks, where the incompatibility \(\delta_{j}\) between block \(j\) and block \(j+1\) is determined by the character of the glue used for bonding. We consider two extreme cases: a "fast" glue and a "slow" glue.
We remark that the new block perturbs the equilibrium state of the underlying stack only _after_ it has been released. If the glue is fast, it locks the incompatibility \(\delta_{j}\) _before_ the new block is released, when the strain of the new block is determined by its pre-stress, and the strain of the underlying stack is the same as in the previous accretion step. If the glue is very slow, on the other hand, the incompatibility \(\delta_{j}\) locks in only _after_ the block has been released. We assume that in the time span between the release of the new block and the solidification of the glue, the new block and the stack interact with each other with a _frictionless_ contact. As a result, when equilibrium is attained, the strain of the new block is determined by its own weight and not by its pre-stressed state; likewise, the strain of the underlying stack will change due to the additional weight of the new element. The strain difference between the last two blocks of the new stack determines the incompatibility after the glue solidifies.
Concerning the two modes of accretion, we work out explicit formulas for the incompatibility
\(\delta_{j}\). An interesting finding is that for the fast glue the incompatibility \(\delta_{j}\) is determined by the pre-strain (and hence the pre-stress) in block \(j+1\) and block \(j\), as well as on the weights of the blocks. For the case of slow glue, instead, the incompatibility \(\delta_{j}\) is determined solely by the weight \(b_{j}\) of the \(j-th\) element, and is unaffected, as anticipated, by the pre-stress.
Having established these general results, we consider in Section 4 some specific examples. In the first example we consider the growth of a stack up to the addition of 20 blocks, with constant horizontal tensile pre-stress, and we observe how the stress in the first block evolves as new blocks are added. What we observe is that the tensile pre-stress of the newly added blocks produces a contraction in the blocks beneath, and in particular, in the first block. However, as more and more blocks are added, the weight of the entire stack on the first block compensates the above-mentioned horizontal contraction, and as a result the horizontal strain of the first block eventually becomes positive. We also determine the strain profile in the entire stack, which shows that, while a constant incompatibility would result in a linear horizontal stress, a constant pre-stress produces a horizontal stress profile of "parabolic" type. In the second part of Section 4 we repeat the calculations for the case of slow glue, and we verify that the "slow glue" protocol results in lower final stresses.
We finally analyze the residual stresses that result from the removal of the weights, considering accretion protocols that differ not only in the type of glue used, but also in the weights of the blocks. If the weights \(b_{i}\) are increasing, the profiles resulting from the "fast" and "slow" glue protocols are similar in shape, but differ by approximately an order of magnitude, with the residual stress being much larger in the case of "fast glue". If the weights are decreasing, on the other hand, the two profiles have opposite convexities. Finally, if all weights are the same, the residual stress in the "slow glue" case is linear (since in this case the generated incompatibilities \(\delta_{i}\) are all equal).
Section 5 deals with an even more intriguing problem: choosing a deposition strategy in order to attain a target residual stress, the so-called "inverse problem". As a first test, we start from the residual stress resulting from an analysis performed in Section 4 on the case when all weights are equal, and the pre-stress vanishes. We indeed verify that the original pre-stresses are recovered. As a further test, we consider the case of a sinusoidal residual stress, and we determine the pre-strains needed to obtain it. We then extend the model to incorporate Tresca's resistance criterion. We then repeat the treatment for the case of slow glue. Additional considerations and insights are contained in the concluding section.
## 2 Equilibrium of an incompatible stack
### The notion of incompatibility
To set the stage, we consider a planar system consisting of a stack of blocks of linear elastic material, which deform homogeneously and without shear strains.
We denote by \(\epsilon^{i}_{x}\) and \(\epsilon^{i}_{y}\) the horizontal and vertical strain of the \(i\)-th block. The corresponding stresses on the horizontal and vertical directions are \(\sigma^{i}_{x}\) and \(\sigma^{i}_{y}\), as shown in Fig.1. The "elastic state" of the system is described by \(2n\)_strain components_ and \(2n\)_stress components_, respectively,
\[\mathcal{E}=\{(\epsilon^{i}_{x},\epsilon^{i}_{y}),i=1,\ldots n\},\quad\text{ and}\quad\Sigma=\{(\sigma^{i}_{x},\sigma^{i}_{y}),i=1,\ldots,n\}. \tag{1}\]
We assume that the stress and the strain components are related by the _constitutive equations_:
\[\epsilon^{i}_{x}=(\sigma^{i}_{x}-\nu\sigma^{i}_{y})/E,\qquad\epsilon^{i}_{y}=(\sigma^{i}_{y}-\nu\sigma^{i}_{x})/E \tag{2}\]
where \(E\) is the 2D _Young modulus_ (force/length) and \(\nu\) is the _Poisson ratio_ (dimensionless). As a key ingredient in our model, we also assume that the stack comes with _prescribed incompatibility_
\[\Delta=\{\delta_{i},i=1,\ldots,n-1\}, \tag{3}\]
and we impose the _incompatibility constraints_:
\[\epsilon^{i+1}_{x}-\epsilon^{i}_{x}=\delta_{i}\qquad i=1,...,n-1, \tag{4}\]
These incompatibilities are maintained in the stack by a "glue", whose nature will be detailed in the sequel. We shall see that, consistent with physical intuition, the enforcement of a non-trivial incompatibility \(\Delta\neq 0\) induces a non-homogeneous strain state \(\mathcal{E}\), as sketched in Fig. 2 below, and, as a result, a non-vanishing residual stress \(\Sigma\).
Figure 1: _a)_ reference (dashed) and strained (solid) block. _b_) the block is assumed to deform homogeneously under the action of horizontal and vertical stresses \(\sigma_{x},\sigma_{y}\). _c_) to simplify the notation, a grey block will denote the union of a lumped mass (of weight \(b_{i}\)) on top of a massless elastic block.
In this work we will focus attention both on weightless blocks, that according to Fig.1\({}_{a,b}\) will be described by unfilled blocks, and on heavy blocks, that will be filled in grey instead. In order to simplify the description of the state of deformation of the blocks under gravity, each block will be conceived as if it is formed by a heavy _lumped mass_, denoted in black in Fig.1\({}_{c}\), topping a massless block through a frictionless interface. This way we will be justified in continuing to describe the deformation of each block as homogeneous, whereas clearly in reality a heavy block with distributed mass will deform inhomogeneously.
The constitutive equations (2) and the incompatibility constraints (4) are complemented by equilibrium equations, that we introduce in the next Section.
### Weightless blocks
We first consider the problem of equilibrium of a weightless stack, as the one represented in Fig.2. If the incompatibilities are prescribed and there are no external loads acting on the stack, to determine the state of stress we complement (4) with the following \(n+1\) equilibrium equations:
\[\left\{\begin{array}{l}\sum_{i=1}^{n}\sigma_{x}^{i}=0,\\ \sigma_{y}^{i}=0\qquad i=1,...,n.\end{array}\right. \tag{5}\]
The first of (5) tells us that the stress average for the entire stack vanishes along the horizontal direction, whereas the remaining equations impose stress equilibrium of the individual blocks along the vertical direction. The averaged formulation of equilibrium (5) in the context of accreting media appeared for the first time in a continuous setting in the pioneering work of Palmov [1], in the study of solidification. The discrete form presented above can be derived from the principle of virtual work. Since the incompatibilities are fixed, by (4), every virtual variation
\[\widetilde{\mathcal{E}}=\{(\widetilde{\epsilon}_{x}^{i},\widetilde{\epsilon} _{y}^{i}),i=1,\ldots,n\} \tag{6}\]
of the strain state must satisfy
\[\widetilde{\epsilon}_{x}^{i+1}-\widetilde{\epsilon}_{x}^{i}=0, \tag{7}\]
which imply that
\[\widetilde{\epsilon}_{x}^{i}=\widetilde{\epsilon}_{x}, \tag{8}\]
with \(\widetilde{\epsilon}_{x}\) an arbitrary constant. Thus, the internal virtual work associated to a virtual variation of the strains is
\[W[\widetilde{\mathcal{E}}]=\sum_{i=1}^{n}\sigma_{x}^{i}\widetilde{\epsilon}_{x}^{i}+\sum_{i=1}^{n}\sigma_{y}^{i}\widetilde{\epsilon}_{y}^{i}=\Big{(}\sum_{i=1}^{n}\sigma_{x}^{i}\Big{)}\widetilde{\epsilon}_{x}+\sum_{i=1}^{n}\sigma_{y}^{i}\widetilde{\epsilon}_{y}^{i}. \tag{9}\]
Figure 2: Comparison between a compatible weightless stack, and an incompatible weightless stack with mismatches \(\delta_{i}\) between the blocks \(i\) and \(i+1\).
At equilibrium, in the absence of applied loads, the internal work must vanish for every virtual variation of the horizontal strains, whence (5), since \(\widetilde{\epsilon}_{x}\) and \(\{\widetilde{\epsilon}_{y}^{i},i=1,\ldots,n\}\) are arbitrary.
The linear system (2)-(5) is well-posed: it is comprised of linearly independent equations whose number equals that of the unknowns. The unique solution is:
\[\sigma_{x}^{i}=E\left(\sum_{k=1}^{i-1}\delta_{k}-\frac{1}{n}\sum_{a=1}^{n}\sum _{k=1}^{a-1}\delta_{k}\right)\qquad i=1,...,n, \tag{10}\]
where we use the convention that sums over an empty set of indices vanish, i.e., \(\sum_{k=1}^{0}\delta_{k}=0\). The solution (10) confirms that the quantities \(\delta_{i}\) are source of stress, even in the absence of loading. This motivates our interpretation of \(\delta_{i}\) as a lumped version of strain incompatibility.
For illustrative purposes, sample distributions of horizontal stress obeying (10) for \(n=20\) are represented in Fig.3. In the special situation when the incompatibility is constant (\(\delta_{i}=\delta\)), the stress is (see Fig.3\({}_{a}\)):
\[\sigma_{x}^{i}=\left(i-\frac{n+1}{2}\right)E\delta\qquad i=1,...,n. \tag{11}\]
Note that the blocks in the upper half are in tension along the horizontal direction; conversely, the blocks in the lower half of the stack are in compression.
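For readers who wish to experiment with these formulas, the short sketch below implements the general solution (10) and checks that, for a constant incompatibility, it reproduces the linear profile (11); the helper name, the use of numpy and the parameter values are our own illustrative choices.

```python
import numpy as np

def sigma_x(delta, E=1.0):
    """Horizontal stresses of a weightless incompatible stack, eq. (10)."""
    cum = np.concatenate(([0.0], np.cumsum(delta)))   # cum[i-1] = sum_{k<i} delta_k
    return E * (cum - cum.mean())

n, delta0 = 20, 1.0
s = sigma_x(np.full(n - 1, delta0))
print(np.allclose(s, (np.arange(1, n + 1) - (n + 1) / 2) * delta0))  # eq. (11): expected True
```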
Since equilibrium in the horizontal direction is imposed _globally_, our model does not display a localized residual stress in the proximity of an isolated disarrangement, as one would expect.
To see this, consider for instance the stress profile in Fig. 3\({}_{\rm c}\), which is obtained for a stack of \(n=20\) blocks with \(\delta_{10}=\delta>0\) and \(\delta_{i}=0\) otherwise. The horizontal stress is
\[\sigma_{x}^{i}=\left\{\begin{array}{ll}-E\delta/2&i\leq 10\\ E\delta/2&i\geq 11.\end{array}\right. \tag{12}\]
The stress resultant in the horizontal direction would sum to zero, but the whole stack beneath and above the interface between the blocks 10 and 11 has uniform stress, whereas in a real system one would expect this stress to decay moving towards the boundaries. This limitation of the model could clearly be relieved in multiple ways at the discrete level, for example by incorporating concentrated shear springs between all blocks, but this would significantly complicate the model and therefore would affect its transparency.
Figure 3: Renormalized (stress/Young modulus) residual stress in a stack with \(n=20\) blocks. \(a)\) constant incompatibilities \(\delta_{i}=1\); \(b)\) linear incompatibilities; \(c)\) singular incompatibility.
### Heavy blocks
We now consider the case when, along with incompatibilities, an external loading is applied to the stack, which we ascribe to gravity. As a modelling assumption, we consider that at the interface between every pair of blocks there is a lumped mass, as shown in Figure 4.
The weights of the lumped masses, renormalized by the block width, are denoted by the symbol \(b_{i}\). These have the same dimensions as the 2D Young modulus \(E\) and the 2D stress components \(\sigma_{x/y}^{i}\), as a result of having dimensions of force per unit of length. As a remark, the \(b_{i}\) should not be confused with 2D body forces (force/area), yet we continue to use the same symbol because of how these are physically related to weight.
As depicted in Fig.1\({}_{c}\), heavy blocks made of a lumped mass topping a massless elastic block will be filled in gray, whereas massless blocks are unfilled. Note that as a further simplification in our model, thinking of the weight as concentrated supports the hypothesis that the heavy blocks undergo homogeneous strain.
Gravity is applied to the stack while the incompatibilities between the blocks are kept frozen to their prescribed values. Therefore, during the application of the loading, the blocks cannot slide relative to each other, but they deform laterally as a whole. The equilibrium equations (5) are now replaced by:
\[\left\{\begin{array}{l}\sum_{i=1}^{n}\sigma_{x}^{i}=0,\\ \sigma_{y}^{i+1}-\sigma_{y}^{i}=b^{i}\qquad i=1,...,n-1,\\ \sigma_{y}^{n}=-b^{n}.\end{array}\right. \tag{13}\]
The equilibrium equations determine the stresses along the vertical direction:
\[\sigma_{y}^{i}=-\sum_{s=i}^{n}b_{s}\qquad i=1,...,n. \tag{14}\]
The stresses in the horizontal direction are instead obtained by solving the first of (13), together with the constitutive equations (2) and the incompatibility constraints (4). The solution is (the
Figure 4: a) Equilibrium configuration of a stack with \(b_{i}=0\) and \(\delta_{i}=0\). b) Stack in equilibrium with \(b_{i}>0\) and \(\delta_{i}=0\). c) Stack in equilibrium with \(b_{i}=0\) and \(\delta_{i}\neq 0\). d) Combined effect of the weights and incompatibilities. In \(b)\) and c) the black line denote a smooth rigid foundation, that allows horizontal sliding but holds the structure vertically.
details of the derivation may be found in Appendix 7.1):
\[\sigma_{x}^{i}=\sum_{k=1}^{i-1}(E\delta_{k}+\nu b_{k})-\frac{1}{n}\sum_{k=1}^{n-1 }((n-k)(E\delta_{k}+\nu b_{k}))\qquad i=1,...,n. \tag{15}\]
The mechanical effect of the weights \(b_{i}\) and of the incompatibilities \(\delta_{i}\) is illustrated in the cartoon in Fig.4. Due to the linearity of the problem, the final shape represented in \(d)\) may be seen as the result of superposing \(a)\), wherein \(\delta_{i}=0\) and \(b_{i}\neq 0\), and \(c)\), where instead \(\delta_{i}\neq 0\) and \(b_{i}=0\).
It is worth noting that the weight \(b_{n}\) of the last block _does not affect the horizontal stress_. Similarly to what we did to obtain (11), we can consider the special case when the incompatibilities and the weights are spatially uniform: \(\delta_{i}=\delta\), \(b_{i}=b\). In this case, the horizontal stress is an affine function of \(i\):
\[\sigma_{x}^{i}=\left(i-\frac{n+1}{2}\right)(E\delta+\nu b)\qquad i=1,...,n, \tag{16}\]
and the profile of the horizontal residual stress is qualitatively the same as in Figure 3. The horizontal strain is
\[\varepsilon_{x}^{i}=\left(i-\frac{n+1}{2}\right)\delta+\nu\frac{n+1}{2}\frac{ b}{E}. \tag{17}\]
We notice from (17) that the effect of a constant weight is to shift the average value of the horizontal strain from zero. The scaling factor \((n+1)/2\) is due to the fact that the mechanical work performed along the loading path by the weight on top of the \(i\)-th block is increasing with \(i\), due to a sort of "telescopic effect". Hence, the weights do not perform the same work.
A remarkable feature of the general solution (15) is that the dependence of the stress \(\sigma_{x}^{i}\) on the incompatibilities \(\delta_{i}\) and on the weights \(b_{i}\) is functionally similar (apart from the different coefficients \(E\) and \(\nu\)). Although this model is extremely simplified, it thus retains a feature that is present at the level of the three-dimensional theory of elasticity, where incompatible strains and body forces play a similar role as sources of residual stress fields [3].
Another interesting feature of the general solution (15) is that the horizontal stresses would vanish inside each block (\(\sigma_{x}^{i}=0\)) _if and only if_
\[\delta_{i}=-\frac{\nu}{E}b_{i}. \tag{18}\]
Equation (18) also unveils that the incompatibilities \(\delta_{i}\), if these can be externally controlled, could be used to anneal the horizontal stress in a heavy stack, through a sort of remodeling process.
If, instead, the incompatibilities can be nailed and maintained in the stack, one could exploit (18) to _compensate_ for the effect of the concentrated weights \(b_{i}\), for example, to annihilate residual stresses in the deformed stack. Therefore, the loaded column would have zero horizontal stresses in all blocks, whereas of course the unloaded column (where weights are removed) would be residually stressed.
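A minimal numerical sketch of these observations, assuming the formulas above and using helper names and parameter values of our own choosing, is the following: it implements (15), checks the uniform case (16), and verifies that the compensating incompatibilities (18) annihilate the horizontal stresses.

```python
import numpy as np

def sigma_x_heavy(delta, b, E=1.0, nu=0.3):
    """Horizontal stresses of a heavy incompatible stack, eq. (15)."""
    n = len(b)
    src = E * np.asarray(delta) + nu * np.asarray(b)[:-1]   # E*delta_k + nu*b_k, k = 1..n-1
    cum = np.concatenate(([0.0], np.cumsum(src)))           # partial sums, one entry per block
    k = np.arange(1, n)
    return cum - np.sum((n - k) * src) / n

n, E, nu = 10, 1.0, 0.3

# uniform incompatibilities and weights: eq. (16)
s_unif = sigma_x_heavy(np.full(n - 1, 0.05), np.full(n, 1.0), E, nu)
print(np.allclose(s_unif, (np.arange(1, n + 1) - (n + 1) / 2) * (E * 0.05 + nu * 1.0)))

# compensating incompatibilities, eq. (18): the horizontal stress vanishes in every block
b = np.linspace(1.0, 2.0, n)
print(np.allclose(sigma_x_heavy(-nu * b[:-1] / E, b, E, nu), 0.0))
```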
## 3 Growth of an heavy stack: the direct problem
While until now the incompatibilities were prescribed, in this section we establish how their distribution arises during a construction process of layered deposition, where a hypothetical printing device deposits the blocks one by one, through a sequence of steps, with the incompatibilities being locked at the end of each step by a layer of glue between adjacent blocks. We call this the _direct problem_, as opposed to the _inverse problem_ [41, 34], which consists in
determining the incompatibilities that are required to produce a desired distribution of residual stress.
We describe the construction process incrementally. Making reference to the cartoon of Fig.5, the process proceeds as follows. At the first step of the construction process, the first block \(j=1\) is laid on a frictionless support. Denoting by \(b^{1}\) the weight of the block, the stress and strain state at the end of the first step are
\[\Sigma^{1}=\{(\sigma_{x}^{1,1},\sigma_{y}^{1,1})\}=\{(0,-b^{1})\}. \tag{19}\]
The corresponding strains are
\[{\cal E}^{1}=\{(\epsilon_{x}^{1,1},\epsilon_{y}^{1,1})\}, \tag{20}\]
with
\[\epsilon_{y}^{1,1}=-\frac{b^{1}}{E},\qquad\epsilon_{x}^{1,1}=-\nu\epsilon_{y} ^{1,1}. \tag{21}\]
Now, suppose that we are at the end of the \(j^{\rm th}\) step (see Fig. 5.a). The stack is composed of \(j\) blocks, the glue that bonds all the blocks is solidified, the incompatibilities are:
\[\Delta^{j}=(\delta_{i},i=1,...,j-1).\]
Note that at the end of step \(j=1\) there is no incompatibility. The incompatibilities are _locked_, and the stack is in equilibrium under the action of the weights \(b_{i}\), \(i=1,\ldots j\). In this configuration, equilibrium is described by a \(j\)-uple of horizontal and vertical stresses
\[\Sigma^{j}=\{(\sigma_{x}^{i,j},\sigma_{y}^{i,j}),i=1,...,j\}\]
and by a \(j\)-uple of horizontal and vertical strains
\[{\cal E}^{j}=\{(\epsilon_{x}^{i,j},\epsilon_{y}^{i,j}),i=1,\ldots,j\}\]
which solve the equilibrium problem discussed in the previous section, namely:
\[(P_{j})\left\{\begin{array}{l}\sum_{i=1}^{j}\sigma_{x}^{i,j}=0\\ \sigma_{y}^{i+1,j}-\sigma_{y}^{i,j}=b^{i}\qquad i=1,...,j-1\\ \sigma_{y}^{j,j}=-b^{j}\\ \epsilon_{x/y}^{i,j}=\frac{\sigma_{x/y}^{i,j}-\nu\sigma_{y/x}^{i,j}}{E} \qquad i=1,...,j\\ \epsilon_{x}^{i+1,j}-\epsilon_{x}^{i,j}=\delta_{i}\qquad i=1,...,j-1\end{array}\right. \tag{22}\]
The \((j+1)^{\rm th}\) construction step takes place in two stages. In the first stage, the printing device deposits the \((j+1)^{\rm th}\) block on top of the stack, keeping the block under a self-balanced couple of "deposition surface stress":
\[\hat{\sigma}_{x}^{j+1}:=\sigma_{x}^{j+1,j}. \tag{23}\]
In addition, the printing device maintains the \((j+1)^{\rm th}\) block in equilibrium, by supplying a vertical force
\[\hat{\sigma}_{y}^{j+1}=-b_{j+1}, \tag{24}\]
at the bottom face of the block, see Fig.5\({}_{b}\), which equilibrates the weight \(b_{j+1}\).
In writing (23), we take inspiration from previous work on continuous surface growth [41, 34, 42], where the surface (i.e. tangential) components of the stress are controlled at the growth surface, besides the usual tractions or displacements, ordinarily controlled in non-growing bodies. In this stage, the deposition strains of the upper block are:
\[\hat{\varepsilon}_{x}^{j+1}=\frac{\hat{\sigma}_{x}^{j+1}+\nu b_{j+1}}{E},\qquad\hat{\varepsilon}_{y}^{j+1}=-\frac{b_{j+1}+\nu\hat{\sigma}_{x}^{j+1}}{E}. \tag{25}\]
We suppose that the glue begins its solidification process at the end of the first stage, as soon as the block \(j+1\) is brought into contact with the stack.
In the second stage of step \(j+1\), the printing device releases the block, and the construction process pauses until the glue has solidified. At the end of the solidification process, the stack has attained a new equilibrium configuration (see Fig.5\({}_{c}\)), described by a new stress state
\[\Sigma^{j+1}=\{(\sigma_{x}^{i,j+1},\sigma_{y}^{i,j+1}),i=1,...,j+1\}, \tag{26}\]
and by a \((j+1)\)-uple of horizontal and vertical strains
\[\mathcal{E}^{j+1}=\{(\epsilon_{x}^{i,j+1},\epsilon_{y}^{i,j+1}),i=1,\dots,j+1\}. \tag{27}\]
As we shall see below, the new stress and strain distributions depend on the nature of the glue between block \(j\) and block \(j+1\). Whatever these distributions may be, one must expect that, in general, the stress \(\sigma_{x}^{j+1,j+1}\) in the upper block after the attainment of equilibrium will be different from the pre-deposition stress \(\hat{\sigma}_{x}^{j+1}\). This, as we will illustrate, has an important quantitative effect on the distribution of incompatibilities resulting from the deposition process.
The stress and strain states \(\Sigma^{j+1}\) and \(\mathcal{E}^{j+1}\), after the printing device is removed, and after the glue has solidified, depend on the solidification speed of the glue. We consider two extreme cases, which we refer to as "fast glue", which "freezes" the incompatibility at the very instant of deposition, and "slow glue", which allows the last block to slide on top of the second-to-last until the horizontal stress in the last block vanishes.
A point we would like to stress is that, even if at the current stage we have not yet described how the incompatibility between the last and second to last blocks is calculated, we will assume that incompatibilities between all pairs of block do not evolve during the process. In other words, their value is frozen at the value determined at deposition, and the latter will depend crucially on the mechanical behavior of the "glue".
Figure 5: a) A block \(j+1\) is prepared, in order to be deposited on top of an equilibrated stack of \(j\) blocks. b) Immediately prior to deposition, the “printer” pre-stresses the block through a self-balanced couple of horizontal forces of magnitude \(\hat{\sigma}_{x}^{j+1}\), and balances the weight of the block by exerting a vertical force \(\hat{\sigma}_{y}^{j+1}\) on the bottom of the block. c) When the block is dropped, the new stack made of \(j+1\) blocks reattains equilibrium.
### Fast glue
The "fast glue" solidifies at the precise instant when block \(j+1\) is brought into contact with the stack, that is, when the printing device is still holding the block, i.e. the weight \(b_{j+1}\) is not trasmitted to the stack yet. Prior to deposition, the equilibrium of the stack is not perturbed, hence the stresses and strains are those resulting from Problem \((P_{j})\); moreover, the strain state of block \(j+1\) is given by (25). Thus, the incompatibility between block \(j\) and block \(j+1\) calculates as:
\[\delta_{j}=\hat{e}_{x}^{j+1}-e_{x}^{j,j}=\frac{\hat{\sigma}_{x}^{j+1}+\nu b_{j+1}}{E}-e_{x}^{j,j}. \tag{28}\]
We remark that \(\delta_{j}\) depends not only on the pre-stress \(\hat{\sigma}_{x}^{j+1}\) imparted on the new block by the printing device, but also, through Problem \((P_{j})\), on the incompatibilities \((\delta_{1},\ldots,\delta_{j-1})\), _which keep memory of the history of the construction process_.
In Fig.5 we can see the cartoon of the process. At deposition of the \((j+1)\)-th block, when the latter is still supported by the printing machine, the fast glue perfects the bond, therefore locking the incompatibility at the value (28). At this point, the printing device releases the block, and this in turn imparts both a vertical action and a horizontal action on the underlying stack, which at this instant is not yet balanced. Thus, for the case of the fast glue, the new stress and strain distributions (26) and (27) are the solution of Problem \((P_{j+1})\), where the incompatibility distribution
\[\Delta^{j+1}=(\delta_{1},\ldots,\delta_{j-1},\delta_{j}), \tag{29}\]
with \(\delta_{1},\ldots,\delta_{j-1}\) the same as in the previous step and \(\delta_{j}\) given by (28). The process is then iterated with the deposition of new blocks.
The direct problem, as formulated above, admits a closed-form solution. The details of its derivation may be found in Appendix 7.2. In terms of deposition strains, the incompatibility \(\delta_{i}\) is:
\[\delta_{i}=\hat{e}_{x}^{i+1}-\frac{i-1}{i}\hat{\epsilon}_{x}^{i}-\frac{\nu b_{ i}}{E}. \tag{30}\]
Using (25), we can rewrite (30) as:
\[\delta_{i}=\frac{\hat{\sigma}_{x}^{i+1}+\nu b_{i+1}}{E}-\frac{i-1}{i}\frac{ \hat{\sigma}_{x}^{i}+\nu b_{i}}{E}-\frac{\nu b_{i}}{E}. \tag{31}\]
The finding that the incompatibility \(\delta_{i}\) depends only on the last two elements \(\hat{\sigma}_{x}^{i},\hat{\sigma}_{x}^{i+1}\) of the deposition history \((\hat{\sigma}_{x}^{2},\ldots,\hat{\sigma}_{x}^{i},\hat{\sigma}_{x}^{i+1})\) was to be expected. Indeed, even in the continuous case the incompatibility depends on the spatial derivative (the divergence) of the pre-deposition surface stress [2, 41, 42]. Were this continuous problem to be discretized to find a numerical solution, the approximation of the divergence at the boundary would involve the values of the discretized deposition stress at the two grid points closest to the boundary, in accordance with (31).
Once the incompatibilities are known, the horizontal stresses can be calculated from (15), an equation that we repeat here for \(n=j\):
\[\sigma_{x}^{i,j}=\sum_{k=1}^{i-1}\left(E\delta_{k}+\nu b_{k}\right)-\frac{1}{j }\sum_{k=1}^{j-1}\left((j-k)\left(E\delta_{k}+\nu b_{k}\right)\right). \tag{32}\]
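The fast-glue direct problem can thus be evaluated with a few lines of code; the sketch below assumes the formulas (31) and (32) as stated, and the function names, parameter values and the choice of a uniform weight are ours.

```python
import numpy as np

def fast_glue_deltas(sig_hat, b, E=1.0, nu=0.3):
    """Incompatibilities delta_i, i = 1..n-1, from eq. (31); sig_hat[0] (block 1) is multiplied by zero."""
    n = len(b)
    return np.array([(sig_hat[i] + nu * b[i]) / E
                     - (i - 1) / i * (sig_hat[i - 1] + nu * b[i - 1]) / E
                     - nu * b[i - 1] / E for i in range(1, n)])

def sigma_x_heavy(delta, b, E=1.0, nu=0.3):
    """Horizontal stresses from eq. (15)/(32)."""
    n = len(b)
    src = E * np.asarray(delta) + nu * np.asarray(b)[:-1]
    cum = np.concatenate(([0.0], np.cumsum(src)))
    return cum - np.sum((n - np.arange(1, n)) * src) / n

n, E, nu = 20, 1.0, 0.3
b = np.full(n, 1.0)                  # uniform weights
sig_hat = np.zeros(n)                # blocks deposited with zero pre-stress
delta = fast_glue_deltas(sig_hat, b, E, nu)
print(sigma_x_heavy(delta, b, E, nu))   # horizontal stresses in the completed, loaded stack
```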
### Slow glue
If the glue perfects the bond between the last and second to last blocks very slowly, we may think that the interface between the \(j+1\)-th and the \(j\)-th blocks is frictionless from the instant of deposition to the instant when the whole stack finally reaches equilibrium. Since the glue cannot support shear stresses prior to solidification, which occurs more slowly than the attainment of equilibrium, the horizontal stress of block \(j+1\) needs to vanish,
\[\sigma_{x}^{j+1,j+1}=0. \tag{33}\]
The system of equations that govern the elastic state in the stack is
\[(\bar{P}_{j+1})\left\{\begin{array}{l}\sum_{i=1}^{j}\sigma_{x}^{i,j+1}=0,\\ \sigma_{y}^{i+1,j+1}-\sigma_{y}^{i,j+1}=b^{i},\qquad i=1,...,j,\\ \sigma_{y}^{j+1,j+1}=-b^{j+1},\\ \epsilon_{x/y}^{i,j+1}=\frac{(\sigma_{x/y}^{i,j+1}-\nu\sigma_{y/x}^{i,j+1})}{E},\qquad i=1,...,j+1,\\ \epsilon_{x}^{i+1,j+1}-\epsilon_{x}^{i,j+1}=\delta_{i},\qquad i=1,...,j-1,\\ \sigma_{x}^{j+1,j+1}=0.\end{array}\right. \tag{34}\]
When the glue perfects the bond at the attainment of equilibrium, the difference between the horizontal strains of the \(j+1\)-th and the \(j\)-th block remains frozen to its value at equilibrium, and the incompatibility \(\delta_{j}\) is therefore set to:
\[\delta_{j}=\epsilon_{x}^{j+1,j+1}-\epsilon_{x}^{j,j+1}. \tag{35}\]
Once the incompatibility is locked, the printing device starts the deposition of the next block, and the procedure is iterated as long as \(j\leq n\).
In summary, the process proceeds as follows (following again the cartoon of Fig.5): prior to deposition, the stack of \(j\) blocks is in equilibrium, and the block \(j+1\) is made available geometrically; the block is then prestretched and maintained in equilibrium by the printing device; at deposition, however, differently from the case of fast glue, the horizontal stress in the \(j+1\) block drops to zero, and therefore the underlying stack deforms to reach a new equilibrium only due to the weight of the last block. Finally, the glue dries and the incompatibility is frozen at the value defined by (35).
It is immediate to check (details in Appendix 7.3) that the solution to the equilibrium condition (33) leads to
\[\delta_{i}=-\frac{\nu}{E}b_{i}. \tag{36}\]
Equation (36), which coincides with (18), confirms that in the slow glue protocol, the incompatibility is determined by the local value of \(b_{i}\). This was to be expected: the slow glue solidifies only _after_ equilibrium has been attained, meaning that prior to equilibrium the glue is frictionless, which is the same assumption at the basis of (18), where the variables \(\delta_{i}\) were permitted to evolve freely upon attainment of equilibrium.
We conclude this section by observing that (15) with (36) imply that in the slow glue protocol, the horizontal stresses \(\sigma_{x}^{i,j}\) vanish everywhere in the stack during the construction process. Note however that, since the incompatibilities \(\delta_{i}\) remain frozen, the removal of the loads upon completion of the construction results, in general, in a non-trivial distribution of residual stress.
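A sketch of the slow-glue protocol, under the same assumptions and with the same hypothetical helper as above, makes both observations concrete: the loaded stack carries no horizontal stress, while removing the weights leaves a linear residual profile when the weights are uniform.

```python
import numpy as np

def sigma_x_heavy(delta, b, E=1.0, nu=0.3):
    """Horizontal stresses from eq. (15)."""
    n = len(b)
    src = E * np.asarray(delta) + nu * np.asarray(b)[:-1]
    cum = np.concatenate(([0.0], np.cumsum(src)))
    return cum - np.sum((n - np.arange(1, n)) * src) / n

n, E, nu = 20, 1.0, 0.3
b = np.full(n, 1.0)
delta_slow = -nu * b[:-1] / E                                    # eq. (36)

print(np.allclose(sigma_x_heavy(delta_slow, b, E, nu), 0.0))     # no horizontal stress while loaded
print(sigma_x_heavy(delta_slow, np.zeros(n), E, nu))             # residual stress after removing the weights
```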
## 4 Direct problem: examples
To get further insight into the findings of Sec.3, we now consider some direct problems, where the final stress distributions obtained under different protocols are deduced and compared quantitatively and qualitatively.
### Fast glue
In the fast glue case, one can prescribe both the weight and the horizontal prestress of each new block. By using the solutions (31) and (32) it is now possible to compare the outcomes of different loading protocols.
To establish a meaningful comparison, we consider stacks with the same final number \(n\) of blocks and the same total weight. Specifically, we consider blocks with increasing, constant and decreasing weight,
\[b_{i}=iE,\qquad b_{i}=\frac{n+1}{2}E,\qquad b_{i}=(n+1-i)E,\qquad(i=1,...,n). \tag{37}\]
Note that in all cases above, the total weight of the column is the same but, as we shall see, the stress distributions are significantly different.
We first compare stacks that are manufactured with \(\hat{\sigma}^{j}_{x}=0\). The resulting stress distributions are reported in Fig.6. All the stress distributions have, of course, zero resultant. By changing the order of the weights one can change the qualitative behavior of the \(\sigma^{i}_{x}\) in the stack. In particular, if the \(b_{i}\) are linearly increasing the distribution of the \(\sigma^{i}_{x}\) is also linearly increasing, if the \(b_{i}\) are uniform the \(\sigma^{i}_{x}\) still increase, but not linearly, and finally if the \(b_{i}\) are decreasing, the \(\sigma^{i}_{x}\) are non-monotonic.
Another interesting effect results from the comparison of stacks that are manufactured with the same linear distribution of \(b_{i}=i\,E\), but that differ in terms of deposition stress \(\hat{\sigma}^{j}_{x}\). In this case one can obtain stacks where the resulting distribution of \(\sigma^{i}_{x}\) is linear if \(\hat{\sigma}^{j}_{x}=0\), but it becomes concave or convex if, respectively, \(\hat{\sigma}^{j}_{x}>0\) or \(\hat{\sigma}^{j}_{x}<0\), that is, if the blocks are pre-tensioned or pre-compressed prior to deposition.
Figure 6: Stress distributions resulting from the _fast glue_ protocol in a stack with \(n=300\) layers, with \(\hat{\sigma}^{j}_{x}=0\) and different distributions of \(b_{i}\) sharing the same total weight (left); or with \(b_{i}\) linear and different prescriptions for the deposition stress \(\hat{\sigma}^{j}_{x}\) (right).
In passing we note that in all the cases considered above, there are special points where the various stress distributions intersect: two points in the case where \(\hat{\sigma}^{j}_{x}=0\) and the \(b_{i}\) are changed, and one such point in the case where \(b_{i}\) is linear and the \(\hat{\sigma}^{j}_{x}\) are changed.
### A remark on the (non) controllability of the stress in 3D printing
A key aspect of the fast glue protocol is represented by the fact that the _pre-deposition_ stress
\[\hat{\sigma}^{j}_{x}:=\sigma^{j,j-1}_{x} \tag{38}\]
and the _post-deposition_ stress \(\sigma^{j,j}_{x}\) of the upper block will differ in general. This happens because after the deposition of the \(j\)-th block, the latter will deform together with the underlying stack to reach a new equilibrium configuration, and therefore its level of stress will generally change from the pre-deposition value.
This feature is illustrated in Fig.7 for two different loading scenarios, both taking place with uniform weights (\(b_{i}=E\)). In both cases the pre-deposition stress and the post-deposition stress are different, although the differences seem to be bounded if the number of blocks is very large.
The difference between the pre-deposition stress and the post-deposition stress has relevant implications for 3D printing. Indeed, this result shows that during additive manufacturing, the value of the surface stress right after deposition cannot be controlled directly, since it will depend _in a non-local fashion_ on the whole state of deformation of the body. This fact had already been highlighted in pioneering works on surface growth, specifically in the seminal work of Trincher [2]. Further developments of this aspect were discussed in the setting of continuous incompatible surface growth in [41].
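This non-local effect is easy to reproduce numerically; the sketch below reuses the fast-glue relations (31) and (32) (with our own helper names and an illustrative pre-tension \(\hat{\sigma}_{x}^{j}=jE\) for \(j\geq 2\), close in spirit to the second scenario of Fig.7) and prints, for each step, the pre-deposition stress of the top block next to its post-deposition value.

```python
import numpy as np

def fast_glue_deltas(sig_hat, b, E=1.0, nu=0.3):
    n = len(b)
    return np.array([(sig_hat[i] + nu * b[i]) / E
                     - (i - 1) / i * (sig_hat[i - 1] + nu * b[i - 1]) / E
                     - nu * b[i - 1] / E for i in range(1, n)])

def sigma_x_heavy(delta, b, E=1.0, nu=0.3):
    n = len(b)
    src = E * np.asarray(delta) + nu * np.asarray(b)[:-1]
    cum = np.concatenate(([0.0], np.cumsum(src)))
    return cum - np.sum((n - np.arange(1, n)) * src) / n

n, E, nu = 20, 1.0, 0.3
b = np.full(n, E)                                # uniform weights b_i = E
sig_hat = E * np.arange(1, n + 1, dtype=float)   # \hat\sigma_x^j = jE ...
sig_hat[0] = 0.0                                 # ... except the first block, deposited stress-free
delta = fast_glue_deltas(sig_hat, b, E, nu)

for j in range(2, n + 1):                        # stack of j blocks at the end of step j
    post = sigma_x_heavy(delta[:j - 1], b[:j], E, nu)[-1]   # sigma_x^{j,j}, eq. (32) with n = j
    print(j, sig_hat[j - 1], post)               # pre- vs post-deposition stress of the top block
```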
### Slow glue
While in the fast glue protocol the pre-deposition stress and post-deposition stress can both be different from zero, and differ from each other, in the slow glue protocol the post-deposition stress \(\sigma^{j,j}_{x}\) will always drop to zero throughout the process, regardless of the value given to the pre-deposition stress \(\hat{\sigma}^{j}_{x}\). Indeed, as Fig.6\({}_{a}\) shows, in the fast glue case even if the pre-deposition
Figure 7: Differences between the _pre-deposition_ stress \(\hat{\sigma}^{j}_{x}\) (black) and the resulting _post-deposition_ stress \(\sigma^{j,j}_{x}\) (red) in the newly deposited block, in two cases: \(a)\): \(\hat{\sigma}^{j}_{x}=0\) and \(b)\): \(\hat{\sigma}^{j}_{x}=jE\). In both cases the body force is uniform, \(b_{i}=E\). The plots display the dimensionless ratio \(\sigma_{x}/E\).
stress \(\hat{\sigma}^{j}_{x}\) is zero, the post-deposition stress \(\sigma^{j,j}_{x}\) will generally differ from zero, whereas it would always be zero in the slow glue case.
This marks a strong difference between fast and slow glue protocols, which may be readily appreciated by comparing the incompatibilities stored at the end of the process. Consider the accretion of a stack with \(\hat{\sigma}^{j}_{x}=0\) and, once again, three distributions of \(b_{i}\) leading to the same total weight. As evidenced in Fig.8, in all cases the distributions of incompatibilities are different in the fast and slow glue protocols, above all for the first layers. Eventually however, for \(b_{i}\) constant or decreasing, the incompatibilities would tend asymptotically to a common value if the number of layers is very large, whereas the difference in incompatibilities would remain constant for \(b_{i}\) linearly increasing.
Differences in terms of residual stress (that is, the stress distribution in the stack upon removal of all external loading) may be appreciated, for all cases covered in Fig.8, in the plots of Fig.9. Once the incompatibility has been calculated, the stress distributions may be computed from (15) with \(b_{i}=0\). The results show that, for stacks manufactured with \(\hat{\sigma}^{j}_{x}=0\), the effects of fast/slow glue are non-negligible, but essentially of a _quantitative_ nature.
Figure 8: Comparison between the incompatibilities resulting from fast and slow protocols, for walls that are manufactured with \(\hat{\sigma}^{j}_{x}=0\). Here \(\nu=0.3\).
Figure 9: Comparison between the residual stresses arising from fast and slow protocols, for towers manufactured with \(\hat{\sigma}^{j}_{x}=0\). Here \(\nu=0.3\).
## 5 The inverse problem: programming residual stress
Thus far we have explored the problem of finding the residual stress arising from a given deposition protocol. We now turn our attention to the following _inverse problem_: find the deposition protocol
\[\Pi=(\hat{\sigma}^{2},\ldots,\hat{\sigma}^{n};b^{1},\ldots,b^{n}) \tag{39}\]
which delivers a desired residual stress state. To avoid confusing the target stress state with the current state of stress of the growing stack, we denote the former by
\[T_{x}^{n}=(\tau_{x}^{1},\ldots,\tau_{x}^{n}),\qquad\mbox{s.t.}\qquad\sum_{i=1} ^{n}\tau_{x}^{i}=0. \tag{40}\]
where the zero-sum condition is needed for the target to be an admissible residual stress state.
We remark that the target state of stress is solely sourced by the incompatibilities; therefore, if weights are applied on the stack during manufacturing, the target state is attained only once all the weights are removed. We also remark that upon removal of the weights the vertical stresses \(\sigma_{y}^{i,n}\) vanish, so the target state is defined solely by a self-balanced distribution of horizontal stresses.
By using the definition (4) and the constitutive equations (2) (or by solving (10)) we find that the incompatibilities needed to attain the target state are:
\[\delta_{i}=\frac{\tau_{x}^{i+1}-\tau_{x}^{i}}{E}\qquad i=1,...,n-1. \tag{41}\]
Thus, our problem reduces to finding the deposition protocol that results in a given distribution of incompatibilities \((\delta_{1},\ldots,\delta_{n-1})\).
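As an illustrative aside (not part of the original text), this first step is a one-line computation. The sketch below is ours; the target values are arbitrary and \(E\) is normalized to 1, as in the dimensionless plots.

```python
import numpy as np

def incompatibilities_from_target(tau, E=1.0):
    """Incompatibilities delta_i = (tau^{i+1} - tau^i)/E, eq. (41), for i = 1..n-1."""
    tau = np.asarray(tau, dtype=float)
    assert np.isclose(tau.sum(), 0.0), "target must be self-balanced, cf. eq. (40)"
    return np.diff(tau) / E

# illustrative self-balanced target for a 5-block stack
delta = incompatibilities_from_target([1.0, 0.5, 0.0, -0.5, -1.0])
print(delta)   # -> [-0.5 -0.5 -0.5 -0.5]
```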
The protocol depends on whether we are using a fast glue or a slow glue, and on the loading applied to the stack during the deposition. We remark that, although the target state is unloaded, weights may still be used in the manufacturing process to produce incompatibilities; of course such weights would be removed once the stack is complete. Weights are, in fact, the only possible type of control in the slow glue case, since there the horizontal pre-deposition stress always drops to zero.
### Fast glue: inverse problem
Consider the deposition of a heavy stack with the fast-glue protocol. By solving (31) where we now treat \(\delta_{i}\) and \(b_{i}\) as prescribed and \(\hat{\sigma}_{x}^{i}\) as unknown, and bearing in mind that \(\hat{\sigma}_{x}^{1}=0\), we obtain
\[\hat{\sigma}_{x}^{i}=\frac{1}{i-1}\sum_{k=1}^{i-1}\left(\nu\left(2kb_{k}-b_{k }-b_{k+1}\right)+Ek\delta_{k}\right),\qquad i>1. \tag{42}\]
The meaning of (42) is that, to produce a desired distribution of incompatibility \(\delta_{i}\) in a heavy stack with given weights \(b_{i}\), one needs to provide the pre-deposition horizontal stress \(\hat{\sigma}_{x}^{i}\) dictated by (42). At the end of the process the weights are removed and the incompatibilities remain the only source of stress in the stack. Note that if the stack is directly manufactured with \(b_{i}=0\), the final distribution of stress coincides precisely with the target residual stress state (40).
As a first application, we manufacture a stack with sinusoidal distribution of residual stress,
\[\tau_{x}^{i}=\sin\left(\frac{2\pi i}{n}\right) \tag{43}\]
which is clearly self-balanced. The target residual stress is represented in Fig.10\({}_{a}\). Once the incompatibilities are calculated by (41), by making use of (42) we can determine the required deposition protocol.
For illustration, we show that the same target may be achieved by two different processes, involving both heavy and massless stacks. In particular we manufacture the stack:
1. through the deposition of pre-stretched massless layers;
2. through the deposition of pre-stretched heavy layers with weight increasing as \(b_{i}=iE\).
The outcomes are illustrated in Fig.10 for a stack of \(n=20\) blocks. In Fig.10\({}_{b}\) we illustrate case 1), and in Fig.10\({}_{c}\) case 2). The required deposition stress seems intuitively consistent with the final stress distribution in the case \(b_{i}=0\), but it is rather different from the target stress in the case \(b_{i}=iE\).
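For readers who wish to reproduce these curves, a minimal Python sketch is given below. It is our own illustration of (41)-(42): \(E\) is normalized to 1 and \(\nu=0.3\) as in the figures, and the function and variable names are ours.

```python
import numpy as np

def fast_glue_predeposition_stress(delta, b, E=1.0, nu=0.3):
    """Pre-deposition stresses sigma_hat_x^i from eq. (42); sigma_hat_x^1 = 0."""
    delta, b = np.asarray(delta, float), np.asarray(b, float)
    n = len(b)
    k = np.arange(1, n)                                       # k = 1..n-1
    terms = nu*(2*k*b[:-1] - b[:-1] - b[1:]) + E*k*delta      # summands of (42)
    sigma_hat = np.zeros(n)
    for i in range(2, n + 1):
        sigma_hat[i - 1] = terms[:i - 1].sum() / (i - 1)
    return sigma_hat

n = 20
idx = np.arange(1, n + 1)
tau = np.sin(2*np.pi*idx/n)                  # sinusoidal target, eq. (43)
delta = np.diff(tau)                         # eq. (41) with E = 1
for b in (np.zeros(n), idx.astype(float)):   # case 1): massless; case 2): b_i = i*E
    print(fast_glue_predeposition_stress(delta, b))
```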
An interesting remark relative to (42) is that the distribution of deposition stresses \(\hat{\sigma}_{x}^{i}\) required to produce a given incompatibility is independent of the weights, if these satisfy
\[b_{k}+b_{k+1}=2kb_{k}, \tag{44}\]
giving
\[\tilde{b}_{i}=b_{1}\frac{2^{i-1}}{\Gamma(i)}\prod_{k=0}^{i-2}(\tfrac{1}{2}+k), \tag{45}\]
where \(\Gamma(i)\) is Euler's Gamma function, with \(b_{1}\) arbitrary. For \(b_{1}=0\) we obtain trivially the case of weightless blocks, but it is interesting to note that even for \(b_{1}\neq 0\) the deposition stress required to produce a certain target residual stress would not depend on the weight distribution.
### Slow glue: inverse problem
In the slow glue paradigm the condition \(\sigma_{x}^{i,i}=0\) results in the constraint \(E\delta_{i}+\nu b_{i}=0\) between weights and incompatibilities; therefore, during deposition the weights may be used to obtain a target residual stress \(\tau_{x}^{i}\). Indeed, we simply need to prescribe weights according to
\[b_{i}=\frac{\tau_{x}^{i}-\tau_{x}^{i+1}}{\nu}. \tag{46}\]
Figure 10: \(a)\) target residual stress \(\tau_{x}^{i}\) to be achieved through the fast glue protocol. \(b)\) Required deposition pre-stress to achieve the target in the case of massless weights, and \(c)\) required deposition pre-stress in the case of linearly increasing weights. Here \(\nu=0.3\).
This makes the treatment of the inverse problem for the slow-glue case considerably simpler than for the fast-glue case. Moreover, at least conceptually, (46) shows that any distribution of \(\tau_{x}^{i}\) can be targeted through the deposition of weights \(b_{i}\).
There is, however, a subtlety in the practical implementation of this method. Indeed, since \(\tau_{x}^{i}\) must have zero average, \(\tau_{x}^{i}\) can be neither constant, nor strictly increasing or decreasing along the stack. This implies that there will be zones, where \(\tau_{x}^{i}\) is increasing, in which one would need to deposit blocks with a negative weight.
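For completeness, a minimal sketch of (46) is given below (ours, with \(\nu=0.3\)); negative entries in the output flag exactly the problematic zones discussed above.

```python
import numpy as np

def slow_glue_weights(tau, nu=0.3):
    """Weights b_i = (tau^i - tau^{i+1})/nu, eq. (46), for i = 1..n-1.
    Negative entries appear wherever tau is increasing and are not physically realizable."""
    return -np.diff(np.asarray(tau, float)) / nu

print(slow_glue_weights([1.0, 0.5, 0.0, -0.5, -1.0]))              # decreasing target: all positive
print(slow_glue_weights(np.sin(2*np.pi*np.arange(1, 21)/20)))      # sinusoidal target: mixed signs
```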
## 6 Conclusions
In this work we have formulated a discrete model that captures, in a minimal setting where the material is treated as homogeneous, linearly elastic and isotropic, and where plasticity and thermal effects are neglected, some fundamental features of additively manufactured solids. By introducing a simplified kinematics that allows for the onset of incompatible deformations between the layers (schematized as elastic blocks) of an additively manufactured stack, we have studied the role of deposition prestress applied to the blocks, and of the weight of the blocks, on the resulting state of incompatibility. From here, we have computed the final residual stress patterns, and we could single out the factors behind qualitative and quantitative differences emerging in their distributions.
To capture with minimal complexity the role played by an "ideal glue" between the blocks - which is required to maintain incompatible deformations, therefore preventing the blocks from sliding relative to each other - we have devised two extreme behaviors: one, called "fast glue", where the bond between the prestressed block and the underlying stack occurs while the new block is still maintained in equilibrium by some invisible "printing device"; the other, called "slow glue", where the block is first released and left free to find an equilibrium configuration together with the underlying stack, before the glue perfects the bond.
The model proposed in this work is a very elementary approximation of the complex processes that occur in additive manufacturing: for example, the role of the glue is overly simplified and the deformations are constrained in a very special way. However, its analytical simplicity allows us to obtain closed-form results that are very transparent in physical terms. In particular, the model retains many features of richer continuous models, such as the presence of a new type of boundary condition, namely the control of the "surface stress", which is not contemplated in non-growing solids [41, 42, 34].
The main findings of the model have been illustrated by comparing the effects of different deposition protocols, showing that some controls (for example the type of glue) have a quantitative effect on the final stress distribution of the stack, whereas other controls (for example the deposition surface pre-stress) have a qualitative effect. These results pave the way for new studies on the fundamental role of the adhesive properties of the growth surface.
The last part of the manuscript is devoted to the formulation of a technologically important inverse problem, consisting in finding the deposition protocol required to achieve a desired distribution of residual stress (and, potentially, heterogeneous elastic properties) in an additively manufactured stack [39, 8, 18].
The proposed model should be conceived as a template, that has the advantage to highlight with physical and analytical transparency the non-trivial mechanics involved in additive manufacturing of prestressed solids.
## Appendix
### Derivation of (15)
This section illustrates the steps leading to relation (15). We start from the first relation of system (13), together with the constitutive equations (2) and (4), and take into account that the problem along the vertical direction is solved by \(\sigma_{y}^{i}=-\sum_{s=i}^{n}b_{s}\). The problem along the horizontal direction can then be expressed by the following system:
\[\left\{\begin{array}{l}\sum_{i=1}^{n}\sigma_{x}^{i}=0\\ \\ \epsilon_{x/y}^{i}=\frac{(\sigma_{x/y}^{i}-\nu\sigma_{y/x}^{i})}{E}\qquad i=1,...,n\\ \\ \epsilon_{x}^{i+1}-\epsilon_{x}^{i}=\delta_{i}\qquad i=1,...,n-1\end{array}\right. \tag{47}\]
Expressing the third relation of the system (47) through the constitutive relation we can write:
\[\sigma_{x}^{i+1}-\sigma_{x}^{i}=E\delta_{i}+\nu b_{i} \tag{48}\]
From which we can write:
\[\sigma_{x}^{i+1}=\sigma_{x}^{1}+\sum_{k=1}^{i}E\delta_{k}+\sum_{k=1}^{i}\nu b _{k} \tag{49}\]
Using now the first relation of the system (47) we determine the following relation:
\[\sigma_{x}^{1}=-\frac{1}{n}\sum_{i=2}^{n}\sum_{k=1}^{i-1}E\delta_{k}-\frac{1} {n}\sum_{i=2}^{n}\sum_{k=1}^{i-1}\nu b_{k} \tag{50}\]
Substituting (50) into (49) we get:
\[\sigma_{x}^{i+1}=\sum_{k=1}^{i}E\delta_{k}-\frac{1}{n}\sum_{a=2}^{n}\sum_{k=1 }^{a-1}E\delta_{k}+\sum_{k=1}^{i}\nu b_{k}-\frac{1}{n}\sum_{a=2}^{n}\sum_{k=1 }^{a-1}\nu b_{k}, \tag{51}\]
from which we can write the following relation:
\[\sigma_{x}^{i}=E\left(\sum_{k=1}^{i-1}\delta_{k}-\frac{1}{n}\sum_{a=2}^{n} \sum_{k=1}^{a-1}\delta_{k}\right)+\nu\left(\sum_{k=1}^{i-1}b_{k}-\frac{1}{n} \sum_{a=2}^{n}\sum_{k=1}^{a-1}b_{k}\right) \tag{52}\]
which, rearranged, gives:
\[\sigma_{x}^{i}=\sum_{k=1}^{i-1}(E\delta_{k}+\nu b_{k})-\frac{1}{n}\sum_{k=1} ^{n-1}((n-k)(E\delta_{k}+\nu b_{k})) \tag{53}\]
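As a side check (ours, not part of the original derivation), formula (53) can be verified numerically: the resulting stresses sum to zero, as required by the first equation of (47), and consecutive stresses jump by \(E\delta_{k}+\nu b_{k}\), in agreement with (48).

```python
import numpy as np

def sigma_x_from_delta_b(delta, b, E=1.0, nu=0.3):
    """Horizontal stresses sigma_x^i, i = 1..n, from eq. (53)."""
    delta, b = np.asarray(delta, float), np.asarray(b, float)
    n = len(b)
    c = E*delta + nu*b[:-1]                                   # c_k = E*delta_k + nu*b_k
    offset = (np.arange(n - 1, 0, -1) * c).sum() / n          # (1/n) * sum_k (n-k) c_k
    return np.array([c[:i].sum() - offset for i in range(n)])

rng = np.random.default_rng(0)
delta, b = rng.normal(size=9), rng.uniform(size=10)
sigma = sigma_x_from_delta_b(delta, b)
assert np.isclose(sigma.sum(), 0.0)                           # global balance
assert np.allclose(np.diff(sigma), 1.0*delta + 0.3*b[:-1])    # stress jumps, eq. (48)
```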
### Derivation of (31)
This section illustrates the steps leading to relation (31). Starting from the problem \(P_{j}\), the solution along the vertical direction is given by \(\sigma_{y}^{i,j+1}=-\sum_{k=i}^{j+1}b_{k}\). The problem along the horizontal direction can be expressed by the following system:
\[\left\{\begin{array}{l}\sum_{i=1}^{j}\sigma_{x}^{i,j}=0\\ \\ \epsilon_{x/y}^{i,j}=\frac{(\sigma_{x/y}^{i,j}-\nu\sigma_{y/x}^{i,j})}{E}\qquad i =1,...,j\\ \\ \epsilon_{x}^{i+1,j}-\epsilon_{x}^{i,j}=\delta_{i}\qquad i=1,...,j-1\end{array}\right. \tag{54}\]
Expressing the third relation of the system (54) through the constitutive relation we can write:
\[\sigma_{x}^{i+1,j}-\sigma_{x}^{i,j}=E\delta_{i}+\nu b_{i} \tag{55}\]
From which we can write:
\[\sigma_{x}^{i+1,j}=\sigma_{x}^{1,j}+\sum_{k=1}^{i}E\delta_{k}+\sum_{k=1}^{i} \nu b_{k} \tag{56}\]
Using now the first relation of the system (54) we determine the following relation:
\[\sigma_{x}^{1,j}=-\frac{1}{j}\sum_{i=2}^{j}\sum_{k=1}^{i-1}E\delta_{k}-\frac{1 }{j}\sum_{i=2}^{j}\sum_{k=1}^{i-1}\nu b_{k} \tag{57}\]
Substituting (57) into (56) we get:
\[\sigma_{x}^{i+1,j}=\sum_{k=1}^{i}E\delta_{k}-\frac{1}{j}\sum_{l=2}^{j}\sum_{k =1}^{l-1}E\delta_{k}+\sum_{k=1}^{i}\nu b_{k}-\frac{1}{j}\sum_{l=2}^{j}\sum_{k =1}^{l-1}\nu b_{k} \tag{58}\]
Using the constitutive relation we arrive at the relation
\[\epsilon_{x}^{j,j}=\frac{1}{E}(\sigma_{x}^{j,j}-\nu\sigma_{y}^{j,j})=\sum_{k =1}^{j-1}\delta_{k}-\frac{1}{j}\sum_{l=2}^{j}\sum_{k=1}^{l-1}\delta_{k}+\frac {1}{E}\sum_{k=1}^{j-1}\nu b_{k}-\frac{1}{jE}\sum_{l=2}^{j}\sum_{k=1}^{l-1}\nu b _{k}+\frac{\nu b_{j}}{E} \tag{59}\]
which describes the horizontal deformation of the \(j\)-th block when the tower contains \(j\) blocks. From (59) we obtain the incompatibility \(\delta_{j}\):
\[\delta_{j}=\hat{e}_{x}^{j+1}-\sum_{k=1}^{j-1}\delta_{k}+\frac{1}{j}\sum_{k=1} ^{j-1}(j-k)\delta_{k}-\frac{1}{E}\sum_{k=1}^{j-1}\nu b_{k}+\frac{\nu}{jE}\sum _{k=1}^{j-1}(j-k)b_{k}-\frac{\nu b_{j}}{E} \tag{60}\]
Relation (60) can be rewritten as:
\[\delta_{j}=\hat{e}_{x}^{j+1}-\frac{1}{j}\sum_{k=1}^{j-1}k\delta_{k}-\frac{\nu }{jE}\sum_{k=1}^{j-1}kb_{k}-\frac{\nu b_{j}}{E} \tag{61}\]
We can further rewrite the formulation by writing:
\[\begin{split}\delta_{j}&=\hat{e}_{x}^{j+1}-\frac{1}{j}( j-1)\delta_{j-1}-\frac{1}{j}\sum_{k=1}^{j-2}k\delta_{k}-\frac{\nu}{jE}\sum_{k=1}^{j-1}kb_{k}- \frac{\nu b_{j}}{E}\\ &=\hat{e}_{x}^{j+1}-\frac{j-1}{j}(\hat{e}_{x}^{j}-\frac{1}{j-1} \sum_{k=1}^{j-2}k\delta_{k}-\frac{\nu}{(j-1)E}\sum_{k=1}^{j-2}kb_{k}-\frac{\nu b _{j-1}}{E})-\frac{1}{j}\sum_{k=1}^{j-2}k\delta_{k}-\frac{\nu}{jE}\sum_{k=1}^{j -1}kb_{k}-\frac{\nu b_{j}}{E}\\ &=\hat{e}_{x}^{j+1}-\frac{j-1}{j}\hat{e}_{x}^{j}+\frac{1}{j}\sum_ {k=1}^{j-2}k\delta_{k}+\frac{\nu}{jE}\sum_{k=1}^{j-2}kb_{k}+\frac{j-1}{j}\frac {\nu b_{j-1}}{E}-\frac{1}{j}\sum_{k=1}^{j-2}k\delta_{k}-\frac{\nu}{jE}\sum_{k =1}^{j-1}kb_{k}-\frac{\nu b_{j}}{E}\\ &=\hat{e}_{x}^{j+1}-\frac{j-1}{j}\hat{e}_{x}^{j}+\frac{\nu}{jE} \sum_{k=1}^{j-2}kb_{k}+\frac{j-1}{j}\frac{\nu b_{j-1}}{E}-\frac{\nu}{jE}\sum_{ k=1}^{j-1}kb_{k}-\frac{\nu b_{j}}{E}\\ &=\hat{e}_{x}^{j+1}-\frac{j-1}{j}\hat{e}_{x}^{j}+\frac{j-1}{j} \frac{\nu b_{j-1}}{E}-\frac{j-1}{j}\frac{\nu b_{j-1}}{E}-\frac{\nu b_{j}}{E} \end{split} \tag{62}\]
Simplifying (62) we arrive at
\[\delta_{j}=\hat{e}_{x}^{j+1}-\frac{j-1}{j}\hat{e}_{x}^{j}-\frac{\nu b_{j}}{E}, \tag{63}\]
a relation which expresses the incompatibility in the case of fast glue.
### Derivation of (36)
To prove that in the slow glue protocol the incompatibility arising in the process is determined only by the weight of the underlying block, we recall that in the slow glue case the horizontal stress of the last block has to be zero at equilibrium. By imposing \(\sigma_{x}^{i,i}=0\) for all \(i\geq 2\), we obtain the system
\[\begin{split}\sigma_{x}^{1,1}&=0\\ \sigma_{x}^{2,2}&=(E\delta_{1}+\nu b_{1})=0\\ \sigma_{x}^{3,3}&=(E\delta_{1}+\nu b_{1})+2(E\delta_ {2}+\nu b_{2})=0\\ \sigma_{x}^{4,4}&=(E\delta_{1}+\nu b_{1})+2(E\delta_ {2}+\nu b_{2})+3(E\delta_{3}+\nu b_{3})=0\\ \cdots\end{split} \tag{64}\]
Therefore, the result \(E\delta_{i}+\nu b_{i}=0\) for all \(i\geq 1\) follows at once.
## Acknowledgements
SM thanks Regione Lazio for funding the project H-S3D (DTC 2021-2023 - TE1 Centro di Eccellenza, CUP F85F21001090003) and European Union for funding the project Cultural Heritage Active Innovation for Next-Gen Sustainable Society - CHANGES - PE00000020 PNRR - NextGenerationEU (CUP: F83C22001650006).
DR and SM thank Regione Lazio for funding the project 3DH-solutions (Progetti di Gruppi di Ricerca 2020, CUP F85F21001530009).
GZ gratefully acknowledges the support of GNFM (Gruppo Nazionale di Fisica Matematica) of the INdAM F. Severi. |
2309.00074 | On the Safety of Connected Cruise Control: Analysis and Synthesis with
Control Barrier Functions | Connected automated vehicles have shown great potential to improve the
efficiency of transportation systems in terms of passenger comfort, fuel
economy, stability of driving behavior and mitigation of traffic congestions.
Yet, to deploy these vehicles and leverage their benefits, the underlying
algorithms must ensure their safe operation. In this paper, we address the
safety of connected cruise control strategies for longitudinal car following
using control barrier function (CBF) theory. In particular, we consider various
safety measures such as minimum distance, time headway and time to conflict,
and provide a formal analysis of these measures through the lens of CBFs.
Additionally, motivated by how stability charts facilitate stable controller
design, we derive safety charts for existing connected cruise controllers to
identify safe choices of controller parameters. Finally, we combine the
analysis of safety measures and the corresponding stability charts to
synthesize safety-critical connected cruise controllers using CBFs. We verify
our theoretical results by numerical simulations. | Tamas G. Molnar, Gabor Orosz, Aaron D. Ames | 2023-08-31T18:24:46Z | http://arxiv.org/abs/2309.00074v1 | # On the Safety of Connected Cruise Control: Analysis and Synthesis with Control Barrier Functions
###### Abstract
Connected automated vehicles have shown great potential to improve the efficiency of transportation systems in terms of passenger comfort, fuel economy, stability of driving behavior and mitigation of traffic congestions. Yet, to deploy these vehicles and leverage their benefits, the underlying algorithms must ensure their safe operation. In this paper, we address the safety of connected cruise control strategies for longitudinal car following using _control barrier function (CBF)_ theory. In particular, we consider various safety measures such as minimum distance, time headway and time to conflict, and provide a formal analysis of these measures through the lens of CBFs. Additionally, motivated by how stability charts facilitate stable controller design, we derive _safety charts_ for existing connected cruise controllers to identify safe choices of controller parameters. Finally, we combine the analysis of safety measures and the corresponding stability charts to synthesize safety-critical connected cruise controllers using CBFs. We verify our theoretical results by numerical simulations.
## I Introduction
Vehicle automation holds the promise of improving the efficiency of traffic systems, with great prospective benefits in safety, passenger comfort, fuel economy, mitigation of traffic congestions and reduction of travel times. The success of automated vehicles (AVs), however, is conditioned on designing efficient longitudinal and lateral controllers. Therefore, strategies like _adaptive cruise control (ACC)_ have been studied extensively with great success. The performance of AVs is further improved by providing additional information about the surrounding traffic via vehicle-to-everything (V2X) connectivity--this leads to connected automated vehicles (CAVs) with better ability to respond to other road participants. For example, _cooperative adaptive cruise control (CACC)_[1] allows platoons of CAVs to share information, cooperate, and improve their driving behavior. _Connected cruise control (CCC)_[2], on the other hand, regulates the motion of a single CAV while leveraging information shared by other connected (but not necessarily automated) vehicles. With well-designed ACC, CACC or CCC systems, CAVs have shown significant benefits in energy saving [3] and in mitigating traffic congestions [4, 5]. Remarkably, these benefits have also been showcased by experiments [6, 7].
To deploy CAVs and thereby enjoy their benefits, safe and collision-free behavior is of primary concern. Recently, the literature has put emphasis on safety-critical control designs for CAVs. These include safe ACC systems established using reachability analysis [8], formal methods [9] and control barrier functions (CBFs) [10, 11], and safe CACC with model predictive control [12], just to mention a few examples. Now, we focus on CBF-based approaches, due to their success in a variety of application areas, including AV experiments in traffic [13], AVs executing obstacle avoidance [14], multi-agent systems capturing AVs [15], traffic merging [16], roundabout crossing [17], and safe traffic control by CAVs [18]. Safe CCC with CBFs has also appeared recently [19], and it has been implemented on a full-size truck and successfully tested in experiments [20].
In this paper, we establish safe CCC designs that allow CAVs to follow other vehicles with guaranteed safety w.r.t. various metrics like minimum distance, time headway or time to conflict. We make two contributions through the application of CBF theory. First, we analyze the safety of an existing CCC strategy, and determine provably safe choices of controller parameters. These results are summarized as safety charts--a concept adopted from [19]. Second, we synthesize safety-critical controllers by minimally modifying existing, potentially unsafe designs. We use numerical simulations to verify the theoretical analysis. Throughout the paper we highlight the benefits of connectivity in order to maintain safety in mixed traffic scenarios. The results presented are also extendable to other mobile agent systems where spatiotemporal separation between the agents is required, e.g., legged robots, airborne agents and sea vessels.
## II Connected Cruise Control
Consider the scenario in Fig. 1, where a connected automated vehicle (CAV) is controlled longitudinally to follow a connected human-driven vehicle (CHV) on a single lane straight road while maintaining a safe distance. We assume that the CAV has access to measurements of its own speed \(v\) and acceleration \(a\), the leading CHV's speed \(v_{\mathrm{L}}\) and acceleration \(a_{\mathrm{L}}\), and the distance \(D\), by the help of on-board range sensors and vehicle-to-vehicle (V2V) connectivity. Note that \(a_{\mathrm{L}}\) is typically difficult to obtain by range sensors, while V2V communication can help provide it.
Fig. 1: Connected cruise control (CCC) setup: a connected automated vehicle (CAV) is controlled to safely follow a connected human-driven vehicle (CHV) by using information from vehicle-to-vehicle (V2V) communication.
We describe car following by the state \(x\!=\!\begin{bmatrix}D&v&v_{\rm L}\end{bmatrix}^{\top}\) and the model:
\[\dot{D} =v_{\rm L}-v, \tag{1}\] \[\dot{v} =u-p(v),\] \[\dot{v}_{\rm L} =a_{\rm L},\]
where the CAV executes the acceleration command \(u\) subject to rolling and air resistance captured by \(p(v)\geq 0\).
The car-following task can be accomplished by the following _connected cruise control (CCC)_ strategy, \(u=k_{\rm d}(x)\), that was proposed in [2] and experimentally tested in [6]:
\[k_{\rm d}(x)=A\big{(}V(D)-v\big{)}+B\big{(}W(v_{\rm L})-v\big{)}+Ca_{\rm L}. \tag{2}\]
The CAV responds to the distance, speed difference and CHV's acceleration with gains \(A,B,C\!\geq\!0\), respectively, and:
\[V(D) =\min\{\kappa(D-D_{\rm st}),v_{\rm max}\}, \tag{3}\] \[W(v_{\rm L}) =\min\{v_{\rm L},v_{\rm max}\}.\]
Here the speed policy \(W\) prevents the CAV from following a CHV that exceeds the speed limit \(v_{\rm max}\), while the range policy \(V\) prescribes a desired speed as a function of the distance \(D\), that is zero at the standstill distance \(D_{\rm st}\) and increases linearly with gradient \(\kappa>0\) up to the speed limit.
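For concreteness, a minimal Python sketch of (2)-(3) is given below. It is our own illustration: the gains and the values of \(\kappa\), \(D_{\rm st}\) and \(v_{\rm max}\) are placeholder numbers rather than the values of Table I.

```python
def V(D, kappa=0.6, D_st=5.0, v_max=15.0):
    """Range policy (3): desired speed as a function of distance."""
    return min(kappa*(D - D_st), v_max)

def W(vL, v_max=15.0):
    """Speed policy (3): do not follow a leader above the speed limit."""
    return min(vL, v_max)

def ccc(D, v, vL, aL, A=0.4, B=0.5, C=0.0):
    """Connected cruise controller (2): u = A(V(D)-v) + B(W(vL)-v) + C*aL."""
    return A*(V(D) - v) + B*(W(vL) - v) + C*aL

print(ccc(D=30.0, v=10.0, vL=12.0, aL=0.0))
```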
Fig. 2 shows the performance of CCC by numerical simulations of (1)-(2) for two different sets of control gains (solid lines and dash-dot lines). Unless stated otherwise, the parameters of each numerical result in this paper are those listed in Table I. The simulations\({}^{1}\) capture an emergency braking where the CHV comes to a stop. Using CCC, the CAV responds to this event and decelerates; see panels (b) and (d). Notice in panel (a) that the distance between the vehicles greatly depends on the choice of control gains. The selection of controller parameters has significant impact on safety, which is highlighted in panel (c) and discussed below.
Footnote 1: Matlab codes for each example are available at: [https://github.com/molnartamags/safe-connected-cruise-control](https://github.com/molnartamags/safe-connected-cruise-control).
To characterize the safety of longitudinal control, we rely on various safety measures that require different kinds of spatiotemporal separations between the vehicles. We list three possible safety criteria below, in the form \(h(x)\geq 0\), associated with a safety measure \(h\) and a _safe set_\(\mathcal{S}\) of states.
1. _Distance_ must be kept above a safe value \(D_{\rm sf}\): \[\mathcal{S}_{\rm D} =\{x\in\mathbb{R}^{3}:D\geq D_{\rm sf}\},\] (4) \[h_{\rm D}(x) =D-D_{\rm sf}.\]
2. _Time headway_ must be kept above a safe value \(T_{\rm h}\): \[\mathcal{S}_{\rm TH} =\{x\in\mathbb{R}^{3}:D\geq D_{\rm sf}+T_{\rm h}v\},\] (5) \[h_{\rm TH}(x) =(D-D_{\rm sf})/T_{\rm h}-v.\]
3. _Time to conflict_ must be kept above a safe value \(T_{\rm c}\): \[\mathcal{S}_{\rm TTC} =\{x\in\mathbb{R}^{3}:D\geq D_{\rm sf}+T_{\rm c}(v-v_{\rm L})\},\] (6) \[h_{\rm TTC}(x) =(D-D_{\rm sf})/T_{\rm c}+v_{\rm L}-v.\]
The time to conflict is often referred to as time to collision if \(D_{\rm sf}=0\). For further choices of safety indicators, please see [18, 19] and the references therein.
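The three measures are straightforward to evaluate from the state; the sketch below (ours) encodes (4)-(6), with placeholder values for \(D_{\rm sf}\), \(T_{\rm h}\) and \(T_{\rm c}\).

```python
def h_D(D, v, vL, D_sf=5.0):
    """Distance criterion (4)."""
    return D - D_sf

def h_TH(D, v, vL, D_sf=5.0, T_h=1.0):
    """Time-headway criterion (5)."""
    return (D - D_sf)/T_h - v

def h_TTC(D, v, vL, D_sf=5.0, T_c=1.0):
    """Time-to-conflict criterion (6)."""
    return (D - D_sf)/T_c + vL - v

state = dict(D=30.0, v=10.0, vL=12.0)
print(h_D(**state), h_TH(**state), h_TTC(**state))   # all >= 0: state lies in each safe set
```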
The safe sets in (4)-(6) are depicted in Fig. 3, where their boundaries are shown by thick lines in panel (a) and as planes in panel (b). The system is safe w.r.t. the given safety measure if it evolves in the half space indicated by arrows. Note that the time headway is the strictest of the three safety indicators: it requires the largest distance at a given speed to be safe. The safety of the previous simulation results w.r.t. the time headway is evaluated in Fig. 2(c). While CCC (2) keeps system (1) safe with one choice of CCC parameters (solid lines), it fails to do so with another (dash-dot lines). As such, choosing the parameters of the CAV's controller has crucial impact on safety and safe parameters must be identified.
The problem of choosing controller parameters has been well-studied for CCC from a stability standpoint.
Fig. 3: Safe sets of longitudinal car-following, considering safety with respect to distance (D), time headway (TH) and time to conflict (TTC). Arrows indicate the safe side of the set boundaries.
Fig. 2: Simulations of model (1) with CCC (2), using a provably safe choice of controller parameters (solid lines) and an unsafe choice (dash-dot lines). These sets of parameters correspond to points P and Q in Fig. 5(a), respectively. For the unsafe CCC setup, formal safety guarantees can be recovered by utilizing the safety filter (28)-(30) (dashed lines).
Specifically, [2] has derived _stability charts_ for (1)-(2) that identify controller parameters associated with stable driving. These charts distinguish the _plant stable_ domain, where the CAV is able to approach a constant speed in a stable way, and the _string stable_ region, where the CAV has smaller speed fluctuations than the CHV, thereby smoothing the traffic flow. Examples of these stability charts from [2] are plotted in Fig. 4, where the plant stable domain, given by \(A\geq 0\) and \(A\geq-B\), is red, and the string stable region, given by \(A\geq 0\), \(A\geq 2\big{(}(1-C)\kappa-B\big{)}\) and \(C\leq 1\), is blue. The charts are shown for \(C=0\) in panel (a), and \(C=0,0.25,0.75\) (thin lines), \(C=0.5\) (thick line and shading) and \(C\to 1\) (dashed line) in panel (b). In what follows, we establish _safety charts_--whose concept first appeared in [19]--in a similar way to identify safe controller parameters. As a preliminary, we revisit the theory of control barrier functions.
## III Control Barrier Functions
Consider a control system with state \(x\in\mathbb{R}^{n}\), input \(u\in\mathbb{R}^{m}\) and dynamics given by locally Lipschitz continuous functions \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) and \(g:\mathbb{R}^{n}\to\mathbb{R}^{n\times m}\):
\[\dot{x}=f(x)+g(x)u; \tag{7}\]
cf. (1) with \(f(x)=\big{[}v_{\mathrm{L}}-v\ \,-p(v)\ a_{\mathrm{L}}\big{]}^{\top}\), \(g(x)=\big{[}0\ \ 1\ \ 0\big{]}^{\top}\). With a locally Lipschitz continuous controller \(k:\mathbb{R}^{n}\to\mathbb{R}^{m}\), \(u=k(x)\), such as (2), the corresponding closed loop system:
\[\dot{x}=f(x)+g(x)k(x), \tag{8}\]
with the initial condition \(x(0)=x_{0}\in\mathbb{R}^{n}\), has a unique solution \(x(t)\), which we assume to exist for all \(t\geq 0\).
We call (8) safe if its solution \(x(t)\) evolves within a safe set \(\mathcal{S}\) for all time. We consider \(\mathcal{S}\), and its boundary \(\partial\mathcal{S}\), to be given by a continuously differentiable function \(h:\mathbb{R}^{n}\to\mathbb{R}\):
\[\mathcal{S} =\{x\in\mathbb{R}^{n}:h(x)\geq 0\}, \tag{9}\] \[\partial\mathcal{S} =\{x\in\mathbb{R}^{n}:h(x)=0\}.\]
That is, \(h(x(t))\geq 0\) for all \(t\geq 0\) indicates safety while \(h(x(t))<0\) for any \(t\geq 0\) is unsafe, where \(h\) is selected based on the application; cf. (4), (5) and (6). To maintain \(h(x(t))\geq 0\), we rely on the derivative of \(h\) along (7):
\[\dot{h}(x,u)=\underbrace{\nabla h(x)f(x)}_{L_{f}h(x)}+\underbrace{\nabla h(x)g (x)}_{L_{g}h(x)}u. \tag{10}\]
With this, Nagumo's theorem [21] establishes safety for (8).
**Theorem 1** ([21]).: _Let \(h\) satisfy \(\nabla h(x)\neq 0\), \(\forall x\in\partial\mathcal{S}\). System (8) is safe w.r.t. \(\mathcal{S}\) if and only if:_
\[\dot{h}\big{(}x,k(x)\big{)}\geq 0,\quad\forall x\in\partial\mathcal{S}. \tag{11}\]
Condition (11) means that the controller does not allow the system to leave the safe set \(\mathcal{S}\) when it is at the boundary \(\partial\mathcal{S}\). To _certify_ that (8) with a given controller \(k\) is safe, one needs to verify that (11) holds. Yet, (11) does not provide a constructive way to _synthesize_ controllers for (7), since it does not provide guidelines inside \(\mathcal{S}\) (i.e., when \(h(x)>0\)).
Control barrier functions (CBFs) [11] have been proposed for the purpose of safety-critical controller synthesis.
**Definition 1** ([11]).: Function \(h\) is a _control barrier function_ for (7) on \(\mathcal{S}\) if there exists \(\alpha\in\mathcal{K}_{\infty}^{\mathrm{e}}\)2 such that for all \(x\in\mathcal{S}\):
Footnote 2: Function \(\alpha:\mathbb{R}\to\mathbb{R}\) is of extended class-\(\mathcal{K}_{\infty}\) (\(\alpha\in\mathcal{K}_{\infty}^{\mathrm{e}}\)) if it is continuous, strictly increasing, \(\alpha(0)=0\) and \(\lim_{x\to\pm\infty}\alpha(r)=\pm\infty\).
\[\sup_{u\in\mathbb{R}^{m}}\dot{h}(x,u)>-\alpha\big{(}h(x)\big{)}. \tag{12}\]
Note that the \(\sup\) on left-hand side of (12) gives \(L_{f}h(x)\) if \(L_{g}h(x)=0\) and \(\infty\) otherwise, thus (12) is equivalent to:
\[L_{f}h(x)>-\alpha\big{(}h(x)\big{)},\quad\forall x\in\mathbb{R}^{n}\ \mathrm{s.t.}\ L_{g}h(x)=0. \tag{13}\]
[11] established safety-critical control with CBFs as follows.
**Theorem 2** ([11]).: _If \(h\) is a CBF for (7) on \(\mathcal{S}\), then any locally Lipschitz continuous controller \(k\) that satisfies:_
\[\dot{h}\big{(}x,k(x)\big{)}\geq-\alpha\big{(}h(x)\big{)} \tag{14}\]
_for all \(x\in\mathcal{S}\) renders (8) safe w.r.t. \(\mathcal{S}\)._
Note that if (14) holds, then (11) also does. Furthermore, condition (14) provides guidelines over the entire set \(\mathcal{S}\) to synthesize controllers. For example, CBFs are often used in _safety filters_ that modify a desired but not necessarily safe controller \(k_{\mathrm{d}}:\mathbb{R}^{n}\to\mathbb{R}^{m}\) to a safe controller subject to (14), in the form of an optimization problem (quadratic program):
\[k(x)=\operatorname*{argmin}_{u\in\mathbb{R}^{m}} \|u-k_{\mathrm{d}}(x)\|^{2}\] (15) s.t. \[\dot{h}(x,u)\geq-\alpha\big{(}h(x)\big{)}.\]
The solution of (15) can be given in closed form [20]:
\[k(x) =\begin{cases}k_{\mathrm{d}}(x)+\max\{0,\eta(x)\}\frac{L_{g}h(x)^ {\top}}{\|L_{g}h(x)\|^{2}},&\text{if }L_{g}h(x)\neq 0,\\ k_{\mathrm{d}}(x),&\text{if }L_{g}h(x)=0,\end{cases} \tag{16}\] \[\eta(x) =-L_{f}h(x)-L_{g}h(x)k_{\mathrm{d}}(x)-\alpha\big{(}h(x)\big{)}.\]
For single input systems like (1), where \(u\) and \(L_{g}h(x)\) are scalars, the safety conditions greatly simplify. If \(L_{g}h(x)<0\), (14) is equivalent to:
\[k(x)\leq k_{\mathrm{s}}(x), \tag{17}\]
with:
\[k_{\mathrm{s}}(x)=-\frac{L_{f}h(x)+\alpha\big{(}h(x)\big{)}}{L_{g}h(x)}. \tag{18}\]
If \(L_{g}h(x)>0\), (14) yields:
\[k(x)\geq k_{\mathrm{s}}(x). \tag{19}\]
Fig. 4: Stability chart [2] of CCC (1)-(2) with (a) \(C=0\), (b) various \(C>0\) values.
If \(L_{g}h(x)=0\), (13) guarantees that (14) holds for any \(k(x)\). Thus, for scalar input \(u\), the safety filter (16) becomes [20]:
\[k(x)=\begin{cases}\min\{k_{\mathrm{d}}(x),k_{\mathrm{s}}(x)\},&\mathrm{if}\ L_{g}h(x)<0,\\ k_{\mathrm{d}}(x),&\mathrm{if}\ L_{g}h(x)=0,\\ \max\{k_{\mathrm{d}}(x),k_{\mathrm{s}}(x)\},&\mathrm{if}\ L_{g}h(x)>0.\end{cases} \tag{20}\]
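A minimal sketch of the scalar-input filter given by (18) and (20) is shown below. It is a generic illustration (ours), in which the Lie derivatives, the CBF and the class-\(\mathcal{K}_{\infty}^{\rm e}\) function are supplied as callables by the user.

```python
def safe_input(x, Lf_h, Lg_h, h, alpha=lambda r: r):
    """Safe input k_s(x) from eq. (18); only meaningful when L_g h(x) != 0."""
    return -(Lf_h(x) + alpha(h(x))) / Lg_h(x)

def safety_filter(x, k_d, Lf_h, Lg_h, h, alpha=lambda r: r):
    """Closed-form scalar-input safety filter, eq. (20)."""
    lgh = Lg_h(x)
    if lgh == 0.0:
        return k_d(x)
    ks = safe_input(x, Lf_h, Lg_h, h, alpha)
    return min(k_d(x), ks) if lgh < 0.0 else max(k_d(x), ks)
```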
Finally, it is important to distinguish the case \(L_{g}h(x)\equiv 0\), where the input \(u\) does not affect safety directly in (10) for any \(x\). Then, \(h\) is not a CBF and safety-critical controller synthesis is not possible directly with \(h\) (unless (8) is safe for any \(k(x)\)). Instead, one may construct an _extended CBF_[22, 23, 24] with a continuously differentiable \(\alpha\in\mathcal{K}_{\infty}^{\mathrm{e}}\):
\[h_{\mathrm{e}}(x)=L_{f}h(x)+\alpha\big{(}h(x)\big{)}, \tag{21}\]
that is associated with the _extended safe set_:
\[\begin{split}\mathcal{S}_{\mathrm{e}}&=\{x\in \mathbb{R}^{n}:h_{\mathrm{e}}(x)\geq 0\},\\ \partial\mathcal{S}_{\mathrm{e}}&=\{x\in \mathbb{R}^{n}:h_{\mathrm{e}}(x)=0\}.\end{split} \tag{22}\]
If the system is kept inside \(\mathcal{S}_{\mathrm{e}}\), condition (14) holds, and the system also evolves within \(\mathcal{S}\). Ultimately, safety w.r.t. the intersection \(\mathcal{S}\cap\mathcal{S}_{\mathrm{e}}\) of the two sets is guaranteed as follows.
**Corollary 1** ([23]).: _If \(L_{g}h(x)\equiv 0\) and \(h_{\mathrm{e}}\) in (21) is a CBF for (7) on \(\mathcal{S}_{\mathrm{e}}\) with \(\alpha_{\mathrm{e}}\in\mathcal{K}_{\infty}^{\mathrm{e}}\), then any locally Lipschitz continuous controller \(k\) that satisfies:_
\[\dot{h}_{\mathrm{e}}\big{(}x,k(x)\big{)}\geq-\alpha_{\mathrm{e}}\big{(}h_{ \mathrm{e}}(x)\big{)} \tag{23}\]
_for all \(x\in\mathcal{S}_{\mathrm{e}}\) renders (8) safe w.r.t. \(\mathcal{S}\cap\mathcal{S}_{\mathrm{e}}\)._
With this result, safety filters incorporating (23) can be constructed analogously to (15)-(16), with \(h_{\mathrm{e}}\) instead of \(h\).
Accordingly, safety certification by Nagumo's theorem--the extension of Theorem 1 for \(L_{g}h(x)\equiv 0\)--is performed on the boundary of \(\mathcal{S}\cap\mathcal{S}_{\mathrm{e}}\). This boundary is located at \(\partial\mathcal{S}_{\mathrm{e}}\cap\mathcal{S}\) (where \(h_{\mathrm{e}}(x)=0\) and \(h(x)\geq 0\)) and at \(\partial\mathcal{S}\cap\mathcal{S}_{\mathrm{e}}\) (where \(h(x)=0\) and \(h_{\mathrm{e}}(x)\geq 0\)). Note that only the former case needs further analysis, since the latter case implies \(h_{\mathrm{e}}(x)=\dot{h}\big{(}x,k(x)\big{)}\geq 0\), and the system cannot leave \(\mathcal{S}\cap\mathcal{S}_{\mathrm{e}}\) along this boundary per Theorem 1. Thus, safety certification is summarized as follows.
**Corollary 2**.: _Let \(L_{g}h(x)\equiv 0\) and \(h_{\mathrm{e}}\) in (21) satisfy \(\nabla h_{\mathrm{e}}(x)\neq 0\), \(\forall x\in\partial\mathcal{S}_{\mathrm{e}}\). System (8) is safe w.r.t. \(\mathcal{S}\cap\mathcal{S}_{\mathrm{e}}\) if:_
\[\dot{h}_{\mathrm{e}}\big{(}x,k(x)\big{)}\geq 0,\quad\forall x\in\partial \mathcal{S}_{\mathrm{e}}\cap\mathcal{S}. \tag{24}\]
Again, notice that (24) holds if (23) does. In the case of a single input, (23) as well as the corresponding safety filter can be expressed in simple form. Analogously to the non-extended case, formulas (17), (19) and (20) can be used with:
\[k_{\mathrm{s}}(x)=-\frac{L_{f}h_{\mathrm{e}}(x)+\alpha_{\mathrm{e}}\big{(}h_{ \mathrm{e}}(x)\big{)}}{L_{g}h_{\mathrm{e}}(x)}, \tag{25}\]
cf. (18).
## IV Safe Connected Cruise Control
Now, we apply CBF theory to analyze the safety of CCC (1)-(2), and to synthesize safety-critical CCC laws.
### _Safe Time Headaway_
First, we characterize the safety of CCC w.r.t. the time headway criterion in (5). For brevity, we introduce \(\bar{\kappa}=1/T_{\mathrm{h}}\). The gradient and Lie derivatives of \(h_{\mathrm{TH}}\) in (10) read:
\[\begin{split}\nabla h_{\mathrm{TH}}(x)&=\big{[} \bar{\kappa}\quad-1\quad 0\big{]}\,,\quad L_{g}h_{\mathrm{TH}}(x)=-1,\\ L_{f}h_{\mathrm{TH}}(x)&=\bar{\kappa}(v_{\mathrm{L} }-v)+p(v).\end{split} \tag{26}\]
Note that both \(\nabla h_{\mathrm{TH}}(x)\neq 0\), \(\forall x\in\partial\mathcal{S}_{\mathrm{TH}}\) and (13) hold, hence Theorems 1 and 2 are applicable for certifying safety and synthesizing safety-critical controllers, respectively.
We first use Theorem 1 to analyze the safety of the CCC introduced in (2), i.e., \(u=k(x)=k_{\mathrm{d}}(x)\). Since the input is scalar and \(L_{g}h_{\mathrm{TH}}(x)<0\), (11) is equivalent to:
\[k_{\mathrm{s}}(x)-k_{\mathrm{d}}(x)\geq 0,\quad\forall x\in\mathbb{R}^{n}\ \mathrm{s.t.}\ h_{ \mathrm{TH}}(x)=0, \tag{27}\]
cf. (17), where the safe input defined in (18) is given by:
\[k_{\mathrm{s}}(x)=\bar{\kappa}(v_{\mathrm{L}}-v)+p(v)+\alpha\big{(}\bar{\kappa }(D-D_{\mathrm{sf}})-v\big{)}. \tag{28}\]
By analyzing under what conditions (27) holds, we arrive at the following result. For the detailed analysis and the proof of this result, please see the Appendix.
**Theorem 3**.: _System (1) with \(u\!=\!k(x)\!=\!k_{\mathrm{d}}(x)\) given by (2), \(A,B\!\geq\!0\) and \(C=0\) is safe w.r.t. \(\mathcal{S}_{\mathrm{TH}}\) in (5) if:_
* \(v\geq 0\)_,_ \(D_{\mathrm{st}}\geq D_{\mathrm{sf}}\) _and_ \(B=\bar{\kappa}\geq\kappa\)_; or_
* \(v,v_{\mathrm{L}}\in[0,\bar{v}]\) _with some_ \(\bar{v}\geq 0\)_,_ \(D_{\mathrm{st}}>D_{\mathrm{sf}}\)_,_ \(\bar{\kappa}\geq\kappa\) _and:_ \[A\geq\frac{|\bar{\kappa}-B|\bar{v}}{\kappa(D_{\mathrm{st}}-D_{\mathrm{sf}})}.\] (29)
Condition (29) can be visualized in the space \((B,A)\) of control gains, resulting in the _safety chart_ in Fig. 5(a). The safe domain--associated with a provably safe choice of control gains--is shown for \(\bar{\kappa}=\kappa\) and the maximum speed \(\bar{v}=15\,\mathrm{m/s}\) with thick green boundary and green shading, on top of the stable domains (red and blue) that were plotted in Fig. 4(a). Additional boundaries are shown for \(\bar{v}=5,10\) and \(20\,\mathrm{m/s}\) (thin lines) and the limit \(\bar{v}\to\infty\) (dashed line). As the maximum speed \(\bar{v}\) increases, the V-shaped safe region closes to a single line given by the first bullet point in Theorem 3. The safe and unsafe simulations in Fig. 2 correspond to points P and Q, respectively, that indeed lie in the safe and unsafe domains. Note that safety w.r.t. time headway can be achieved even without acceleration feedback (\(C=0\)),
Fig. 5: Safety charts of CCC (1)-(2) (a) w.r.t. the time headway criterion (5) with \(C=0\) and various maximum speed \(\bar{v}\); (b) w.r.t. the distance and time to conflict in (4) and (6), respectively, with various \(C\).
while the response to acceleration (via \(Ca_{\rm L}\) in (2)) will be necessary for safety w.r.t. distance and time-to-collision.
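The V-shaped boundary of the safe region in Fig. 5(a) is the lower bound on \(A\) implied by (29); the short sketch below (ours, with illustrative parameter values that satisfy \(\bar{\kappa}\geq\kappa\) and \(D_{\rm st}>D_{\rm sf}\)) evaluates it.

```python
def min_safe_A(B, kappa_bar, kappa, v_bar, D_st, D_sf):
    """Lower bound on the gain A from condition (29)."""
    return abs(kappa_bar - B)*v_bar / (kappa*(D_st - D_sf))

# example values: kappa_bar = kappa = 0.6 1/s, v_bar = 15 m/s, D_st = 5 m, D_sf = 2 m
for B in (0.0, 0.3, 0.6, 0.9):
    print(B, min_safe_A(B, kappa_bar=0.6, kappa=0.6, v_bar=15.0, D_st=5.0, D_sf=2.0))
# the bound reaches zero at B = kappa_bar, the tip of the V-shaped region
```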
The safety charts allow us to select the parameters of the CCC (2) in a safe way. Alternatively, one can also synthesize a safety-critical controller via Theorem 2, by viewing (2) as desired input and using a CBF-based safety filter. Since \(L_{g}h_{\rm TH}(x)<0\), the safety filter (20) simplifies to:
\[k(x)=\min\{k_{\rm d}(x),k_{\rm s}(x)\}. \tag{30}\]
The behavior of the safety filter in (28) and (30) is demonstrated in Fig. 2 by dashed lines. The safety filter is applied on the desired controller (2) with the unsafe gains corresponding to the dash-dot lines (cf. point Q in Fig. 5(a)) and \(\alpha(r)=r\). The end result is provably safe CCC.
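A sketch of this filter is given below (ours). It assumes \(p(v)=0\) and \(\alpha(r)=r\), and the desired controller passed in is an arbitrary illustrative example rather than the exact gains used in Fig. 2.

```python
def k_s_TH(D, v, vL, kappa_bar=0.6, D_sf=2.0, p=lambda v: 0.0, alpha=lambda r: r):
    """Safe input (28) for the time-headway CBF (kappa_bar = 1/T_h)."""
    return kappa_bar*(vL - v) + p(v) + alpha(kappa_bar*(D - D_sf) - v)

def ccc_with_safety_filter(D, v, vL, aL, k_d):
    """Filtered input (30): since L_g h_TH(x) = -1 < 0, take the min of desired and safe inputs."""
    return min(k_d(D, v, vL, aL), k_s_TH(D, v, vL))

# usage with an illustrative (possibly unsafe) desired controller k_d(D, v, vL, aL)
u = ccc_with_safety_filter(D=20.0, v=12.0, vL=5.0, aL=-3.0,
                           k_d=lambda D, v, vL, aL: 0.7*(0.6*(D - 5.0) - v) + 0.5*(vL - v))
print(u)
```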
In conclusion, safety can be guaranteed both by controller tuning through safety charts and by applying safety filters. Safety charts combined with stability charts (or other analysis on performance) provide both safe and performant controllers. However, if safe regions are too small, or do not overlap with stable regions, one may design CCC based on performance only (i.e., properties like string stability), and apply safety filters. Safety filters ensure safety even when safe parameters for nominal CCC laws are hard to realize.
### _Safe Distance and Time to Conflict_
Next, we address safety w.r.t. distance, as in (4), for which:
\[\begin{split}\nabla h_{\rm D}(x)&=\begin{bmatrix} 1&0&0\end{bmatrix},\quad L_{g}h_{\rm D}(x)\equiv 0,\\ L_{f}h_{\rm D}(x)&=v_{\rm L}-v.\end{split} \tag{31}\]
Since \(L_{g}h_{\rm D}(x)\equiv 0\), an extended CBF must be constructed from \(h_{\rm D}\) via (21). We do this by the following observation.
**Observation.** For system (1), the time to conflict-based safety measure \(h_{\rm TTC}\) in (6) can be expressed using the distance-based safety indicator \(h_{\rm D}\) in (4) as:
\[h_{\rm TTC}(x)=L_{f}h_{\rm D}(x)+h_{\rm D}(x)/T_{\rm c}. \tag{32}\]
That is, \(h_{\rm TTC}\) is an extension (21) of \(h_{\rm D}\) with \(\alpha(r)=r/T_{\rm c}\).
Thus, \(h_{\rm TTC}\) is regarded as extended CBF \(h_{\rm e}\), yielding:
\[\begin{split}\nabla h_{\rm TTC}(x)&=\begin{bmatrix}\bar{\kappa}&-1&1\end{bmatrix},\quad L_{g}h_{\rm TTC}(x)=-1,\\ L_{f}h_{\rm TTC}(x)&=\bar{\kappa}(v_{\rm L}-v)+a_{\rm L}+p(v), \end{split} \tag{33}\]
where \(\bar{\kappa}=1/T_{\rm c}\). Observe that \(\nabla h_{\rm TTC}(x)\neq 0\), \(\forall x\in\partial\mathcal{S}_{\rm TTC}\) holds and \(h_{\rm TTC}\) is in fact a CBF. Thus, Corollaries 1 and 2 apply, and safety is established w.r.t. \(\mathcal{S}_{\rm D}\cap\mathcal{S}_{\rm TTC}\) as follows.
**Proposition 1**.: _System (1) is safe w.r.t. \(\mathcal{S}_{\rm D}\cap\mathcal{S}_{\rm TTC}\) given by (4) and (6) if (24) holds (with \(h_{\rm e}(x)=h_{\rm TTC}(x)\)) for a given controller, \(u\!=\!k(x)\). Moreover, any controller, \(u\!=\!k(x)\), that satisfies (23) renders (1) safe w.r.t. \(\mathcal{S}_{\rm D}\cap\mathcal{S}_{\rm TTC}\)._
Thus, safe distance is guaranteed by ensuring safe time to conflict, and safety is ultimately achieved for \(\mathcal{S}_{\rm D}\cap\mathcal{S}_{\rm TTC}\). Moreover, note that if the vehicles do not move in reverse, i.e., \(v,v_{\rm L}\!\geq\!0\), and if \(T_{\rm h}\!=\!T_{\rm c}\), then \(h_{\rm TH}(x)\!\geq\!0\) implies \(h_{\rm D}(x)\!\geq\!0\) and \(h_{\rm TTC}(x)\!\geq\!0\); cf. (4)-(6). This means that ensuring safety w.r.t. time headway yields safety w.r.t. time to conflict and distance (provided that \(h_{\rm TH}(x_{0})\geq 0\) holds).
The safety of (1) w.r.t. distance and time to conflict with the CCC law in (2) can be certified by (24) that reduces to:
\[k_{\rm s}(x)\!-\!k_{\rm d}(x)\!\geq\!0,\ \forall x\!\in\!\mathbb{R}^{n}\ {\rm s.t.}\ h_{\rm TTC}(x)\!=\!0\ {\rm and}\ h_{\rm D}(x)\!\geq\!0, \tag{34}\]
as \(L_{g}h_{\rm TTC}(x)<0\), cf. (17). Here \(k_{\rm s}\) is obtained from (25):
\[k_{\rm s}(x)=\bar{\kappa}(v_{\rm L}\!-\!v)\!+\!p(v)\!+\!\alpha_{\rm e}\big{(} \bar{\kappa}(D\!-\!D_{\rm sf})\!+\!v_{\rm L}\!-\!v\big{)}\!+\!a_{\rm L}. \tag{35}\]
After analyzing for which parameters (34) holds, we obtain the following result (with proof in the Appendix).
**Theorem 4**.: _System (1) with \(u\!=\!k(x)\!=\!k_{\rm d}(x)\) given by (2) and \(A,B,C\!\geq\!0\) is safe w.r.t. \(\mathcal{S}_{\rm D}\cap\mathcal{S}_{\rm TTC}\) in (4) and (6) if \(a_{\rm L}\geq-\gamma(v_{\rm L})\) with some \(\gamma\in\mathcal{K}\), \(v,v_{\rm L}\!\in\![0,\bar{v}]\) with some \(\bar{v}\geq 0\), \(D_{\rm st}>D_{\rm sf}\), \(\bar{\kappa}\geq\kappa\), \(C\leq 1\) and:_
\[A\kappa(D_{\rm st}-D_{\rm sf})+\min\{0,B-\bar{\kappa}\}\bar{v}\] \[+\min_{v_{\rm L}\in[0,\bar{v}]}\Big{[}(\bar{\kappa}-B+A)v_{\rm L}-(1 -C)\gamma(v_{\rm L})\Big{]}\geq 0. \tag{36}\]
Notice that the condition \(\dot{v}_{\rm L}\!=\!a_{\rm L}\!\geq\!-\!\gamma(v_{\rm L})\) describes the lead CHV's motion and, according to CBF theory, it guarantees \(v_{\rm L}(0)\geq 0\implies v_{\rm L}(t)\geq 0\), \(\forall t\geq 0\). That is, this condition describes that the lead vehicle neither brakes too hard nor drives in reverse. How "hard" it brakes is characterized by the function \(\gamma\). For example, for the constant-jerk profile of the lead CHV in Fig. 2, it can be shown that \(a_{\rm L}(t)\geq-\sqrt{20v_{\rm L}(t)}\) for all time, i.e., \(\gamma(r)=\sqrt{20r}\).
Similar to (29), condition (36) can be visualized in the \((B,A)\) space as safety chart; see Fig. 5(b) for \(\gamma(r)=\sqrt{20r}\) and \(\bar{\kappa}=\kappa\). The same parameters are used as in Fig. 4: \(C=0\) in panel (a), and \(C\!=\!0,0.25,0.75\) (thin lines), \(C\!=\!0.5\) (thick lines and shading) and \(C\!\to\!1\) (dashed lines) in panel (b). The safe region was found by brute-force evaluation of (36) on a grid of \(v_{\rm L}\), \(A\) and \(B\) (although expressing \(A\) from (36) explicitly could be possible depending on the form of \(\gamma\)). The safe region has V-shape, similar to Fig. 5(a), and it moves towards smaller gain \(A\) as the acceleration gain \(C\) is increased. This shows that acceleration \(a_{\rm L}\) feedback--that is typically obtained by V2V connectivity--is helpful in achieving safety w.r.t. distance and time to conflict, since safety would otherwise require large gains and control inputs.
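The brute-force check described above can be sketched as follows (our own illustration): \(\gamma(r)=\sqrt{20r}\) and \(\bar{\kappa}=\kappa\) follow the text, while the remaining numerical values are placeholders.

```python
import numpy as np

def satisfies_36(A, B, C, kappa=0.6, kappa_bar=0.6, D_st=5.0, D_sf=2.0,
                 v_bar=15.0, gamma=lambda r: np.sqrt(20.0*r), n_grid=501):
    """Brute-force evaluation of condition (36) over a grid of lead speeds v_L in [0, v_bar]."""
    vL = np.linspace(0.0, v_bar, n_grid)
    inner_min = ((kappa_bar - B + A)*vL - (1.0 - C)*gamma(vL)).min()
    return A*kappa*(D_st - D_sf) + min(0.0, B - kappa_bar)*v_bar + inner_min >= 0.0

# scan the (B, A) plane for fixed C, as done for the safety charts of Fig. 5(b)
for C in (0.0, 0.5):
    safe = [(B, A) for B in np.linspace(0, 1, 11) for A in np.linspace(0, 3, 31)
            if satisfies_36(A, B, C)]
    print(C, len(safe))
```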
Moreover, apart from the safety charts, the safety filter (30) provides an alternative way of safety-critical control regardless of the parameters of CCC (2). Choosing between safety chart and safety filter is up to the user--the end result is CCC with formal safety guarantees in both cases.
## Conclusion
In this paper, we investigated the safety of connected automated vehicles (CAVs) executing connected cruise control (CCC) by means of control barrier function (CBF) theory. We established safety charts for existing CCC designs to identify provably safe choices of controller parameters, analogously to stability charts found in the literature. To recover formal safety guarantees for unsafe parameter choices, we also proposed CBF-based safety filters for controller synthesis. As future research, we plan to investigate safe CCC in connected vehicle networks where CAVs respond to multiple vehicles.
## Appendix
Proof of Theorem 3.: To prove safety, we apply Theorem 1 by showing that (27) holds. We express \(k_{\mathrm{s}}(x)-k_{\mathrm{d}}(x)\):
\[k_{\mathrm{s}}(x)-k_{\mathrm{d}}(x) =\alpha(\bar{\kappa}(D-D_{\mathrm{sf}})-v)+\bar{\kappa}(v_{\mathrm{ L}}-v)+p(v)\] \[\quad-A(V(D)-v)-B(W(v_{\mathrm{L}})-v), \tag{37}\]
and use \(V(D)\leq\kappa(D-D_{\mathrm{st}})\), \(W(v_{\mathrm{L}})\leq v_{\mathrm{L}}\), and \(p(v)\geq 0\):
\[k_{\mathrm{s}}(x)-k_{\mathrm{d}}(x) \geq\alpha(\bar{\kappa}(D-D_{\mathrm{sf}})-v)+\bar{\kappa}(v_{ \mathrm{L}}-v)\] \[\quad-A\big{(}\kappa(D-D_{\mathrm{st}})-v\big{)}-B(v_{\mathrm{L}} -v). \tag{38}\]
This means that providing safety without considering the saturations at \(v_{\mathrm{max}}\) in \(V\), \(W\) and the resistance term \(p(v)\) implies safety with those terms too. We substitute \(h_{\mathrm{TH}}(x)=0\) into (38), which makes the term of \(\alpha\) zero; cf. (5). Then we add \(Ah_{\mathrm{TH}}(x)=0\) to both sides, and reorganize to:
\[k_{\mathrm{s}}(x)-k_{\mathrm{d}}(x)\geq A(\bar{\kappa}-\kappa)( D-D_{\mathrm{sf}})+A\kappa(D_{\mathrm{st}}-D_{\mathrm{sf}})\\ +(\bar{\kappa}-B)(v_{\mathrm{L}}-v). \tag{39}\]
If \(v\geq 0\), then \(D\!\geq\!D_{\mathrm{sf}}\) when \(h_{\mathrm{TH}}(x)=0\). Thus, if \(D_{\mathrm{st}}\!\geq\!D_{\mathrm{sf}}\) and \(B=\bar{\kappa}\geq\kappa\) also hold, (27) follows and safety is proven. Furthermore, if \(v,v_{\mathrm{L}}\!\in\![0,\bar{v}]\), we have \(\left|v_{\mathrm{L}}-v\right|\!\leq\!\bar{v}\) and \(D\geq D_{\mathrm{sf}}\) when \(h_{\mathrm{TH}}(x)=0\). With \(\bar{\kappa}\geq\kappa\), (39) leads to:
\[k_{\mathrm{s}}(x)-k_{\mathrm{d}}(x)\geq A\kappa(D_{\mathrm{st}}-D_{\mathrm{sf }})-|\bar{\kappa}-B|\bar{v}. \tag{40}\]
Thus, (27) and safety follows for \(D_{\mathrm{st}}>D_{\mathrm{sf}}\) and (29).
Proof of Theorem 4.: We prove safety by applying Corollary 2 and showing that (34) holds, where:
\[k_{\mathrm{s}}(x)-k_{\mathrm{d}}(x)=\alpha_{\mathrm{e}}\big{(} \bar{\kappa}(D-D_{\mathrm{sf}})+v_{\mathrm{L}}-v\big{)}+\bar{\kappa}(v_{ \mathrm{L}}-v)+a_{\mathrm{L}}\\ +p(v)-A\big{(}V(D)-v\big{)}-B\big{(}W(v_{\mathrm{L}})-v\big{)}-Ca_ {\mathrm{L}}. \tag{41}\]
We use \(V(D)\!\leq\!\kappa(D-D_{\mathrm{st}})\), \(W(v_{\mathrm{L}})\!\leq\!v_{\mathrm{L}}\), and \(p(v)\geq 0\), substitute \(h_{\mathrm{TTC}}(x)\!=\!0\), which makes the term of \(\alpha_{\mathrm{e}}\) zero; cf. (6). Then we add \(Ah_{\mathrm{TTC}}(x)=0\) to both sides:
\[k_{\mathrm{s}}(x)-k_{\mathrm{d}}(x)\geq A(\bar{\kappa}-\kappa)(D- D_{\mathrm{sf}})+A\kappa(D_{\mathrm{st}}-D_{\mathrm{sf}})\\ +Av_{\mathrm{L}}+(1-C)a_{\mathrm{L}}+(\bar{\kappa}-B)(v_{\mathrm{ L}}-v). \tag{42}\]
With \(a_{\mathrm{L}}\!\geq\!-\!\gamma(v_{\mathrm{L}})\), \(C\leq 1\), \(\kappa\leq\bar{\kappa}\) and \(h_{\mathrm{D}}(x)\!=\!D\!-\!D_{\mathrm{sf}}\geq 0\):
\[k_{\mathrm{s}}(x)-k_{\mathrm{d}}(x)\geq A\kappa(D_{\mathrm{st}}- D_{\mathrm{sf}})+(B-\bar{\kappa})v\\ +(\bar{\kappa}-B+A)v_{\mathrm{L}}-(1-C)\gamma(v_{\mathrm{L}}). \tag{43}\]
For \(v,v_{\mathrm{L}}\!\in\![0,\bar{v}]\), we have \((B-\bar{\kappa})v\!\geq\!\min\{0,B-\bar{\kappa}\}\bar{v}\), whereas \(D_{\mathrm{st}}>D_{\mathrm{sf}}\) and (36) yield (34) and imply safety.
|
2309.10267 | Star Clusters in Tidal Debris | We present results of a Hubble Space Telescope (HST) UBVI-band study of star
clusters in tidal tails, using new WFC3 and ACS imaging to complement existing
WFPC2 data. We survey 12 tidal tails across seven merging systems, deriving
ages and masses for 425 star cluster candidates (SCCs). The stacked mass
distribution across all systems follows a power law of the form $dN/dM \propto
M^{\beta}$, with $\beta = -2.02 \pm 0.15$, consistent with what is seen in
other star forming environments. GALEX and Swift UV imaging provide star
formation rates (SFRs) for our tidal tails, which when compared with ages and
masses of our SCCs, allows for a determination of the cluster formation
efficiency (CFE). We find the CFE increases with increasing SFR surface
density, matching the theoretical model. We confirm this fit down at SFR
densities lower than previously measured (log $\Sigma_\text{SFR} \:
(\text{M}_\odot \: \text{yr}^{-1} \: \text{kpc}^{-2}) \approx -4.2$), as
related to the CFE. We determine the half-light radii for a refined sample of
57 SCCs with our HST WFC3 and ACS imaging, and calculate their dynamical age,
finding the majority of them to be gravitationally bound. We also provide
evidence of only low-mass ($< 10^4 \: \text{M}_\odot$) cluster formation in our
nearest galaxy, NGC 1487, consistent with the theory that this system is a
dwarf merger. | Michael Rodruck, Jane Charlton, Sanchayeeta Borthakur, Aparna Chitre, Patrick R. Durrell, Debra Elmegreen, Jayanne English, Sarah C. Gallagher, Caryl Gronwall, Karen Knierman, Iraklis Konstantopoulos, Yuexing Li, Moupiya Maji, Brendan Mullan, Gelys Trancho, William Vacca | 2023-09-19T02:42:39Z | http://arxiv.org/abs/2309.10267v1 | # Star Clusters in Tidal Debris
###### Abstract
We present results of a _Hubble Space Telescope_ (_HST_) _UBVI_-band study of star clusters in tidal tails, using new WFC3 and ACS imaging to complement existing WFPC2 data. We survey 12 tidal tails across seven merging systems, deriving ages and masses for 425 star cluster candidates (SCCs). The stacked mass distribution across all systems follows a power law of the form \(dN/dM\propto M^{\beta}\), with \(\beta=-2.02\pm 0.15\), consistent with what is seen in other star forming environments. _GALEX_ and _Swift_ UV imaging provide star formation rates (SFRs) for our tidal tails, which when compared with ages and masses of our SCCs, allows for a determination of the cluster formation efficiency (CFE). We find the CFE increases with increasing SFR surface density, matching the theoretical model. We confirm this fit down at SFR densities lower than previously measured (log \(\Sigma_{\rm SFR}\) (M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\)) \(\approx-4.2\)), as related to the CFE. We determine the half-light radii for a refined sample of 57 SCCs with our _HST_ WFC3 and ACS imaging, and calculate their dynamical age, finding the majority of them to be gravitationally bound. We also provide evidence of only low-mass (\(<10^{4}\) M\({}_{\odot}\)) cluster formation in our nearest galaxy, NGC 1487, consistent with the theory that this system is a dwarf merger.
keywords: galaxies: interactions - galaxies: star formation - galaxies: star clusters: general
## 1 Introduction
The gravitational origin of tidal tails was first realized with the pivotal work by Toomre and Toomre (1972), who showed that galactic tidal tails and warped disks could be simulated as gravitational encounters with another galaxy. In a very forward-thinking section of their work, they suggested that galaxy mergers may be able to produce large amounts of star formation. This prediction was later borne out with the discovery of luminous infrared galaxies (LIRGs), made possible with the launch of the IR telescope _IRAS_. _IRAS_ found that many galaxies with high (\(>10^{11}L_{\odot}\)) IR luminosities have disturbed morphologies, indicative of past merging events (Sanders et al., 1988). The IR emission of these LIRGs suggested star formation rates (SFRs) on the order of 100 M\({}_{\odot}\)/yr (Schweizer, 1987).
Galaxy mergers can produce collisions between clouds of gas, which can provoke star formation. The high pressures generated in these collisions will lead to enhanced star formation efficiency (Jog and Solomon, 1992) and the formation of massive star clusters (Zubovas et al., 2014; Maji et al., 2017). Additionally, cloud collisions may provide external
pressure to young clusters, keeping their contents gravitationally bound, preventing their destruction (Elmegreen, 2008). The advent of high-resolution imaging, possible with _HST_, has found evidence of such massive clusters within the interiors of merging galaxies in the form of bright, blue, compact objects (e.g. Zepf et al., 1999; Whitmore & Schweizer, 1995; Whitmore et al., 1993). These objects, labeled as young massive clusters (YMCs), show properties similar to what is expected for young globular clusters, such as mass and radius. Therefore, by studying YMCs, we may be able to study the formation of today's globular clusters.
While the interiors of mergers have been well-studied in the past, few surveys have looked at the tidal debris associated with mergers. However, simulations are now showing that star formation can occur in the extended regions of a merger. Simulations with explicit stellar feedback indicate that \(\sim 20-50\%\) of a merger's SFR can occur in extended debris (Hopkins et al., 2013). Observations of the Tadpole galaxy confirm this prediction, as \(\sim 30\%\) of the system's star formation is occurring in tidal tail star clusters (Jarrett et al., 2006). Tidal tails also offer a relatively clean and uncluttered environment as compared to the interiors of galaxies. Their sparse environments mean clusters will not be subject to shocks and tides found in the dense nuclear region, and may avoid disruption, surviving to the present day (Renaud, 2018).
Despite the differences in location and interaction, several common properties seem to exist between studies of YMCs found in merging and quiescent galaxies. The slopes of the mass and luminosity functions have been measured to be between \(-1.8\) and \(-2.2\) for both types of galaxies. The cluster formation efficiency, a measure of the percentage of the SFR occurring within star clusters, tracks the local SFR density in both environments. Additionally, cluster radii are similar, with half-light radii of \(\sim 0.5-10\) pc (Portegies Zwart et al., 2010). Such similarities suggest common physics behind star cluster formation.
This paper builds on previous work by Mullan et al. (2011) and Knierman et al. (2003), who studied clusters in tidal tails using the WFPC2 camera on _HST_. Both works found statistically significant populations of clusters in a variety of tails, with colours suggesting masses comparable to the YMCs found in the interiors of mergers. Merging systems with young interaction ages and bright tails produced the most star clusters. However, analysis was limited by the lack of multiband photometry. Our new observations of 12 tails in seven interacting systems add F336W and F438W WFC3 and F435W ACS imaging to existing WFPC2 F606W and F814W observations, to allow for age and mass determination.
We will begin in Section 2 by describing our imaging datasets and our analysis methods. In Section 3 we show our results. In Section 4 we discuss our findings, and conclude our paper with the main points of our research in Section 5. We add notes for individual tails in Appendix A.
## 2 Data Analysis
Our analysis consists of two parts: _HST_ imaging to identify tidal tail star clusters and determine their ages and masses, and UV imaging with _GALEX_ and _Swift_ to derive the local star formation rates. Both efforts are described below.
### _Hst_
Our optical and near-IR observations consist of WFPC2, ACS, and WFC3 imaging from _HST_ across multiple cycles; the full sample is shown in Figure 1. Properties of our sample are highlighted in Table 1, with systems ordered according to their interaction age. We sample not only major disk mergers, but minor and dwarf mergers as well.
WFPC2 F555W and F814W data were taken in Cycle 7 (GO 7466; Knierman et al. 2003) for NGC 3256. The remaining samples of AM1054-325, MCG-03-13-063, NGC 1487, NGC 2992, NGC 2993, NGC 6872, and NGC 1614 were observed with WFPC2 in Cycle 16 (GO 11134), in F606W and F814W (Mullan et al. 2011). Galaxies from Mullan et al. (2011) represent an extension of the original sample in Knierman et al. (2003), designed to sample a variety of ages, mass ratios, and optical properties. Our survey adds WFC3 F336W and WFC3 F438W/ACS F435W imaging to galaxies lying in the Southern hemisphere, which have also been observed with the Gemini-South GMOS detector, and will be the subject of a forthcoming paper on the diffuse light in the tidal debris. Archival ACS F438W and WFC3 F336W data for NGC 1614N/S are used from Cycle 14 (GO 10592) and Cycle 23 (GO 14066). A full description of our _HST_ observations is in Table 2. We refer to our F606W, F814W, F336W, F438W, and F435W observations as V\({}_{606}\), I\({}_{814}\), U\({}_{336}\), B\({}_{438}\), and B\({}_{435}\), respectively.
WFPC2 data were reduced in IRAF and corrected for degraded charge transfer efficiency (CTE) using Dolphin (2000). We refer the reader to the respective papers (Mullan et al., 2011; Knierman et al., 2003) for a more in-depth discussion of WFPC2 data reduction. WFPC2 magnitudes were taken from Mullan et al. (2011) and Knierman et al. (2003). Objects in Knierman et al. (2003) were transformed from the WFPC2 Vegamag photometric system to the Johnson-Cousins system using transformations found in Holtzman et al. (1995), going from F555W and F814W to \(V\) and \(I\). Rather than converting back to WFPC2 magnitudes, we keep them in their transformed system. Data from Mullan et al. (2011) were provided in WFPC2 Vegamag magnitudes (V\({}_{606}\) and I\({}_{814}\)).
ACS and WFC3 images were downloaded from the _HST_ archive, which have been processed through the standard ACS and WFC3 pipelines. ACS images have a scale of 0.05" per pixel, while WFC3 observations have a scale of 0.04" per pixel. This is compared to the WFPC2 WF scale of 0.1" per pixel; objects on the WFPC2 PC were not analyzed due to its larger readnoise.
We perform aperture photometry on all our objects using the DAOPHOT.PHOT (Stetson, 1987) task in IRAF. We choose to match the aperture settings in Mullan et al. (2011), which use radii of 2, 5, and 8 pixels for object, inner background, and outer background annuli, respectively. When translated to the WFC3 scale, this results in radii of 5, 12.5, and 20 pixels. For ACS, we use radii of 4, 10, and 16 pixels. In this manner, we are assured that we are measuring our clusters to the same physical extent, camera to camera. Aperture corrections to bright, isolated stars were performed out to a radius of 10 pixels for WFC3 and ACS imaging. Due to the lack of adequate stars in the NGC 1487 field, we take the average aperture corrections from all other fields and apply them to NGC 1487W and NGC 1487E. Ten-pixel zeropoints of 23.392, 24.895, and 25.762 were used for U\({}_{336}\), B\({}_{438}\), and B\({}_{435}\), respectively.
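For readers who wish to reproduce this style of measurement outside IRAF, the sketch below shows an equivalent background-annulus aperture photometry step in Python with photutils, using the WFC3 radii and the U\({}_{336}\) zeropoint quoted above. The image array, source positions, and aperture-correction term are placeholders, and the zeropoint is assumed to be consistent with the units of the supplied image; this is an illustration, not the code used in this work.

```python
import numpy as np
from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

def wfc3_aperture_mags(image, positions, zeropoint=23.392,
                       r_obj=5.0, r_in=12.5, r_out=20.0, ap_corr=0.0):
    """Background-subtracted aperture magnitudes using the WFC3 pixel radii.

    `image`, `positions`, and `ap_corr` are placeholders supplied by the user.
    """
    obj_ap = CircularAperture(positions, r=r_obj)
    sky_ap = CircularAnnulus(positions, r_in=r_in, r_out=r_out)
    obj = aperture_photometry(image, obj_ap)
    sky = aperture_photometry(image, sky_ap)
    # mean sky level per pixel from the annulus, scaled to the object aperture area
    sky_per_pix = sky['aperture_sum'] / sky_ap.area
    net = obj['aperture_sum'] - sky_per_pix * obj_ap.area
    return zeropoint - 2.5 * np.log10(net) + ap_corr
```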
### Galex
The _Galaxy Evolution Explorer_ (_GALEX_) operated from 2003 to 2012 as an orbiting UV telescope. Its 50 cm diameter mirror imaged a 1.25\({}^{\circ}\) FOV with two separate microchannel-plate detectors, designated as the near-ultraviolet (NUV) and far-ultraviolet (FUV) channels. The telescope has a pixel scale of 1.5" per pixel. GALEX conducted several surveys, including the All-Sky Imaging Survey (AIS), Medium Imaging Survey (MIS), Deep Imaging Survey (DIS), and Nearby Galaxy Survey (NGS), to varying depths. Our targets are drawn from AIS (AM1054-325, ESO 376-28, MCG-03-13-063, NGC 1487E/W) and NGS (NGC 6872, NGC 2992, NGC 2993). Our _GALEX_ observations are listed in Table 3, along with our _Swift_ observations (described below).
We use images taken in the FUV (\(\lambda_{\rm eff}=1538.6\) Å) obtained from the MAST archive. These have been pipeline processed. While background-subtracted images are available via MAST, we do not use these because the background subtraction can be inaccurate for extended objects. Photometry was performed on the entire flux enclosed within the tail boundary, as described in Section 2.4. The sky background was determined by sampling five nearby regions of 30 x 30 pixels and taking the mean value. For NGC 2992, a bright, blue star visible in the FUV image was manually masked.
We corrected for Galactic extinction using coefficients from Yuan et al. (2013) for _GALEX FUV_.
### Swift
NGC 3256 and NGC 1614 do not have FUV exposures with _GALEX_; instead, we have opted to use NUV data from _Swift_, downloaded from the MAST archive. The _Swift_ satellite operates the 30 cm UltraViolet-Optical Telescope (UVOT), with a FOV of 17 x 17 arcmin. Images were taken in 2 x 2 binned mode, with a pixel scale of 1" per pixel. The UVOT CCD operates as a photon counter, which is susceptible to coincidence loss if two or more photons arrive within a single frame. The effect of this scales with brightness, and past analysis has shown that coincidence loss becomes greater than 1% when the count rate is greater than 0.007 counts sec\({}^{-1}\) pixel\({}^{-1}\); for 2 x 2 binned images, as we use, the count rate threshold is then 0.028 counts sec\({}^{-1}\) pixel\({}^{-1}\). The faint tails of these two systems, NGC 3256 and NGC 1614, fall well below this threshold, and we can discount effects from coincidence loss.
_Swift_ offers three UV filters, _uvw1_, _uvw2_, and _uvm2_. We use the _uvm2_ filter (\(\lambda_{\rm eff}=2221\) Å) as the other two filters have extended red tails which leak optical light into the UV.
Figure 1: Snapshots of our sample from the Digitized Sky Survey (DSS). Red outlines indicate WFPC2 pointings from Mullan et al. (2011) and Knierman et al. (2003). Blue and cyan squares show the WFC3 and ACS footprints, respectively.
Table 1: System information. Interaction age gives the most recent interaction, which produced the visible tidal features.

| System | Interaction Age (Myr) | Distance (Mpc) | Merger Type | Tidal Features | Tidal Features \(\mu_{V}\) (mag arcsec\(^{-2}\)) |
| --- | --- | --- | --- | --- | --- |
| NGC 1614N/S | 50 | 65.6 | Major | Tidal tails | 22.27/23.00 |
| AM1054-325/ESO 376-28 | 85 | 52.9 | Major | Tidal tails and tidal dwarf | 22.65/22.76 |
| NGC 2992/3 | 100 | 36.6 | Major | Tidal tails and tidal bridge | 23.47/24.78 |
| MCG-03-13-063 | 100 | 46.2 | Minor | Extended spiral arm | 23.91 |
| NGC 6872 | 150 | 62.6 | Minor | Tidal tails | 24.06 |
| NGC 3256E/W | 400 | 42.8 | Major | Tidal tails | 24.04/23.75 |
| NGC 1487E/W | 500 | 10.8 | Dwarf | Tidal tails | 24.02/24.57 |
Photometry was performed using the previously mentioned procedure for _GALEX_ imaging. Extinction coefficients were taken from Roming et al. (2009) for the _uvm2_ filter.
### Tail definition
Regions defined as tidal debris are taken from Mullan et al. (2011). The "in-tail" and "out-of-tail" regions were defined using images taken with the WFPC2 V\({}_{006}\) filter. Images were smoothed with a Gaussian kernel at 5 - 7 pixels FWHM, and a contiguous region one count above background was defined as "in-tail", using SAO DS9; all other regions were defined as
Table 2: _HST_ observations.

| System | Filter | Exposure time (s) | Date | Program ID | Camera |
| --- | --- | --- | --- | --- | --- |
| NGC 1614N/S | F336W | 6510 | 2015 Dec 12 | 14066 | WFC3 |
| | F438W | 1260 | 2006 Aug 14 | 10592 | ACS |
| | F606W | 1900 | 2007 Nov 15 | 11134 | WFPC2 |
| | F814W | 1900 | 2007 Nov 15 | 11134 | WFPC2 |
| AM 1054-325/ESO 376-28 | F336W | 3440 | 2017 Nov 6 | 14937 | WFC3 |
| | F435W | 1520 | 2017 Nov 7 | 14937 | WFC3 |
| | F606W | 1900 | 2008 Feb 24 | 11134 | WFPC2 |
| | F814W | 1900 | 2008 Feb 24 | 11134 | WFPC2 |
| NGC 2992 | F336W | 2230 | 2018 Nov 27 | 15083 | WFC3 |
| | F438W | 2100 | 2018 Apr 19 | 15083 | ACS |
| | F606W | 1000 | 2007 Dec 28 | 11134 | WFPC2 |
| NGC 2993 | F336W | 2230 | 2018 Apr 19 | 15083 | WFC3 |
| | F438W | 2100 | 2018 Nov 27 | 15083 | ACS |
| | F606W | 1000 | 2007 Dec 3 | 11134 | WFPC2 |
| | F814W | 900 | 2007 Dec 3 | 11134 | WFPC2 |
| MCG-03-13-063 | F336W | 3420 | 2017 Sep 27 | 14937 | WFC3 |
| | F435W | 1520 | 2017 Sep 28 | 14937 | WFC3 |
| | F606W | 1000 | 2007 Nov 24 | 11134 | WFPC2 |
| | F814W | 900 | 2007 Nov 24 | 11134 | WFPC2 |
| NGC 6872\(^{a}\) | F336W | 3920 | 2018 Jul 24 | 15083 | WFC3 |
| | F435W | 1760 | 2018 Jul 24 | 15083 | WFC3 |
| | F606W | 2100/1900 | 2008 Feb 23/2008 May 16 | 11134 | WFPC2 |
| | F814W | 2100/1900 | 2008 Feb 23/2008 May 16 | 11134 | WFPC2 |
| NGC 3256E | F336W | 3520 | 2018 Jun 15 | 15083 | WFC3 |
| | F435W | 1600 | 2018 Jun 15 | 15083 | WFC3 |
| | F555W | 1000 | 1999 Oct 11 | 7466 | WFPC2 |
| | F814W | 1000 | 1999 Oct 11 | 7466 | WFPC2 |
| NGC 3256W | F336W | 3520 | 2018 Jan 15 | 15083 | WFC3 |
| | F435W | 1600 | 2018 Jan 15 | 15083 | WFC3 |
| | F555W | 1000 | 1999 Mar 24 | 7466 | WFPC2 |
| | F814W | 1000 | 1999 Mar 24 | 7466 | WFPC2 |
| NGC 1487E | F336W | 3520 | 2017 Nov 25 | 15083 | WFC3 |
| | F435W | 1600 | 2017 Nov 25 | 15083 | WFC3 |
| | F606W | 1000 | 2008 Aug 9 | 11134 | WFPC2 |
| NGC 1487W | F336W | 3520 | 2019 Mar 24 | 15083 | WFC3 |
| | F435W | 1600 | 2019 Mar 24 | 15083 | WFC3 |
| | F606W | 1000 | 2008 Aug 31 | 11134 | WFPC2 |
| | F814W | 900 | 2008 Aug 31 | 11134 | WFPC2 |

\(^{a}\)NGC 6872 was measured in two pointings with WFPC2.
Table 3: _GALEX_ and _Swift_ observations.

| System | Observatory | Filter | Exposure time (s) | Observation ID | Observation date |
| --- | --- | --- | --- | --- | --- |
| NGC 1614N/S | _Swift_ | UVM2 | 3271.5 | 00046270001; 004627002 | 2012 May 10; 2012 Jul 6 |
| AM1054-325/ESO 376-28 | _GALEX_ | FUV | 108 | 6386924688810967040 | 2007 Feb 3 |
| NGC 2992/3 | _GALEX_ | FUV | 1045.5 | 2485918962089459712 | 2005 Feb 12 |
| MCG-03-13-063 | _GALEX_ | FUV | 204 | 6381260060739239936 | 2006 Jan 8 |
| NGC 6872 | _GALEX_ | FUV | 3371.3 | 250562221059423360 | 2006 Jun 29 |
| NGC 3256E/W | _Swift_ | UVM2 | 1867.2 | 00049720003; 00049720012 | 2013 Sep 15; 2014 Sep 18 |
| NGC 1487E/W | _GALEX_ | FUV | 108 | 638583398382340608 | 2007 Jan 5 |
"out-of-tail". In the cases of NGC 1487W, AM1054-325, ESO 376-28, and NGC 6872, the centre of the galaxy is imaged as well. The boundary between the centre and the tail is found where the radial light profile changes in scale length.
### Cluster Detection
Objects were detected using the DAOPHOT.DAOFIND (Stetson, 1987) task in IRAF. Selection criteria for the initial WFPC2 cluster list required 2 counts per object in both V\({}_{606}\) and I\({}_{814}\), a signal-to-noise (S/N) ratio of at least 3, error in F606W less than 0.25 mag, and detections in at least one dither position. For our ACS/WFC3 imaging, we require an S/N of at least 3 in both U\({}_{336}\) and B\({}_{438}\)/B\({}_{435}\). Objects had to be detected in all four filters. We further excluded objects which were fit to our simple stellar population cluster models (Marigo et al., 2008) with a \(\chi^{2}>3\).
Magnitude and colour cuts were applied to our source catalogue to separate potential clusters from contaminant stars and background galaxies. Objects which met these requirements are defined as Star Cluster Candidates (SCCs). We apply a magnitude cut of \(M_{V}<-8.5\) (\(M_{F606W}<-8.6\)) designed to eliminate individual stars. Magnitude cuts between \(M_{V}=-8\) and \(-9\) are commonly used in studies of star clusters (Konstantopoulos et al., 2010). Whitmore et al. (2010) found that even at fainter magnitudes, down to \(M_{V}=-7\), more than 60% of detected objects were clusters as opposed to individual stars, adding confidence to our selection criteria. In addition to discriminating against non-clusters, a magnitude cut allows a uniform measurement standard across our systems, most of which have nearly complete samples of SCCs down to the M\({}_{V}\) = -8.5 cutoff, as shown in Section 2.6. A colour cut of \(V-I<2.0\) (V\({}_{606}\) - I\({}_{814}\) < 1.4) is added as well; this will still allow for old globular clusters which have reddened as they evolve, while eliminating individual stars.
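As a concrete illustration, the selection described above reduces to a few boolean cuts. The helper below is a minimal sketch written by us (not the code used in this work); it assumes foreground-extinction-corrected apparent magnitudes, a distance in Mpc, and the SSP-fit \(\chi^{2}\) values, and the function and argument names are ours.

```python
import numpy as np

def select_sccs(m_v606, m_i814, chi2, dist_mpc, mag_cut=-8.6, colour_cut=1.4):
    """Boolean mask of star cluster candidates (SCCs).

    Uses the F606W/I814 versions of the cuts quoted in the text:
    M_F606W < -8.6 and V606 - I814 < 1.4, plus the chi^2 <= 3 model-fit cut.
    """
    dist_mod = 5.0 * np.log10(dist_mpc * 1e6) - 5.0          # distance modulus
    abs_v606 = np.asarray(m_v606) - dist_mod
    colour = np.asarray(m_v606) - np.asarray(m_i814)
    return (abs_v606 < mag_cut) & (colour < colour_cut) & (np.asarray(chi2) <= 3.0)
```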
### Completeness
Completeness curves from Mullan et al. (2011) show that our WFPC2 data are on average 50% complete at \(m_{\rm V_{606}}\approx\) 25.5 and \(m_{\rm I_{814}}\approx\) 24.5. To measure the completeness of our WFC3 and ACS imaging, we perform a similar analysis as in Mullan et al. (2011); we add 10,000 fake stars to each individual image, 100 at a time, with DAOPHOT.ADDSTAR (Stetson, 1987), and calculate how many are recovered above a 3\(\sigma\) limit. Our completeness curves are shown in Figure 2. For systems AM1054-325, ESO 376-28, NGC 3256, NGC 1487, MCG-03-13-063, and NGC 6872, we are complete at 50% at \(m_{\rm B_{438}}=\) 25.5 and \(m_{\rm U_{336}}=\) 24.8. NGC 2992 and NGC 2993 are 50% complete at \(m_{\rm B_{435}}=\) 26.5 and \(m_{\rm U_{336}}=\) 24.5. NGC 1614, observed with programs GO-14066 and GO-10592, is complete at \(m_{\rm B_{438}}=\) 26.5 and \(m_{\rm U_{336}}=\) 25.4.
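The completeness estimate itself is a simple recovered-fraction calculation once the artificial-star catalogue exists. Below is a minimal sketch of that step, assuming arrays of injected magnitudes and a boolean recovery flag per star; the function and argument names are ours.

```python
import numpy as np

def completeness_curve(mag_injected, recovered, bin_edges=np.arange(20.0, 28.5, 0.5)):
    """Recovered fraction of injected artificial stars per magnitude bin."""
    mag_injected = np.asarray(mag_injected)
    recovered = np.asarray(recovered, dtype=bool)
    centres, fractions = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (mag_injected >= lo) & (mag_injected < hi)
        if in_bin.any():
            centres.append(0.5 * (lo + hi))
            fractions.append(recovered[in_bin].mean())
    return np.array(centres), np.array(fractions)

# The 50% completeness limit is the faintest magnitude at which the
# recovered fraction is still >= 0.5.
```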
## 3 Results
### Colour-colour diagrams
In Figures 3-14 we show the U\({}_{336}\) - B\({}_{438}\) vs V\({}_{606}\) - I\({}_{814}\) colour-colour diagrams for all our observed systems. Systems are ordered in reference to the age of the tidal debris (see Table 1). The interaction age of the merger is plotted as a yellow circle. Objects which fulfill our SCC criteria in Section 2.5 are shown as dark blue circles. For completeness, we show objects which do not meet our criteria as gray boxes, with arrows indicating upper and lower limits.
Data are plotted against simple stellar population (SSP) models from Marigo et al. (2008), with logarithmic ages overplotted on the evolutionary track. We use a Salpeter IMF; however, the choice of IMF is negligible in determining ages, and the tracks are also consistent with expectations for a Chabrier (Chabrier, 2001) or Kroupa (Kroupa, 2001) IMF. We corrected for foreground Galactic extinction using an \(R_{\lambda}=A_{\lambda}/E(B-V)\) reddening law, with \(R_{V}=3.1\), with data from Schlafly & Finkbeiner (2011). Internal cluster extinction is not corrected for when plotting our colour-colour diagrams. However, this is taken into account when SED fitting our clusters (Section 3.3). We also include a reddening arrow in the lower right hand corner of our plots for A\({}_{V}\) = 0.5.
The area inside and outside the tail was calculated using SAO DS9 regions of the tail, as defined in Section 2.4. We subtract the cluster density outside the tail (\(N_{\rm out}^{\rm SCC}/A_{\rm out}\)) from inside the tail (\(N_{\rm in}^{\rm SCC}/A_{\rm in}\)) to determine the excess cluster density, \(\Sigma_{\rm SCC}\). Errors are determined from Poisson statistics. Half of our tails show excesses above the 3\(\sigma\) level, indicating significant amounts of SCCs. It is important to note that while some systems may not contain significant numbers of SCCs, some individual objects in tails may still be real star clusters. The data are shown in Table 4.
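The excess density calculation is compact enough to state explicitly. The sketch below (our own illustration) reproduces it with the Poisson errors propagated in quadrature; the example in the comment recovers the NGC 6872 row of Table 4 to within rounding.

```python
import numpy as np

def excess_density(n_in, n_out, area_in, area_out):
    """Excess SCC surface density (kpc^-2) and its Poisson uncertainty."""
    sigma = n_in / area_in - n_out / area_out
    err = np.sqrt(n_in / area_in**2 + n_out / area_out**2)
    return sigma, err

# excess_density(158, 19, 890.3, 662.7) -> (~0.149, ~0.016); cf. NGC 6872 in Table 4
```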
We perform Kolmogorov-Smirnov (KS) tests on the distributions of U\({}_{336}\) - B\({}_{438}\), B\({}_{438}\) - V\({}_{606}\), and V\({}_{606}\) - I\({}_{814}\) colours between in-tail and out-of-tail objects for all detected objects and for SCCs only. This is another way to determine the likelihood that objects within the tail are unique and independent from those outside the tail region, in addition to measuring cluster excess. Recorded \(p\)-values are shown in Table 5. Our V\({}_{606}\) - I\({}_{814}\) colour is most useful in discriminating between tail and non-tail objects. Of the eight systems which contain enough SCCs for a KS test, five of them show \(p\)-values less than 0.04, indicating the data are drawn from independent distributions.
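The two-sample KS comparison used here is available directly in SciPy; a minimal sketch follows (the function and argument names are ours), with the usual caveat that small samples limit the power of the test.

```python
from scipy.stats import ks_2samp

def colour_ks_pvalue(colours_in_tail, colours_out_of_tail):
    """p-value of a two-sample KS test between in-tail and out-of-tail colours."""
    statistic, p_value = ks_2samp(colours_in_tail, colours_out_of_tail)
    return p_value

# p-values well below ~0.05 suggest the in-tail and out-of-tail colour
# distributions are drawn from different parent populations.
```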
### Addition of nebular flux
Clusters less than 10 Myr old can show strong emission lines, as the surrounding hydrogen gas from cluster formation is ionized and undergoes recombination. An example of these systems is shown in Figure 15 for NGC 1614. As clusters will expel their hydrogen gas via stellar feedback on timescales of several million years (Pang et al., 2020), the presence of recombination lines indicates recent star formation.
Our SSP model (black solid line in Figures 3 - 14) does not contain nebular continuum emission, nor flux from emission lines; we use Starburst99 (Leitherer et al., 1999) to calculate contributions from the nebular continuum, as well as the H\(\alpha\) and H\(\beta\) emission lines. We include the H\(\gamma\) line, with the flux ratio of H\(\gamma\) / H\(\beta\) = 0.47 (Osterbrock, 1989), and the [O II] and [O III] lines as well. The oxygen lines are determined from the KPNO International Spectroscopic Survey (KISS) sample of nearby, low-mass star-forming galaxies (Salzer et al., 2005); we take the median log ratio of these lines to the H\(\beta\) line, [O II]/H\(\beta\) = 0.08 and [O III]/H\(\beta\) = 0.56.
The effect of the emission lines depends on the filter. The
H\(\alpha\), H\(\beta\), and [O III] lines are covered by the F606W filter, while the H\(\gamma\) line falls in the middle of the B\({}_{438}\) and B\({}_{435}\) filters. The [O II] line is at the edge of the F435W and F336W filters. The result is that the V\({}_{606}\) filter is most strongly affected by the presence of emission lines, due to the strong H\(\alpha\) and H\(\beta\) lines, while the effect in the other filters is relatively inconsequential. This shifts the V\({}_{606}\) - I\({}_{814}\) colour towards bluer colours in our colour-colour plots. The resulting evolutionary track is degenerate with extinction for ages \(<\) 10 Myr for our U\({}_{336}\) - B\({}_{438}\) vs V\({}_{606}\) - I\({}_{814}\) diagrams. The effect is more apparent when we plot B\({}_{438}\) - V\({}_{606}\) vs V\({}_{606}\) - I\({}_{814}\); the evolutionary track dramatically swings upward in our plots as the B\({}_{438}\) - V\({}_{606}\) colour trends towards redder values. This effect is seen for similar young clusters in interacting systems as well, using the V\({}_{606}\) filter (Fedotov et al., 2015, 2011; Gallagher et al., 2010).
Data for NGC 3256 were taken from Knierman et al. (2003). For NGC 3256W/E, we determined the magnitudes in the F555W and F814W filters, and then transformed these to the Johnson-Cousins system, as in Knierman et al. (2003), using transformations from Holtzman et al. (1995). The shape and wavelength boundaries of the F555W filter transmission are noticeably different from the V\({}_{606}\) filter in that the H\(\alpha\) line falls at the edge of the filter; thus it has much less of an effect on our model magnitudes. Therefore, on the colour-colour diagrams for NGC 3256W/E, there is less of a difference
Table 4: Cluster excesses.

| System | \(N_{\rm in}^{\rm SCC}\) | \(N_{\rm out}^{\rm SCC}\) | A\(_{\rm in}\) (kpc\(^{2}\)) | A\(_{\rm out}\) (kpc\(^{2}\)) | Density\(_{\rm in}\) (kpc\(^{-2}\)) | Density\(_{\rm out}\) (kpc\(^{-2}\)) | \(\Sigma_{\rm SCC}\) (kpc\(^{-2}\)) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| NGC 1614N | 21 | 11 | 304.6 | 483.6 | 0.069 | 0.023 | \(0.046\pm 0.017\) |
| NGC 1614S | 33 | 11 | 327.8 | 483.6 | 0.101 | 0.023 | \(0.078\pm 0.019\)\(^{*}\) |
| AM1054-325 | 135 | 23 | 129.5 | 869.5 | 1.042 | 0.026 | \(1.016\pm 0.090\)\(^{*}\) |
| ESO 376-28 | 1 | 23 | 88.9 | 869.5 | 0.011 | 0.026 | \(-0.015\pm 0.013\) |
| NGC 2992 | 22 | 4 | 242.4 | 193.9 | 0.091 | 0.021 | \(0.070\pm 0.022\)\(^{*}\) |
| NGC 2993 | 9 | 0 | 179.1 | 90.0 | 0.050 | 0 | \(0.050\pm 0.017\) |
| MCG-03-13-063 | 10 | 9 | 12.8 | 400.0 | 0.781 | 0.023 | \(0.76\pm 0.25\)\(^{*}\) |
| NGC 6872 | 158 | 19 | 890.3 | 662.7 | 0.179 | 0.029 | \(0.150\pm 0.016\)\(^{*}\) |
| NGC 3256E | 11 | 5 | 246.0 | 473.2 | 0.045 | 0.011 | \(0.034\pm 0.014\) |
| NGC 3256W | 24 | 9 | 199.9 | 519.9 | 0.120 | 0.017 | \(0.103\pm 0.025\)\(^{*}\) |
| NGC 1487E | 0 | 0 | 12.4 | 28.5 | 0 | 0 | 0 |
| NGC 1487W | 1 | 0 | 13.7 | 33.3 | 0.073 | 0 | \(0.073\pm 0.073\) |

\(^{*}\)Signifies excess at 3\(\sigma\) or above.
Figure 2: Completeness data for our WFC3 F336W/F435W and ACS F438W imaging. Solid curves represent data for F438W, dotted curves represent data for F336W, and dashed curves represent data for F435W.
between the nebular emission evolutionary tracks and the SSP models than for our other systems.
### Ages and Masses
Ages and masses of clusters were determined using the 3DEF spectral energy distribution (SED) fitting code, as described in Bik et al. (2003). This code compares a set of input magnitudes to a grid of SSP models with ages between \(10^{6}\) and \(10^{10.12}\) yr. It will apply an extinction to our observed magnitudes through a range of \(E(B-V)\) values, compare the set of model magnitudes and extincted, observed magnitudes, and minimize the resulting \(\chi^{2}\) value to determine the best fit ages and masses. We use an evolutionary track from Marigo et al.
Figure 3: _Top left: HST_ B435-band image of NGC 1614N, with the tail region outlined with a black, dashed curve. In-tail SCCs are shown as blue circles, while out-of-tail SCCs are red circles. Non-SCC detected objects are shown as crosses. _Top middle_: Colour-colour diagrams for in-tail and out-of-tail sources plotted against stellar evolutionary tracks from Marigo et al. (2008) in solid black; the dashed light blue line is the same track with nebular continuum and emission lines added. The yellow circle indicates the age of the merger. Blue/red circles represent SCCs for in-tail/out-of-tail sources, and gray boxes are non-SCC detected objects. The total area enclosed in each in-tail or out-of-tail region in kpc\({}^{2}\) is indicated on the upper left, along with median error bars. _Top right_: Ages and masses for detected objects. SCCs are shown in blue, non-SCCs in gray. The solid black curve is our magnitude limit of M\({}_{V}\) = -8.5, while the red, green, purple, and cyan curves show 50% completeness limits. The vertical yellow line marks the interaction age of the system, and the horizontal dashed line marks our mass cut-off of log Mass = 4.3 for our CFE determinations. _Bottom row_: age and mass distributions for our SCCs. A vertical dashed line marks the median distance from the centre. We find 21 in-tail SCCs, and 11 out-of-tail SCCs for NGC 1614N. The tail curls to the North from the East, with young clusters scattered throughout the region.
Table 5: KS test results for our photometric data. We include results for all of our sources in the first set of columns, and SCC objects only in the second set.

| System | N\(_{in}\) | N\(_{out}\) | KS\(_{U-B}\) | KS\(_{B-V}\) | KS\(_{V-I}\) | N\(_{in}\) (SCC) | N\(_{out}\) (SCC) | KS\(_{U-B}\) (SCC) | KS\(_{B-V}\) (SCC) | KS\(_{V-I}\) (SCC) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NGC 1614N | 26 | 14 | 0.025 | 0.020 | \(6.6\times 10^{-3}\) | 21 | 11 | 0.17 | 0.19 | \(7.2\times 10^{-3}\) |
| NGC 1614S | 40 | 14 | 0.068 | \(2.0\times 10^{-3}\) | \(5.852\times 10^{-5}\) | 33 | 11 | 0.65 | 0.045 | \(1.2\times 10^{-4}\) |
| AM1054-325 | 172 | 51 | 0.40 | \(6.9\times 10^{-6}\) | \(6.6\times 10^{-18}\) | 135 | 23 | 0.166 | \(4.5\times 10^{-3}\) | \(2.0\times 10^{-8}\) |
| ESO 376-28 | 3 | 51 | N/A | N/A | N/A | 1 | 23 | N/A | N/A | N/A |
| NGC 2992 | 61 | 15 | 0.058 | 0.42 | 0.75 | 22 | 4 | 0.73 | 0.87 | 0.50 |
| NGC 2993 | 23 | 5 | 0.19 | 0.018 | 0.34 | 9 | 0 | N/A | N/A | N/A |
| MCG-03-13-063 | 19 | 24 | \(2.3\times 10^{-3}\) | 0.73 | \(1.8\times 10^{-3}\) | 10 | 9 | \(2.1\times 10^{-3}\) | 0.35 | \(1.7\times 10^{-3}\) |
| NGC 6872 | 205 | 29 | \(1.6\times 10^{-3}\) | 0.031 | 0.012 | 158 | 19 | 0.026 | 0.049 | 0.025 |
| NGC 3256E | 71 | 53 | 0.74 | 0.021 | 0.061 | 11 | 5 | 0.92 | 0.17 | 0.23 |
| NGC 3256W | 92 | 74 | 0.055 | \(8.1\times 10^{-3}\) | \(6.7\times 10^{-4}\) | 24 | 9 | 0.97 | 0.33 | 0.18 |
| NGC 1487E | 83 | 33 | 0.75 | 0.083 | \(4.5\times 10^{-3}\) | 0 | 0 | N/A | N/A | N/A |
| NGC 1487W | 92 | 44 | 0.39 | 0.36 | 0.46 | 1 | 0 | N/A | N/A | N/A |
Figure 4: Same as 3, but for NGC 1614S. We find 33 in-tail SCCs, and 11 out-of-tail SCCs. This system has a statistically significant excess cluster density above \(3\sigma\). The tail extends to the south from the center of the merging system. Young objects are found throughout the tail, similar to NGC 1614N.
Figure 5: Same as 3, but for AM1054-325. We find 135 in-tail SCCs, and 23 out-of-tail SCCs. This system has a statistically significant excess cluster density above \(3\sigma\). A tidal dwarf is directly North of the center of the galaxy, while the tidal tail extends Northward from the Western edge. Many SCCs show signs of emission lines, indicated by their large B – V values. Its interacting partner can be seen at the top right of the image.
Figure 6: Same as 3, but for ESO 376-28. We find 1 in-tail SCC, and 23 out-of-tail SCCs. Little structure is seen in this galaxy.
Figure 7: Same as 3, but for NGC 2992. We find 22 in-tail SCCs, and 4 out-of-tail SCCs. This system has a statistically significant excess cluster density above 3\(\sigma\). We have targeted the tidal dwarf, with the Northern edge of the galaxy NGC 2992 shown on the left. We do not include sources from the galaxy itself.
Figure 8: Same as 3, but for NGC 2993. We find 9 in-tail SCCs, and 0 out-of-tail SCCs. We capture the tidal tail of NGC 2993 (seen at the top of the image). Most of the SCCs in the tail have ages comparable to the interaction age of the NGC 2992/3 system.
Figure 9: Same as 3, but for MCG-03-13-063. We find 10 in-tail SCCs, and 9 out-of-tail SCCs. This system has a statistically significant excess cluster density above 3\(\sigma\). An extended, thin tail is seen, produced from an unseen companion. All the SCCs in the tail are \(<\) 10 Myr.
Figure 11: Same as 3, but for NGC 3256E. We find 11 in-tail SCCs, and 5 out-of-tail SCCs. Emission line clusters do not clearly stand out, as the NGC 3256 system was imaged with the F555W filter, which weakly covers the H\(\alpha\) line. Despite the old age of the interaction, we see a few SCCs with young ages.
Figure 10: Same as 3, but for NGC 6872. We find 158 in-tail SCCs, and 19 out-of-tail SCCs. This system has a statistically significant excess cluster density above 3\(\sigma\). This is the Eastern tidal tail of NGC 6872, which stretches out to 70 kpc; the centre of the galaxy lies to the West. A range of ages is seen, with young SCCs spread out along the length of the tail.
(2008) with solar metallicity and a Salpeter IMF (Salpeter, 1955), with nebular flux added to it (see Section 3.2). Our models do not account for binary star evolution. Spectroscopic observations of clusters and merging systems have found metallicities in the range of \(\sim\) 1.0 - 1.5 Z\({}_{\odot}\) (Rosa et al., 2014; Trancho et al., 2012; Bastian et al., 2009; Trancho et al., 2007, 2007). For comparison, we run our 3DEF algorithm for NGC 6872, with 158 SCCs, using tracks at 0.5 Z\({}_{\odot}\) and 2 Z\({}_{\odot}\). We find the median ratio of ages and masses determined using solar and half-solar metallicity to be 1 and 1.1, respectively. The ratio of ages and masses between Z\({}_{\odot}\) and 2 Z\({}_{\odot}\) is 1.02 and 0.98, respectively. A comparison of the different metallicities and their effects on our U\({}_{336}\) - B\({}_{438}\), B\({}_{438}\) - V\({}_{606}\), and V\({}_{606}\) - I\({}_{814}\) colours is shown in Figure 16. Here, we plot data for NGC 6872 against SSP models and SSP models with nebular emission added, for metallicities of Z\({}_{\odot}\), 0.5 Z\({}_{\odot}\), and 2 Z\({}_{\odot}\). The similarity between all three metallicities shows that our age and mass estimates will not be sensitive to our chosen metallicity.
The choice of IMF will influence the masses of our clusters, but generally has little effect on the derived ages. A Chabrier or Kroupa IMF will decrease the masses by a factor of \(\sim\) 2 relative to our assumed Salpeter IMF.
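The 3DEF code itself is described in Bik et al. (2003); the following is only a schematic, written by us for illustration, of the grid search it performs. It assumes a hypothetical dictionary of model magnitudes for a 1 M\({}_{\odot}\) population per log age and per-filter extinction coefficients \(R_{\lambda}\), and it uses an unweighted magnitude offset for the mass scaling, which is a simplification of the full \(\chi^{2}\) minimisation.

```python
import numpy as np

def fit_age_mass(obs_mags, obs_errs, model_grid, r_lambda,
                 ebv_grid=np.arange(0.0, 1.01, 0.02)):
    """Schematic SSP grid fit: minimise chi^2 over (log age, E(B-V)).

    model_grid : dict {log_age: model magnitudes of a 1 Msun population}
    r_lambda   : A_lambda / E(B-V) per filter, same order as obs_mags
    """
    obs_mags, obs_errs = np.asarray(obs_mags), np.asarray(obs_errs)
    best = (np.inf, None, None, None)            # (chi2, log_age, E(B-V), mass)
    for log_age, model in model_grid.items():
        model = np.asarray(model)
        for ebv in ebv_grid:
            dereddened = obs_mags - np.asarray(r_lambda) * ebv
            dm = np.mean(dereddened - model)     # magnitude offset -> mass scaling
            chi2 = np.sum(((dereddened - model - dm) / obs_errs) ** 2)
            if chi2 < best[0]:
                best = (chi2, log_age, ebv, 10 ** (-0.4 * dm))
    return best
```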
Ages and masses for our SCCs are shown in Figures 3 - 14, and collectively in Figure 17. We see a gap in age from log Age = 7.0 to 7.5 yr in many of our systems; this is an artifact of the fitting process and is seen in similar studies of star clusters (e.g. Randriamanakoto et al., 2019; de Grijs
Figure 16: Comparison of SSP tracks at different metallicities, plotted against data for NGC 6872. Data points are objects that fall within the tail, as in Figure 10. Black lines correspond to Z = Z\({}_{\odot}\), red corresponds to Z = 0.5 Z\({}_{\odot}\), and gold corresponds to Z = 2 Z\({}_{\odot}\). On the left, we plot tracks from Marigo et al. 2008. On the right, we include nebular emission (see Section 3.2) from Starburst99 models (Leitherer et al., 1999); the inclusion of line emission affects only ages < 10 Myr. In both cases, for a pure SSP and one with nebular emission added to it, the tracks are similar to one another; consequently, the ages and masses derived using either metallicity will be similar.
Figure 14: Same as 3, but for NGC 1487W. We find 1 in-tail SCC, and 0 out-of-tail SCCs. Like NGC 1487E, we again find a number of non-SCC objects in the tail, suggesting low-mass objects present in the tail.
Figure 15: Colour image of NGC 1614S. The data through filters F435W (ACS), F606W (WFPC2), F814W (ACS), and F665N (ACS) were stretched with a logarithm and bias/contrast adjustment in CARTA. Subsequently in GIMP they were assigned the colours blue, yellow-green, red and pink respectively and, using a layering schema, blended with the screen algorithm (English, 2017). Several regions containing SCCs with emission lines are highlighted in yellow, showing that star cluster formation is ongoing in this tidal tail.
et al., 2013; Bastian et al., 2005b). The true ages are likely spread over neighboring ages. It is worth noting, however, that every system, with the exception of NGC 1487, contains SCCs with ages \(<\) 10 Myr. This shows us that tidal debris are capable of supporting cluster formation.
### Cluster radii
The radii of our SCCs were found using the program ISHAPE (Larsen, 1999). ISHAPE requires that sources are isolated to prevent PSF blending and at a high S/N. We perform an additional cut on our SCCs by visually selecting objects which are not in crowded regions and do not have highly elliptical shapes, which could indicate we are looking at blended clusters. We choose objects with a S/N \(>\) 25 as determined by ISHAPE, as the program also requires bright objects to perform accurate measurements.
ISHAPE will deconvolve selected sources with a user supplied PSF and a selected analytic model. Both an EFF (Elson et al., 1987) and King (King, 1962) model have been used in the past to study star clusters. We use a King model as it has been used to describe extragalactic and Galactic globular clusters (Correnti et al., 2021; Larsen et al., 2021; Chandar et al., 2016; Bastian et al., 2012). ISHAPE will fit the model to an ellipse and produce the FWHM of the major axis, and the ratio between the minor and major axes (_q_), for each selected source. We take the FWHM as the average between the major and minor axes of the fit. We require that \(q\)\(>\) 0.3 to eliminate unrealistically elliptical sources, as in Brown & Gnedin (2021). ISHAPE is able to reliably determine a FWHM for sources at 10% of the size of the PSF, which for our WFC3 and ACS images, corresponds to \(\approx\) 0.2 pix (the pixel scale for WFC3 is 0.04" per pixel, and 0.05" per pixel for ACS). We remove objects smaller than this size. The result of our selection criteria reduces the number of SCCs for analysis from 425 to 57, largely as a result of selecting isolated clusters. Our furthest systems, NGC 6872 in particular, are susceptible to crowding as the angular separation for nearby objects decreases, and this is borne out in our reduced number of sources. As a result, we emphasize that our radii sample is biased towards isolated, physically large SCCs.
This FWHM value is converted to a half-light radius r\({}_{h}\) (also referred to as the effective radius R\({}_{eff}\)) by multiplying the FWHM by 1.48, as noted in the ISHAPE manual. We note that while an EFF and King model will produce unique FWHM values, the resulting half-light radii are similar to one another (Larsen, 1999). The minimum value of r\({}_{h}\) we are able to detect, when combining our FWHM and axis ratio limits and converting FWHM to r\({}_{h}\), is 0.19 pix, corresponding to 0.0076" for WFC3 and 0.0095" for ACS. We use our B\({}_{438}\)- and B\({}_{435}\)-band images to derive radii as they offer a better S/N than our U\({}_{336}\)-band images.
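Converting the fitted FWHM into a physical half-light radius requires only the pixel scale, the distance, and the factor of 1.48 quoted above. A small sketch follows (the function name is ours); the example in the comment is purely illustrative.

```python
ARCSEC_PER_RADIAN = 206265.0

def half_light_radius_pc(fwhm_pix, pix_scale_arcsec, distance_mpc):
    """ISHAPE King-profile FWHM (pixels) -> half-light radius in parsecs."""
    fwhm_arcsec = fwhm_pix * pix_scale_arcsec
    pc_per_arcsec = distance_mpc * 1e6 / ARCSEC_PER_RADIAN
    return 1.48 * fwhm_arcsec * pc_per_arcsec   # factor 1.48 from the ISHAPE manual

# e.g. a 1-pixel FWHM on WFC3 (0.04"/pix) at the distance of NGC 6872 (62.6 Mpc)
# corresponds to roughly 18 pc.
```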
Our cluster radii are shown in Figure 18, with interacting pairs grouped together. The minimum detectable radius for each system is shown as a vertical dashed line. The combined distribution for all our sources is shown as the bottom left plot. The distribution shows a peak at \(\approx\) 5.6 pc, with an extended tail up to \(>\) 100 pc. Objects at the tail end of the distribution are likely blended together, but are included for completeness. While our objects peak at a larger value than typical of Milky Way globular clusters (\(\sim\) 3.2 pc, Baumgardt & Hilker, 2018), objects of this size are seen in both the Milky Way and in extragalactic systems (Baumgardt & Hilker, 2018; Ryon et al., 2017; Chandar et al., 2016).
Note that ESO 376-28 is not included as it only contained one SCC, which was eliminated due to a low S/N in ISHAPE, and NGC 1487W's single SCC did not have a good fit. We address NGC 1487 in Section 4.6.
## 4 Results and Discussion
### Mass Function
A global mass function for all our systems is presented in Figure 19. We stack our measured masses together for all our SCCs and plot them with bins of constant number, at 20 per bin, with 235 SCCs total. The mass bins are modeled as a power law with the form \(dN/dM\propto M^{\beta}\). The slope, \(\beta\), is found with a linear fit to log \((dN/dM)\), giving \(\beta=-2.02\pm 0.15\). The lower mass cutoff for our fit is at log \(M\) = 4.6 M\({}_{\odot}\), where the mass function begins to turn over. This turnover is caused by incompleteness at the lower mass limit. While some studies have suggested the mass function follows the form of a Schechter function (Messa et al., 2018; Adamo et al., 2015; Bastian et al., 2012), we see no turnover at high masses, leading us to conclude a power law is a good fit for our data.
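The equal-number binning and power-law fit can be reproduced in a few lines of NumPy. The sketch below is our own illustration, assuming a flat array of cluster masses in M\({}_{\odot}\) and the log Mass = 4.6 lower cutoff quoted above.

```python
import numpy as np

def mass_function_slope(masses, per_bin=20, m_min=10**4.6):
    """Fit dN/dM ~ M^beta using bins containing equal numbers of clusters."""
    m = np.sort(np.asarray(masses, dtype=float))
    m = m[m >= m_min]                            # avoid the incompleteness turnover
    edges = m[::per_bin]
    if edges[-1] < m[-1]:
        edges = np.append(edges, m[-1])
    counts, edges = np.histogram(m, bins=edges)
    dndm = counts / np.diff(edges)
    centres = np.sqrt(edges[:-1] * edges[1:])    # geometric bin centres
    good = dndm > 0
    beta, _ = np.polyfit(np.log10(centres[good]), np.log10(dndm[good]), 1)
    return beta
```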
The value of \(\beta\) has been measured for many other systems as well, with values ranging from \(\approx-2.15\) to \(-1.85\). Our result of \(\beta\) = -2.02 \(\pm 0.15\) for our stacked distribution is consistent with these previous results. Varying the number of objects per bin finds values of \(\beta\) consistent with our stated value, with \(\beta=-2.14\pm 0.11\) and \(\beta=-2.08\pm 0.19\) for 10 and 30 objects per bin, respectively.
The low numbers of SCCs in each system means we cannot generate a useful mass function for each system. It is, however, useful to look at our two systems with the largest numbers of SCCs, AM1054-325 and NGC 6872. These contain 89 and 87 SCCs below 10 Myr, respectively. As these constitute more than half of the objects in our stacked mass function, it is possible that they have a heavy influence on the derived slope and form of the distribution. To ensure we are not affected by this, we also look at the mass function while excluding objects from AM1054-325 and NGC 6872. In Figure 19 we include these three cases to compare to our full, stacked function.
The peak of the function at log Mass = 4.6 M\({}_{\odot}\) is normalized to the stacked function, and vertically offset in steps of 0.75 dex to plot all functions on the same plot. We again plot data using bins of constant number, 10 for AM1054-325 and NGC 6872, and 7 for the excluded function. All cases are consistent with one another, and the stacked mass function as well, suggesting our more populous systems are not over-influencing the stacked mass function.
As a separate check, we fit our data to a power law using the IDL program mspecfit.pro, developed by Rosolowsky (2005). This uses a maximum-likelihood fitting technique to the cumulative distribution to find the slope of a power-law distribution, and factors in the individual errors in mass for each data point. It will perform a fit to a regular power law \(N(M^{\prime}>M)\propto M^{\beta}\) as well as determine the possibility of the cumulative distribution having the form of a truncated power law, given as \(N(M^{\prime}>M)\propto N_{0}[(M/M_{0})^{\beta}-1]\), where \(M_{0}\) is the cutoff mass and \(N_{0}\) is the number of objects more massive
than \(2^{1/\beta}M_{0}\). If \(N_{0}\lesssim 1\), then the distribution is best fit to a single power law. We fit the four datasets shown in Figure 19 and find that \(N_{0}=0.0\pm 1.8,1.0\pm 2.0,0.0\pm 0.8\), and \(0.0\pm 1.3\) for our stacked mass function, AM1054-325, NGC 6872, and our excluded function, respectively. This suggests our data are best fit with a power-law distribution. We thus plot the cumulative distribution functions of our data in Figure 20, along with the best fit values of \(\beta\), using a standard power-law function. We find our results are consistent with those of our binned data within a standard deviation.
### Cluster formation efficiency
Stars form in clustered fashion, as seen in the Milky Way and in galaxies beyond (Bressert et al., 2010; Whitmore et al., 2010; Lada and Lada, 2003; Clarke et al., 2000; Zepf et al., 1999). Observational studies have attempted to determine how many stars are born in bound clusters by measuring the amount of star formation occurring in clusters, compared to the local region (e.g. Adamo et al., 2015; Ryon et al., 2014; Goddard et al., 2010). The cluster formation efficiency (CFE) is defined as the ratio of the cluster formation rate and the star formation rate (both in units of M\({}_{\odot}\)/ yr), so that CFE = CFR / SFR. Difficulty arises in the definition of "bound" clusters, as internal and environmental processes are capable of disrupting and unbinding clusters (Krumholz et al., 2019; Portegies Zwart et al., 2010), requiring age limits for cluster analysis.
To match previous observations, we limit our SCC ages to 1 - 10 Myr. The SFR is measured from the UV flux of our _GALEX_ and _Swift_ images and converted to SFR using the following relation from Kennicutt and Evans (2012):
\[\log\,\mathrm{SFR}\,\left(\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\right)=\log L_ {x}-\log C_{x}, \tag{1}\]
where \(L_{x}\) is luminosity (ergs/s), and \(C_{x}\) is a calibration constant dependent on the observed wavelength (43.35 for _GALEX_ FUV and 43.17 for _Swift_ _uvm2_).
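Applying Equation 1 is a one-line conversion once the extinction-corrected UV luminosity is in hand; a minimal sketch with the two calibration constants quoted above follows (the dictionary keys and function name are ours, and the example value is illustrative only).

```python
import numpy as np

# log C_x values quoted above (Kennicutt & Evans 2012)
LOG_CX = {"galex_fuv": 43.35, "swift_uvm2": 43.17}

def uv_sfr(luminosity_erg_s, band="galex_fuv"):
    """SFR in Msun/yr from log SFR = log L_x - log C_x (Equation 1)."""
    return 10.0 ** (np.log10(luminosity_erg_s) - LOG_CX[band])

# e.g. uv_sfr(2e42, "galex_fuv") ~ 0.09 Msun/yr
```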
The CFR was found by summing the mass of all clusters with an age between 1 - 10 Myr, and dividing by the time interval of 9 Myr. Completeness will affect the total mass of clusters which can be detected; low-mass clusters will not be observed in our images. We find the approximate mass of a 10 Myr old cluster with M\({}_{V}\) = -8.5 (from our detection cutoff in Section 2.5) to be \(\approx 10^{4.3}\) M\({}_{\odot}\). We assume we are complete above this mass limit, and calculate the missing, or undetected, mass assuming the cluster mass function follows a power law with slope \(\beta\) = -2.0, extending from 100 M\({}_{\odot}\) to 10\({}^{7}\) M\({}_{\odot}\). We perform the same calculation with slopes of \(\beta\) = -2.15 and -1.85 for our upper and lower error bounds. For \(\beta\) = -2.0, -2.15, and -1.85, we find the total percentage of mass in clusters with masses greater than \(10^{4.3}\) M\({}_{\odot}\) to be 54%, 33%, and 74%, respectively. Note that a shallower slope implies we measure a more complete sample of our clusters.
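The quoted completeness-in-mass percentages follow directly from integrating the assumed mass function. The short check below, written by us under exactly the assumptions stated above (a pure power law from 100 M\({}_{\odot}\) to \(10^{7}\) M\({}_{\odot}\)), reproduces the 54%, 33%, and 74% figures.

```python
import numpy as np

def mass_fraction_above(m_lim, beta, m_min=1e2, m_max=1e7):
    """Fraction of total cluster mass above m_lim for dN/dM ~ M^beta."""
    p = beta + 2.0                     # exponent of the mass-weighted integral
    if np.isclose(p, 0.0):             # beta = -2 is the logarithmic special case
        return np.log(m_max / m_lim) / np.log(m_max / m_min)
    return (m_max**p - m_lim**p) / (m_max**p - m_min**p)

for beta in (-2.0, -2.15, -1.85):
    print(beta, round(mass_fraction_above(10**4.3, beta), 2))
# -> 0.54, 0.33, 0.74
```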
Our results are shown in Table 6. As NGC 1487W/E both do not contain clusters which fit our criterion, they do not have a corresponding CFE. The general trend is that systems
Figure 17: Ages and masses from Figures 3 - 14, compiled together. All systems except for NGC 1487 have at least one SCC with an age \(<\) 10 Myr, indicating recent cluster formation. Both NGC 3256W and NGC 3256E show a number of non-SCC objects at the interaction age, with large masses. Their high masses, indicating bright absolute magnitudes, along with poor fits to our SSP model (\(\chi^{2}>3\), Section 2.5) suggest these objects are foreground stars in the Milky Way. It is notable that these only appear in NGC 3256, which has the lowest Galactic latitude of our systems. They are not included in our SCC analysis.
Figure 19: Mass functions for all, stacked objects (blue), AM1054-325 (purple), NGC 6872 (green), and stacked (orange), but excluding AM1054-325 and NGC 6872. Data are normalized to the stacked mass function and vertically offset to include all curves on the same plot. The corresponding fit slopes are shown on the right. While there is some scatter in \(\beta\), all values are consistent with each other, within their uncertainties. Vertical dashed line indicates our cut-off mass at log Mass = 4.6.
Figure 20: Cumulative mass distribution for stacked objects (blue), AM1054-325 (purple), NGC 6872 (green), and stacked (orange), but excluding AM1054-325 and NGC 6872. Data are vertically offset to include all curves on the same plot. The corresponding fit slopes are shown on the right. Values of \(\beta\) shown here are consistent with those from Figure 19 to within a standard deviation. Horizontal gray bars indicate 1\(\sigma\) error bars for the individual mass points. Vertical dashed line indicates our cut-off mass at log Mass = 4.6.
with a higher SFR density (\(\Sigma_{\rm SFR}\)) show more efficient cluster formation. Data are plotted in Figure 21. We plot our data against a theoretical CFE from Kruijssen (2012) for comparison, with dotted lines indicating a variation factor of 2. This model predicts cluster formation to follow the surface gas density (\(\Sigma_{\rm gas}\)), and therefore \(\Sigma_{\rm SFR}\) via the Kennicutt-Schmidt law (Kennicutt, 1998). The CFE tails off at high \(\Sigma_{\rm SFR}\). At the high gas densities implied by high \(\Sigma_{\rm SFR}\), tidal interactions between neighboring GMCs can impede cluster formation.
While the Kennicutt-Schmidt law assumes the relation between \(\Sigma_{\rm gas}\) and \(\Sigma_{\rm SFR}\) remains the same at all scales, this may not be the case. When studying star formation at sub-kpc scales, Bigiel et al. (2008) found a decrease in \(\Sigma_{\rm SFR}\) at low values of \(\Sigma_{\rm gas}\). This decrease is correlated with saturation of H I in the total gas content of the region (H I + H\(_2\)). This effect has been parameterized by Johnson et al. (2016), using a broken power law, to predict \(\Sigma_{\rm SFR}\) based on \(\Sigma_{\rm gas}\). We include the CFE as a function of \(\Sigma_{\rm SFR}\) based on this relation, again using the formulation from Kruijssen (2012), in Figure 21 as a dashed line. The broken power-law formula manifests as a flattening of the curve at -2.3 in log \(\Sigma_{\rm SFR}\), corresponding to the gas environment being dominated by H I over H\(_2\). Upper and lower limits provided by Johnson et al. (2016), derived from the Bigiel et al. (2008) data, are plotted as dot-dash lines.
We find a good agreement with the Bigiel et al. (2008) curve, with only NGC 3256W and NGC 1614N falling more than 1\(\sigma\) below the lower limit. Notably, our data fall on the flattened part of the CFE curve, corresponding to an H I-dominated environment, suggesting the star-forming regions in our tails are primarily composed of H I rather than H\(_2\).
We note that the relation between the CFE and \(\Sigma_{\rm SFR}\) was designed for a "typical" spiral galaxy, and should be used as a rough estimate (Kruijssen, 2012). In Figure 21, we include data from previous surveys (Adamo et al., 2015; Lim & Lee, 2015; Ryon et al., 2014; Adamo et al., 2011; Annibali et al., 2011; Silva-Villa & Larsen, 2011; Goddard et al., 2010) as compiled by Adamo et al. (2015). Our data extend to smaller \(\Sigma_{\rm SFR}\) values than previously measured.
The dependence of the CFE on \(\Sigma_{\rm SFR}\) has been questioned by Chandar et al. (2017), who studied several systems and found an average CFE of 24%, independent of \(\Sigma_{\rm SFR}\). This constant value is indicated in Figure 21 as a red horizontal line. They suggested that the variability seen in the CFE by other authors was caused by using different cluster age intervals for different galaxies in determination of the CFE. The result was a biased sampling of the CFE using a cluster age range of 0 - 10 Myr for systems with high \(\Sigma_{\rm SFR}\), and 10 - 100 Myr for systems with a low \(\Sigma_{\rm SFR}\). Indeed, we note that of the data points in Figure 21, Adamo et al. (2015); Lim & Lee (2015); Annibali et al. (2011); Adamo et al. (2011) use objects with ages of 1 - 10 Myr, while the others use a range of ages from 1 - 10, 10 - 100, and 1 - 100 Myr in their CFE determinations. Our use of 1 - 10 Myr matches the majority of the listed studies. Independent of other studies, however, we find that our sample seems to match the theoretical model, and shows a trend of decreasing CFE with decreasing \(\Sigma_{\rm SFR}\).
### Spatial distribution
SCC age and mass are plotted against distance from the center of the interacting system in Figures 3 - 14. The central point of each system is obtained as coordinates from SIMBAD. The median distance is marked with a dashed vertical line. We perform KS tests on the age and mass distributions for our SCCs as separated by the median distance to the center, with results shown in Table 7. We find statistically significant results beyond 2\(\sigma\) in only two tails, NGC 1614S, and AM1054-325. NGC 1614S shows significant results in both its age and mass distributions, with \(p\)-values of 0.0011 and 0.0052 for its age and mass distributions, respectively. From Figure 4 we see there is a clump of young objects near the base of the tail, with masses \(\approx 10^{5}\) M\({}_{\odot}\). KS results for its companion tail, NGC 1614N, produce \(p\)-values of 0.18 and 0.06 for the age and mass distributions, respectively. We do not claim either of these results for NGC 1614N to be significant. AM1054-325 shows significant results in the mass distribution, but not in age.
For the majority of our sample, we see that the general trend shows a relatively even distribution of ages through the tidal debris; young objects are not concentrated in any particular region, but are found throughout the tidal tails.
Figure 21: Cluster formation efficiency (CFE) plotted against the log star formation rate density. Included are several similar measurements gathered in Adamo et al. (2015), as well as a theoretical curve from Kruijssen (2012) in solid black. Dotted lines indicate a factor of 2 variation in the theoretical model. We include a modified version of the Kruijssen (2012) curve, using the \(\Sigma_{\rm gas}\) vs \(\Sigma_{\rm SFR}\) relation from Bigiel et al. (2008), as a dashed line. The dot-dash lines indicate upper and lower limits on this relation as defined by Johnson et al. (2016). The red solid line corresponds to the 24% value of CFE from Chandar et al. (2017).
### Dynamical age
The dynamical age of a cluster (\(\Pi\)), introduced by Gieles and Portegies Zwart (2011), offers a method for estimating if a cluster is gravitationally bound at the current time. It is defined as the ratio of the age of a cluster to the crossing time of the cluster (\(T_{\rm cr}\)):
\[\Pi\equiv\frac{t_{\rm cluster}}{T_{\rm cr}}. \tag{2}\]
\(T_{\rm cr}\) is defined as:
\[T_{\rm cr}\left({\rm s}\right)\equiv 10\left(\frac{r_{\rm h}^{3}}{GM}\right)^{1/2}, \tag{3}\]
where \(G\) is the gravitational constant, \(M\) is the mass of the cluster, and \(r_{\rm h}\) is the half-light radius of the cluster. A cluster is said to be gravitationally bound if \(\Pi\geq 1\). At this dynamical age, the stars in the cluster have evolved for longer than a crossing time, and as such are not likely to escape from the cluster, meaning the cluster is gravitationally bound. Objects with \(\Pi<1\) are not necessarily unbound, but as the stars have not yet evolved for longer than a crossing time, they still have time to escape before then, and we cannot determine if they are bound to the cluster. Unbound associations will expand with time, causing \(T_{\rm cr}\) to increase as well, and remain at or below \(\Pi=1\). Bound objects, on the other hand, will remain bound and compact with time, causing \(\Pi\) to increase with time. Data for our SCCs are shown in Figure 22.
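Equations 2 and 3 are straightforward to evaluate with astropy units; the sketch below is our own, and the example values in the comment are illustrative only, not measurements from our catalogue.

```python
import astropy.units as u
from astropy.constants import G

def crossing_time(r_h, mass):
    """T_cr = 10 (r_h^3 / (G M))^(1/2), Equation 3."""
    return (10.0 * (r_h**3 / (G * mass))**0.5).to(u.Myr)

def dynamical_age(age, r_h, mass):
    """Pi = age / T_cr (Equation 2); Pi >= 1 suggests a bound cluster."""
    return float(age / crossing_time(r_h, mass))

# e.g. a 100 Myr old, 1e5 Msun cluster with r_h = 5.6 pc:
# dynamical_age(100 * u.Myr, 5.6 * u.pc, 1e5 * u.Msun) ~ 16, i.e. bound
```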
The determination of a cluster as bound or unbound remains an estimate, as several factors will influence the evolution of a cluster. If the natal gas of a cluster is expelled too quickly, on a timescale comparable to the crossing time, the cluster is more susceptible to disruption, for a given star formation efficiency (Baumgardt and Kroupa, 2007; Hills, 1980). Similarly, it is not possible for us to determine if a cluster, even with \(\Pi>1\), will remain bound throughout the lifetime of the merger, as this calculation does not take into account external forces. Clusters which pass nearby GMCs or through the disk or nucleus of the merging galaxies are subject to gravitational tidal forces which can disrupt them (e.g. Krumholz et al., 2019; Kim et al., 2018; Tacconi et al., 2008; Spitzer, 1958). However, as these objects exist in the diffuse regions of tidal debris, their chances of survival are increased as they will not experience the strong gravitational forces seen in the nuclear regions.
### Age and mass as related to half-light radii
There is debate on the relation between cluster age and mass on their physical sizes. The initial half-light radius of a cluster is predicted, from an analytic model, to depend on the mass and gas surface density (Choksi and Kruijssen, 2021). As clusters evolve, mass loss and stellar interactions within the cluster will cause its expansion (Portegies Zwart et al., 2010). Gieles et al. (2011) suggest two stages of radii evolution for clusters in tidal fields: expansion, driven by mass loss, followed by contraction. They find that two-thirds of the Milky Way's globular clusters are in the expansion phase.
Several studies of extragalactic clusters have found a slight dependence of age on radius (Ryon et al., 2017; Bastian et al., 2012; Lee et al., 2005), while others have found none (Brown
Table 6: SFRs and CFE for our sample. CFE is determined by comparing the mass of clusters with ages below 10 Myr to the SFR within their respective tidal tail. The SFR is found using _GALEX_ and _Swift_ UV data, converted to a SFR (Kennicutt and Evans, 2012).

| System | SFR (\(10^{-2}\) M\({}_{\odot}\) yr\(^{-1}\)) | SFR density (\(10^{-3}\) M\({}_{\odot}\) yr\(^{-1}\) kpc\(^{-2}\)) | CFE (%) | A\({}_{\rm in}\) (kpc\(^{2}\)) | \(\Sigma_{\rm SCC}\) (kpc\(^{-2}\)) |
| --- | --- | --- | --- | --- | --- |
| ESO 376-28 | \(0.55\pm 0.25\) | \(0.062\pm 0.028\) | \(2.1^{+1.3}_{-0.6}\) | 88.9 | \(-0.015\pm 0.013\) |
| NGC 2993 | \(2.31\pm 0.15\) | \(0.129\pm 0.008\) | \(1.11^{+0.70}_{-0.30}\) | 179.1 | \(0.050\pm 0.017\) |
| NGC 2992 | \(3.75\pm 0.21\) | \(0.163\pm 0.009\) | \(3.5^{+2.3}_{-1.0}\) | 229.5 | \(0.070\pm 0.022\) |
| NGC 3256E | \(6.30\pm 0.12\) | \(0.256\pm 0.005\) | \(2.0^{+1.2}_{-0.5}\) | 246.0 | \(0.034\pm 0.014\) |
| NGC 3256W | \(6.42\pm 0.11\) | \(0.321\pm 0.006\) | \(0.17^{+0.11}_{-0.05}\) | 199.9 | \(0.103\pm 0.025\) |
| NGC 1487E | \(0.428\pm 0.030\) | \(0.345\pm 0.024\) | N/A | 12.4 | 0 |
| NGC 1487W | \(0.493\pm 0.033\) | \(0.360\pm 0.024\) | N/A | 13.7 | \(0.073\pm 0.073\) |
| NGC 1614S | \(17.45\pm 0.40\) | \(0.532\pm 0.012\) | \(5.0^{+3.2}_{-1.4}\) | 327.8 | \(0.078\pm 0.019\) |
| NGC 6872 | \(65.4\pm 3.0\) | \(0.734\pm 0.033\) | \(5.7^{+3.7}_{-1.6}\) | 890.3 | \(0.15\pm 0.016\) |
| MCG-03-13-063 | \(0.98\pm 0.33\) | \(0.77\pm 0.26\) | \(6.6^{+4.2}_{-1.8}\) | 12.8 | \(0.76\pm 0.25\) |
| NGC 1614N | \(48.68\pm 0.14\) | \(1.598\pm 0.005\) | \(0.66^{+0.42}_{-0.18}\) | 304.6 | \(0.046\pm 0.017\) |
| AM1054-325 | \(33.0\pm 3.1\) | \(2.55\pm 0.24\) | \(6.9^{+4.4}_{-1.9}\) | 129.5 | \(1.016\pm 0.090\) |
Table 7: KS results for age and mass distributions of SCCs in tails, between objects interior and exterior to the median distance to the centre of the system.

| System | KS (Age) | KS (Mass) |
| --- | --- | --- |
| NGC 1614N | 0.18 | 0.06 |
| NGC 1614S | 0.0011 | 0.0052 |
| AM1054-325 | 0.92 | 0.018 |
| ESO 376-28 | N/A | N/A |
| NGC 2992 | 0.27 | 0.56 |
| NGC 2993 | 0.96 | 0.96 |
| MCG-03-13-063 | 0.44 | 0.99 |
| NGC 6872 | 0.68 | 0.89 |
| NGC 3256E | 0.85 | 0.45 |
| NGC 3256W | 0.91 | 0.66 |
| NGC 1487E | N/A | N/A |
| NGC 1487W | N/A | N/A |
& Gnedin, 2021; Scheepmaker et al., 2007; Larsen, 2004). There is a similar debate on the relation between mass and radius, and it is not clear that observational studies have found a relationship between these cluster properties (Brown & Gnedin, 2021; Ryon et al., 2017). There is considerable scatter in the distribution of cluster radius, hiding any clear signals.
We plot the ages and masses of our SCCs against their radii in Figure 23. We include both objects which are determined to be gravitationally bound and those which are not. We again find considerable scatter in our radii distribution, complicating any clear conclusions. On the left side of Figure 23 we compare the ages of our SCCs to their radii. There are only a small number of objects at ages \(<10\) Myr, as a result of our selection criteria, namely that we look for isolated objects. Star clusters are formed in clustered fashion (Grasha et al., 2015; Gouliermis et al., 2015; Bastian et al., 2009; Gieles et al., 2008), meaning our young SCCs will be in crowded regions which have been removed from our ISHAPE catalogue. This effect is seen in the fact that the young, unbound objects have large, extended radii, indicating blending. Thus, while Figure 23 suggests that cluster radius increases with age, we have a small and biased sample at ages around 10 Myr.
Looking at the right panel of Figure 23 we see the relation between our masses and radii. We again see a large amount of scatter in our data, with no clear trend. Recent work by Brown & Gnedin (2021) looked at the cluster radii of 31 galaxies from the Legacy Extragalactic UV Survey (LEGUS). LEGUS galaxies are nearby (\(<12\) Mpc) spiral and irregular galaxies, with few interacting systems and no major mergers. Their work finds a power-law relation between mass and radius extending up to \(10^{5}\) M\({}_{\odot}\). Above this mass limit, the relation appears to flatten, though this may be the result of low numbers of clusters with these large masses. We overplot their fit in Figure 23, continuing their trend to higher masses. We find our data are consistent with their fit, though there is large scatter.
### NGC 1487 Objects
NGC 1487 stands out from the rest of the mergers in our sample: it is three times closer than our next closest galaxy, there is only one SCC between the two tidal tails, and it has been classified as both a merger between two disk galaxies (Aguero & Paolantonio, 1997) and a merger between dwarf galaxies (Buzzo et al., 2021; Bergvall et al., 2003). Despite the lack of SCCs, visual examination of the tails shows an abundance of objects within the debris. This suggests the debris host faint, low-mass objects which belong to the merging system, but are not luminous, high-mass clusters with \(M_{V}<-8.6\). The absence of high-mass objects may arise if this is indeed a merger between dwarf galaxies. Massive star clusters and high SFRs require high gas pressure (Maji et al., 2017; Zubovas
Figure 22: Dynamical ages of SCCs, following the prescription set by Gieles & Portegies Zwart (2011). The vertical dashed line marks the limit for gravitationally bound objects; those to the right of the line are gravitationally bound, while those to the left are unbound. Gray boxes indicate counts per bin for each system as a whole, while coloured lines represent individual tails (where applicable).
Lahen et al. (2019) were able to simulate a merger between two equal-sized dwarf galaxies, which produced clusters with masses \(\geq 10^{5}\) M\({}_{\odot}\). Pressures in these clusters were found to be \(\sim 10^{7}\,k\) K cm\({}^{-3}\), smaller than the \(10^{8}-10^{12}\,k\) K cm\({}^{-3}\) values seen in simulations of major mergers (Maji et al., 2017).
To consider this scenario of low-mass cluster formation, we construct a stacked mass function with objects from both tails. We eliminate our magnitude limit, but still require that sources are fit to our SSP models with \(\chi^{2}\leq 3\) and V\({}_{606}\) - I\({}_{814}\) < 1.43. As before, we only include objects with an age \(\leq\) 10 Myr; results are shown in Figure 24 for the 58 objects which meet our requirements. We fit our data down to the mass turnover at log Mass = 3.1, and find a slope of \(\beta=-2.06\pm 0.31\). This is consistent with our results at higher masses for our SCCs, where the completeness limit only allowed a fit down to log Mass = 4.6.
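As an illustration of a binned power-law fit of this kind, the sketch below recovers a slope \(\beta\) from synthetic masses; the mass array, the binning, and the lack of error weighting are simplifications and do not reproduce our actual fitting procedure.

```python
import numpy as np

# Hypothetical cluster masses in solar masses (log Mass >= 3.1, as in the text).
masses = 10 ** np.random.default_rng(0).uniform(3.1, 5.5, size=58)

# Logarithmic bins in mass; counts per bin converted to dN/dM.
edges_log = np.linspace(3.1, 5.5, 9)
counts, _ = np.histogram(np.log10(masses), bins=edges_log)
centres_log = 0.5 * (edges_log[:-1] + edges_log[1:])
bin_width_linear = np.diff(10 ** edges_log)

# For dN/dM proportional to M^beta, log(dN/dM) is linear in log(M) with slope beta.
mask = counts > 0
dn_dm = counts[mask] / bin_width_linear[mask]
beta, intercept = np.polyfit(centres_log[mask], np.log10(dn_dm), 1)
print(f"fitted slope beta = {beta:.2f}")
```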
Our SED modelling assumes a continuously populated IMF, which is a reasonable assumption for our clusters with masses \(>10^{4}\) M\({}_{\odot}\). Below this mass limit, the stochastic sampling of the stellar IMF can affect photometric measurements (Larsen, 2011). With so few stars, a cluster may host only a handful of supergiants, or none at all. In the absence of supergiants the overall colour of the cluster becomes bluer, causing us to underestimate the age; the inferred mass is then underestimated as well, since younger clusters are more luminous than older ones. This can result in a bi-modality of cluster colours (Popescu and Hanson, 2010; Silva-Villa and Larsen, 2011). Despite this, the effect on the slope of the mass function \(\beta\) is small (Fouesneau et al., 2012), and eliminating sources with poor fits to models further reduces it (Fouesneau and Lancon, 2010).
We plot the cumulative fraction of objects in Figure 25, again using the mspecfit.pro code to search for a truncated power law. We find for NGC 1487 a value of \(N_{0}=3.7\pm 3.6\), giving marginal significance (\(\approx 1\sigma\)) for a truncated power law. We plot both a standard power law and a truncated power law to our cumulative mass distribution in Figure 25.
The similarities in the slopes of our mass functions for NGC 1487 and for the stacked systems (see Figures 19 and 24, and Figures 20 and 25) imply we are observing the low-mass end of cluster formation. We suggest that the pressure in NGC 1487 is too low to reach the threshold for massive clusters, as it is the result of a merger between dwarf galaxies, and not a major merger.
We look at the cluster radii of these objects as well, using ISHAPE. Neither tail in NGC 1487 contains enough foreground stars to produce a PSF image; we generate a PSF for each of the five B\({}_{\rm 238}\)-band images in our sample (NGC 3256W, NGC 3256E, NGC 6872, AM1054-325, and MCG-03-13-063), and run ISHAPE five separate times, using each PSF once. Our final derived \(r_{\rm h}\) value is the mean value from these runs. We find the dynamical ages for these objects as well, as in Section 4.4. Results are shown in Figure 26.
Figure 23: Half-light radii for our sources, as a function of age (_Left_) and mass (_Right_). Filled circles represent bound objects, while diamonds are unbound objects, using the definition set in Section 4.4. On the right, we include the mass-radius relation from Brown and Gnedin (2021). Our data are consistent with their model, but there is a large amount of scatter in our data.
Figure 24: Mass function for sources in NGC 1487E and NGC 1487W. The dashed vertical line indicates our mass cut at log Mass = 3.1.
Figure 25: Cumulative fraction of sources in NGC 1487E and NGC 1487W. We overplot a power law function as a solid black line, and a truncated power law as a dotted line.
All objects except for one in these tails are gravitationally bound, following Portegies Zwart et al. (2010). Figure 27 shows the age and mass of these objects plotted against their radii. We again include the relation of Brown and Gnedin (2021) between mass and radius. Our data points show similar scatter to our SCCs in Figure 23.
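For orientation, the boundedness criterion can be sketched as below, using the dynamical age \(\Pi=\) age\(/t_{\rm cross}\) with \(t_{\rm cross}\approx 10\,(r_{\rm h}^{3}/GM)^{1/2}\) following Gieles & Portegies Zwart (2011); the numerical example and the unit handling are our own and should be checked against the original prescription.

```python
import numpy as np

G = 4.302e-3               # gravitational constant in pc (km/s)^2 / Msun
PC_PER_KMS_TO_MYR = 0.978  # 1 pc / (km/s) expressed in Myr

def dynamical_age(age_myr, mass_msun, r_h_pc):
    """Pi = age / t_cross; objects with Pi > 1 are taken to be gravitationally bound."""
    t_cross_myr = 10.0 * np.sqrt(r_h_pc**3 / (G * mass_msun)) * PC_PER_KMS_TO_MYR
    return age_myr / t_cross_myr

# Hypothetical SCC: 50 Myr old, 1e5 Msun, half-light radius 3 pc.
pi = dynamical_age(50.0, 1.0e5, 3.0)
print(f"Pi = {pi:.1f} -> {'bound' if pi > 1 else 'unbound'}")
```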
Objects in NGC 1487 are more compact than in our other tails. The median half-light radius for bound sources in NGC 1487 is 3.03 pc, compared to 6.78 pc for all our other systems. We do not see the large, 10 pc clusters that exist in NGC 3256 or NGC 2992/3.
## 5 Summary
We have analyzed 425 SCCs in 12 tidal tails, across seven merging systems. We summarize our findings as follows:
1. Many objects in tidal tails show signs of line emission in colour-colour diagrams, indicating young ages \(<10\) Myr. Clusters at these ages have strong H\(\alpha\) emission, which falls in our V\({}_{606}\)-band filter and shifts their position in the colour-colour diagrams. These colours indicate current, ongoing star formation in the tidal debris. The age and mass distributions (Figure 17) suggest that previous star formation episodes produced many more SCCs, as evidenced by the high-mass objects seen at older ages.
2. The mass function of our SCCs is consistent in shape with that of YMCs in other systems. Conventionally, the mass function of YMCs takes the form of a power law with a slope of \(\beta\approx-2.0\). Other studies have found evidence of a high-mass cut-off, suggesting the mass function follows a Schechter function instead. We do not find evidence for such a mass cut-off, and find our data are well fit by a power law with slope \(\beta=-2.02\pm 0.15\) using binned data, and \(\beta=-2.16\pm 0.09\) for a cumulative distribution fit.
3. The CFE in tidal tails increases as the SFR density increases. We use _GALEX_ and _Swift_ UV imaging to determine the SFR in our tails. When compared to previous observations and theoretical predictions using a reformulation of the Kennicutt-Schmidt law from Bigiel et al. (2008) and Johnson et al. (2016), we find good agreement, implying the gas in the tidal debris is primarily H i and that the CFE depends on the local environment. Our data push this link to lower SFR densities than previously observed for cluster formation.
4. Little dependence on galactic radii is seen for ages or masses of SCCs. Our KS tests reveal only NGC 1614S has significant differences in age or mass distributions with regard to galactocentric distance, while AM1054-325 shows significance in mass and distance only. Our other systems show that young clusters are distributed throughout the tails.
5. Cluster radii of gravitationally bound objects, as determined using calculations from Portegies Zwart et al. (2010), fall in the range of 2 - 32 pc, with a median value of 7 pc. We do not see a relation between age and radius, or mass and radius. Work by Brown and Gnedin (2021) suggests cluster radius increases with mass; we include their power-law determination, which is consistent with our data.
6. Low-mass objects in NGC 1487, which fall below our magnitude limit of M\({}_{V}\)\(<\) -8.5, show a mass function with a slope of \(\beta=-2.00\pm 0.28\) using binned data. We find minimal significance (\(\approx 1\sigma\)) for a truncated power law with slope \(\beta=-2.07\pm 0.22\), and a slope of \(\beta=-2.37\pm 0.18\) for a pure power law, using a cumulative distribution fit. Though the uncertainties are large, these values are consistent with our stacked SCC mass function, suggesting cluster formation is consistent down to low masses.
## 6 Acknowledgments
We would like to thank the anonymous referee for helpful comments which have improved the quality and content of this paper. This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-7466, GO-10592, GO-11134, GO-14066, GO-14937, and GO-15083. Support for this work was provided by grant HST-GO-14937.002-A and HST-GO-15083.001-A. The Digitized Sky Survey was produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present compressed digital
Figure 27: Same as Figure 23, but for combined sources in NGC 1487E/W. We again include the mass-radius relation from Brown and Gnedin (2021).
Figure 26: Half-light radii (_left_) and dynamical ages (_right_) for sources in NGC 1487W and NGC 1487E.
form with the permission of these institutions. This research is based on observations made with _GALEX_, obtained from the MAST data archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
## 7 Data Availability
_HST_, _GALEX_, and _Swift_ data are publicly available through the MAST portal at [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html). The derived data generated in this research will be shared on reasonable request to the corresponding author.
2309.09736 | An Optimization Case Study for solving a Transport Robot Scheduling Problem on Quantum-Hybrid and Quantum-Inspired Hardware | We present a comprehensive case study comparing the performance of D-Waves' quantum-classical hybrid framework, Fujitsu's quantum-inspired digital annealer, and Gurobi's state-of-the-art classical solver in solving a transport robot scheduling problem. This problem originates from an industrially relevant real-world scenario. We provide three different models for our problem following different design philosophies. In our benchmark, we focus on the solution quality and end-to-end runtime of different model and solver combinations. We find promising results for the digital annealer and some opportunities for the hybrid quantum annealer in direct comparison with Gurobi. Our study provides insights into the workflow for solving an application-oriented optimization problem with different strategies, and can be useful for evaluating the strengths and weaknesses of different approaches. | Dominik Leib, Tobias Seidel, Sven Jäger, Raoul Heese, Caitlin Isobel Jones, Abhishek Awasthi, Astrid Niederle, Michael Bortz | 2023-09-18T13:00:09Z | http://arxiv.org/abs/2309.09736v4 |

# An Optimization Case Study for solving a Transport Robot Scheduling Problem on Quantum-Hybrid and Quantum-Inspired Hardware
###### Abstract
We present a comprehensive case study comparing the performance of D-Waves' quantum-classical hybrid framework, Fujitsu's quantum-inspired digital annealer, and Gurobi's state-of-the-art classical solver in solving a transport robot scheduling problem. This problem originates from an industrially relevant real-world scenario. We provide three different models for our problem following different design philosophies. In our benchmark, we focus on the solution quality and end-to-end runtime of different model and solver combinations. We find promising results for the digital annealer and some opportunities for the hybrid quantum annealer in direct comparison with Gurobi. Our study provides insights into the workflow for solving an application-oriented optimization problem with different strategies, and can be useful for evaluating the strengths and weaknesses of different approaches.
**Keywords--** Optimization, Quantum Computing, Robot Scheduling
## 1 Introduction
Quantum computing (QC) is a field that has witnessed a rapid increase in interest and development over the past few decades since it was theoretically shown that quantum computers can provide an exponential speedup for certain tasks (Deutsch, Jozsa, 1992; Grover, 1996; Shor, 1994). Translating this potential into a practically relevant quantum advantage, however, has proven to be a very challenging endeavor. Nevertheless, the emerging field is considered to have a highly disruptive potential for many domains, for example in machine learning (Schuld, Sinayskiy, Petruccione, 2015), chemical simulations (Cao _et al._, 2019) and optimization (Li _et al._, 2020), the domain of this work. Due to the fact that optimization problems are of utmost importance also for industrial applications, we investigated a potential advantage of quantum and quantum-inspired technology for the so-called _transport robot scheduling problem_ (TRSP), a real-world use-case in optimization that is derived from an industrial application of an automatized robot in a high-throughput laboratory. The optimization task is to plan a time-efficient schedule for the robot's movements as it transports chemical samples between a rack and multiple machines to conduct experiments. This is an NP-hard problem which for certain instances can be challenging to solve using
classical computing techniques, and hence is an attractive candidate to search for an advantage with non-classical techniques.
In our study, we compared the solution quality and runtime of different solvers on a large set of instances of the problem. As solvers, we considered D-Wave's hybrid _Leap_ framework (LBQM) that makes use of the D-Wave quantum annealer (D-Wave Systems Inc. 2020), Fujitsu's digital annealer (FDA) (Nakayama _et al._, 2021), Fujitsu's digital annealer hybrid framework (FDAh), as well as the industry-grade Gurobi solver (Gurobi Optimization, LLC 2023). As a key element of this work, we provide three different models for the TRSP that follow different design philosophies. This is justified by the different ways in which the problem task can be modelled and the inherent differences in the problem formulations that the addressed solvers can accept. LBQM, FDA and FDAh are restricted to a formulation as a quadratic unconstrained binary optimization (QUBO), whereas a mixed integer program (MIP) with integer and float variables can be used by Gurobi, which makes a comparison of multiple formulations meaningful.
The TRSP considered in this paper is a special combination of different scheduling problems that, to our knowledge, has not been considered before. Scheduling problems have been studied intensively for several decades and classical algorithms exist for numerous variants (Brucker, 2007; Pinedo, 2016). Since most of the industry-relevant scheduling problems are NP-hard, these classical algorithms mainly consist of meta-heuristics or use general-purpose MIP solvers, which basically solve the problem using a branch and bound approach with several additional improvements like cutting planes. In addition to classical algorithmic developments, a considerable amount of research has also been done in hardware-based parallel computing, especially in general purpose computation on graphics processing unit (GPGPU) parallelization (Chakroun _et al._, 2013; Awasthi _et al._, 2016). The problem discussed in this work is an extension of the typical job shop scheduling problem (JSSP), where the inclusion of a robot adds additional restrictions. More specifically, the studied scheduling problem falls into the category of robotic cell scheduling and automated guided vehicles (AGV) scheduling problems. Most work on robotic cell scheduling deals with infinite cyclic schedules (Dawande _et al._, 2007). This comprises polynomial-time algorithms and hardness results (Steiner, Xue, 2005), MIP techniques (Phillips, Unger, 1976; Brucker, Burke, Groenemeyer, 2012; Feng, Che, 2013) and heuristic approaches (Liu, Kozan, 2017). Many efficiently solvable and hard special cases have been identified (Shabtay, Arviv, 2016) and heuristics have been proposed for some of the hard cases (Stern, Vitner, 1990). Those problems differ from our use case in one way or another. The problems considered by the above-cited papers, unlike our use case, allow jobs to wait at a machine after their completion before being picked up by the robot. Robotic cell scheduling problems without this possibility have been studied by (Agnetis, 2000; Agnetis, Pacciarelli, 2000), whose problems differ from ours, among others, in the considered objective function. Our objective function, the total job completion time, has been extensively studied for flow shop scheduling problems without a robot (Pinedo, 2016; Hall, Sriskandarajah, 1996; Allahverdi, 2016; Rock, 1984), the latter of which shows that the no-wait variant is strongly NP-hard on two machines. Apart from the no-wait constraint, the problem considered in our work is characterized by the fact that jobs have to go to the last machine several times. Such settings are known as re-entrant flow shops, for which (Jing, Huang, Tang, 2011) developed a heuristic algorithm.
We are mainly interested in the performance of non-standard solution approaches using quantum or quantum-inspired solvers in this study. Because these solvers rely on heuristics, benchmarks for real-world applications are a highly relevant research topic. Most quantum optimization approaches fall into two major groups, one for gate-based hardware and one for annealing-based hardware (Alexeev _et al._, 2021). The majority of gate-based approaches to optimization use parameterized gates to find the ground state of a Hamiltonian related to the cost function of the optimization problem in a quantum-classical hybrid fashion, for example via the quantum approximate optimization algorithm (QAOA) (Farhi, Goldstone, Gutmann, 2014; Blekos _et al._, 2023).
Approaches based on quantum annealing also seek to find the ground state of a Hamiltonian, but by aiming for an adiabatic change from an initial state that can be easily prepared. In contrast to actual quantum computing devices, other classical software and hardware components are merely _inspired_ by quantum computing, for example FDA (Aramon _et al._, 2019) and Toshiba's Simulated Bifurcation Machine (TSB) (Tatsumura, Dixon, Goto, 2019). Typically, optimization tasks for quantum solvers and
the aforementioned quantum-inspired technologies are modeled as QUBO problems (Kochenberger _et al._, 2014). An in-depth comparison of pure QUBO problems on four quantum and quantum-inspired solvers can be found in (Oshiyama, Ohzeki, 2022). In their work, the authors compare the solutions of a library of quadratic benchmark problems on the D-Wave quantum annealer, FDA, and TSB against each other.
QC has already been successfully used for optimization in various fields. For example, in (Mizuno, Komatsuzaki, 2023), chemical reaction networks are optimized with quantum computing. In (Streif _et al._, 2021), it is shown that using the QAOA, it is possible to beat some classical heuristic algorithms on the binary paint shop problem. However, some work has shown that the current circuit model algorithms do not always reach the convergence required for a good solution (Awasthi _et al._, 2023). Quantum annealing has proven to offer some advantage against the classical simulated annealing algorithm for a spin-glass problem, using D-Wave hardware (Raymond _et al._, 2023), but this is not conclusive evidence. In one of the more recent works on quantum annealing (Schuetz _et al._, 2022), the authors suggest a nature-inspired hybrid quantum algorithm for robot trajectory optimization for PVC sealing in a real industrial setting. In (Ebadi _et al._, 2022), the authors present a solution to the maximum independent set (MIS) problem using a Rydberg atom device, along with a claim of a possible super-linear quantum speed-up against classical simulated annealing. Other classical algorithms might still be superior to a quantum approach on current devices (Albash, Lidar, 2018). Several works consider scheduling problems (Yarkoni _et al._, 2021; Carugno, Ferrari Dacrema, Cremonesi, 2022; Tomasiewicz _et al._, 2020). In (Geitz _et al._, 2022), an AGV transportation problem using different classical and quantum approaches is studied and (Ikeda, Nakamura, Humble, 2019) investigates a nurse scheduling problem using a quantum annealer.
The remaining manuscript is structured as follows. We provide a detailed description of the TRSP and its mathematical modeling in Section 2. In Section 3, we describe the design of our numerical study and list the problem instances and solvers that we use. The results of this study are presented in Section 4. Finally, we conclude our study in Section 5. Detailed model descriptions, solver information, further information on the benchmark setup and instance lists are contained in the supplementary material (referenced by a preceding "S" to the label it is referring to).
## 2 Transport Robot Scheduling Problem
In this section, we present a detailed explanation of the TRSP, which is a real-world use case derived from one of BASF's high-throughput laboratories. This optimization problem is about finding the most time-efficient route of a transport robot tasked with moving chemical samples from one processing machine to another. In the following, we first provide a general description of the problem setup and then present different modeling approaches. These models build the foundation of the subsequent benchmarks.
### Problem Description
The laboratory we are modeling consists of a _sample rack_ and three different processing machines, namely a _water mixer_, a _sample shaker_ and a _photo booth_, together with the _robot_ itself, which is tasked with carrying chemical samples from one place to another with the goal of conducting chemical experiments. Only the experimental plan (i. e., how each sample has to be processed in the laboratory) is predefined in advance, but not the specific order of the experiments. Initially, a certain number of samples is stored on the rack. Each of these samples needs to be first taken to the water mixer, then to the sample shaker. Once the sample shaking is completed, one or more photos have to be taken of each sample at the photo booth. Consecutive photos need to be taken after specific (i. e., predefined) time intervals, where the first photo of each sample has to be taken immediately after the shaking process. Finally, each sample has to be brought back to the rack. The processing times for different samples on the same machine can be different as specified by the experimental plan. We assume that each machine can only hold (and process) one single sample at any given time, or remain idle, and the processing steps cannot be interrupted before their completion. It is required that a machine starts processing a sample as soon as the sample is brought by the robot. Moreover, we assume that a sample has to be moved by the robot
in-between two processing steps. Hence, a sample has to be lifted from a machine (and the machine is made available) as soon as it finishes processing.
By definition, the robot requires exactly one time unit to move from any place to any other, with or without a sample, and picking up or dropping a sample does not require extra time. Like the machines, the robot can transport only a single sample at any given time or drive empty or remain idle. In particular, it is not possible that the robot places a sample at a machine and picks up another at the same time.
The objective of this scheduling task is to minimize the _sum of sample completion times_, i. e., the sum of the times when the samples arrive at the rack after their last photo has been taken. The solution of this optimization problem is a sequence of tasks for the robot that yields an efficient laboratory operation.
### Mathematical Modeling
In our benchmark, we test three modeling approaches against each other. On the quantum and quantum-inspired side we consider a QUBO formulation, whereas on the classical side we use two MIP formulations: first, a so-called _sequence model_, and second, a so-called _time-indexed model_. In the following, we first introduce the common terminology for all modeling approaches. Next, we briefly sketch the main features of each model. For a more detailed description, we refer to Section S1. The motivation for developing multiple models is to compare the solutions obtained by the most suitable problem encoding for quantum and classical solvers. This ensures that we are comparing the best of both worlds (classical and quantum), and do not restrict ourselves to a model which is more suitable for quantum over classical computing.
#### 2.2.1 Common Terminology
The processing machines are addressed by \(M_{1}\) for the water mixer, \(M_{2}\) for the sample shaker and \(M_{3}\) for the photo booth. The scheduling time is discretized into time slots which all have a length of one time unit. The transport robot takes one time unit for each operation that is either transportation or empty traversal between the machines and the rack. In this way, each transport robot scheduling problem is uniquely determined by the number of samples to be scheduled \(N\geq 1\), the number of photos \(K\geq 1\), which is the same for every sample \(j\in\{1,\ldots,N\}\), the processing times \(p_{j,1},p_{j,2},p_{j,3}\in\mathbb{N}_{>0}\) for machines \(M_{1},M_{2}\) and \(M_{3}\), which can vary for each sample \(j\in\{1,\ldots,N\}\), and the time gaps \(g_{j,k}\in\mathbb{N}_{\geq 2}\) to be kept between consecutive photos \(k\) and \(k+1\) for \(k\in\{1,\ldots,K-1\}\), which can also vary for each sample \(j\in\{1,\ldots,N\}\). As an example, Fig. 1 provides a feasible schedule in the form of a Gantt chart to visualize these parameters.
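To make the bookkeeping concrete, a minimal container for these instance parameters could look as follows; the class and field names are our own illustrative choices and are not part of the original problem specification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Instance:
    """One TRSP instance: N samples, K photos per sample, and per-sample timings."""
    n_samples: int      # N >= 1
    n_photos: int       # K >= 1, identical for all samples
    p: List[List[int]]  # p[j] = [p_j1, p_j2, p_j3], processing times on M1, M2, M3
    g: List[List[int]]  # g[j] = [g_j1, ..., g_j,K-1], gaps between consecutive photos

# Example of a tiny two-sample, two-photo instance (values are made up).
inst = Instance(n_samples=2, n_photos=2,
                p=[[3, 2, 1], [4, 2, 1]],
                g=[[4], [5]])
```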
#### 2.2.2 QUBO Model
A general QUBO reads
\[\begin{split}\min_{x}& x^{\top}\cdot Q\cdot x\\ \text{s.t.}& x\in\left\{0,1\right\}^{n}\end{split} \tag{1}\]
for some matrix \(Q\in\mathbb{R}^{n\times n}\), where \(x\) represents a vector of \(n\) binary optimization variables. Two challenging properties of QUBOs must be taken into account in the modeling. Since only binary variables are allowed, this implies that other types of variables must be avoided, i. e. a reformulation into a binary form is necessary. Second, the problem is unconstrained. This restriction can be overcome by using _penalty terms_, which are quadratic functions in the model variables that evaluate to a positive value when the current assignment of values to the variables leads to an infeasible solution. Typically, the penalty terms are designed to yield 0 if the corresponding solution is feasible, so that they do not contribute to the objective values of feasible solutions. More general information about QUBOs and their properties can be found, e. g., in (Kochenberger _et al._, 2014; Lucas, 2014; Glover, Kochenberger, Du, 2018).
Our proposed QUBO model for the TRSP is based on the well-known starting time formulation (see e. g. (Carugno, Ferrari Dacrema, Cremonesi, 2022)) and can be written as
\[\begin{split}\min_{x}&\quad\rho_{0}F(x)+\sum_{i=1}^{ 7}\rho_{i}P_{i}(x)\\ \text{s.t.}&\quad x\in\{0,1\}^{n},\end{split} \tag{2}\]
where \(F\) is the objective function and \(P_{1},\dots,P_{7}\) denote the penalty functions and \(\rho_{0},\dots,\rho_{7}\in\mathbb{R}_{>0}\) are tunable parameters that have to be chosen such that the objective and penalty terms are suitably balanced. As in equation (1), \(n\) represents the total number of binary optimization variables. These have a distinct meaning that can be identified with three indices. Specifically,
\[x_{j,m,t}:=\begin{cases}1,&\text{if sample $j$ starts processing on machine $M_{m}$ at time $t$},\\ 0,&\text{otherwise}\end{cases} \tag{3}\]
for all \(j\in\{1,\dots,N\}\), \(m\in\{1,2,3\}\) and \(t\in\{1,\dots,T-1\}\). Here, \(T\) denotes the time horizon, which is chosen in such a way that there is enough time to schedule all samples sequentially, implying that there is at least one feasible solution. It can be explicitly computed for each instance as described in Section S1.1. In terms of Fig. 1, one has, for example, \(x_{1,1,1}=1\) and \(x_{1,2,8}=1\).
The penalty terms for the QUBO model have to be formulated using the binary optimization variables. This section only provides an example for such a term, a complete description can be found in Section S1.1. Specifically, we consider here the constraint that each sample must access the machines \(M_{1}\) and \(M_{2}\) exactly once, which can be achieved by
\[P_{1}:=\sum_{j=1}^{N}\sum_{m=1}^{2}\left[\left(\sum_{t=1}^{T-1}x_{j,m,t}\right) -1\right]^{2}. \tag{4}\]
This term evaluates to zero if and only if for each pair of sample \(j\) and machine \(M_{m}\), the variable \(x_{j,m,t}\) is \(1\) for precisely one time slot \(t\). Since \(P_{1}\) is bounded below by \(0\) due to its quadratic nature, each local minimum of \(P_{1}\) is a feasible solution w.r.t. the rule of machine access to \(M_{1}\) and \(M_{2}\). The other penalty terms can be formulated similarly.
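To illustrate how a penalty such as \(P_{1}\) enters the matrix \(Q\) of equation (1), the following toy sketch expands one "exactly once" constraint over a handful of binary variables; the index handling is simplified and does not reproduce the full model.

```python
import numpy as np

def add_exactly_once_penalty(Q, idx, rho):
    """Add rho * (sum_i x_i - 1)^2 to the QUBO matrix Q for the variables in idx.

    Expanding the square gives -rho on the diagonal (since x_i^2 = x_i) and
    +2*rho on every off-diagonal pair, plus a constant rho that is dropped here.
    """
    for a in idx:
        Q[a, a] += -rho
    for i, a in enumerate(idx):
        for b in idx[i + 1:]:
            Q[a, b] += 2 * rho
    return Q

# Toy example: one sample, one machine, four candidate start times (variables 0..3).
Q = np.zeros((4, 4))
Q = add_exactly_once_penalty(Q, [0, 1, 2, 3], rho=1.0)

x = np.array([0, 1, 0, 0])  # feasible: exactly one start time chosen
print(x @ Q @ x + 1.0)      # evaluates to 0 after adding back the constant rho
```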
Figure 1: An example Gantt chart of a robot transport scheduling problem with \(N=2\) samples and \(K=2\) photos. Tasks associated with sample one (two) are colored blue (red). When a sample is processed on one of the machines or carried by the robot in the time-frame \([t,t^{\prime}]\), a bar is drawn from \(t\) to \(t^{\prime}\) in the respective row in a corresponding color. Empty movements of the robot are not drawn explicitly. For example, at time \(t=13\) the robot is at the rack as sample \(1\) has been brought to the sample rack from \(t=12\) to \(13\). It takes one unit of time for the robot to travel from the rack to the water mixer to pick up sample \(2\) at \(t=14\). From \(t=22\) to \(t=23\), the sample is brought from the photo booth to the rack and back, which is a consequence of the assumption that a sample has to be moved by the robot in-between two processing steps. The objective value of the depicted schedule is \(19+26=45\).
Finally, the objective function \(F\) sums up for each sample the time when the sample arrives at the rack after the entire scheduling process ("sum of sample completion times"). For example, the objective function in the case of Fig. 1 evaluates to 45 time units.
#### 2.2.3 MIP Models
MIPs have been used since the late 1950s as a tool for solving scheduling problems. It is not possible to model the disjunctive constraints resulting from the discrete ordering decisions only by means of starting time variables. Different types of binary variables have been proposed to achieve this. The main types are position variables \(x_{ijk}\) indicating if job \(j\) is the \(k\)th job on machine \(i\)(Wagner, 1959), linear ordering variables \(\delta_{ijk}\) deciding if job \(j\) is processed before job \(k\) on machine \(i\)(Manne, 1960) and time-indexed variables \(x_{ijt}\) specifying that job \(j\) is started (or processed or completed) on machine \(i\) at time \(t\)(Bowman, 1959; Pritsker, Waiters, Wolfe, 1969). (Ku, Beck, 2016) compared these three approaches experimentally for a job shop scheduling problem.
Due to the powerful nature of (mixed) integer programming in contrast to the restrictive nature of the QUBO models, we provide two MIP models to be solved using Gurobi, where we follow two state-of-the-art approaches for formulating scheduling problems as MIPs (Pinedo, 2016). The first one, in the following named _sequence model_, makes use of continuous start time and binary linear ordering variables. The second model, called the _time-indexed model_, is restricted to a binary formulation comparable to the QUBO model, where we make use of time-indexed variables. The latter provides a model with a natural proximity to the QUBO formulation, whereas the sequence model exploits the features of MIP formulations. In this sense, we provide a baseline from two different angles, one for each solution approach.
#### 2.2.4 MIP: Sequence Model
In the sequence model, we model sequences of _events_ that affect the behavior of the transport robot with respect to the machines and the photos of a sample. We define the _set of events_ as
\[E:=\big{\{}(j,i,a)\mid j\in\{1,\ldots,N\},\,i\in\{1,\ldots,2+K\},\,a\in\{0,1\} \big{\}}. \tag{5}\]
An event \(e=(j,i,0)\) represents that sample \(j\) is placed on machine \(M_{i}\) for \(i\in\{1,2\}\), or brought to the \((i-2)\)th photo shoot for \(i>2\); an event \((j,i,1)\) corresponds to picking it up again. For each event \(e\in E\) we define an optimization variable \(\tau_{e}\in\mathbb{R}_{\geq 0}\) to model the time for event \(e\) to happen. In terms of Fig. 1, we have, for example, \(\tau_{(1,1,0)}=1\) and \(\tau_{(1,1,1)}=4\). A simple formulation can be achieved by additionally introducing a binary variable for each pair \(e,f\in E\), \(e\neq f\) of events that indicates if \(e\) occurs before \(f\). We reduce the size of the model by exploiting the fact that the ordering of some events is fixed or coupled. For example, we do not need a variable that specifies the order in which a given sample is brought to the water mixer and to the sample shaker. This leads to three sets of linear ordering variables that can be found in Section S1.2, as well as the various constraints to ensure feasibility. The objective function (i. e., the sum of the sample completion times) can be easily expressed using the variables \(\tau_{e}\) corresponding to events when a sample is picked up from the last photo.
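A minimal flavour of such linear ordering variables is sketched below with gurobipy, using a standard big-M disjunction for two events; the variable names, the big-M value, and the toy objective are illustrative and do not reproduce the actual constraint set of the sequence model (see Section S1.2). Running it requires a Gurobi installation and licence.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("ordering_sketch")

# Continuous event times and one binary linear ordering variable (1 if event e precedes f).
tau_e = m.addVar(lb=0.0, name="tau_e")
tau_f = m.addVar(lb=0.0, name="tau_f")
delta = m.addVar(vtype=GRB.BINARY, name="delta_ef")

BIG_M = 1000  # any upper bound on the scheduling horizon works here

# Either e happens at least one time unit before f, or vice versa.
m.addConstr(tau_e + 1 <= tau_f + BIG_M * (1 - delta))
m.addConstr(tau_f + 1 <= tau_e + BIG_M * delta)

m.setObjective(tau_e + tau_f, GRB.MINIMIZE)
m.optimize()
```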
#### 2.2.5 MIP: Time-Indexed Model
The second constrained model makes use of discrete time-indexed variables similar to the QUBO model from Section S1.1. In this formulation, we model the behavior of the transport robot by defining certain routes a sample can be transported along, which include those from the rack to all machines and back or movements between subsequent machines. The numbering of the moves is shown in Fig. S1.
As the model name implies, we have, given a discrete time horizon \(T\in\mathbb{N}_{>0}\), binary variables to model when each sample takes which route as
\[y_{j,r,t}:=\begin{cases}1,\text{ if sample $j$ is transported by the robot on route $r$ during the time }(t,t+1)\;,\\ 0,\text{ otherwise}\end{cases} \tag{6}\]
for all \(j\in\{1,\ldots,N\}\), \(r\in\{1,\ldots,8\}\) and \(t\in\{0,\ldots,T-1\}\). In terms of the Gantt chart from Fig. 1, this would imply \(y_{1,1,0}=1\), \(y_{1,2,4}=1\), \(y_{2,1,5}=1\) and so on. The time horizon \(T\) is defined as for the QUBO model, see Eq. (S2).
The constraints of the model are similar to the penalty terms of the QUBO Model and are listed in Section S1.3. The objective function (i. e., the sum of the sample completion times) is defined in terms of the ancilla optimization variables \(z_{j}\) for \(j\in\{1,\ldots,N\}\), that are bounded below by the arrival time of sample \(j\) at the rack after the schedule has finished.
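As a flavour of the time-indexed constraints (detailed in Section S1.3), the sketch below encodes only the requirement that the robot performs at most one transport per time slot; the dimensions are toy values and many further constraints would be needed for a feasible schedule.

```python
import gurobipy as gp
from gurobipy import GRB

N, R, T = 2, 8, 20  # toy numbers of samples, routes and time slots
m = gp.Model("time_indexed_sketch")

# y[j, r, t] = 1 if sample j is moved along route r during (t, t+1).
y = m.addVars(N, R, T, vtype=GRB.BINARY, name="y")

# The robot can carry at most one sample on one route in each time slot.
for t in range(T):
    m.addConstr(gp.quicksum(y[j, r, t] for j in range(N) for r in range(R)) <= 1)
```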
## 3 Benchmark Setup
In the present section, we describe the design of the benchmark. We start with an outline of the considered problem instances that are listed in more detail in Section S2. Subsequently, we describe the three different commercial technologies that we use.
### Instances
To set the stage for our benchmark, we specify 260 test instances of our optimization problem of interest, each defined by a different set of parameters. Specifically, each instance is uniquely determined by the number of samples \(N\), the number of photos \(K\), the gaps \(g_{j,k}\) between subsequent photos \(k\) and \(k+1\) for \(k\in\{1,\ldots,K-1\}\) and \(j\in\{1,\ldots,N\}\), and, finally, the processing times \(p_{j,1},p_{j,2},p_{j,3}\) of the water mixer, sample shaker and photo booth, respectively, as explained in Section 2.2. For the sake of simplicity, the processing time of the photo booth agrees for all samples of the same instance, that is \(p_{j,3}:=p_{3}\) for all \(j\in\{1,\ldots,N\}\).
In Section S2, we describe the algorithm that was used to generate parameter sets for the benchmark instances. Since the resulting instances span a wide range of complexity, we divide the resulting benchmark library into two parts, where each part is defined by the number of binary variables in the corresponding QUBO formulation from Section 2.2.2 as explained in Section S1.1 in more detail. The first part, which we call _library of minor instances_, contains all 161 instances that have at least 2071 and at most 8080 binary variables. The second part, which we call _library of major instances_, contains the remaining 99 instances with at least 10 822 and at most 22 692 binary variables. The reason for that specific division is that 8192 is the maximal number of variables that can be solved directly on Fujitsu's digital annealer.
We collect groups of instances \((N,K)\) that have the same number of samples and photos as shown in Fig. S2, i. e., within those groups the leftover parameters \(p_{j,m}\) and \(g_{j,k}\) for \(j\in\{1,\ldots,N\},m\in\{1,2,3\}\) and \(k\in\{1,\ldots,K-1\}\) may vary. These groups can be understood as a collection of "similar" TRSPs in the sense that the complexity of the tasks to be solved is comparable. However, some instances may still be easier or more difficult to solve than others in practice. This grouping approach allows us to consider statistical metrics over several instances when we compare models and solvers. Moreover, it allows us to estimate the scaling behavior of different solution approaches. In Section S2, we list how many instances each group contains.
### Quantum and classical solvers
In our benchmark, we solve the generated instances with a selection of model and solver combinations with the main goal to assess the performance of quantum and quantum-inspired technology. Specifically, we consider three solver candidates:
1. Gurobi: As a baseline, we use the branch and bound algorithm of Gurobi, which is a state-of-the-art mathematical programming solver running on classical hardware (Gurobi Optimization, LLC 2023). In summary, it relies on an implicit enumeration that allows the original problem to be split into smaller sub-problems using a decision tree. The use of lower bounds derived from linear programming (LP) relaxations allows for a reduction of the search space. Gurobi is an all-purpose solver that can in principle solve the proposed optimization problems to a guaranteed optimality in a deterministic fashion (given sufficient time). In this work we utilized the cloud based service of
Gurobi solver, which ran on an Intel(R) Xeon(R) Platinum 8275CL CPU (3.00 GHz with 8 physical cores).
2. D-Wave's hybrid _Leap_ framework (LBQM): D-Wave provides cloud-based access to their adiabatic quantum computers with over 5000 qubits (D-Wave Systems Inc. 2020). By design, their hardware is specifically tailored to solve QUBOs. To this end, the QUBO is encoded in a Hamiltonian such that each optimization variable is represented by one qubit (Zbinden _et al._2020) and the ground state corresponds to the optimal solution. The quantum annealing mechanism aims to find the ground state by performing a suitable time evolution of the quantum system with a subsequent measurement of all qubits to reveal the optimal solution. The D-Wave hardware has only limited connectivity, which means that each qubit can only interact with a certain number of other qubits. This limitation restricts the correlations between optimization variables that can be represented by the Hamiltonian. Finding a suitable representation under these constraints is an NP-hard problem (Lobe, Lutz 2022) that has to be solved classically to configure the quantum annealer for a certain problem. In practice, the quantum annealer can typically only be used for QUBOs with far fewer than 5000 optimization variables. For this reason, D-Wave also provides a hybrid software framework LBQM, which is a black-box algorithm for binary quadratic models (BQMs) that runs on both classical and quantum annealing hardware. It handles larger optimization problems that are too big for the quantum hardware by presenting only parts of the original problem to the quantum annealer. However, the exact mode of operation of LBQM is not publicly available. In this study, we use only the quantum annealer in a hybrid fashion via LBQM. The quantum machine used in the hybrid framework is the _D-Wave Advantage System 4.1_, accessed via the region _na-west-1_. We choose to use a constant number of 1000 samples (or readouts) for all evaluations and use default settings for all parameters. A minimal access sketch for submitting a QUBO to this solver is shown after this list.
3. Fujitsu's digital annealer (FDA) and Fujitsu's digital annealer hybrid framework (FDAh): The digital annealer from Fujitsu can be considered a quantum-inspired algorithm that runs on dedicated (classical) hardware (Aramon _et al._2019) and can be accessed using a cloud service. It is based on simulated annealing (Kirkpatrick, Gelatt, Vecchi 1983; Cerny 1985) with two major differences. Firstly, the utilization of an efficient parallel-trial scheme to exploit the parallelization capabilities of the hardware and, secondly, a dynamic escape mechanism to avoid locally optimal solutions. The detailed hardware specifications are confidential. The solver supports QUBOs with up to 8192 variables. In addition, the hybrid solver FDAh is provided to solve bigger problem instances by utilizing both dedicated and classical hardware (Nakayama _et al._2021) similar to D-Wave's LBQM. In this study, we use both FDA and FDAh. Both solvers require a set of parameters that specify how the annealing is done, which also include the number of repetitions and parallel runs on the chip. The specific parameters we used for FDA and FDAh are provided in Section S3.
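As referenced in item 2 above, the following is a minimal access sketch for submitting a QUBO to the Leap hybrid solver; it assumes the dwave-system Python package and a configured Leap API token, and the toy matrix and time limit are placeholders. We do not show the corresponding Fujitsu client, whose interface is proprietary.

```python
from dwave.system import LeapHybridSampler

# Toy QUBO as a dictionary {(i, j): coefficient}; in our study Q would encode Eq. (2).
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}

sampler = LeapHybridSampler()                      # needs a valid Leap API token
sampleset = sampler.sample_qubo(Q, time_limit=10)  # time limit in seconds

best = sampleset.first
print(best.sample, best.energy)
```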
In a small pre-study, we excluded a few other solvers; see Section S4. The main scope of the paper is to benchmark the performance of quantum-hybrid and quantum-inspired technologies on the TRSP on a high level against an all-purpose solver with an out-of-the-box performance. In this sense, we also exclude meta-heuristics that are tailor-made to the problem.
Each instance can be modelled with each of the three modeling approaches from Section 2. However, not all solvers are applicable to all problem formulations and all instances. The MIP sequence model is solved with Gurobi for all instances. The time-indexed model is solved with Gurobi only for the minor instances. The QUBO model is solved with LBQM and FDA for minor instances. For major instances, the QUBO model is only solved with FDAh.
We call each valid model and solver combination an _approach_ and use a unique name to refer to it. Summarized, we consider Gurobi with the sequence model (SE-GU), Gurobi with the time-indexed model (TI-GU), LBQM with the QUBO model (QU-LBQM), FDA with the QUBO model (QU-FDA) and FDAh with the QUBO model (QU-FDAh). An overview over all approaches is shown in Fig. 2.
For all problems, we prescribe a runtime limit of 3600 seconds for Gurobi. This limit was determined on a heuristic basis, since initial experiments have shown that Gurobi can solve the considered problem instances on this time scale with a practically relevant quality. This time limit exceeds the runtimes of
LBQM, FDA and FDAh by far to provide Gurobi enough time to return solutions that are suitable for a relative comparison (see Fig. 4).
Both LBQM and FDA also require a time limit for each run, which scales with the problem size in the QUBO formulation as follows. The time limit for LBQM is set to be \(\min\{100,1.5\cdot\frac{n}{100}\}\) seconds, where \(n\) is the number of variables in the QUBO formulation for the minor instances. The runtime of the digital annealer is implicitly set with the _steps_ parameter, where each step taken in the annealing process takes a constant amount of time. We set the number of steps to be \(10^{7}\) for the instances with \(2071\leq n\leq 4096\), \(5\cdot 10^{7}\) for the ones with \(4096<n\leq 6000\) and \(10^{8}\) for the instances with \(6000<n\leq 8080\) variables in the QUBO formulation. Lastly, the major instances computed with the hybrid framework FDAh based on the digital annealer require a time limit as well. For this, we distributed the available time of 5 hours among the instances in proportion to their number of variables. This amounts to approximately \(n\cdot 0.0117\) seconds, where \(n\) is the number of variables in the QUBO formulation.
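For concreteness, these runtime limits can be written as a small helper; this merely restates the rules above in code, with our own function names.

```python
def gurobi_time_limit_s() -> float:
    return 3600.0

def lbqm_time_limit_s(n: int) -> float:
    # n is the number of binary variables in the QUBO formulation (minor instances).
    return min(100.0, 1.5 * n / 100.0)

def fda_steps(n: int) -> int:
    # Step budget of the digital annealer for minor instances (2071 <= n <= 8080).
    if n <= 4096:
        return 10**7
    if n <= 6000:
        return 5 * 10**7
    return 10**8

def fdah_time_limit_s(n: int) -> float:
    # Major instances: roughly 5 hours distributed in proportion to problem size.
    return n * 0.0117
```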
The benchmark setup is summarized in Table 1, where we recall the approaches from Fig. 2. The table also contains the values of the QUBO parameters \(\rho_{0},\ldots,\rho_{7}\) from Eq. (S16) that were chosen for LBQM, FDA and FDAh, respectively. The choice was made according to previous experiments with smaller problem instances. For this purpose, a typical strategy is to iteratively increase the parameter \(\rho_{i}\) if the corresponding penalty term \(P_{i}\) is non-vanishing. Additionally, one needs to make sure that the parameter \(\rho_{0}\) for the target function is set such that violating a penalty term is never favorable while the objective is still optimized effectively.
Some of the minor instances have not been solved to feasibility by LBQM, i. e., the solution vector returned does not translate to a feasible schedule of the TRSP. Those instances can be identified by having an objective value of at least \(10^{4}\), which is the minimum of the penalty parameters chosen for the QUBO model according to Table 1. This can be seen as follows: the parameters of the library of minor instances are bounded as \(N\leq 9\), \(K\leq 4\), \(p_{j,3}\leq 3\), \(p_{j,1}\leq 8\), \(p_{j,2}\leq 4\), \(g_{j,1}\leq 5\), \(g_{j,2}\leq 12\) and \(g_{j,3}\leq 24\) for \(j=1,\ldots,N\). Using those upper bounds, we compute a maximal time horizon of \(T=648\) time units for those instances. It follows that the sum of sample completion times is bounded above by \(9\cdot 648=5832<10^{4}\), i. e., a solution to an instance of the library of minor instances is feasible if and only if it has an objective value below \(10^{4}\). Of course, this applies neither to the library of major instances nor to the solutions of FDA or FDAh, as they have lower penalty parameters due to
Figure 2: Summary of model (see Section 2.2) and solver (see Section 3.2) combinations for the benchmarks.
prestudies with the smallest instances. In a general setup, infeasible solutions can be identified by storing the penalty term \(\sum_{i=1}^{7}P_{i}(x)\) and evaluating it on the returned solution: the solution is feasible if and only if the penalty term evaluates to \(0\).
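The feasibility check described above can be sketched as follows, assuming the penalty part of the QUBO is kept as a separate matrix together with its constant offset; the names and the toy matrix are our own.

```python
import numpy as np

def penalty_value(x: np.ndarray, Q_pen: np.ndarray, offset: float) -> float:
    """Evaluate sum_i P_i(x) = x^T Q_pen x + offset (offset collects the constant terms)."""
    return float(x @ Q_pen @ x) + offset

def is_feasible(x, Q_pen, offset, tol=1e-9) -> bool:
    # Penalties are non-negative by construction, so feasibility means a (numerically) zero value.
    return penalty_value(x, Q_pen, offset) <= tol

# Toy check with the "exactly once" penalty (x0 + x1 + x2 - 1)^2 from Eq. (4):
Q_pen = np.array([[-1., 2., 2.],
                  [0., -1., 2.],
                  [0., 0., -1.]])
print(is_feasible(np.array([0, 1, 0]), Q_pen, offset=1.0))  # True
print(is_feasible(np.array([1, 1, 0]), Q_pen, offset=1.0))  # False
```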
## 4 Benchmark Results
In the current section, we present the results of our previously described benchmark, which is summarized in Table 1. For this purpose, we first show the results for the minor instances and subsequently the results for the major instances.
### Results for Minor Instances
In Fig. 3, we show the objective values and runtimes of several approaches as scatter plots. All runtimes are end-to-end runtimes, that is, we consider the entire evaluation pipeline, beginning with the submission of the problem to the solver and ending with the return of a solution, including potential network delays. The programmatic construction of the optimization problem for the application programming interface (API) of the solver based on the instance data is not part of the runtime.
From Fig. 3(a), we can observe that both the SE-GU and TI-GU solutions reach a better objective value than the solutions from QU-LBQM and QU-FDA. When comparing objective values, it has to be taken
\begin{table}
\begin{tabular}{l c c} \hline \hline Property & Minor instances & Major instances \\ \hline Number of instances & 161 & 99 \\ Number of variables (\(n\)) & 2071 to 8080 & 10 822 to 22 692 \\ \hline Approach (cf. Fig. 2) & Used for minor instances & Used for major instances \\ \hline SE-GU & ✓ & ✓ \\ TI-GU & ✓ & ✗ \\ QU-LBQM & ✓ & ✗ \\ QU-FDA & ✓ & ✗ \\ QU-FDAh & ✗ & ✓ \\ \hline Approach (cf. Fig. 2) & Minor instance limit & Major instance limit \\ \hline SE-GU & 3600 s & 3600 s \\ TI-GU & 3600 s & — \\ QU-LBQM & \(\min\{100,1.5\cdot\frac{n}{100}\}\cdot 1\,\mathrm{s}\) & — \\ QU-FDA & \(\begin{cases}1\cdot 10^{7}\text{ iterations},&2071\leq n\leq 4096\\ 5\cdot 10^{7}\text{ iterations},&4096<n\leq 6000\\ 1\cdot 10^{8}\text{ iterations},&6000<n\leq 8080\end{cases}\) & — \\ QU-FDAh & — & \(n\cdot 0.0117\,\mathrm{s}\) \\ \hline Solver & \multicolumn{2}{l}{QUBO parameters from Eq. (S16)} \\ \hline LBQM & \multicolumn{2}{l}{\(\rho_{0}=1,\rho_{1}=30\,000,\rho_{2}=\rho_{3}=\rho_{4}=\rho_{5}=\rho_{7}=10\,000,\rho_{6}=15\,000\)} \\ FDA & \multicolumn{2}{l}{\(\rho_{0}=1000,\rho_{1}=4000,\rho_{2}=\rho_{3}=1000,\rho_{4}=\rho_{5}=\rho_{6}=\rho_{7}=1500\)} \\ FDAh & \multicolumn{2}{l}{\(\rho_{0}=1000,\rho_{1}=2000,\rho_{2}=\rho_{3}=500,\rho_{4}=\rho_{5}=\rho_{6}=\rho_{7}=750\)} \\ \hline \hline \end{tabular}
\end{table}
Table 1: Benchmark setup: Summary of problem instances from Section 3.1 and solvers from Section 3.2 for the optimization problems (or models) from Section 2.
into account that the QUBO model objective, equation (2), also includes penalty terms, which become positive for infeasible solutions and therefore increase the objective value accordingly. Specifically, we find that only QU-LBQM yields infeasible solutions for some instances, whereas all other approaches yield feasible solutions (SE-GU and TI-GU solutions are by definition always feasible). For our analysis, we include both feasible and infeasible solutions. By performing a Welch t-test (Welch 1947), we find that the means of the results from both SE-GU and TI-GU are lower than the means of the QU-FDA and QU-LBQM results with a statistical significance of over 99%, respectively. The same holds for the QU-FDA objective values in comparison to QU-LBQM.
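The statistical comparison can be reproduced with a standard Welch test, for example as sketched below; the arrays are placeholders for per-instance objective values and the one-sided "alternative" argument requires a recent SciPy version.

```python
from scipy.stats import ttest_ind

# Hypothetical objective values of two approaches on the same set of instances.
obj_se_gu = [45, 52, 61, 70, 88, 95]
obj_qu_fda = [48, 57, 66, 79, 93, 104]

# Welch's t-test (unequal variances); 'less' tests whether the first mean is smaller.
stat, p_value = ttest_ind(obj_se_gu, obj_qu_fda, equal_var=False, alternative="less")
print(f"t = {stat:.2f}, p = {p_value:.3f}")
```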
On the other hand, according to Fig. 3(b), the computation times for TI-GU and, for some instances, SE-GU exceed the computation times of QU-LBQM and QU-FDA. Since MIP solvers typically spend a lot of time proving that a solution is optimal, we are also interested in the time taken by Gurobi (for both SE-GU and TI-GU) to find solutions of the same quality as those obtained from QU-LBQM or QU-FDA. Hence, we perform an additional analysis of the iterative solver progress of each Gurobi
Figure 3: Benchmark results for minor instances as scatter plots. The results are grouped into sets of instances \((N,K)\) with the same number of samples \(N\) and photos \(K\). A horizontal line marks the upper time limit of \(3600\,\mathrm{s}\) for Gurobi in Fig. 3(b). Some instances have not been solved to feasibility by QU-LBQM, as indicated by the peaks above \(10^{4}\) in Fig. 3(a). Abbreviations according to Fig. 2.
run and look for the earliest computation time at which Gurobi has reached an objective value that is less than or equal to the corresponding objective value returned by the competing solvers for the same instance. We call this earliest computation time the _relative runtime_. Specifically, we consider the relative runtime of TI-GU w.r.t. QU-LBQM (TI-GU@QU-LBQM), the relative runtime of SE-GU w.r.t. QU-LBQM (SE-GU@QU-LBQM), the relative runtime of TI-GU w.r.t. QU-FDA (TI-GU@QU-FDA) and the relative runtime of SE-GU w.r.t. QU-FDA (SE-GU@QU-FDA). In the special case that Gurobi is not able to find an objective value of the desired quality within its limit of 3600 seconds (which only occurs for some major instances), this time limit is used in place of the earliest computation time. Exemplarily, we consider a specific instance to visualize TI-GU@QU-LBQM and TI-GU@QU-FDA in Fig. 4.
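The relative runtime can be extracted from Gurobi's incumbent history as sketched below; the history format (a chronological list of time/objective pairs) and the function name are our own.

```python
def relative_runtime(incumbents, target, time_limit=3600.0):
    """Earliest time at which Gurobi's incumbent objective is <= the competitor's value.

    incumbents: list of (time_in_seconds, objective_value) pairs, in chronological order.
    Returns the time limit if the target quality is never reached.
    """
    for t, obj in incumbents:
        if obj <= target:
            return t
    return time_limit

# Toy usage: Gurobi reaches objective 45 after 12.7 s, matching a competitor's final value of 47.
history = [(0.4, 61), (3.1, 52), (12.7, 45), (80.0, 44)]
print(relative_runtime(history, target=47))  # -> 12.7
```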
The results of this analysis are shown in Fig. 5. This plot shows that QU-LBQM is not able to compete with SE-GU. All problems from the first 4 out of 9 instance groups have been solved with SE-GU in under 1 second, and the remaining instances in less than 10 seconds, whereas the QU-LBQM runtimes range between 50 and 100 seconds. However, LBQM finds a comparable solution faster than TI-GU for most problems with 6 or more samples and remains competitive for smaller problems. A Welch t-test confirms that the mean TI-GU runtime is larger than the mean QU-LBQM runtime with a significance over 99%.
Furthermore, Fig. 5(b) shows that QU-FDA is outperformed by SE-GU as well. Analogous to Fig. 5(a), the instances in groups \((4,4),(5,3),(5,4)\) and \((6,3)\) have been solved by SE-GU in 1 second or less. But in contrast to Fig. 5(a), the other groups have their median between 1 second and 10 seconds, which reflects that the target objectives from QU-FDA are lower than those from QU-LBQM (see Fig. 3(a)). Nonetheless, the time taken for SE-GU to reach the solution quality of QU-FDA is 10 to 100 times smaller. Compared with TI-GU, QU-FDA almost always finds a comparable solution faster, with a few exceptions.
Figure 4: Visualization of the relative runtime of TI-GU w.r.t. QU-LBQM and QU-FDA, denoted by TI-GU@QU-LBQM and TI-GU@QU-FDA, respectively. Here, we consider the example instance \((7,4,3)(3)\); see supplementary material. The orange dots (connected by lines for better visualization) mark the resulting objective values of TI-GU at the corresponding time steps. The upper (blue) and lower (green) horizontal lines mark the final objective values of QU-FDA and QU-LBQM, respectively, on the same instance. The blue and green lines each intersect the orange curve at some point. The time coordinate of the next lower TI-GU objective value after this intersection represents the relative runtime of TI-GU w.r.t. the solver, which is marked as a vertical line in the corresponding color. In other words, the relative runtime represents how long TI-GU has to run until it reaches an objective value that is at least as good as the result from QU-LBQM or QU-FDA, respectively.
Figure 5: Benchmark results for minor instances as scatter plots. We show the relative runtimes of TI-GU and SE-GU w.r.t. QU-LBQM and QU-FDA, denoted by TI-GU@QU-LBQM, TI-GU@QU-FDA, SE-GU@QU-LBQM and SE-GU@QU-FDA, respectively. The results are grouped into sets of instances \((N,K)\) in analogy to Fig. 3. See Fig. 4 for an example of the relative runtime computation. Abbreviations according to Fig. 2.
### Results for Major Instances
The results for major instances are presented in analogy to the results for minor instances from the previous section. In Fig. 6, we show the runtime and the target value of the solvers on the corresponding models as scatter plots.
The objective values of QU-FDAh are worse than the ones of SE-GU with a significance of over 97%, but Fig. 6(b) shows that the runtime of SE-GU increases strictly until it reaches the upper bound for the computation time of 3600 seconds, which happens at around 15 samples. On the other hand, the computation time of QU-FDAh ranges between 120 and 300 seconds, where only a slight increase can be seen.
Analogously to Fig. 5(b), we evaluate the earliest computation times of the SE-GU model to reach objective values equal to or lower than those obtained from QU-FDAh, denoted by the relative runtime of SE-GU w.r.t. QU-FDAh (SE-GU@QU-FDAh). The results are shown in Fig. 7.
In Fig. 7, a strictly increasing computation time can be seen for SE-GU, whereas the QU-FDAh runtime remains almost constant. For the biggest instances with \(N=20\) samples, QU-FDAh has a clear advantage with respect to the computation time, whereas it is competitive with SE-GU for the instances
Figure 6: Benchmark results for major instances as scatter plots. The results are grouped into sets of instances \((N,K)\) as for the previous plots. Abbreviations according to Fig. 2.
with 15 samples. In this sense, QU-FDAh finds a solution of comparable quality much faster than SE-GU for problems with 20 samples, and the latter was not able to prove optimality for some of these instances. A Welch t-test confirms with a significance of over 99% that the QU-FDAh mean is lower than the SE-GU@QU-FDAh mean.
## 5 Conclusion and Outlook
This paper presents a thorough benchmarking of an industrially relevant use case of combinatorial optimization, the _transport robot scheduling problem_ (TRSP), with the goal of achieving a time-optimal robot schedule, as motivated by a BASF high-throughput laboratory. We solve a large set of instances of varying difficulty for this optimization problem using three commercially available solvers: (i) D-Wave's hybrid Leap framework, (ii) the quantum-inspired Fujitsu digital annealer and (iii) the classical state-of-the-art solver Gurobi. To this end, we develop several mathematical models: a quadratic unconstrained binary optimization (QUBO) model for the quantum and digital annealer and two different mixed integer program (MIP) models for Gurobi, which we call time-indexed and sequence model, respectively. Modeling the same problem in different, solver-specific forms helps us to optimally assess the capabilities of each solver. In total, we compare five different approaches (i. e., model and solver combinations as sketched in Fig. 2): (i) Gurobi with the time-indexed model (TI-GU), (ii) Gurobi with the sequence model (SE-GU), (iii) D-Wave's hybrid _Leap_ framework (LBQM) with the QUBO model (QU-LBQM), (iv) Fujitsu's digital annealer (FDA) with the QUBO model (QU-FDA) and (v) Fujitsu's digital annealer hybrid framework (FDAh) with the QUBO model (QU-FDAh). For our performance study, we separated all problem instances into two groups: first, the _minor instances_ with fewer than \(10\,000\) binary variables in the QUBO formulation and, second, the _major instances_ with more than \(10\,000\) and up to \(22\,000\) variables. For practical reasons, we only solve the minor instances with SE-GU, TI-GU, QU-LBQM and QU-FDA, whereas the major instances are only solved with SE-GU and QU-FDAh.
Our benchmark reveals insights both regarding the objective values of the optimization problem (i. e., the sum of sample completion times) as well as the end-to-end runtimes for the considered approaches. Regarding the objective values, we observe for minor instances that SE-GU and TI-GU give similar results, outperforming QU-FDA, which in turn outperforms QU-LBQM (cf. the corresponding scatter plots for minor instances). For major instances, SE-GU outperforms QU-FDAh (cf. Fig. 6(a)). Regarding the runtime, we find that for smaller
Figure 7: Benchmark results for major instances as scatter plots. We show the relative runtime of SE-GU w.r.t. QU-FDAh, denoted by SE-GU@QU-FDAh, in analogy to Fig. 5. We also show the runtime of QU-FDAh from Fig. 6(b). The results are grouped into sets of instances \((N,K)\) as for the previous plots. Abbreviations according to Fig. 2.
instances TI-GU takes the longest time and SE-GU mostly the shortest. Between these two extremes, QU-FDA and QU-LBQM take about the same amount of time (cf. Fig. 2(b)). However, the runtime of SE-GU significantly increases with increasing instance complexity. The same observation continues for the large instances, for which the runtime of SE-GU is mostly larger than that of QU-FDAh (cf. Fig. 5(b)).
To get further insights into the relationship between objective value and runtime, we also studied the relative runtime of Gurobi, that is, the time that Gurobi took to find an objective value that is at least as good as the final result from another approach. For minor instances, we find that the relative runtimes of SE-GU w.r.t. QU-LBQM and QU-FDA, respectively, are strictly lower than the runtimes of QU-LBQM and QU-FDA, i. e., Gurobi found solutions of comparable quality faster than the quantum and quantum-inspired approaches (cf. Figs. 4(a) and 4(b)). This is not surprising since SE-GU tended to find better objectives in shorter time. For major instances, the relative runtimes of SE-GU w.r.t. QU-FDAh increase significantly with increasing instance complexity and clearly exceed the runtime of QU-FDAh for the biggest instances (cf. Fig. 7). Thus, although the resulting objective values of QU-FDAh were not optimal, the approach shows a clear advantage over SE-GU on some of the bigger instances when compared on a similar time scale.
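To make the relative-runtime metric concrete, the following small sketch computes it from an incumbent trace; the trace format and the numbers are illustrative assumptions, not data from this study:

```python
def relative_runtime(incumbents, reference_objective):
    """Earliest time at which the incumbent objective is <= the reference objective.

    incumbents: list of (time_seconds, objective) pairs in chronological order,
                e.g. [(1.2, 540.0), (8.0, 512.0), (95.3, 498.0)] (made-up values).
    Returns None if the reference quality was never reached within the trace.
    """
    for t, obj in incumbents:
        if obj <= reference_objective:
            return t
    return None

# Example with made-up numbers: SE-GU@QU-FDAh for a single instance.
trace = [(1.2, 540.0), (8.0, 512.0), (95.3, 498.0)]
print(relative_runtime(trace, 510.0))   # -> 95.3
```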
Our benchmark spans instances of different scales and therefore allows qualitative estimation of the scaling behavior of different approaches. Specifically, we observe that TI-GU and SE-GU show a runtime that scales exponentially with the instance complexity (as estimated by the number of samples and photos), whereas the runtime of QU-LBQM, QU-FDA and QU-FDAh remains almost constant. The quality of the solutions is not significantly determined by the instance complexity. Further research is needed to investigate and quantify these observations in more detail.
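One simple way to quantify such a scaling observation (a generic sketch with made-up numbers, not the analysis carried out in this study) is to fit a straight line to the logarithm of the runtime as a function of instance size:

```python
import numpy as np

# Hypothetical instance complexities (e.g. number of samples) and runtimes in seconds.
sizes = np.array([8, 10, 12, 15, 20], dtype=float)
runtimes = np.array([3.0, 11.0, 40.0, 310.0, 3600.0])

# Fit log(runtime) = a * size + b, i.e. runtime ~ exp(b) * exp(a * size).
a, b = np.polyfit(sizes, np.log(runtimes), 1)
print(f"runtime roughly doubles every ~{np.log(2) / a:.2f} additional samples")
```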
Summarized, no general advantage of the quantum and quantum-inspired solvers was found. However, for certain instances the quantum-inspired hybrid usage of the Fujitsu digital annealer turned out to be a very promising alternative to Gurobi and was clearly superior to the usage of D-Wave's hybrid Leap framework. Our study is not a conclusive result but rather an application-oriented case study that provides a snapshot of the current technology and leaves room for performance improvements on the modeling as well as the solver side. For example, an improvement of the quantum annealer inside the hybrid framework might be possible with additional problem-specific fine-tuning of the annealing schedule or other hardware-related parameters. Moreover, the recently released constrained quadratic model (CQM) solver from D-Wave also promises to provide much better performance compared to the solver used in this work. Especially in an agile field such as quantum computing, a technology snapshot such as ours can hardly provide any forecasts about future developments. Therefore, in order to preserve an up-to-date assessment, further practical evaluations for real-world use cases will be necessary. The methods and results from this project can serve as a blueprint or at least point of reference for this kind of ongoing research.
## 6 Acknowledgements
We would like to thank Behrang Shafei, Jens Meissner and Horst Weiss for their invaluable input and support throughout the research process. Without their ongoing contributions, the work would not have been accomplished. This work was partly funded by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) within the project "Rymax One".
|
2305.19828 | "Barcodes" for continuous maps and a brief introduction to Alternative
Morse Theory | This paper reviews the description of "bar codes" for a continuous
real-valued map and explains how to recover the Morse complex of a Morse
function from them. In this presentation the bar codes appear as the support of
two vector-space valued maps, one defined on the Euclidean plane and the other
on the "above diagonal" half plane. | Dan Burghelea | 2023-05-31T13:09:51Z | http://arxiv.org/abs/2305.19828v3 | # "Barcodes" for continuous maps and a brief introduction to Alternative Morse Theory
###### Abstract
This paper reviews the description of "bar codes" for a continuous real-valued map \(f:X\to\mathbb{R}\) and explains how to recover the Morse complex of a Morse function from them. In this presentation the bar codes appear as the support of two vector-space valued maps, one defined on the Euclidean plane \(\mathbb{R}^{2}\) and the other on the "above diagonal" half plane \(\mathbb{R}^{2}_{+}\).
_Dedicated to Valentin Poenaru for his 90-th anniversary_
## 1 Introduction
Classical Morse Theory (C.M.T) considers smooth functions \(f:M\to\mathbb{R}\) on smooth manifolds \(M\) all of whose critical points are non-degenerate (generic smooth functions) and, under some conditions, relates them to the homology of the underlying manifold. This is done by providing a family of chain complexes, each associated to some additional data, but all isomorphic 1 and therefore called the _Morse complex_, which calculates the homology of the manifold. The Morse complex explains the relations between the number \(c_{r}\) of critical points of index \(r\) and the dimension \(\beta_{r}\) of the \(r-\)homology vector spaces (the \(r-\)Betti numbers) and detects the existence of instantons (e.g. isolated trajectories between rest points) for a vector field which admits the function \(f\) as Lyapunov function. The Morse complex is also equipped with an \(\mathbb{R}-\)filtration by sub complexes which calculate the homology of the piece \(f^{-1}((-\infty,t]).\) This filtration is locally constant in \(t\) outside the critical values of \(f.\) The elementary Morse theory, as summarily reviewed in Section 3, has extensions to various types of smooth infinite dimensional manifolds and smooth Whitney stratified spaces and to the case when the set of non-degenerate critical points consists of submanifolds rather than points.
Footnote 1: when the chain complex is a complex of \(\kappa-\)vector spaces, as will be the case in this paper, the uniqueness will follow from calculations; when the chain complex is of modules over an arbitrary ring this was established in [7]
Alternative Morse Theory (A.M.T) begins with a continuous map \(f:X\to\mathbb{R}\) and refines the set \(CR(f)\) of critical values of \(f\)2 into a collection of four types of intervals which, in persistent homology theory, are referred to as _barcodes_.
Footnote 2: the values \(t\) for which the homology of the level \(f^{-1}(t)\) changes
In our work they appear as the points in the support of two types of vector-space-valued maps \(\hat{\delta}^{f}_{r}\) and \(\hat{\gamma}^{f}_{r}\) defined on \(\mathbb{R}^{2}\) and \(\mathbb{R}^{2}\setminus\Delta,\) where \(\Delta\) denotes the diagonal in \(\mathbb{R}^{2},\) associated to each \(r\in\mathbb{Z}_{\geq 0}\) and field \(\kappa.\) The multiplicity of each bar code is the dimension of the corresponding vector space, possibly infinite. Since in this paper only the restriction of the map \(\hat{\gamma}^{f}_{r}\) to \(\mathbb{R}^{2}_{+}:=\{(x,y)\in\mathbb{R}^{2}\mid x<y\}\) will appear, one denotes this restriction by \({}^{+}\hat{\gamma}^{f}_{r}.\) The restriction to the below diagonal half plane, \(\mathbb{R}^{2}_{-}:=\{(x,y)\in\mathbb{R}^{2}\mid x>y\},\) denoted by \({}^{-}\hat{\gamma}^{f}_{r},\) can be derived from \({}^{+}\hat{\gamma}_{r}\) by the formula \({}^{-}\hat{\gamma}^{f}_{r}(a,b)={}^{+}\hat{\gamma}^{-f}_{r}(-b,-a).\)
These maps (actually the points of their supports with their multiplicity) permit one to define a chain complex of \(\kappa-\)vector spaces. This chain complex is also equipped with an \(\mathbb{R}-\)filtration determined by the critical values and, when considered for \(f\) a Morse function, is isomorphic to the Morse complex with its filtration. In particular, as in the case of a Morse function \(f\), from this complex, determined by bar codes, one can recognize the number of critical points corresponding to each critical value, the homology of the manifold, of the sub-levels \(f^{-1}((-\infty,t])\), of the levels \(f^{-1}(t)\) etc., and one detects the presence of instantons; all these in a considerably larger class of situations than in the C.M.T.
A key merit of the theory is that the numerical invariants which appear are _computer friendly_, i.e. when considered for nice spaces (for example finite simplicial complexes and simplicial maps) they can be calculated by implementable algorithms, cf. [1].
The purpose of this paper is to provide the definition of the maps \(\hat{\delta}^{f}_{r}\) and \({}^{+}\hat{\gamma}^{f}_{r}\) for an arbitrary continuous map, relate them to the homology of the underlying space when \(f\) is tame, and present the arguments establishing the isomorphism of the chain complex defined by \(\hat{\delta}^{f}_{r}\) and \({}^{+}\hat{\gamma}^{f}_{r}\) with the Morse complex; in this paper, however, we do this only in the case that the involved chain complexes are of finite dimensional vector spaces, in particular for the subcomplexes describing the \(\mathbb{R}-\)filtration. The concluded isomorphism is not canonical. The existence of an isomorphism compatible with the filtration is algebraically more subtle and will be discussed in subsequent work.
The paper begins with a few observations about chain complexes in Section 2, and a brief review of elementary Morse theory in Section 3. Section 4 provides the definitions of the maps \(\hat{\delta}^{f}_{r}\) and \({}^{+}\hat{\gamma}^{f}_{r}\) and of the associated chain complex \((C_{*}^{\delta,\gamma,\mu},\partial_{*}^{\delta,\gamma,\mu})\) and of its \(\mathbb{R}-\)filtration \((C^{\delta,\gamma,\mu}(t)_{*},\partial_{*}^{\delta,\gamma,\mu})\).
Section 5 collects the results needed to establish the non-canonical isomorphism of \((C^{\delta,\gamma,\mu}(t)_{*},\partial_{*}^{\delta,\gamma,\mu})\) with the \(t-\)subcomplex of the Morse complex. The theory can be extended to A.M-N.T (M-N.T stands for Morse-Novikov theory, cf. [2]).
**Note** In 1961-62, as a second year undergraduate student, I met Valentin Poenaru, at that time a young and charismatic researcher at the Mathematical Institute of the Romanian Academy in Bucharest. In view of my interest in topology he invited me to attend his seminar in differential topology; at that time the seminar discussed Morse Theory. Despite our rather short intersection in Bucharest (he soon left Romania), directly, through his seminar, or maybe indirectly, by his reputation, he much encouraged my wish to become a topologist. I am pleased to dedicate to him this paper, containing a few considerations about, or at least related to, Morse theory, a topic in topology which has remained directly or indirectly in the back of much of my mathematics and turns out to have more and more relevance outside topology.
## 2 Chain complexes of \(\kappa-\)vector spaces
For a chain complex of \(\kappa-\)vector spaces
\[(C_{*},\partial_{*})\equiv\{\,\cdots\xrightarrow{\partial_{n+2}}C_{n+1}\xrightarrow{\partial_{n+1}}C_{n}\xrightarrow{\partial_{n}}C_{n-1}\xrightarrow{\partial_{n-1}}\cdots\,\}\]
one calls a _Hodge decomposition_ a direct sum decomposition \(C_{n}=C_{n}^{-}\oplus H_{n}\oplus C_{n}^{+}\) such that \(\partial_{n}\) vanishes on \(H_{n}\oplus C_{n}^{+}\) and restricts to a linear map \(\underline{\partial}_{n}:C_{n}^{-}\to C_{n-1}^{+}\)
with \(\underline{\partial}_{n}\) an isomorphism; clearly \(H_{n}(C_{*},\partial_{*})=H_{n}.\) Such a decomposition exists and is unique up to a non-canonical isomorphism. When the \(C_{n}\) are finite dimensional one writes \(c_{n}=\dim C_{n},\)\(\beta_{n}=\dim H_{n}\) and \(\rho_{n}=\mathrm{rank}\ \partial_{n}\) and these numbers are related by
\[c_{n}=\beta_{n}+\rho_{n}+\rho_{n-1}.\]
The existence of Hodge decompositions and the equality above imply:
**Observation 2.1**: _Two of these three sets of numerical invariants determine up to a non-canonical isomorphism a chain complex of finite dimensional vector spaces._
The proof is straightforward. For more details, if necessary, see [2] section 8.
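As a small numerical illustration of Observation 2.1 (a sketch over \(\kappa=\mathbb{Q}\), with ranks computed in floating point; the boundary matrices are arbitrary toy choices, not taken from the text), one can read off all three families of invariants from the boundary maps and check that each \(c_{n}\) equals \(\beta_{n}\) plus the ranks of the two boundary maps adjacent to \(C_{n}\):

```python
import numpy as np

# Toy chain complex 0 -> C_2 -> C_1 -> C_0 -> 0 over Q:
# two vertices, three parallel edges, one 2-cell glued along e1 - e2.
d1 = np.array([[-1.0, -1.0, -1.0],
               [ 1.0,  1.0,  1.0]])        # partial_1 : C_1 (dim 3) -> C_0 (dim 2)
d2 = np.array([[1.0], [-1.0], [0.0]])      # partial_2 : C_2 (dim 1) -> C_1 (dim 3)
assert np.allclose(d1 @ d2, 0)             # partial o partial = 0

dims = [2, 3, 1]                                                        # c_0, c_1, c_2
ranks = [0, np.linalg.matrix_rank(d1), np.linalg.matrix_rank(d2), 0]    # rank partial_n, n = 0..3
betti = [dims[n] - ranks[n] - ranks[n + 1] for n in range(3)]

print(dims, ranks[1:3], betti)             # [2, 3, 1] [1, 1] [1, 1, 0]
for n in range(3):                         # c_n = beta_n + (ranks adjacent to C_n)
    assert dims[n] == betti[n] + ranks[n] + ranks[n + 1]
```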
An \(\mathbb{R}-\)filtration of \((C_{*},\partial_{*})\) consists of a family of subcomplexes \((C_{*}(t),\partial_{*})\subset(C_{*},\partial_{*}),\) i.e. \(\partial_{*}(C_{*}(t))\subset C_{*-1}(t),\) indexed by \(t\in\mathbb{R}\) s.t. \((C_{*}(t),\partial_{*})\subseteq(C_{*}(t^{\prime}),\partial_{*})\) for \(t<t^{\prime}\) and \(\bigcup_{t}(C_{*}(t),\partial_{*})=(C_{*},\partial_{*}).\) The complex \((C_{*}(t),\partial_{*})\) is referred to as the \(t-\)filtration or the \(t-\)subcomplex of \((C_{*},\partial_{*}).\)
## 3 Classical Morse theory
Morse theory for finite dimensional smooth manifolds considers proper smooth maps, bounded from below, with all critical points non-degenerate 3.
Footnote 3: a critical point \(x\in Cr(f):=\{x\in M\mid df(x)=0\}\) is non-degenerate if the Hessian of \(f\) at \(x,\) i.e. \(\partial^{2}f/\partial x_{i}\partial x_{j}(x),\) in a coordinate system (and then in any) is a non-degenerate quadratic form
To such a function and an arbitrary field \(\kappa\) one associates a collection of chain complexes of finite dimensional \(\kappa-\)vector spaces equipped with an \(\mathbb{R}-\)filtration, locally constant for \(t\in\mathbb{R}\setminus CR(f),\) which calculates the homology of \(M\) and of \(f^{-1}((-\infty,t])\) and whose components \(C_{n}\) resp. \(C_{n}(t)\) are the \(\kappa-\)vector spaces generated by the critical points of index \(n\) resp. the critical points of index \(n\) with critical values smaller than or equal to \(t\). The boundary maps \(\partial_{n}:C_{n}\to C_{n-1}\) depend on additional data, but different additional data provide isomorphic complexes. Any such complex will be named the _Morse complex of \(f\)_.
The additional data contains a smooth vector field \(X\) which has \(f\) as a good Lyapunov function, in Milnor's terminology a _gradient-like vector field_, cf. [9]. Precisely, \(X(f)(x)<0\) iff \(x\in M\setminus Cr(f)\) with \(Cr(f)=\{x\in M\mid df(x)=0\}\) and for any \(x\in Cr(f)\) one can find a chart \(\varphi_{x}:(U_{x},x)\rightarrow(\mathbb{R}^{n},0)\) s.t.
\[f\cdot\varphi_{x}^{-1}(x_{1},\cdots x_{n})= -1/2\sum_{i\leq k}x_{i}^{2}+1/2\sum_{i\geq k+1}x_{i}^{2}\] \[\varphi_{x}^{*}X= \sum_{i\leq k}x_{i}\partial_{x_{i}}-\sum_{i\geq k+1}x_{i} \partial_{x_{i}}.\]
Equivalently, \(X=-grad_{g}f\) for \(g\) a complete Riemannian metric on \(M\) which in the neighborhood of each critical point is flat.
For a smooth vector field \(X\) on a smooth manifold \(M^{n}\) call _rest point_ a point \(x\in M\) s.t. \(X(x)=0\) and _trajectory_ a smooth map \(\gamma:\mathbb{R}\supset U\rightarrow M\) which satisfies \(d\gamma(t)/dt=X(\gamma(t)).\) Denote by \(\gamma_{y}\) the maximal trajectory with \(\gamma_{y}(0)=y,\)\(y\in M,\) and define
\[W_{x}^{\mp}:=\{y\in M\mid\lim_{t\rightarrow\mp\infty}\gamma_{y}(t)=x\},\]
the unstable (-) resp. stable (+) set of the rest point \(x.\) If the vector field is as above then these sets are actually submanifolds diffeomorphic to the euclidean space of dimension \(index(x)\) resp. \(n-index(x).\)
An _additional data_ consists of a smooth vector field \(X\) which has the unstable manifolds transversal to the stable manifolds and has \(f\) as a good Lyapunov function, plus a collection \(\mathcal{O}=\{o_{x},x\in Cr(f)\}\) of orientations \(o_{x}\), one for each unstable manifold \(W^{-}_{x}\). Gradient-like vector fields exist, and any vector field with hyperbolic rest points having \(f\) as Lyapunov function can be perturbed arbitrarily little on an arbitrarily small neighborhood of the rest points, while remaining the same in a smaller neighborhood of these points, to make the stable and unstable manifolds transversal, cf. [10] or [6].
If \(x\) is a critical point of index \(k\) and \(y\) a critical point of index \((k-1)\) then \(W^{-}_{x}\cap W^{+}_{y}\) is a union of components \(\gamma,\) each a submanifold of dimension one, called an _instanton_ (= isolated trajectory between rest points).
The orientation \(o_{x}\) compared to the orientation \(o_{y}\) followed by the orientation from \(x\) to \(y\) along the instanton \(\gamma,\) provides a sign \(\epsilon(\gamma)=\pm 1\) and then one defines the _algebraic cardinality_\(I(x,y)=\sum_{\gamma}\epsilon(\gamma)\) of the set of instantons from \(x\) to \(y\).
The additional data \((X,\mathcal{O})\) provides the vector spaces \(C^{X,\mathcal{O}}_{k}\) generated by the rest points of \(X\) of index \(k\) (the same as the critical points of \(f\) of index \(k\)), hence equipped with a basis. The subspace \(C^{f,X,\mathcal{O}}_{k}(t)\) is generated by the rest points \(x\) with \(f(x)\leq t\) and the linear map \(\partial^{X,\mathcal{O}}_{k}:C^{X,\mathcal{O}}_{k}\to C^{X,\mathcal{O}}_{k-1}\) is given by the matrix with entries \(I(x,y).\)
The main theorem of elementary Morse theory claims that if \(f\) is a proper smooth function bounded from below with all critical points non-degenerate and \((X,\mathcal{O})\) is an additional data, then the vector spaces and linear maps described above define a chain complex \((C^{X,\mathcal{O}}_{*},\partial^{X,\mathcal{O}}_{*})\) with \(\mathbb{R}-\)filtration \((C^{X,\mathcal{O}}_{*}(t),\partial^{X,\mathcal{O}}_{*})\) which calculates the homology of \(M\) and of \(f^{-1}((-\infty,t]).\) The \(t-\)filtration subcomplex has all vector spaces of finite dimension. This implies the famous Morse inequalities, the independence of the additional data up to a non-canonical isomorphism, i.e. the complexes derived from different additional data are isomorphic, cf. [7], as well as the fact that \(\mathrm{rank}\ \partial_{k}\neq 0\) implies the existence of instantons for any vector field having \(f\) as Lyapunov function.
We refer to any of these complexes \((C^{X,\mathcal{O}}_{*},\partial^{X,\mathcal{O}}_{*})\) and the sub complexes \((C^{f,X,\mathcal{O}}_{*}(t),\partial^{X,\mathcal{O}}_{*})\) as Morse complexes. The A.M.T as proposed has an extension to A.M-N.T (Alternative Morse Novikov theory) but this is not discussed in this paper.
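To make the construction concrete, here is a hedged toy computation (the example and all sign choices are illustrative assumptions, not taken from the text): for a "heart-shaped" \(2-\)sphere with two maxima \(p_{1},p_{2}\), one saddle \(s\) and one minimum \(m\), the signed instanton counts can be taken as \(I(p_{1},s)=1\), \(I(p_{2},s)=-1\), \(I(s,m)=0\), and the resulting Morse complex recovers the homology of \(S^{2}\):

```python
import numpy as np

# Morse complex of a toy Morse function on S^2 with critical points
#   index 2: p1, p2    index 1: s    index 0: m
# Boundary matrices built from (assumed) signed instanton counts I(x, y).
d2 = np.array([[1.0, -1.0]])      # partial_2 : C_2 (p1, p2) -> C_1 (s)
d1 = np.array([[0.0]])            # partial_1 : C_1 (s) -> C_0 (m)

assert np.allclose(d1 @ d2, 0)    # partial o partial = 0

def betti(dims, boundaries):
    """Betti numbers of a chain complex given dim C_n and partial_n : C_n -> C_{n-1}."""
    ranks = [np.linalg.matrix_rank(b) for b in boundaries]  # rank partial_n for n = 1..top
    ranks = [0] + ranks + [0]                               # pad with zero boundary maps
    return [dims[n] - ranks[n] - ranks[n + 1] for n in range(len(dims))]

print(betti([1, 1, 2], [d1, d2]))   # -> [1, 0, 1], the Betti numbers of S^2
```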
## 4 The maps \(\hat{\delta}^{f}_{r},\ ^{+}\hat{\gamma}^{f}_{r},\) and \({}^{+}\hat{\mu}^{f}_{r}\) and the associated chain complex
**The maps \(\hat{\delta}^{f}_{r},\ ^{+}\hat{\mu}^{f}_{r},\ ^{+}\hat{\gamma}^{f}_{r},\ ^{+}\hat{\lambda}^{f}_{r}\)**
Let \(f:X\rightarrow\mathbb{R}\) be a continuous real-valued map and let \(H_{*}\) denote the singular homology 4 with coefficients in a fixed field \(\kappa\).
Footnote 4: or any other homology theory which satisfies the Eilenberg-Steenrod axioms and commutes with the direct limits
For any \(a\in\mathbb{R}\) denote by
\[\begin{split} X^{f}_{a}&:=f^{-1}((-\infty,a]),\,X^{ a}_{f}:=f^{-1}([a,\infty))\\ X^{f}_{<a}&:=f^{-1}((-\infty,a)),\,X^{>a}_{f}:=f^{- 1}((a,\infty)).\end{split} \tag{1}\]
The real number \(a\in\mathbb{R}\) is called _regular value_ if
\[H_{*}(X^{f}_{a},X^{f}_{<a})\oplus H_{*}(X^{a}_{f},X^{>a}_{f})=0\]
and _critical value_ if not regular. Denote the set of critical values by \(CR(f).\)
For any \(a\in\mathbb{R}\) denote by
\[\begin{split}\mathbb{I}^{f}_{a}(r)&:=\mathrm{i}mg(H _{r}(X^{f}_{a})\to H_{r}(X)),\mathbb{I}^{a}_{f}(r):=\mathrm{i}mg(H_{r}(X^{a}_ {f})\to H_{r}(X))\\ \mathbb{I}^{f}_{<a}(r)&:=\mathrm{i}mg(H_{r}(X^{f}_{< a})\to H_{r}(X)),\mathbb{I}^{>a}_{f}(r):=\mathrm{i}mg(H_{r}(X^{>a}_{f}) \to H_{r}(X))\end{split} \tag{2}\]
with the arrows representing the inclusion induced linear maps.
Note that \(H_{r}(X_{<a})=\varinjlim_{\epsilon\to 0}H_{r}(X_{a-\epsilon})\) and then \(\mathbb{I}_{<a}(r)=\varinjlim_{\epsilon\to 0}\mathbb{I}_{a-\epsilon}(r)\).
Similarly \(H_{r}(X^{>a})=\varinjlim_{\epsilon\to 0}H_{r}(X^{a+\epsilon})\) and then \(\mathbb{I}^{>a}(r)=\varinjlim_{\epsilon\to 0}\mathbb{I}^{a+\epsilon}(r)\).
For any \(a,b\in\mathbb{R}\) let
\[\mathbb{F}_{r}^{f}(a,b):= \mathbb{I}_{a}^{f}(r)\cap\mathbb{I}_{f}^{b}(r), \tag{3}\] \[\mathbb{F}_{r}^{f}(<a,b):= \mathbb{I}_{<a}^{f}(r)\cap\mathbb{I}_{f}^{b}(r),\] \[\mathbb{F}_{r}^{f}(a,>b):= \mathbb{I}_{a}^{f}(r)\cap\mathbb{I}_{f}^{>b}(r),\]
and define:
\[\boxed{\hat{\delta}_{r}^{f}(a,b):=\mathbb{F}_{r}(a,b)/(\mathbb{F}_{r}(<a,b)+ \mathbb{F}_{r}(a,>b))}.\]
Let
\[\delta_{r}^{f}(a,b):=\dim\hat{\delta}_{r}^{f}(a,b)\in\mathbb{Z}_{\geq 0} \cup\infty.\]
We also introduce
\[\mathbb{I}_{\infty}^{f}(r) :=\cup_{a\in\mathbb{R}}\mathbb{I}_{a}^{f}(r)=H_{r}(X), \tag{4}\] \[\mathbb{I}_{f}^{-\infty}(r) :=\cup_{a\in\mathbb{R}}\mathbb{I}_{f}^{a}(r)=H_{r}(X),\] \[\mathbb{I}_{-\infty}^{f}(r) :=\cap_{a\in\mathbb{R}}\mathbb{I}_{a}^{f}(r),\] \[\mathbb{I}_{f}^{\infty}(r) :=\cap_{a\in\mathbb{R}}\mathbb{I}_{f}^{a}(r)\]
and then define
\[\boxed{{}^{+}\hat{\mu}_{r}(a):=\varprojlim_{y\to\infty}(\mathbb{I}_{a}(r)\cap \mathbb{I}^{y}(r))/(\mathbb{I}_{<a}(r)\cap\mathbb{I}^{y}(r)).}\]
It can be shown in view of Proposition 5.6 below that when \(\dim H_{r}(X_{a},X_{<a})<\infty\) one has
\[{}^{+}\hat{\mu}_{r}(a):=(\mathbb{I}_{a}(r)\cap\mathbb{I}^{\infty}(r))/( \mathbb{I}_{<a}(r)\cap\mathbb{I}^{\infty}(r)).\]
**Observation 4.1**: _The assignment \(\hat{\delta}_{r}\) defines the map_
\(\hat{\delta}_{r}:\mathbb{R}^{2}\rightsquigarrow\kappa-\mathrm{Vector\ spaces}\)
_and the assignment \({}^{+}\hat{\mu}_{r}\) defines the map_
\({}^{+}\hat{\mu}_{r}:\mathbb{R}\rightsquigarrow\kappa-\mathrm{Vector\ spaces},\)
_which in view of the definition of regular / critical values have their supports contained in \(CR(f)\times CR(f)\) and \(CR(f)\) respectively. If \(f\) is bounded from above then \(\varprojlim_{x\to\infty}\mathbb{I}^{x}(r)=0\) and then \({}^{+}\hat{\mu}_{r}(a)=0\) for any \(a\in\mathbb{R}\)._
For \(a,b\in\mathbb{R}\) let
\[\mathbb{T}_{r}^{f}(a,b):= \ker(H_{r}(X_{a}^{f})\to H_{r}(X_{b}^{f}))\ \ \mathrm{if}\ \ a\leq b \tag{5}\] \[\mathbb{T}_{r}^{f}(<a,b):= \ker(H_{r}(X_{<a}^{f})\to H_{r}(X_{b}^{f}))\ \ \mathrm{if}\ \ a\leq b\] \[\mathbb{T}_{r}^{f}(a,<b):= \ker(H_{r}(X_{a}^{f})\to H_{r}(X_{<b}^{f}))\ \ \mathrm{if}\ \ a<b\]
and define:
\[\boxed{{}^{+}\hat{\gamma}_{r}^{f}(a,b):=\mathbb{T}_{r}(a,b)/(\iota\mathbb{T}_{r}(<a,b)+\mathbb{T}_{r}(a,<b))},\]
with \(\iota:\mathbb{T}_{r}(<a,b)\to\mathbb{T}_{r}(a,b)\) the inclusion induced linear map. When we want to specify the source and the target we write \({}^{c}\iota_{a}^{b}:\mathbb{T}_{r}(a,c)\to\mathbb{T}_{r}(b,c),a<b<c,\) for the inclusion induced linear map. The \(\iota\) in the definition of \({}^{+}\hat{\gamma}_{r}^{f}\) above is actually \({}^{b}\iota_{<a}^{a}\).
Let
\[{}^{+}\gamma_{r}^{f}(a,b):=\dim{}^{+}\hat{\gamma}_{r}^{f}(a,b)\in\mathbb{Z}_{\geq 0}\cup\infty.\]
We also introduce
\[\begin{array}{c}\mathbb{T}_{r}(-\infty,a):=\varprojlim_{a>t\rightarrow-\infty}\mathbb{T}_{r}(t,a),\\ \mathbb{T}_{r}(a,\infty):=\varinjlim_{a<t\rightarrow\infty}\mathbb{T}_{r}(a,t),\end{array} \tag{6}\]
the first limit w.r. to the linear maps \({}^{a}\iota_{t}^{t^{\prime}}:\mathbb{T}_{r}(t,a)\rightarrow\mathbb{T}_{r}(t^{\prime},a)\) for \(t<t^{\prime},\) the second w.r. to the inclusions \(\mathbb{T}_{r}(a,t)\subseteq\mathbb{T}_{r}(a,t^{\prime})\) for \(t<t^{\prime},\) and then define 5
Footnote 5: A more appropriate definition is \({}^{+}\hat{\lambda}_{r}^{f}(a):=\cap_{t<a}\mathrm{i}mg({}^{a}\iota_{t}^{<a}:\mathbb{T}_{r}(t,a)\rightarrow\mathbb{T}_{r}(<a,a))\); under the hypotheses of weak tameness satisfied in what follows, they coincide
\[\boxed{{}^{+}\hat{\lambda}_{r}^{f}(a):=\mathrm{i}mg({}^{a}\iota_{-\infty}^{<a}:\mathbb{T}_{r}(-\infty,a)\rightarrow\mathbb{T}_{r}(<a,a))}. \tag{7}\]
Let
\[{}^{+}\lambda_{r}^{f}(a):=\dim{}^{+}\hat{\lambda}_{r}^{f}(a)\in\mathbb{Z}_{\geq 0}\cup\infty.\]
**Observation 4.2**: _The assignment \({}^{+}\hat{\gamma}_{r}^{f}\) defines the map_
\({}^{+}\hat{\gamma}_{r}^{f}:\mathbb{R}_{+}^{2}\rightsquigarrow\kappa-\mathrm{ Vector\ spaces}\)
_and the assignment \({}^{+}\hat{\lambda}_{r}\) defines the map_
\({}^{+}\hat{\lambda}_{r}^{f}:\mathbb{R}\rightsquigarrow\kappa-\mathrm{Vector\ spaces}\)
_which in view of the definition of regular / critical values have the supports contained in \(CR(f)\times CR(f)\) and \(CR(f)\) resp.. If \(f\) is bounded from below then \({}^{+}\hat{\lambda}_{r}^{f}(a)=0.\)_
**Relationship with barcodes**
The point \((a,b)\in supp\delta_{r}^{f}\) corresponds to what in [1] is called an \(r-\)closed bar code \([a,b]\) of multiplicity \(\dim\hat{\delta}_{r}^{f}(a,b)\) when \(a\leq b\) and to an \((r-1)-\)open bar code \((b,a)\) when \(a>b\)6 with the multiplicity \(\dim\hat{\delta}_{r}(a,b)\). Similarly the point \((a,b)\) with \(a<b\) in \(supp^{+}\gamma_{r}^{f}\) corresponds to what in [1] is called an \(r-\)closed-open bar code \([a,b)\) of multiplicity \(\dim{}^{+}\hat{\gamma}_{r}^{f}(a,b)\).
Footnote 6: this because \(a\) and \(b\) should denote the ends of an interval
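As a simple illustration, worked out here for concreteness (it is not taken from the text or from [1]): let \(f:S^{1}\to\mathbb{R}\) be the height function on the unit circle, with critical values \(-1\) and \(1\). Then \(\mathbb{I}_{a}(0)=\kappa\) for \(a\geq-1\), \(\mathbb{I}^{b}(0)=\kappa\) for \(b\leq 1\), while \(\mathbb{I}_{a}(1)\neq 0\) only for \(a\geq 1\) and \(\mathbb{I}^{b}(1)\neq 0\) only for \(b\leq-1\). One checks that \(\hat{\delta}_{0}^{f}\) is non-zero only at \((-1,1)\) and \(\hat{\delta}_{1}^{f}\) only at \((1,-1)\), each equal to \(\kappa\), and that all \({}^{+}\hat{\gamma}_{r}^{f}\), \({}^{+}\hat{\mu}_{r}^{f}\) and \({}^{+}\hat{\lambda}_{r}^{f}\) vanish. In the bar-code language this is one closed bar \([-1,1]\) and one open bar \((-1,1)\), both in degree \(0\) (the latter recorded by \(\hat{\delta}_{1}^{f}(1,-1)\)), consistent with \(H_{0}(S^{1})=H_{1}(S^{1})=\kappa\).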
**Definition 4.3**: _The map \(f\) is called weakly tame 7 if for any \(t\in\mathbb{R}\) one has \(\dim H_{r}(X_{t},X_{<t})<\infty\), and tame if in addition \(CR(f)\) is a discrete subset of \(\mathbb{R}\)._
Footnote 7: actually left weakly tame
All real-valued maps on finite or infinite dimensional manifolds considered in C.M.T are tame.
As shown in Observation 5.5, \(\dim H_{r}(X_{t},X_{<t})<\infty\) implies that \(\dim\hat{\delta}_{r}(t,s)\), \(\dim^{+}\hat{\gamma}_{r}^{f}(t,s)\), \(\dim^{+}\hat{\mu}_{r}^{f}(t)\), \(\dim^{+}\hat{\gamma}_{r-1}^{f}(u,t)\) and \(\dim^{+}\hat{\lambda}_{r-1}^{f}(t)\) are finite for any \(s\) or \(u\); hence if \(f\) is weakly tame the maps \(\delta_{r}^{f}\) and \({}^{+}\gamma_{r}^{f}\) are \(\mathbb{Z}_{\geq 0}-\)valued functions with the same support as \(\hat{\delta}_{r}^{f}\) and \({}^{+}\hat{\gamma}_{r}^{f}\), and so are the maps \({}^{+}\mu_{r}^{f}=\dim^{+}\hat{\mu}_{r}^{f}\) and \({}^{+}\lambda_{r}^{f}=\dim^{+}\hat{\lambda}_{r}^{f}.\) This is because \(\dim H_{r}(X_{a},X_{<a})\) finite implies that \(\dim\mathrm{c}oker(H_{r}(X_{<a})\to H_{r}(X_{a})),\) hence \(\dim\mathbb{I}_{a}(r)/\mathbb{I}_{<a}(r),\) and \(\dim\mathbb{T}_{r}(<a,a)\) are finite.
As shown below in Proposition (5.6)\(\dim H_{r}(X_{t},X_{<t})<\infty\) implies that
1. \(supp\delta_{r}^{f}\cap t\times\mathbb{R}\) is finite and empty when \(t\) is a regular value,
2. \(supp\ \gamma_{r}^{f}\cap t\times\mathbb{R}\) and \(supp\ \gamma_{r-1}^{f}\cap\mathbb{R}\times t\) is finite and empty when \(t\) is a regular value.
**The chain complexes \((C_{*}^{\delta,\gamma,\mu},\partial_{*}^{\delta,\gamma,\mu})\) and \((C_{*}^{\delta,\gamma,\mu}(t),\partial_{*}^{\delta,\gamma,\mu})\)**
Define
\[\begin{split} C_{n}^{-}:=&\bigoplus_{\{a,b|a<b\}}~{}~{} ~{}^{+}\hat{\gamma}_{n-1}^{f}(a,b)\\ H_{n}:=&\bigoplus_{\{a,b\}}~{}~{}\hat{\delta}_{n}^{ f}(a,b)\oplus\bigoplus_{\{a\}}~{}^{+}\hat{\mu}_{n}(a)\\ C_{n}^{+}:=&\bigoplus_{\{a,b|a<b\}}~{}~{}^{+}\hat{ \gamma}_{n}^{f}(a,b)\\ C_{n}:=& C_{n}^{-}\oplus H_{n}\oplus C_{n}^{+}\end{split} \tag{8}\]
and
\[\partial_{n}=\begin{bmatrix}0&0&0\\ 0&0&0\\ id&0&0\end{bmatrix}\]
with the \(t-\)filtration provided by
\[\begin{split} C_{n}(t):=&\bigoplus_{\{(a,b)|a<b\leq t\}}{}^{+}\hat{\gamma}_{n-1}(a,b)\oplus\\ &\bigoplus_{\{(a,b)|a\leq t\}}\hat{\delta}_{n}^{f}(a,b)\oplus \bigoplus_{\{(a,b)|a\leq t<b\}}{}^{+}\hat{\gamma}_{n}^{f}(a,b)\oplus\bigoplus_{\{a|a\leq t\}}{}^{+}\hat{\mu}_{n}(a)\oplus\\ &\bigoplus_{\{(a,b)|a<b\leq t\}}{}^{+}\hat{\gamma}_{n}^{f}(a,b). \end{split} \tag{9}\]
Clearly \(\partial_{n}(C_{n}(t))\subset C_{n-1}(t).\) This sub complex has
\[\begin{split} C_{n}^{-}(t)&=\bigoplus_{\{(a,b)|a<b\leq t\}}{}^{+}\hat{\gamma}_{n-1}(a,b)\\ H_{n}(t)&=\bigoplus_{\{(a,b)|a\leq t\}}\hat{\delta}_{n}^{f}(a,b)\oplus\bigoplus_{\{a|a\leq t\}}{}^{+}\hat{\mu}_{n}(a)\oplus\bigoplus_{\{(a,b)|a\leq t<b\}}{}^{+}\hat{\gamma}_{n}(a,b)\\ C_{n}^{+}(t)&=\bigoplus_{\{(a,b)|a<b\leq t\}}{}^{+}\hat{\gamma}_{n}^{f}(a,b).\end{split} \tag{10}\]
From now on one denotes the above complex (8) by \((C_{*}^{\delta,\gamma,\mu},\partial_{*}^{\delta,\gamma,\mu})\) and the \(t-\)filtration sub complex (10) by \((C_{*}^{\delta,\gamma,\mu}(t),\partial_{*}^{\delta,\gamma,\mu})\).
The tameness implies that the set of critical values can be totally ordered as a sequence \(\{\cdots<c_{i}<c_{i+1}<\cdots\},\) possibly infinite in both directions, and the filtration is locally constant in \(t\in\mathbb{R}\setminus CR(f).\) If \(f\) is in addition bounded from below then the sequence of critical values is bounded from below and each of the sub complexes \((C_{*}^{\delta,\gamma,\mu}(t),\partial_{*}^{\delta,\gamma,\mu})\) consists of finite dimensional vector spaces, hence the considerations in Section 2 apply.
The following Theorem establishes the relation between the maps \(\hat{\delta}_{r}^{f},+\hat{\mu}_{r}^{f},+\hat{\gamma}_{r}^{f},+\hat{\lambda}_{ r}^{f}\) and the homology of the pair \((X_{a},X_{<a}),\) and of the spaces \(X_{a},\)\(X.\) The proof will be sketched in Section 5 below.
**Theorem 4.4**:
1. _Suppose_ \(f\) _is weakly tame. Then one has_ \[H_{r}(X_{a},X_{<a})=\bigoplus_{t\in\mathbb{R}}\hat{\delta}_{r}^{f}(a,t)\oplus^{+ }\hat{\mu}_{r}^{f}(a)\oplus\bigoplus_{t\in(-\infty,a)}{}^{+}\hat{\gamma}_{r-1}^ {f}(t,a)\oplus^{+}\hat{\lambda}_{r-1}^{f}(a)\oplus\bigoplus_{t\in(a,\infty)}{} ^{+}\hat{\gamma}_{r}^{f}(a,t)\]
2. _If in addition_ \(f\) _is tame_ 8 _and bounded from below then_ (a) \(H_{r}(X_{t})=\bigoplus_{(a,b)|a\leq t}\hat{\delta}_{r}^{f}(a,b)\oplus\bigoplus_{(a,b)|a\leq t<b}{}^{+}\hat{\gamma}_{r}^{f}(a,b)\oplus\bigoplus_{a\leq t}{}^{+}\hat{\mu}_{r}^{f}(a),\) (b) \(H_{r}(X)=\bigoplus_{(a,b)}\hat{\delta}_{r}^{f}(a,b)\oplus\bigoplus_{a\in\mathbb{R}}{}^{+}\hat{\mu}_{r}(a)\) Footnote 8: this remains true under considerably weaker hypotheses than stated in [3]
Note that:
i) in these formulae the vector spaces involved are trivial unless both \(a\) and \(t\) are critical values,
ii) all sums in item 1 have finitely many non-vanishing terms,
iii) since \(f\) is bounded from below, a situation we meet in C.M.T, the sums in item 2(a) contain only finitely many non-vanishing terms,
iv) if \(X\) is a compact ANR, hence \(f\) is also bounded from above, in item 2(b) one has \({}^{+}\hat{\mu}_{r}^{f}(a)=0\) and the sum has finitely many non-vanishing terms.
The proof of this theorem is sketched in Section 5.
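As a hedged illustration of how item 2(a) turns bar codes into Betti numbers of sub-levels (a toy sketch with made-up bar codes of multiplicity one, not data from the text): closed bars contribute to \(H_{r}(X_{t})\) once their left end is \(\leq t\), closed-open bars only while \(a\leq t<b\).

```python
def dim_H(t, closed_bars, closed_open_bars, mu_values=()):
    """dim H_r(X_t) from bar-code data in degree r, in the spirit of Theorem 4.4, item 2(a).

    closed_bars:      list of (a, b) in supp(delta_r), each contributing once a <= t
    closed_open_bars: list of (a, b), a < b, in supp(gamma_r), contributing while a <= t < b
    mu_values:        list of a in supp(mu_r), contributing once a <= t
    (all multiplicities are taken to be one in this toy sketch)
    """
    return (sum(1 for a, b in closed_bars if a <= t)
            + sum(1 for a, b in closed_open_bars if a <= t < b)
            + sum(1 for a in mu_values if a <= t))

# Made-up degree-0 bar codes of a map with critical values 0, 1, 2:
closed = [(0, 2)]            # a component that survives in the whole space
closed_open = [(1, 2)]       # a component born at 1 that merges at 2
print([dim_H(t, closed, closed_open) for t in (-1, 0, 1, 2)])   # -> [0, 1, 2, 1]
```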
If \(f:M\to\mathbb{R}\) is a proper smooth function on a finite dimensional manifold 9 with all critical points non-degenerate, denote by \(c_{r}(f,a)\) the number of critical points of index \(r\) with \(f(x)=a\). Then one has:
Footnote 9: or smooth function which satisfies Palais-Smale condition on an infinite dimensional manifold
**Proposition 4.5**: _(Morse Lemma)_
\[c_{r}(f,a)=\dim H_{r}(M_{a},M_{<a}).\]
Proposition (4.5) remains true if \(M\) is an infinite dimensional smooth manifold and \(f\) satisfies Palais-Smale condition C 10. Theorem 4.4 and Proposition 4.5 imply the main result of this paper, Theorem 4.6
Footnote 10: any sequence of points \(x_{i}\in M\) with \(f(x_{i})\) bounded and \(df(x_{i})\to 0\) contains a subsequence convergent to a critical point
**Theorem 4.6**: _Suppose \(f\) is a proper Morse function bounded from below on a smooth manifold, or a smooth function with non-degenerate critical points on a smooth infinite dimensional manifold which satisfies the Palais-Smale condition. Then for any \(t\) the chain complex \((C_{*}^{f,X,\mathcal{O}}(t),\partial_{*}^{X,\mathcal{O}})\) and the chain complex \((C^{\delta,\gamma,\mu}(f)_{*}(t),\partial_{*}^{\delta,\gamma,\mu})\) are isomorphic._
_Proof:_ Since both complexes are complexes of finite dimensional vector spaces, in view of Observation 2.1, it suffices to check that the dimension of the \(r\) components and the dimension of \(r-\)homology vector spaces are the same.
Indeed:
* \(\dim C_{r}^{f,X,\mathcal{O}}(t),\) which by Proposition 4.5 is equal to \(\dim(\oplus_{a\leq t}H_{r}(X_{a},X_{<a})),\) is by Theorem 4.4 item 1 equal to \(\dim(C_{r}^{-}(t)\oplus H_{r}(t)\oplus C_{r}^{+}(t))=\dim C_{r}^{\delta,\gamma,\mu}(f)(t)\) (cf. (10)).
* the dimension of the homology of the chain complex \((C_{*}^{f,X,\mathcal{O}}(t),\partial_{*}^{X,\mathcal{O}}),\) which by C.M.T is the dimension of \(H_{r}(M_{t}),\) is by Theorem 4.4 item 2 equal to \(\dim H_{r}(t),\) the dimension of the homology of the complex \((C_{*}^{\delta,\gamma,\mu}(t),\partial_{*}^{\delta,\gamma,\mu})\) (cf. (8)).
One expects that the chain complexes \((C^{\delta,\gamma,\mu}_{*},\partial^{\delta,\gamma,\mu}_{*})\) and \((C^{X,\mathcal{O}}_{r},\partial^{X,\mathcal{O}})\) are isomorphic by an isomorphism which preserves the \(\mathbb{R}-\)filtration; this issue will be addressed later, being algebraically more involved.
In particular all conclusions about relationships between the rest points and instantons (for a vector field which admits a Morse function as Lyapunov) derived via homology can be derived from bar codes and in considerably more general situations involving more general spaces and more general flows.
## 5 About proofs
Since in this section we will use direct and inverse limits it will be useful to have in mind the following facts which will be tacitly applied.
1. For a direct system of short exact sequences of vector spaces the direct limit remains an exact sequence.
2. For an inverse system of short exact sequences \[0\to\{A_{\alpha},{}^{A}\iota_{\alpha}^{\alpha^{\prime}}\}\to\{B_{\alpha},{}^{B}\iota_{\alpha}^{\alpha^{\prime}}\}\to\{C_{\alpha},{}^{C}\iota_{\alpha}^{\alpha^{\prime}}\}\to 0\] passing to the inverse limit induces the exact sequence \[0\to\varprojlim_{\alpha}A_{\alpha}\to\varprojlim_{\alpha}B_{\alpha}\to\varprojlim_{\alpha}C_{\alpha}\to{\varprojlim_{\alpha}}^{1}A_{\alpha}\to\cdots\] with \({\varprojlim_{\alpha}}^{1}A_{\alpha}=0\) if the system \(\{A_{\alpha},{}^{A}\iota_{\alpha}^{\alpha^{\prime}}\}\) satisfies the Mittag-Leffler condition, in particular if the \({}^{A}\iota_{\alpha}^{\alpha^{\prime}}\) are surjective or if the \({}^{A}\iota_{\alpha}^{\alpha^{\prime}}\) are injective and the \(A_{\alpha}\) are subspaces of a finite dimensional space \(A\).
For any \(a,\alpha,\beta\in\mathbb{R},\alpha<\beta\) introduce
\[\boxed{\mathbb{F}_{r}^{f}(a\times[\alpha,\beta)):=\mathbb{F}_{r}(a,\alpha)/( \mathbb{F}_{r}(<a,\alpha)+\mathbb{F}_{r}(a,\beta)).} \tag{11}\]
As shown in [2] one has
**Proposition 5.1**: _For any \(a,b_{1},b_{2},b_{3}\in\mathbb{R}\) with \(b_{1}<b_{2}<b_{3}\) one has the obviously induced linear maps \(\iota\) and \(\pi\) which provide the short exact sequence_ \[0\to\mathbb{F}_{r}(a\times[b_{2},b_{3}))\xrightarrow{\iota}\mathbb{F}_{r}(a\times[b_{1},b_{3}))\xrightarrow{\pi}\mathbb{F}_{r}(a\times[b_{1},b_{2}))\to 0.\]
Unless obvious to the reader, details on the description of \(\iota\) and \(\pi\) and on the proof of this proposition can be found in [2].
In view of this Proposition (5.1) for any \(x^{\prime},x,y,y^{\prime},\;x^{\prime}<x<y<y^{\prime}\) one has the commutative diagrams
both cartesian and co-cartesian with the horizontal arrows surjective and the vertical arrows injective. This implies (exercise left to the reader) the commutation of the direct and inverse limit stated below.
**Observation 5.2**: _The canonical map below is an isomorphism_
\[\varprojlim_{x\to-\infty}\varprojlim_{y\to\infty}\mathbb{F}_{r}(a\times[x,y)) \to\varprojlim_{y\to\infty}\varinjlim_{x\to-\infty}\mathbb{F}_{r}(a\times[x,y)). \tag{12}\]
If one defines \(\mathbb{F}_{r}(a\times(-\infty,\infty))\) as either one of these two limits, in view of (4) one concludes that
\[\mathbb{F}_{r}(a\times(-\infty,\infty)):=\varprojlim_{x\to-\infty}\varprojlim_{ y\to\infty}\mathbb{F}_{r}(a\times[x,y))=\]
\[=\varprojlim_{y\to\infty}(\mathbb{I}_{a}(r)\cap\mathbb{I}^{-\infty}(r))/( \mathbb{I}_{<a}(r)\cap\mathbb{I}^{-\infty}(r)+\mathbb{I}_{a}(r)\cap\mathbb{I} ^{y}(r))=\varprojlim_{y\to\infty}\mathbb{I}_{a}(r)/(\mathbb{I}_{<a}(r)+ \mathbb{I}_{a}(r)\cap\mathbb{I}^{y}(r))\]
Since passing to the inverse limit in the exact sequence
\[0\to(\mathbb{I}_{a}(r)\cap\mathbb{I}^{y}(r))/(\mathbb{I}_{<a}(r)\cap\mathbb{I}^{y}(r))\to\mathbb{I}_{a}(r)/\mathbb{I}_{<a}(r)\to\mathbb{I}_{a}(r)/(\mathbb{I}_{<a}(r)+\mathbb{I}_{a}(r)\cap\mathbb{I}^{y}(r))\to 0\]
implies
\[\ker(\,\mathbb{I}_{a}(r)/\mathbb{I}_{<a}(r)\to\varprojlim_{y\to\infty} \mathbb{I}_{a}(r)/(\mathbb{I}_{<a}(r)+\mathbb{I}_{a}(r)\cap\mathbb{I}^{y}(r)) \,)=\varprojlim_{y\to\infty}(\mathbb{I}_{a}(r)\cap\mathbb{I}^{y}(r))/( \mathbb{I}_{<a}(r)\cap\mathbb{I}^{y}(r))=^{+}\hat{\mu}_{r}(a)\]
one obtains
\[\mathbb{I}_{a}(r)/\mathbb{I}_{<a}(r)\simeq\mathbb{F}_{r}(a\times(-\infty, \infty))\oplus^{+}\hat{\mu}_{r}(a). \tag{13}\]
Similarly, for any \(a,b\in\mathbb{R}\) and \(a\leq\beta<\gamma\) and \(\alpha<\beta<b\) introduce
\[\begin{split}\boxed{\mathbb{T}_{r}(a\times(\beta,\gamma]):=&\mathbb{T}_{r}(a,\gamma)/(\iota\mathbb{T}_{r}(<a,\gamma)+\mathbb{T}_{r}(a,\beta))}\\ \boxed{\mathbb{T}_{r}((\alpha,\beta]\times b):=&\mathbb{T}_{r}(\beta,b)/(\iota\mathbb{T}_{r}(\alpha,b)+\mathbb{T}_{r}(\beta,<b))}\end{split} \tag{14}\]
and verify as in [2]
**Proposition 5.3**:
1. _For any_ \(a,b_{1},b_{2},b_{3}\in\mathbb{R},\ a\leq b_{1}<b_{2}<b_{3},\) _one has the obviously induced linear maps_ \(\iota\) _and_ \(\pi\) _which provide the short exact sequence_ \[0\to\mathbb{T}_{r}(a\times(b_{1},b_{2}])\xrightarrow{\iota}\mathbb{T}_{r}(a\times(b_{1},b_{3}])\xrightarrow{\pi}\mathbb{T}_{r}(a\times(b_{2},b_{3}])\to 0.\]
2. _For any_ \(a_{1},a_{2},a_{3},b\in\mathbb{R},\ a_{1}<a_{2}<a_{3}<b,\) _one has the obviously induced linear maps_ \(\iota\) _and_ \(\pi\) _which provide the short exact sequence_ \[0\to\mathbb{T}_{r}((a_{1},a_{2}]\times b)\xrightarrow{\iota}\mathbb{T}_{r}((a_{1},a_{3}]\times b)\xrightarrow{\pi}\mathbb{T}_{r}((a_{2},a_{3}]\times b)\to 0.\]
Details on the description of \(\iota\) and \(\pi\) and the verifications of this proposition can be found in [2] or [1].
In view of this Proposition (5.3) for \(a<y^{\prime}<y<x^{\prime}<x\) resp. \(x^{\prime}<x<y<y^{\prime}<b\), one has the commutative diagrams
both cartesian and co-cartesian, whose horizontal arrows are surjections and the vertical arrows injections. As before they imply the commutation of the direct and inverse limit stated below.
**Observation 5.4**:
_The canonical maps below are isomorphisms_
\[\begin{split}\lim_{x\to a}\lim_{y\to\infty}\mathbb{T}_{r}(a\times(x,y])&\to\lim_{y\to\infty}\lim_{x\to a}\mathbb{T}_{r}(a\times(x,y]),\\ \lim_{y\to b}\lim_{x\to-\infty}\mathbb{T}_{r}((x,y]\times b)&\to\lim_{x\to-\infty}\lim_{y\to b}\mathbb{T}_{r}((x,y]\times b),\end{split} \tag{15}\]
If one defines \(\mathbb{T}_{r}((-\infty,a)\times a)\) using either one of the second row limits in (15) one obtains \(\mathbb{T}_{r}((-\infty,a)\times a)=\mathbb{T}_{r}(<a,a)/\iota\mathbb{T}_{r}(-\infty,a)\) which implies in view of (7)
\[\mathbb{T}_{r}(<a,a)=\mathbb{T}_{r}((-\infty,a)\times a)\oplus^{+}\hat{\lambda }_{r}^{f}(a). \tag{16}\]
**The case \(H_{r}(X_{a},X_{<a})\) is finite dimensional**
Consider the obvious linear maps
1. \(\mathbb{F}_{r}(a,b)/\mathbb{F}_{r}(<a,b)\to\mathbb{I}_{a}(r)/\mathbb{I}_{<a}(r)\), induced by the inclusion \(\mathbb{F}_{r}(a,b)\subseteq\mathbb{I}_{a}(r)\).
**Observation 5.5**: _If \(\dim H_{r}(X_{a},X_{<a})<\infty\) then for any \(s\) and \(u\) the vector spaces \(\hat{\delta}_{r}^{f}(a,s)\), \({}^{+}\hat{\gamma}_{r}^{f}(a,s)\), \({}^{+}\hat{\mu}_{r}^{f}(a)\), \({}^{+}\hat{\gamma}_{r-1}^{f}(u,a)\) and \({}^{+}\hat{\lambda}_{r-1}^{f}(a)\) are finite dimensional._
**Proposition 5.6**: _Suppose \(\dim H_{r}(X_{a},X_{<a})<\infty\). Then:_
1. _there exists a finite set of real numbers_ \(\{x_{1},x_{2},\cdots x_{p}\}\) _s.t._ 1. \(\hat{\delta}_{r}^{f}(a,x)=0\) _if_ \(x\notin\{x_{1},\cdots x_{p}\}\) 2. \(\mathbb{F}_{r}(a\times[x,y))\simeq\bigoplus_{\{i|x\leq x_{i}<y\}}\hat{\delta}_{r}^{f}(a,x_{i})\)
2. _there exists a finite set of real numbers_ \(\{y_{1},y_{2},\cdots y_{r}\}\) _with_ \(a<y_{1}<y_{2}\cdots y_{r}<\infty\) _s.t._ 1. \({}^{+}\hat{\gamma}_{r}^{f}(a,y)=0\) _if_ \(y\notin\{y_{1},\cdots y_{r}\}\) 2. \(\mathbb{T}_{r}(a\times(x,y])\simeq\bigoplus_{\{i|x<y_{i}\leq y\}}{}^{+}\hat{\gamma}_{r}^{f}(a,y_{i})\)
3. _there exists a finite set of real numbers_ \(\{x_{1},x_{2},\cdots x_{k}\}\) _with_ \(-\infty<x_{1}<x_{2}\cdots x_{k}<a\) _s.t._ 1. \({}^{+}\hat{\gamma}_{r}^{f}(x,a)=0\) _if_ \(x\notin\{x_{1},\cdots x_{k}\}\) 2. \(\mathbb{T}_{r}((x,y]\times a)\simeq\bigoplus_{\{i|x<x_{i}\leq y\}}{}^{+}\hat{\gamma}_{r}^{f}(x_{i},a)\)_._
The proof is presented in [2] but for the reader's convenience it is also sketched below.
Note that the \(\mathbb{Z}_{\geq 0}-\)valued maps
\(y\rightsquigarrow\dim\mathbb{F}_{r}(a\times[x,y))\) for \(x\) fixed,
\(y\rightsquigarrow\dim\mathbb{T}_{r}(a\times(x,y])\) for \(x\) fixed, \(x>a\),
increase when \(y\) increases to \(\infty\) in view of Propositions (5.1) and (5.3) item 1.
Similarly, the \(\mathbb{Z}_{\geq 0}-\)valued maps
\(x\rightsquigarrow\dim\mathbb{T}_{r-1}((x,y]\times a)\) for \(y\) fixed, \(y<a\),
increases when \(x\) decreases to \(-\infty\) in view of Proposition (5.3) item 2. All these functions remain bounded by \(\dim H_{r}(X_{a},X_{<a})\) so they should have finitely many jumps and no more than \(\dim H_{r}(X_{a},X_{<a})\).
The jump at \(y\) for the first two functions and at \(x\) for the third function, when it appears, is, in view of Propositions 5.1 and 5.3, given by
\[\begin{split}\lim_{\epsilon\to 0}(\dim\mathbb{F}_{r}(a\times[x,y+\epsilon))-\dim\mathbb{F}_{r}(a\times[x,y)))&=\lim_{\epsilon\to 0}\dim\mathbb{F}_{r}(a\times[y,y+\epsilon))=\delta_{r}^{f}(a,y),\\ \lim_{\epsilon\to 0}(\dim\mathbb{T}_{r}(a\times(x,y])-\dim\mathbb{T}_{r}(a\times(x,y-\epsilon]))&=\lim_{\epsilon\to 0}\dim\mathbb{T}_{r}(a\times(y-\epsilon,y])={}^{+}\gamma_{r}^{f}(a,y)\\ \lim_{\epsilon\to 0}(\dim\mathbb{T}_{r}((x-\epsilon,y]\times a)-\dim\mathbb{T}_{r}((x,y]\times a))&=\lim_{\epsilon\to 0}\dim\mathbb{T}_{r}((x-\epsilon,x]\times a)={}^{+}\gamma_{r}^{f}(x,a)\end{split} \tag{17}\]
the first because \(\mathbb{I}^{>a}(r)=\varinjlim_{\epsilon\to 0}\mathbb{I}^{a+\epsilon}(r)\), the second because \(\mathbb{T}_{r}(a,<b)=\varinjlim_{\epsilon\to 0}\mathbb{T}_{r}(a,b-\epsilon)\), the third because \(\varinjlim_{\epsilon\to 0}\mathbb{T}_{r}(x-\epsilon,a)=\mathbb{T}_{r}(<x,a)\).
In view of Propositions (5.1) and (5.3) observe that:
\(\dim\mathbb{F}_{r}(a\times[x,y))=0\) for \(x\) large enough towards \(\infty\), or for \(y\) negative enough, i.e. \(y\) towards \(-\infty\),
\(\dim\mathbb{T}_{r}(a\times(x,y])=0\) for \(x\) large enough towards \(\infty\) or \(x\) close enough to \(a\), and
\(\dim\mathbb{T}_{r}((x,y]\times a)=0\) for \(y\) close enough to \(a\) or for \(x\) negative enough, i.e. \(x\) towards \(-\infty\).
These observations imply items 1, 2, and 3 as formulated. Of course all \(x_{i}\) and \(y_{i}\) claimed in items 1, 2 and 3 are critical values since they refer to nontrivial jumps which imply that \(\delta_{r}^{f}(a,y)\) resp. \({}^{+}\hat{\gamma}_{r}^{f}(a,y)\) or \({}^{+}\hat{\gamma}_{r}^{f}(x,a)\) are nontrivial.
**Proof of theorem (4.4)**
_Proof:_ Item 1.
The exact homology sequence of the pair \((X_{a},X_{<a})\) induces the isomorphism
\[H_{r}(X_{a},X_{<a})=\text{coker}(H_{r}(X_{<a})\to H_{r}(X_{a}))\oplus T_{(r-1)} (<a,a). \tag{18}\]
The commutative diagram with all rows and columns exact sequences
implies the isomorphism
\[coker(H_{r}(X_{<a})\to H_{r}(X_{a}))\simeq\mathbb{I}_{a}(r)/\mathbb{I}_{<a}(r)\oplus\mathbb{T}_{r}(a,\infty)/\iota\mathbb{T}_{r}(<a,\infty). \tag{19}\]
In view of Proposition 5.6 item 2
\[\mathbb{T}_{r}(a,\infty)/\iota\mathbb{T}_{r}(<a,\infty)=\varinjlim_{y\to \infty}\mathbb{T}_{r}(a\times(a,y])=\bigoplus_{y\in(a,\infty)}{}^{+}\hat{ \gamma}_{r}^{f}(a,y). \tag{20}\]
In view of (13) and Proposition 5.6 item 1 one has
\[\mathbb{I}_{a}(r)/\mathbb{I}_{<a}(r)=\bigoplus_{\{i|1\leq i\leq r\}}\hat{ \delta}_{r}(a,x_{i})\oplus\ ^{+}\hat{\mu}_{r}^{f}(a). \tag{21}\]
In view of (16) and Proposition 5.6 item 3 one has
\[T_{r-1}(<a,a)=\bigoplus_{\{i|1\leq i\leq k\}}\hat{\gamma}_{r-1}(x_{i},a)\oplus ^{+}\hat{\lambda}_{r-1}(a) \tag{22}\]
Then the statement of item 1 follows by combining (19), (20), (21) and ( 22).
_Proof:_ item 2. As \(f\) is tame and bounded from below, for any \(a\) there are only finitely many critical values smaller than or equal to \(a,\) say \(c_{1},c_{2},\cdots c_{k}\leq a,\) hence a finite number of values \(s,\)\(s\leq a,\) such that the vector space \(\mathbb{I}_{s}/\mathbb{I}_{<s}\) is not trivial, and all these vector spaces are of finite dimension. Then \(\mathbb{I}_{a}(r)=\oplus_{s\leq a}\mathbb{I}_{s}/\mathbb{I}_{<s}\) and then
\[\mathbb{I}_{a}(r)=\oplus_{s\leq a}\mathbb{I}_{s}(r)/\mathbb{I}_{<s}(r)= \bigoplus_{s\leq a,t\in\mathbb{R}}\hat{\delta}_{r}^{f}(s,t)\oplus\bigoplus_{ s\leq a}\ ^{+}\hat{\mu}_{r}^{f}(s) \tag{23}\]
Note that \(H_{r}(X_{a})=\mathbb{I}_{a}(r)\oplus\mathbb{T}_{r}(a,\infty).\) To calculate \(\mathbb{T}_{r}(a,\infty)\) a few additional facts are necessary.
First note that since \(f\) is weakly tame, in view of Proposition (5.6) item 2 one has
\[\mathbb{T}_{r}(a\times(\beta,\infty))=\varinjlim_{y\to\infty}\mathbb{T}_{r}(a \times(\beta,y])=\oplus_{s>\beta}{}^{+}\hat{\gamma}_{r}^{f}(a,s) \tag{24}\]
Since \(f\) is tame and bounded from below, for any \(a\) the set of critical values smaller than or equal to \(a\) is finite; let \(c_{1}<c_{2}<\cdots c_{k}\leq a\) be these critical values. Let \(K:=\bigcup_{1\leq i\leq k}supp\ ^{+}\hat{\gamma}_{r}\cap(c_{i}\times(c_{i},\infty))\subset\mathbb{R}_{+}^{2}\). In view of Proposition 5.6 item 2 this set is finite.
Since \(f\) is tame, \(\mathbb{T}_{r}(c_{i}\times(a,\infty))=\mathbb{T}_{r}(c_{i},\infty)/(\iota(\mathbb{T}_{r}(c_{i-1},\infty))+\mathbb{T}_{r}(c_{i},a)),\) and one has
\[\mathbb{T}_{r}(a,\infty)=\bigoplus_{1\leq i\leq k}\mathbb{T}_{r}(c_{i}\times(a,\infty))=\bigoplus_{1\leq i\leq k}\big(\bigoplus_{a<s}\ ^{+}\hat{\gamma}_{r}^{f}(c_{i},s)\big)=\bigoplus_{(t,s)\in K}\ ^{+}\hat{\gamma}_{r}^{f}(t,s) \tag{25}\]
Combining (23) and (25) one completes the proof of part (a). To prove part (b) one replaces \(X\) by \(X_{b}\) and \(\infty\) by \(b\) and one repeats the arguments. \(\blacksquare\)
**Proof of Proposition 4.5**
For each of the critical points \(\{x_{1},x_{2},\cdots x_{p}\}\) located on the level \(f^{-1}(a)\) choose a Morse chart in whose coordinates \(f\) has the canonical form (quadratic functions) and \(\epsilon\) small enough that the discs of radius \(\epsilon\) in these charts provide closed disjoint neighborhoods of the critical points. Denote each such neighborhood by \(D(i)\) with center \(x_{i}\) and their union by \(D\). The set of critical points \(\{x_{1},x_{2},\cdots x_{p}\}\) is contained in the interior of \(D\). Let \(q:=c_{r}(f,a)\) be the number of critical points of index \(r\); clearly \(q\leq p\). Let \(M^{\prime}:=M\setminus\{x_{1},\cdots x_{p}\}\) and observe that \(M^{\prime}_{a}\) is a manifold with boundary whose interior is \(M_{<a}\), hence \(H_{r}(M^{\prime}_{a},M_{<a})=0\) for any \(r\). Then \(H_{r}(M_{a},M_{<a})=H_{r}(M_{a},M^{\prime}_{a})\) which by excision is isomorphic to \(H_{r}(D_{a},D_{a}\setminus\{x_{i_{1}},\cdots x_{i_{q}}\})\), with \(x_{i_{1}},\cdots x_{i_{q}}\) the critical points of index \(r\) and \(D_{a}=f^{-1}((-\infty,a])\cap D\), which by the Morse lemma is isomorphic to the direct sum of \(q\) copies of \(H_{r}(D^{r},\partial D^{r})=\kappa\), i.e. to \(\kappa^{q}\).
|
2309.16170 | Precise Well-plate Placing Utilizing Contact During Sliding with
Tactile-based Pose Estimation for Laboratory Automation | Micro well-plates are an apparatus commonly used in chemical and biological
experiments that are a few centimeters thick and contain wells or divots. In
this paper, we aim to solve the task of placing the well-plate onto a
well-plate holder (referred to as holder). This task is challenging due to the
holder's raised grooves being a few millimeters in height, with a clearance of
less than 1 mm between the well-plate and holder, thus requiring precise
control during placing. Our placing task has the following challenges: 1) The
holder's detected pose is uncertain; 2) the required accuracy is at the
millimeter to sub-millimeter level due to the raised groove's shallow height
and small clearance; 3) the holder is not fixed to a desk and is susceptible to
movement from external forces. To address these challenges, we developed
methods including a) using tactile sensors for accurate pose estimation of the
grasped well-plate to handle issue (1); b) sliding the well-plate onto the
target holder while maintaining contact with the holder's groove and estimating
its orientation for accurate alignment. This allows for high precision control
(addressing issue (2)) and prevents displacement of the holder during placement
(addressing issue (3)). We demonstrate a high success rate for the well-plate
placing task, even under noisy observation of the holder's pose. | Sameer Pai, Kuniyuki Takahashi, Shimpei Masuda, Naoki Fukaya, Koki Yamane, Avinash Ummadisingu | 2023-09-28T04:57:23Z | http://arxiv.org/abs/2309.16170v2 | Laboratory Automation: Precision Insertion with Adaptive Fingers utilizing Contact through Sliding with Tactile-based Pose Estimation
###### Abstract
Micro well-plates are commonly used apparatus in chemical and biological experiments that are a few centimeters in thickness with wells in them. The task we aim to solve is to place (insert) them onto a well-plate holder with grooves a few millimeters in height. Our insertion task has the following facets: 1) There is uncertainty in the detection of the position and pose of the well-plate and well-plate holder, 2) the accuracy required is in the order of millimeter to sub-millimeter, 3) the well-plate holder is not fastened, and moves with external force, 4) the groove is shallow, and 5) the width of the groove is small. Addressing these challenges, we developed a) an adaptive finger gripper with accurate detection of finger position (for (1)), b) grasped object pose estimation using tactile sensors (for (1)), c) a method to insert the well-plate into the target holder by sliding the well-plate while maintaining contact with the edge of the holder (for (2-4)), and d) estimating the orientation of the edge and aligning the well-plate so that the holder does not move when maintaining contact with the edge (for (5)). We show a significantly high success rate on the insertion task of the well-plate, even under added noise. 4
Footnote 4: An accompanying video is available at the following link: [https://drive.google.com/file/d/1UxyJ3XixqXPnGpfw-Ps5T5GYQxoc61/viewPusp=sharing](https://drive.google.com/file/d/1UxyJ3XixqXPnGpfw-Ps5T5GYQxoc61/viewPusp=sharing)
## I Introduction
Manipulation with precision in the millimeter to sub-millimeter order is often required when inserting a grasped object into a machine or measuring apparatus. In factories, robots are able to achieve this precision manipulation with high speed and efficiency through extremely accurate knowledge of the ground-truth location of both the object and apparatus. Similarly, existing laboratory automation research typically achieves precision manipulation by grounding the position of the apparatus, not allowing for apparatus or object movement [1, 2, 3, 4, 5, 6]. However, when the experimental process or the types and positions of apparatus to be used change from time to time, it is time and money-consuming for robot engineers to develop a system for each change. We believe the ability to precisely place objects at arbitrary positions without requiring robotic engineers' participation is necessary to enable automated biological and chemical experiments that are dynamic or not likely repeated enough times to warrant full automation.
One of the approaches to achieve this is to accurately detect the position and pose of the object, even though the object's position is not pre-defined. However, even state-of-the-art object position detection methods using deep learning are unable to achieve positional accuracy on the millimeter or sub-millimeter order, which is necessary for successful insertion. Since laboratory automation requires tracking reagents, for example, it is realistic to provide a fiducial marker, such as an arUco marker, to detect object position. Although the arUco marker can detect object position and pose, the detection results are inaccurate due to recognition errors and marker misalignment, making it costly to achieve accurate recognition. Even with significant effort, estimating an arbitrary object's position with high accuracy is challenging.
Because it is challenging to achieve high-precision manipulation using only object detection with image processing, approaches combining force and tactile sensors have also been explored [7, 8, 9, 10]. Although peg-insertion tasks have been researched broadly, the task and setting are often based on a fixed base with a deep hole.
The task we aim to solve is to insert a plate of a few centimeters in thickness, called a well-plate, commonly used in chemical and biological experiments, onto a well-plate holder with grooves a few millimeters in height (Fig. 1). The clearance between the well-plate and the holder is less than 1 mm. The well-plate and holder are located arbitrarily for evaluation and analysis. This holder is representative of typical loading areas for apparatus, such as microplate readers or reagent dispensers. To perform this task, the following issues must be addressed (Fig. 1):
1. The position of the well-plate and holder can be estimated by markers, but a shift of more than a few millimeters occurs due to detection error and marker misalignment.
Fig. 1: Challenges of an insertion task for a well-plate onto holder
2. The required accuracy is a millimeter to a sub-millimeter order due to the height of the groove and clearance between the well-plate and the holder.
3. Because the height of the groove of the holder is shallow, the method widely used in peg-insertion of estimating contact points and edges by a pivoting motion of the peg does not work well, because the plate and the surface of the holder come into contact.
4. Because the height of the groove of the holder is shallow and the width of the grooves that make up the "hole" (of the standard robotic peg-in-hole task) is small, there is a possibility that the plate may fit the wrong way.
5. The holder is not grounded to a desk and is lightweight, so the holder will move if the external force is large. This needs to be avoided because of potential collision with other apparatus leading to possible breakage or spillage of chemicals.
The methods developed in this paper, and their contributions to the challenges (1)-(5), are as follows:
* By developing a new gripper with an adaptive mechanism, the position errors are absorbed. However, since the adaptive mechanism is challenging to manipulate precisely, we developed a mechanism to detect the finger position (for (1)).
* The gripper is also equipped with a tactile sensor, and using this sensor, we developed a method to accurately estimate the pose of the well-plate after the gripper grasps it (for (1)).
* Development of a method to insert the well-plate into the target holder by sliding the well-plate while maintaining contact with the edge in the holder (for (2)-(4)).
* Development of a method that prevents the holder from moving by estimating the orientation of its edge and aligning the well-plate with it while maintaining contact (for (2)-(5)).
## II Related Works
Our insertion task has the five challenges described in Section I. Although the insertion task has been researched broadly, most work targets challenges 1) and 2). To sidestep the others, prior work assumes that 3) the area around the hole is flat, 4) the hole is deep, and 5) the holder is fixed.
The peg-insertion task has been researched from both hardware and software perspectives. The hardware approach introduces adaptive/compliance mechanisms that absorb uncertainties in the object's shape and position as well as modeling errors, and that prevent breakage of the object. Compliant behavior that exploits contact also makes it possible to realize tasks with millimeter to sub-millimeter order accuracy without precise control [11, 12]. However, these approaches assume that the holder with the hole is fixed so that contact with the environment can be relied upon by the adaptive mechanism.
Research from a software point of view can be broadly divided into learning-based approaches and analytical approaches. In learning-based approaches, there are many vision-based approaches. Even if the position of a peg or hole is unknown, it can be recognized from a camera image. However, the rough accuracy of vision-based object pose estimation limits its adaptability to tasks requiring millimeter to sub-millimeter orders [13, 14].
The research using tactile sensors has been demonstrated to provide accurate pose estimation [15, 16, 17]. In particular, vision-based tactile sensors, such as GelSight, have been researched recently, and tasks with millimeter to sub-millimeter order accuracy have been realized [8, 9, 10, 18]. Recently, the learning-based approach has been made more efficient, such as using transfer learning from simulation [19] and meta-learning [20], but in general, it is expensive to collect training data [21].
The analytical approach uses a force-torque (F/T) sensor to estimate the contact model between a peg and a hole [22]. It also uses contact to reduce uncertainty [23] and considers insertion strategies for pegs of various shapes [7]. In many analytical approaches, to accurately estimate the hole's geometry during insertion, the object is first brought into contact with the edge of the hole and then pivoted to orient it. Adapting this method to objects with extremely shallow holes is challenging because the peg and the holder come into contact when the rotating motion is attempted. In addition, since an external force is applied, the holder would move. Some studies assume that the holder moves [24], but there the holder can only rotate about one axis, not move horizontally.
## III Method
Our proposed method consists of the following three components (Fig. 2 & Fig. 3): A) object grasping using an adaptive fingers gripper, B) object pose estimation using tactile sensors, and C) insertion utilizing contact through sliding.
### _Custom adaptive fingers gripper_
This section describes our custom gripper. There are various factors that contribute to errors in object position estimation, such as recognition errors and misalignment of markers. If grasping is performed under such errors, there is a possibility that either the object to be grasped or the robot may be damaged. By absorbing such errors on the hardware side, pick and insertion can be realized safely.
We have developed a parallel gripper with adaptive fingers that can adaptively move in two directions, horizontal and vertical. Fig 3 shows the overall diagram and the finger's movement. The gripper is operated by a servomotor (Dynamixel XM430-W350-R) and is composed of adaptive fingers, two GelSight Mini tactile sensors [25], and Intel RealSense Depth Cameras D435 and D435i (Fig. 3).
Using a linear slider, the adaptive finger allows the fingertip to slide smoothly up to \(5\,\mathrm{mm}\) in two directions. In the horizontal direction, the amount of slide can be limited arbitrarily, down to \(0\,\mathrm{mm}\), using an adjustment screw. Depending on the task, the horizontal springs can be easily replaced with springs of different spring constants. In the horizontal and vertical directions, spring constants of \(0.067\,\mathrm{N/mm}\) and \(0.074\,\mathrm{N/mm}\) were used, respectively. The reaction force of the spring causes the finger to return rapidly to its initial position when the gripped object is released, and such rapid finger movements are an issue when handling an object containing liquid. For this reason, a small aluminum oil damper (Big Bore Shock for Minute Buggy, Kyosho Corp.) filled with damper oil of viscosity #2000 (OP.1656 Silicon Oil #2000, Tamiya) was mounted in parallel in the vertical direction to stabilize the motion.
The adaptive finger passively moves in contact with an object, but precise manipulation requires knowing the finger position accurately for object pose estimation. Therefore, to measure this position, we developed a structure that measures the amount of fingertip movement using arUco markers [26]. The displacement of the adaptive finger can be measured with two arUco markers attached to the base of the finger and to its movable part. The RealSense D435i detects the arUco markers, which are \(15\;\mathrm{mm}\) in size. A physical displacement sensor, in contrast, would interfere with the adaptive behavior by exerting small forces on the finger and impeding delicate manipulation.
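As a concrete illustration, the following is a minimal sketch, not the authors' implementation, of how the fingertip displacement could be computed from the two markers. The marker dictionary, marker IDs, and camera parameters are assumptions, and the classic OpenCV contrib aruco API is assumed (newer OpenCV releases expose the same functionality through `cv2.aruco.ArucoDetector`).

```python
import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed dictionary
MARKER_LEN = 0.015                # 15 mm markers, as stated in the text
BASE_ID, TIP_ID = 0, 1            # assumed IDs: finger base vs. movable fingertip

def finger_displacement(gray, camera_matrix, dist_coeffs):
    """Return the tip-marker position expressed in the base-marker frame (metres), or None."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None or not {BASE_ID, TIP_ID} <= set(ids.flatten().tolist()):
        return None                                    # at least one marker was not seen
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LEN, camera_matrix, dist_coeffs)
    poses = {int(i): (r.ravel(), t.ravel())
             for i, r, t in zip(ids.flatten(), rvecs, tvecs)}
    r_base, t_base = poses[BASE_ID]
    _, t_tip = poses[TIP_ID]
    R_base, _ = cv2.Rodrigues(r_base.reshape(3, 1))
    # displacement of the movable fingertip relative to the fixed finger base
    return R_base.T @ (t_tip - t_base)
```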
GelSight Mini is attached to each finger tip for object pose estimation. It can detect contact states with high accuracy and sensitivity, but excessive contact can damage it. The proposed adaptive function makes it possible to reduce the impact force at the time of contact, making it easier to place the grasped object stably and helping to protect the sensor.
### _Pose estimation using tactile sensors_
In this section, we describe the pose estimation module for the well-plate using the GelSight Mini tactile sensors. After a successful grasp, the contact area between the sensor and the well-plate will be a single edge (Fig. 2 (a)). To detect the orientation and location of this edge, we first process the GelSight Mini image into a depth map by deep learning, as described in [27]. We then take the depth map and use the Canny edge detector to find areas of high gradient in the map. Finally, we take the output of the Canny edge detector and use the Hough transform to find a straight line in the image. In the event that multiple lines are detected by the Hough transform, the longest is chosen, as longer contact lines are more likely to be robust to sensor noise. We perform this line detection on both sensors and then fit a plane to the two lines as an estimate for the pose of the well-plate. This estimated pose is used to change the pose of the well-plate before performing the insertion. Details of the insertion process are described in Section IV-C.
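The edge-detection part of this pipeline could look roughly as follows. This is only a sketch under the assumption that the depth map reconstructed from the GelSight image is available as a floating-point array; the Canny and Hough parameters are illustrative, not the values used by the authors.

```python
import cv2
import numpy as np

def longest_contact_line(depth):
    """Canny + probabilistic Hough on a GelSight depth map; returns (x1, y1, x2, y2) or None."""
    img8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img8, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    if lines is None:
        return None
    # keep the longest line, since longer contact lines are more robust to sensor noise
    return max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))

def fit_plane(points):
    """Least-squares plane through 3-D points (N, 3); returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid
```

Mapping the two detected 2-D lines into 3-D points in the gripper frame (using the known sensor geometry) and fitting a plane through them then gives the well-plate pose estimate.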
### _Insertion utilizing contact through sliding_
Throughout this section, \(p_{EE},R_{EE},v_{EE},\omega_{EE}\) denote the world-frame position, rotation matrix, velocity, and angular velocity of the end-effector (Fig. 2 (b)). In other words, a point \(x\) in the end-effector frame would have coordinates \(R_{EE}x+p_{EE}\) in the world frame.
Because we will be dealing with contact with the environment, traditional position-based control does not suffice for this task. Instead, we design a _velocity-based force controller_ to command a desired force \(F_{des}\) against the environment. Specifically, given the desired \(F_{des}\) and the current (calibrated) force \(F_{m}\) from the F/T sensor, we impose the proportional velocity law:
\[v_{EE}=k_{F}(F_{m}-F_{des}), \tag{1}\]
where \(k_{F}\) is a control constant.
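In code, the control law of Eq. (1) is a one-liner; the gain below is a placeholder, not a value from the paper.

```python
import numpy as np

K_F = 0.002        # assumed gain in (m/s)/N; tuned per task

def commanded_velocity(f_measured, f_desired, k_f=K_F):
    """v_EE = k_F (F_m - F_des); forces are 3-vectors in the world frame."""
    return k_f * (np.asarray(f_measured) - np.asarray(f_desired))
```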
Using this controller, we design a multi-phase insertion process for the well-plate, inspired by the "human" insertion motion of inserting one edge of the plate and then sliding it into place. First, we insert the edge of the well-plate into the holder by rotating it by a pre-defined angle and then lowering it until contact is detected (see the attached video). The rotation significantly increases the tolerance of the insertion motion, allowing for significant inaccuracy in the position estimation of the holder. After the edge of the well-plate is in contact with the holder's surface, the desired force is commanded along the long axis of the well-plate, eventually driving the plate into contact with the groove of the holder.
After contact with the holder, the end-effector's motion is analyzed over time. First, consider the motion along a normal \(\mathbf{n}\) to the groove of the holder (Fig. 2 (b)). On this axis, Eq. (1) gives us that
\[\mathbf{n}\cdot v_{EE}=k_{F}\mathbf{n}\cdot(F_{m}-F_{des}), \tag{2}\]
Fig. 2: Diagram of pose estimation and insertion task.
This implies that if the desired force along \(\mathbf{n}\) is reached, \(\mathbf{n}\cdot v_{EE}=0\), and so the end-effector moves perpendicular to the normal. In contrast, on the axis parallel to the holder's edge, the only force acting on the end-effector is static friction, meaning that \(v_{EE}\) will always have a component on this axis. We can conclude, then, that the overall trajectory of the well-plate after contact under this control law is a sliding motion along the edge of the holder. Since the post-contact displacement is entirely parallel to the holder, we can collect the position data over time and then apply linear regression to estimate the orientation of the holder's edge.
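A minimal sketch of this estimation step is given below. The text specifies linear regression; one simple realisation is to take the principal direction (total least squares) of the end-effector positions logged after contact, which is what the sketch does.

```python
import numpy as np

def estimate_edge_direction(positions_xy):
    """positions_xy: (N, 2) end-effector positions in the holder plane, logged while sliding."""
    pts = np.asarray(positions_xy, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    direction = vt[0]                       # dominant direction of motion = edge direction
    return direction / np.linalg.norm(direction)
```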
Once the holder edge orientation is accurately estimated, we continue the insertion process by orienting the end-effector to be parallel to the holder edge. We do this by applying proportional control on the end-effector's orientation. Specifically, let \(\mathbf{x}_{est}\) be the estimated orientation of the holder's edge, and \(\mathbf{x}_{EE}\) be the current orientation of the well-plate's edge. Then, we command the angular velocity:
\[\omega_{EE}=k_{R}(\mathbf{x}_{EE}\times\mathbf{x}_{est}). \tag{3}\]
However, the commanded rotation is about the center of the end-effector, and, therefore, would cause the contact point between the well-plate and the holder to move. This movement would result in the well-plate losing contact with the holder, and, therefore, a failed insertion. To fix this, we assume that there is a fixed translation \(p_{c}\) from the end-effector to the contact point between the well-plate and the holder so that the contact point's position in the world frame is \(p_{EE}+R_{EE}p_{c}\) (Fig. 2 (b)). Then, the velocity of the contact point due to the above angular velocity is simply \(\omega_{EE}\times R_{EE}p_{c}\). Under this assumption, we can ensure that the contact point's velocity is zero by adding a feed-forward term to the velocity control law that counteracts this velocity:
\[v_{EE}=k_{F}(F_{m}-F_{des})-\omega_{EE}\times R_{EE}p_{c}, \tag{4}\]
Combining positional angular control with force-based positional control ensures a smooth insertion while maintaining contact with the well-plate holder.
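Eqs. (3) and (4) combine into a short command computation. The sketch below assumes the contact offset \(p_{c}\) is already known (its estimation is described next) and uses placeholder gains.

```python
import numpy as np

def insertion_command(x_ee, x_est, f_m, f_des, R_ee, p_c, k_r=0.5, k_f=0.002):
    """x_ee, x_est: unit vectors of the current and estimated edge directions (world frame);
    R_ee: 3x3 end-effector rotation; p_c: contact point in the end-effector frame."""
    w_ee = k_r * np.cross(x_ee, x_est)                      # Eq. (3): orientation control
    v_ee = k_f * (np.asarray(f_m) - np.asarray(f_des)) \
           - np.cross(w_ee, R_ee @ np.asarray(p_c))         # Eq. (4): force + feed-forward
    return v_ee, w_ee
```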
However, we do not know what the translation \(p_{c}\) is before insertion. To estimate it during sliding, consider again the normal \(\mathbf{n}\) to the holder's groove. Since the groove is linear, it can be modeled by the equation \(\mathbf{x}\cdot\mathbf{n}=c\) for some constant \(c\). Then, at any point in time that the well-plate is in contact with the holder, the contact point \(p_{EE}+R_{EE}p_{c}\) must be on the linear boundary of the holder, giving us the equation
\[\mathbf{n}\cdot(p_{EE}+R_{EE}p_{c})=c. \tag{5}\]
This is a linear constraint in \(p_{c}\) and \(c\), and so if we collect many samples of \(p_{EE},R_{EE}\) while contact occurs, we can again use linear regression to estimate both the holder's location and the contact point in real-time. Once \(p_{c}\) is estimated, we can use the angular control with the feed-forward velocity above to complete the insertion motion.
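The least-squares problem behind Eq. (5) can be written down directly: the sketch below stacks one linear constraint per logged sample and solves for \(p_{c}\) and \(c\) together (again an illustration, not the authors' code).

```python
import numpy as np

def estimate_contact_point(n, p_samples, R_samples):
    """n: (3,) groove normal; p_samples: (N, 3) end-effector positions;
    R_samples: (N, 3, 3) end-effector rotations. Returns (p_c, c)."""
    n = np.asarray(n, dtype=float)
    R = np.asarray(R_samples, dtype=float)
    # each row of A is [n^T R_EE, -1]; the unknown vector is [p_c; c]
    A = np.hstack([np.einsum('i,nij->nj', n, R), -np.ones((len(p_samples), 1))])
    b = -np.asarray(p_samples, dtype=float) @ n             # right-hand side: -n . p_EE
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3]
```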
To summarize, the insertion technique consists of the following stages:
1. Insert one edge of the well-plate at an angle to increase tolerance
2. Push the well-plate into the holder without rotation to regress the holder's orientation
3. Using the estimated orientation and end-effector position data, estimate the contact point \(p_{c}\).
4. Use proportional angular control to orient the well-plate while maintaining contact to complete insertion.
## IV Experimental Setup
### _Robot Setup_
Our robotic system, shown in Fig. 3, consists of a 7-DoF Franka Emika Panda robotic arm equipped with the proposed adaptive finger gripper with vision-based tactile sensors, GelSight Mini [25], as an end-effector, as described in Section III-A. A Leptrino force/torque (F/T) sensor (FFS055YA501U6) is mounted between the robot arm and the gripper. The force range of the F/T sensor is \(F_{xyz}=\pm 500\,\mathrm{N}\), the torque range is \(T_{xyz}=\pm 4\,\mathrm{N\,m}\), the resolution is \(\pm 1/2000\), and the sampling rate is \(1.2\,\mathrm{kHz}\). Furthermore, we use two Intel RealSense Depth Cameras, D435 and D435i, to overlook the workspace and to capture RGB images for detecting the arUco markers of the adaptive finger and for the pick and insertion of the well-plate. Note that no depth information is retrieved in this study. The robot, gripper, and all sensors are connected to a PC running Ubuntu 20.04.6 LTS with ROS Noetic. The PC is equipped with 32 GB RAM, an Intel Core i7-7700K CPU, and a GeForce RTX 2060 6G Rev.A.
### _Task Setup_
The well-plate holder, shown in Fig. 4, consists of two plates and four pillars. The well-plate holder is placed on a desk and moves when an external force is applied. The plate on the pillars into which the well-plate is inserted was created by 3D printing; because its surface is uneven depending on the printing accuracy of the 3D printer, a \(0.1\,\mathrm{mm}\) thick sticker is attached to it. The height of the grooves (hereafter referred to as the edge of the well-plate holder for simplicity) after the sticker is applied is \(1.4\,\mathrm{mm}\). The size of the well-plate is \(85.3\,\mathrm{mm}\times 127.4\,\mathrm{mm}\), and the size of the inside edge of the holder is \(86.2\,\mathrm{mm}\times 128.2\,\mathrm{mm}\). The clearance between the two objects is less than \(1\,\mathrm{mm}\). A holder for attaching the arUco marker is fixed to the pillar. The size of the arUco marker is \(50\,\mathrm{mm}\).
### _Process of pick and insertion_
In the process of pick and insertion, the robot grasps the well-plate placed on holder (A) and inserts it into holder (B) (Fig. 4 (a)). Since holders (A) and (B) have arUco markers attached, the positions for grasping and inserting the well-plate are known, although not accurately. Note that there are various marker systems other than arUco, but they do not differ significantly in accuracy [28], so the easy-to-use arUco marker is used in this study.
Fig. 3: Robot setup used for the experiment
First, the robot detects the positions of the two arUco markers. To improve the detection accuracy, dozens of images are acquired and a low-pass filter is applied; even so, errors of \(1-2\;\mathrm{mm}\) remain. Next, the robot moves to the position of holder (A), grasps the well-plate, and moves to a height of \(4\;\mathrm{cm}\) above holder (B) with a tilted pose. The path planning for the movement was done using MoveIt! [29]. At this time, the robot's wrist pose (roll, pitch, yaw) is tilted by about \((0\,deg,15\,deg,25\,deg)\) in the holder's coordinate system (Fig. 4 (b)). This pose was provided to the robot by direct teaching. Then, using the tactile sensors, the well-plate pose is estimated and controlled to \((0\,deg,15\,deg,25\,deg)\) in the well-plate holder's coordinate system. Finally, the insertion motion described in Section III-C is performed.
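The marker-detection step at the start of this process amounts to averaging detections over many frames. A minimal sketch is shown below, where the per-frame detection function is a placeholder.

```python
import numpy as np

def filtered_marker_position(detect_once, n_frames=30):
    """detect_once() -> (3,) marker translation in metres, or None when the marker is missed."""
    samples = [t for t in (detect_once() for _ in range(n_frames)) if t is not None]
    if not samples:
        return None
    return np.mean(samples, axis=0)     # simple low-pass; 1-2 mm of error still remains
```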
Since the relative 6D pose of the holder from the arUco marker is pre-defined, the different positions and poses of the well-plate holder (A) and (B) make no difference in the grasping or insertion behavior from the holder's coordinate system. In this experiment, the holders for pick and insertion are placed on the desk in random positions (Fig. 4 (a)).
## V Experiment Results
In this section, we evaluate the following two items: A) accuracy of pose estimation using tactile sensors with adaptive fingers (Section V-A), and B) evaluation of the insertion task of the proposed method (Section V-B).
### _Accuracy of pose estimation_
This section evaluates the accuracy of pose estimation. The robot grasps the well-plate placed on a horizontal desk. Since the well-plate is on a horizontal desk, the roll and pitch values of the well-plate are known in the robot coordinate system and are used as the ground truth (Fig. 3). Since a parallel gripper is used to grasp the plate, the surfaces of the fingers and the object are parallel, so the yaw value is not evaluated. We evaluate two gripper conditions: a) w/ arUco (ours, with detection of the adaptive finger displacement by arUco markers) and b) w/o arUco. The error between the well-plate pose estimated using GelSight and the ground truth is calculated. The mean and standard deviation over 10 grasps by the robot under these experimental conditions are shown in Table I.
In the end-effector coordinate system, the roll and pitch accuracy of the proposed method is very high. Without the arUco markers, on the other hand, the roll error was large. This is because the horizontal (y-axis) displacement of the adaptive finger has little effect on the object's posture, whereas the vertical (z-axis) displacement does affect the posture calculation. In this experiment, external forces of various magnitudes are applied to the adaptive finger, so without the arUco marker, which cannot detect the finger displacement, both the error and the variance are significant. The proposed method estimates the object pose accurately by using the arUco markers to compute the fingertip position. Whether the residual error of the proposed method is acceptable is verified by the insertion task evaluated in the next section.
### _Insertion task results_
This section evaluates the performance of the proposed method. As an evaluation experiment, the robot grasps a well-plate placed on holder (A) and inserts it into holder (B). Conditions with and without noise were evaluated to assess robustness to inaccurate marker positions. Noise \((s_{1},s_{2},s_{3},0,0,r_{1})\) is added to the arUco marker's 6D pose (x, y, z, roll, pitch, yaw) in world coordinates. The noise is random within \(\pm 3\;\mathrm{mm}\) in the translation directions \(s\) and within \(\pm 1.5\) degrees in the rotational direction \(r\). Since the well-plate and holder are assumed to be placed on apparatus on the desk and therefore level, the noise on roll and pitch is set to 0 in the world coordinate system.
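For reproducibility, the injected noise can be sketched as below; the magnitudes are those stated above, while the uniform distribution is an assumption since the text only says the noise is random.

```python
import numpy as np

def noisy_marker_pose(x, y, z, roll, pitch, yaw):
    s = np.random.uniform(-0.003, 0.003, size=3)   # +/- 3 mm on x, y, z
    r = np.random.uniform(-1.5, 1.5)               # +/- 1.5 deg on yaw only
    return x + s[0], y + s[1], z + s[2], roll, pitch, yaw + r
```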
In the evaluation, we measured the success rate of the well-plate placement on the holder (B) and holder (B)'s translation and rotation displacement measured by arUco during the insertion process. Success is defined as the well-plate being within the edges of the holder. A failure is a misalignment with the edge of the holder (e.g., the well-plate is on top of the holder's groove or outside it).
To compare the methods, we also evaluate the cases without pose estimation of the well-plate and without edge estimation. When the pose estimation of the grasped well-plate is not considered, the well-plate pose is adjusted using only the information from the arUco marker. When edge estimation is not considered, the movement along the edge described in Section III-C is not performed; instead, admittance control is used with the F/T sensor. As a baseline in the comparison experiment, an arUco marker-based approach is performed: the holders' poses are detected, and the robot executes pick-and-place. Note that the relative positional relationship between the arUco marker and the well-plate holder is provided to the robot. In addition, we evaluate the adaptive finger without the arUco marker. Comparisons without the adaptive fingers were not evaluated because the tactile sensors, gripper, and plates could be broken. The fact that compliance is effective for precision manipulation has been studied in many
\begin{table}
\begin{tabular}{c||c c} \hline Type & w/ arUco (ours) & w/o arUco \\ \hline \hline roll & \(\mathbf{0.17\pm 0.097}\) & \(2.43\pm 1.36\) \\ pitch & \(\mathbf{0.34\pm 0.24}\) & \(0.44\pm 0.34\) \\ roll \& pitch & \(\mathbf{0.52\pm 0.29}\) & \(2.87\pm 1.35\) \\ \hline \end{tabular}
\end{table} TABLE I: The accuracy of pose estimation. The pose is calculated in end-effector coordinate (Fig. 3). The error b/w estimated pose and ground truth \(mean\pm std\) (deg).
Fig. 4: Task setup and size of the holder and well-plate.
related studies, so it is out of scope in this paper.
From the above, the following five comparison experiments were conducted: (a) w/ edge estimation, w/ pose estimation (ours, the proposed method), (b) arUco marker-based approach (baseline of the conventional method), (c) w/ edge estimation, w/o pose estimation, (d) w/o edge estimation, w/ pose estimation, and (e) w/ edge estimation, w/ pose estimation, w/o arUco for the adaptive finger. Under these experimental conditions, each run 10 times, the number of successes and the mean and standard deviation of the holder displacement are shown in Table II. Since the w/o noise condition is easier than the w/ noise condition, the w/ noise experiments were not conducted when the number of successes was zero without noise. Since the arUco-based approach does not apply an external force that could move the holder, its displacement was not measured.
Under the w/o noise condition, the (b) arUco-based approach was highly accurate but still failed once. The compliance of the adaptive finger absorbs the error, but the clearance between the well-plate and the holder is less than \(1\,\mathrm{mm}\), so a slight error can cause a failure (Fig. 5 (b)). (c) w/o pose estimation was never successful: if there was even a slight roll tilt during the insertion, the well-plate ended up over the edge of the well-plate holder (Fig. 5 (c)). (d) w/o edge estimation was also never successful. The well-plate pose was controlled by admittance control so that the well-plate and the holder were parallel, but even after adjusting the gain, it failed because the well-plate reached the end of the holder before it became parallel (Fig. 5 (d)); with a higher gain, the well-plate oscillated and failed. In contrast, the proposed method achieved a 100% success rate. By estimating the orientation of the groove of the holder, the well-plate can be controlled to an appropriate pose, horizontal to the holder. Since the pose estimation using the tactile sensor removes any roll tilt of the well-plate in robot coordinates, the well-plate can contact the groove of the holder (Fig. 5 (a)). (e) w/o arUco for the adaptive finger also showed 100% success. The roll error was shown to be significant in well-plate pose estimation, but only when an external force is applied to one finger in the vertical direction (Section V-A). The well-plate is horizontal in this task, so no external force was applied to only one finger, and there was therefore no difference between the proposed method and w/o arUco in this study. Since adding noise did not change the results, we did not run the noise condition for (e).
The holder's displacement after insertion of the well-plate is small. Unless apparatuses are placed so close together that they touch one another, as can happen in chemical or biological laboratories, the proposed method will not break or knock over other objects. Under the conditions with added noise, the proposed method succeeded in 100% of trials, while the arUco-based approach never succeeded. The results indicate the effectiveness of the method and show that the task can be realized with sub-millimeter order accuracy even under noisy conditions.
## VI Conclusion
In this paper, we developed a method for placing a well-plate onto a holder for laboratory automation. Our method consists of three components: 1) object grasping using an adaptive fingers gripper to cope with inaccurate localization, 2) object pose estimation using tactile sensors to ensure well-plate contact on the holder's edge, and 3) an insertion task utilizing contact through sliding to realize sub-millimeter order accuracy. The pose estimation using tactile sensors achieved high accuracy, with a mean error of less than 1 degree. In addition, the sub-millimeter order insertion task was realized with high accuracy and small holder displacement,
\begin{table}
\begin{tabular}{c|c||c|c|c} \hline Noise & Conditions & Success & Translation [\(mm\)] & Rotation [\(deg\)] \\ \hline \hline w/o & (a) w/ edge \& w/ pose estimation (Ours) & **10/10** & \(1.0\pm 5.7\) & \(4.2\pm 1.0\) \\ & (b) arUco-based approach & 9/10 & - & - \\ & (c) w/ edge \& w/o pose estimation & 0/10 & - & - \\ & (d) w/o edge \& w/ pose estimation & 0/10 & - & - \\ & (e) w/ edge \& w/ pose estimation \& w/o arUco & **10/10** & \(11.9\pm 6.9\) & \(4.5\pm 1.6\) \\ \hline w/ & (a) w/ edge \& w/ pose estimation (Ours) & **10/10** & \(12.1\pm 7.6\) & \(5.1\pm 0.42\) \\ & (b) arUco-based approach & 0/10 & - & - \\ \hline \end{tabular}
\end{table} TABLE II: Results of insertion task
Fig. 5: Sequential images of successful insertion task by the proposed method and failure by other methods.
even under noisy conditions.
## Acknowledgment
The authors thank Dr. Shin-ichi Maeda and Prof. Tadahiro Taniguchi for the many discussions about this research.
|
2309.10501 | A geometric perspective on plus-one generated arrangements of lines | We give a geometric characterisation of plus-one generated projective line
arrangements that are next-to-free. We present new succinct proofs, via
associated line bundles, for some properties of plus-one generated projective
line arrangements. | Anca Macinic, Jean Vallès | 2023-09-19T10:24:41Z | http://arxiv.org/abs/2309.10501v1 | # A geometric perspective on plus-one generated arrangements of lines
###### Abstract.
We give a geometric characterisation of plus-one generated projective line arrangements that are next-to-free. We present new succinct proofs, via associated line bundles, for some properties of plus-one generated projective line arrangements.
Key words and phrases: projective line arrangement; plus-one generated arrangement; vector bundle; splitting type. \({}^{*}\) Partially supported by a grant of the Romanian Ministry of Education and Research, CNCS - UEFIS-CDI, project number PN-III-P4-ID-PCE-2020-2798, within PNCDI III. \({}^{**}\) Partially supported by Bridges ANR-21-CE40-0017.
which proves to be in fact a geometric lattice (see [11] for details), called _the intersection lattice of the arrangement_. Terao conjectured in [11] that, if an arrangement is free, then all the other arrangements in the realisation space of its intersection lattice are free.
Motivated by the long-standing Terao conjecture, the study of free arrangements is a very active area of research, and, in connection to that, a series of freeness-like notions emerged in the recent literature. We will refer in this note to Abe's recently introduced notion of plus-one generated arrangements from [1]. They appear in subsequent papers on generalized deletion-addition problems ([3, 5]) and on deletion-restriction problems ([2]).
Since we will only work with arrangements of lines in \(\mathbb{P}^{2}\), we will call them simply arrangements, considering the context implicit.
**Definition 1.2**.: The arrangement \(\mathcal{A}\) is called _plus-one generated (POG)_ of exponents \((a,b)\) and level \(d\) if its associated vector bundle \(\mathcal{T}_{\mathcal{A}}\) has a resolution of type
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-d-1)@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-b)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-d)@>{}>{}>\mathcal{T}_{\mathcal{A}}@>{}>{}>0.\end{CD} \tag{2}\]
We consider here the exponents of a POG arrangement to be ordered, i.e. \(a\leq b\). Notice that, if \(d=b\), then Definition 1.2 restricts to the definition of the nearly free arrangements introduced by Dimca-Sticlaru in [8]. Also, we have that \(c_{1}(\mathcal{T}_{\mathcal{A}})=1-a-b\), where \(c_{1}(\mathcal{T}_{\mathcal{A}})\) is the first Chern class of \(\mathcal{T}_{\mathcal{A}}\).
Our interest in plus-one generated arrangements is justified by their occurrence in the vicinity of free arrangements, in the following sense.
**Theorem 1.3**.: _[_1_]_ _Let \(\mathcal{A}\) be a free arrangement. Then:_
1. _For any_ \(H\in\mathcal{A},\ \mathcal{A}\setminus\{H\}\) _is either free or plus-one generated._
2. _For any_ \(H\in\mathbb{P}^{2},\ \mathcal{A}\cup\{H\}\) _is either free or plus-one generated._
**Definition 1.4**.: An arrangement is called _next to free (NT-free)_ if it can be obtained either by deletion of a line from a free arrangement or by addition of a line to a free arrangement. In the first situation we call the arrangement _next to free minus (NT-free minus)_, whereas in the second situation we call the arrangement _next to free plus (NT-free plus)_.
Theorem 1.3 states in particular that NT-free arrangements are either free or plus-one generated. One could naturally ask when is a plus-one generated arrangement also an NT-free one. In [1, Theorem 1.11] the author implicitly gives conditions for a plus-one generated arrangement to be NT-free, in terms of exponents and combinatorics. We will give in Theorem 3.5 a geometric characterisation of the situation when a plus-one generated arrangement is NT-free.
In section 2 we prove some deletion results for plus-one generated arrangements, and we revisit and present new simplified geometric proofs, using vector bundles, for a number of results from [7], see Theorem 2.1 and Proposition 2.5.
## 2. Deletion for plus-one generated line arrangements
Let \(\mathcal{A}\) be a plus-one generated arrangement of exponents \((a,b)\) and level \(d\). From Definition 1.2 if follows that \(d\geq b\).
The resolution (2) induces a non-zero section:
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a)@>{}>{}>\mathcal{T}_{\mathcal{A}}@>{}>{}>I_{Z}(1-b)@>{}>{}>0,\end{CD} \tag{3}\]
where \(Z\) is a finite scheme of length \(d+1-b\) defined by a complete intersection of a line \(l_{0}=l_{0}^{\mathcal{A}}\) and a degree \(d+1-b\) curve:
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-1-d)@>{}>{}>\mathcal{O}_{ \mathbb{P}^{2}}(-d)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-b)@>{}>{}>I_{Z}(1-b)@>{ }>{}>0.\end{CD}\]
This line \(l_{0}^{\mathcal{A}}\) will play an important role in formulating necessary and sufficient conditions for a plus-one generated arrangement to be NT-free, see Theorem 3.5.
To state the next theorem, we need to recall a classic result on vector bundles. Given a rank 2 vector bundle \(\mathcal{E}\) over \(\mathbb{P}^{2}\) and an arbitrary line \(l\in\mathbb{P}^{2}\), the restriction of \(\mathcal{E}\) to \(l\) splits as a sum of two line bundles, by Grothendieck's splitting theorem:
\[\mathcal{E}|_{l}:=\mathcal{E}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(\alpha) \oplus\mathcal{O}_{l}(\beta)\]
where the pair \((\alpha,\beta)\in\mathbb{Z}^{2}\) is called _the splitting type_ of \(\mathcal{E}\) on \(l\).
**Theorem 2.1**.: _Let \(\mathcal{A}\) be a plus-one generated arrangement of exponents \((a,b)\) and level \(d\) and \(l\in\mathbb{P}^{2}\) arbitrary._
1. _If_ \(l\cap Z=\emptyset\) _then_ \(\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(-a)\oplus \mathcal{O}_{l}(1-b)\)_._
2. _If_ \(|l\cap Z|=1\) _then_ \(\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(1-a)\oplus \mathcal{O}_{l}(-b)\)_._
3. _If_ \(l=l_{0}^{\mathcal{A}}\)_, i.e._ \(|l\cap Z|=d+1-b\)_, then_ \(\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(-d)\oplus \mathcal{O}_{l}(d+1-a-b)\)_._
Proof.: Tensoring the exact sequence (3) by \(\mathcal{O}_{l}\), where \(l\subset\mathbb{P}^{2}\) is a line, we get
\[\begin{CD}0@>{}>{}>\mathcal{O}_{l}(-a)@>{}>{}>\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}@>{}>{}>I_{Z}(1-b)\otimes\mathcal{O}_{l}@>{}>{}>0.\end{CD} \tag{4}\]
There are three different cases for a line \(l\) meeting \(Z\): \(l\) does not meet \(Z\), then \(|l\cap Z|=0\); \(l\) cuts \(l_{0}^{\mathcal{A}}\) transversally at a point of \(Z\), then \(|l\cap Z|=1\); or \(l=l_{0}^{\mathcal{A}}\), then \(l\cap Z=Z\) and \(|l\cap Z|=d+1-b\). These three cases give the three different splitting types of \(\mathcal{T}_{\mathcal{A}}\) along \(l\).
**Corollary 2.2**.: _Let \(\mathcal{A}\) be a plus-one generated arrangement of exponents \((a,b)\) and level \(d>b\). Then there exists a unique line \(l_{0}^{\mathcal{A}}\subset\mathbb{P}^{2}\) such that the splitting type of \(\mathcal{T}_{\mathcal{A}}\) on \(l_{0}^{\mathcal{A}}\) is \((a+b-d-1,d)\)._
**Remark 2.3**.:
1. Theorem 2.1 retrieves [7, Theorem 4.4] and extends similar results for nearly free arrangements from [4, 10].
2. Considering the last splitting type one can also deduce that \(d+1\leq a+b\). Indeed since \(\mathcal{T}_{\mathcal{A}}\) is the kernel of the Jacobian map (1), which restricted to \(l\) remains exact, then \(\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}\) cannot have a strictly positive component.
Let \(l\) be a line in \(\mathcal{A}\) and denote by \(\mathcal{A}\setminus l\) the arrangement obtained from \(\mathcal{A}\) by removing \(l\). We first recall the well known relation:
**Lemma 2.4**.: _Let \(h:=|l\cap\mathcal{A}|\) be the number of distinct intersection points and \(t\) be the number of triple points of \(\mathcal{A}\) on \(l\) counted with multiplicity. Then \(t=|\mathcal{A}|-h-1\)._
We have also two canonical exact sequences according to the data \(l,\mathcal{A},\mathcal{A}\setminus l\) and \(t\) the number of triple points of \(\mathcal{A}\) on \(l\):
\[\begin{CD}0@>{}>{}>\mathcal{T}_{\mathcal{A}}@>{}>{}>\mathcal{T}_{\mathcal{A} \setminus l}@>{}>{}>\mathcal{O}_{l}(-t)=\mathcal{O}_{l}(h+1-a-b)@>{}>{}>0\end{CD} \tag{5}\]
and after dualizing this exact sequence we get
\[\begin{CD}0@>{}>{}>\mathcal{T}_{\mathcal{A}\setminus l}@>{}>{}>\mathcal{T}_{ \mathcal{A}}^{\vee}@>{}>{}>\mathcal{O}_{l}(a+b-h)@>{}>{}>0.\end{CD}\]
Since \(\mathcal{T}_{\mathcal{A}\setminus l}^{\vee}=\mathcal{T}_{\mathcal{A}\setminus l }(a+b-2)\) and \(\mathcal{T}_{\mathcal{A}}^{\vee}=\mathcal{T}_{\mathcal{A}}(a+b-1)\) we obtain after shifting by \(1-a-b\):
\[\begin{CD}0@>{}>{}>\mathcal{T}_{\mathcal{A}\setminus l}(-1)@>{}>{}>\mathcal{T}_ {\mathcal{A}}@>{}>{}>\mathcal{O}_{l}(1-h)@>{}>{}>0.\end{CD} \tag{6}\]
Now these exact sequences force \(h\) to be one of the following numbers, giving a short geometric argument for [7, Proposition 4.7]:
**Proposition 2.5**.: _The allowed values for \(h\) are:_
1. \(h<a\)_;_
2. \(h=a\)_;_
3. \(h=a+1\)_;_
4. \(h=b\)_;_
5. \(h=b+1\)_;_
6. \(h=d+1\)_._
Proof.: The surjective map
\[\begin{CD}\mathcal{T}_{\mathcal{A}}@>{}>{}>\mathcal{O}_{l}(1-h)\end{CD}\]
induces a surjective map:
\[\begin{CD}\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}@>{}>{}>\mathcal{O}_ {l}(1-h).\end{CD}\]
When \(\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(-a)\oplus \mathcal{O}_{l}(1-b)\) the allowed values for \(h\) are \(h\leq a\), \(h=a+1\) or \(h=b\). Other values would not give a surjection.
When \(\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(1-a)\oplus \mathcal{O}_{l}(-b)\) the allowed values for \(h\) are \(h<a\), \(h=a\) or \(h=b+1\). Other values would not give a surjection.
When \(\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(-d)\oplus \mathcal{O}_{l}(d+1-a-b)\) the allowed values for \(h\) are \(h=d+1\), or \(h=a+b-d\) or \(h\leq min(d,a+b-d-1)\). Other values would not give a surjection. Since \(d\geq b\geq a\), we get in the last two cases \(h\leq a\), respectively \(h<a\).
For lines \(l\in\mathcal{A}\) exhibiting some of the values of \(h\) from Proposition 2.5, one can precisely describe the arrangement obtained by deletion of the line \(l\) from \(\mathcal{A}\).
**Proposition 2.6**.: _Let \(\mathcal{A}\) be a plus-one generated of type \((a,b)\) and level \(d,\ a<d\). Let \(l\in\mathcal{A}\) and \(h=|l\cap\mathcal{A}|=a+1\). Then \(\mathcal{A}\setminus l\) is plus-one generated of type \((a,b-1)\) and level \((d-1)\)._
Proof.: We have according to (5):
\[\begin{CD}0@>{}>{}>\mathcal{T}_{\mathcal{A}}(-1)@>{}>{}>\mathcal{T}_{ \mathcal{A}}@>{}>{}>\mathcal{O}_{l}(-a)@>{}>{}>0\end{CD}\]
and a commutative diagram:
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-b)\oplus\mathcal{O}_{ \mathbb{P}^{2}}(-d)@>{}>{}>\mathcal{T}_{\mathcal{A}}(-1)\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-d-1)@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-b) \oplus\mathcal{O}_{\mathbb{P}^{2}}(-d)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-a) @>{}>{}>\mathcal{T}_{\mathcal{A}}@>{}>{}>0\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a-1)@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a) @>{}>{}>\mathcal{O}_{l}(-a)@>{}>{}>0\end{CD}\]
which implies using the snake lemma:
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-b)\oplus\mathcal{O}_{ \mathbb{P}^{2}}(-d)@>{}>{}>\mathcal{T}_{\mathcal{A}}(-1)@>{}>{}>\mathcal{O}_{ \Gamma}(-a-1)@>{}>{}>0\end{CD}\]
where \(\deg(\Gamma)=d-a\). We then deduce
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-d-1)@>{}>{}>\mathcal{O}_{ \mathbb{P}^{2}}(-d-1)\\ @V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-b)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-d) @>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-b)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-d) \oplus\mathcal{O}_{\mathbb{P}^{2}}(-a-1)@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(- a-1)@>{}>{}>0\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-b)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-d) @>{}>{}>\mathcal{T}_{\mathcal{A}}(-1)@>{}>{}>\mathcal{O}_{\Gamma}(-a-1)@>{}>{}>0.\end{CD}\]
**Proposition 2.7**.: _Let \(\mathcal{A}\) be a plus-one generated of type \((a,b)\) and level \(d>b\). Let \(l\in\mathcal{A}\) and \(h=|l\cap\mathcal{A}|=b+1\). Then \(\mathcal{A}\setminus l\) is plus-one generated of type \((a-1,b)\) and level \((d-1)\)._
Proof.: We have according to (5):
\[\begin{CD}0@>{}>{}>\mathcal{T}_{\mathcal{A}}(-1)@>{}>{}>\mathcal{T}_{ \mathcal{A}}@>{}>{}>\mathcal{O}_{l}(-b)@>{}>{}>0\end{CD}\]
and a commutative diagram:
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a)\oplus\mathcal{O}_{\mathbb{P}^{2} }(-d)@>{}>{}>\mathcal{T}_{\mathcal{A}\setminus I}(-1)\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-d-1)@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-b) \oplus\mathcal{O}_{\mathbb{P}^{2}}(-d)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-a) @>{}>{}>\mathcal{T}_{\mathcal{A}}@>{}>{}>0\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-b-1)@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-b) @>{}>{}>\mathcal{O}_{I}(-b)@>{}>{}>0\end{CD}\]
which implies using the snake lemma:
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a)\oplus\mathcal{O}_{ \mathbb{P}^{2}}(-d)@>{}>{}>\mathcal{T}_{\mathcal{A}\setminus I}(-1)@>{}>{}> \mathcal{O}_{\Delta}(-b-1)@>{}>{}>0\end{CD}\]
where \(\deg(\Delta)=d-b\). We then deduce
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-d-1)@>{}>{}>\mathcal{O}_{ \mathbb{P}^{2}}(-d-1)\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-d )@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-1-b)\oplus\mathcal{O}_{\mathbb{P}^{2}}( -d)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-a)@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}( -b-1)@>{}>{}>0\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-d )@>{}>{}>\mathcal{T}_{\mathcal{A}\setminus I}(-1)@>{}>{}>\mathcal{O}_{\Gamma}( -b-1)@>{}>{}>0.\end{CD}\]
**Proposition 2.8**.: _Let \(\mathcal{A}\) be a plus-one generated of type \((a,b)\) and level \(d\). Let \(l\in\mathcal{A}\) and \(h=|l\cap\mathcal{A}|=d+1\). Then \(\mathcal{A}\setminus l\) is free with exponents \((a-1,b-1)\)._
Proof.: We have according to (5):
\[\begin{CD}0@>{}>{}>\mathcal{T}_{\mathcal{A}\setminus I}(-1)@>{}>{}>\mathcal{T}_ {\mathcal{A}}@>{}>{}>\mathcal{O}_{I}(-d)@>{}>{}>0\end{CD}\]
and a commutative diagram:
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a)\oplus\mathcal{O}_{ \mathbb{P}^{2}}(-b)@>{}>{}>\mathcal{T}_{\mathcal{A}\setminus I}(-1)\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-d-1)@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}( -b)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-d)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-a )@>{}>{}>\mathcal{T}_{\mathcal{A}}@>{}>{}>0\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-d-1)@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}( -d)@>{}>{}>\mathcal{O}_{I}(-d)@>{}>{}>0\end{CD}\]
which implies using the snake lemma:
\[\begin{CD}0@>{}>{}>\mathcal{O}_{\mathbb{P}^{2}}(-a)\oplus\mathcal{O}_{ \mathbb{P}^{2}}(-b)@>{}>{}>\mathcal{T}_{\mathcal{A}\setminus I}(-1).\end{CD}\]
**Remark 2.9**.: Proposition 2.8 is also a consequence of [1, Theorem 1.11].
If \(h\leq a\) then \(\mathcal{A}\setminus l\) is not necessarily free or plus-one generated.
**Example 2.10**.: Let \(\mathcal{A}\) be the arrangement of equation \(xyz(x+y)(x-y)(x+4y+z)(y+z)=0\). Then \(\mathcal{A}\) is a plus-one generated arrangement of exponents \((3,4)\) and level \(5\).
The line \(l\) of equation \(x+4y+z=0\) intersects generically the rest of the lines in the arrangement, so \(h=6=d+1\) in this case. Then \(\mathcal{A}\) with this line deleted gives a free arrangement of exponents \((2,3)\), see for instance Proposition 2.8.
If \(l\) is one of the lines of equations \(x-y=0,x+y=0,x=0\), we have \(h=4=a+1\). If we delete from \(\mathcal{A}\) any one of these lines we get, by Proposition 2.6, a plus-one generated arrangement of exponents \((3,3)\) and level \(4\).
For any of the lines of equations \(z=0,y+z=0\) we have \(h=5=b+1\). If we delete from \(\mathcal{A}\) any of these lines we get, by Proposition 2.7, a plus-one generated arrangement of exponents \((2,4)\) and level \(4\).
For the line \(l:y=0\) we have \(h=3=a\). Then one easily checks (using for instance Macaulay2) that the arrangement obtained by deleting the line \(l\) from \(\mathcal{A},\ \mathcal{A}\setminus l\), is neither free nor plus-one generated, since its associated derivation module has \(5\) generators.
## 3. A geometric characterisation of NT-free arrangements of lines
Given an arrangement \(\mathcal{A}\), for each \(H\in\mathcal{A}\) one has an associated multiarrangement, the Ziegler restriction of \(\mathcal{A}\) onto \(H\), as introduced in [15]. In our context, where \(\mathcal{A}\) is a complex projective line arrangement, the Ziegler restrictions are multiarrangements in \(\mathbb{C}^{2}\), and their associated graded module of derivations is always free, of rank \(2\) (see for instance [4, 14] for details). The exponents of a Ziegler restriction are by definition the pair of degrees of the generating set of derivations for this graded module.
Understanding exponents of Ziegler restrictions turned out to be an essential tool in the study of freeness of arrangements, as proved by Yoshinaga in [14]. Notably, the splitting type of the vector bundle associated to an arrangement \(\mathcal{A}\) onto a line \(l\in\mathcal{A}\) coincides to the exponents of the Ziegler restriction of \(\mathcal{A}\) onto \(l\), again by [14].
We have the following generalization of an addition-type formula from exponents of Ziegler restrictions to splitting types.
**Proposition 3.1**.: _Let \(\mathcal{A},\ \mathcal{B}\) be two line arrangements such that \(\mathcal{B}=\mathcal{A}\cup\{H\}\), for some line \(H\subset\mathbb{P}^{2}\). Take \(l\subset\mathbb{P}^{2}\) a line such that \(l\notin\mathcal{B}\) and denote by \((a^{\mathcal{A}},b^{\mathcal{A}})\) the splitting type along the line \(l\) for \(\mathcal{T}_{\mathcal{A}}\) and by \((a^{\mathcal{B}},b^{\mathcal{B}})\) the splitting type along the line \(l\) for \(\mathcal{T}_{\mathcal{B}}\). Then_
\[(a^{\mathcal{B}},b^{\mathcal{B}})\in\{(a^{\mathcal{A}}+1,b^{\mathcal{A}}),\ (a^{ \mathcal{A}},b^{\mathcal{A}}+1)\} \tag{7}\]
Proof.: We always have the exact sequence:
\[\begin{CD}0@>{}>{}>\mathcal{T}_{\mathcal{A}\cup\{H\}}@>{}>{}>\mathcal{T}_{ \mathcal{A}}@>{}>{}>\mathcal{O}_{H}(-t)@>{}>{}>0\end{CD}\]
where \(t\) is the number of triple points on \(H\) of \(\mathcal{A}\cup\{H\}\).
Restricting to \(\mathcal{O}_{l}\) we obtain :
\[\begin{CD}0@>{}>{}>\mathcal{T}_{\mathcal{A}\cup[H]}\otimes\mathcal{O}_{l}@>{}>{}> \mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}@>{}>{}>\mathcal{O}_{p}@>{}>{}>0 \end{CD}\]
where \(p=l\cap H\). So, if
\[\mathcal{T}_{\mathcal{A}}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(-a^{ \mathcal{A}})\oplus\mathcal{O}_{l}(-b^{\mathcal{A}}),\]
then we necessarily have
\[\mathcal{T}_{\mathcal{B}}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(-a^{ \mathcal{A}}-1)\oplus\mathcal{O}_{l}(-b^{\mathcal{A}})\]
or
\[\mathcal{T}_{\mathcal{B}}\otimes\mathcal{O}_{l}=\mathcal{O}_{l}(-a^{ \mathcal{A}})\oplus\mathcal{O}_{l}(-b^{\mathcal{A}}-1).\]
**Remark 3.2**.:
1. If we change the hypothesis of Proposition 3.1 by assuming \(l\) to be a line in \(\mathcal{A}\), then the splitting types are exponents of Ziegler restrictions, and the conclusion of the proposition still holds, see [12], [6].
2. For \(l=H\), the claim of Proposition 3.1 no longer holds, as we can see in the following counterexample.
**Example 3.3**.: Let \(\mathcal{B}\) be an arrangement with precisely two multiple points \(P,Q\) of multiplicities \(p+1\), respectively \(q+1\), with \(p,q>2\), and only multiple points of multiplicity \(2\) in rest, such that the line \(l:=PQ\in\mathcal{B}\). Then \(\mathcal{B}\) is free of exponents \((p,q)\) and \(|\mathcal{B}|=p+q+1\). \(\mathcal{A}:=\mathcal{B}\setminus\{l\}\) is plus-one generated with exponents \((p,q)\) and level \(p+q-2\).
The splitting type along the line \(l\) for the vector bundle associated to the arrangement \(\mathcal{B}\) is \((p,q)\) and the splitting type along the line \(l\) for the vector bundle associated to the arrangement \(\mathcal{A}\) is \((1,p+q-2)\), hence an equality of type (7) does not take place.
In particular, the previous example shows that any pair of positive integers can be realized as exponents of a plus-one generated arrangement of lines, compare to [9] for similar results on nearly free arrangements.
Recall that, for \(\mathcal{A}\) plus-one generated of exponents \((a,b)\) and level \(d>b\), there exists a unique line \(l_{0}^{\mathcal{A}}\subset\mathbb{P}^{2}\) as in Corollary 2.2.
**Lemma 3.4**.:
1. _Let_ \(\mathcal{A}\) _be plus-one generated of exponents_ \((a,b)\) _and level_ \(d>b\)_, such that_ \(l_{0}^{\mathcal{A}}\notin\mathcal{A}\)_. Then_ \(\mathcal{A}\) _cannot be NT-free plus._
2. _Let_ \(\mathcal{A}\) _be plus-one generated of exponents_ \((a,b)\) _and level_ \(d>b\)_, such that_ \(l_{0}^{\mathcal{A}}\in\mathcal{A}\)_. Then_ \(\mathcal{A}\) _cannot be NT-free minus._
Proof.: _Part (1)_ Assume the contrary, that \(\mathcal{A}\) is NT-free plus. That is, there is a line \(H\in\mathcal{A}\) such that \(\mathcal{A}\setminus\{H\}\) is free. Then, by [1, Thm. 1.11], \(|\mathcal{A}\cap H|=d+1\). By [12] the exponents of the Ziegler restriction of \(\mathcal{A}\) onto \(H\) are \((a+b-d-1,d)\), which coincides with the splitting type onto \(H\) for the bundle of logarithmic vector fields associated to the arrangement \(\mathcal{A}\). But, since \(l_{0}^{\mathcal{A}}\) is unique with the property that the splitting type
onto \(l_{0}^{\mathcal{A}}\) for the bundle of logarithmic vector fields associated to the arrangement equals \((a+b-d-1,d)\), from Corollary 2.2, it follows that \(l_{0}^{\mathcal{A}}=H\), so \(l_{0}^{\mathcal{A}}\in\mathcal{A}\), contradiction.
_Part (2)_ Just as before, assume the contrary, that \(\mathcal{A}\) is NT-free minus. That is, assume there exists a line \(H\subset\mathbb{P}^{2}\) such that \(\mathcal{B}:=\mathcal{A}\cup\{H\}\) is free. Then \(exp(\mathcal{B})=(a,b)\). Since the exponents of the Ziegler restriction of \(\mathcal{A}\) onto \(l_{0}^{\mathcal{A}}\) are \((a+b-d-1,d)\), it follows that the exponents of the Ziegler restriction of \(\mathcal{B}\) onto \(l_{0}^{\mathcal{A}}\) should be one of the two pairs \(\{(a+b-d,d),(a+b-d-1,d+1)\}\) (see Remark 3.2(1)). At the same time, since \(\mathcal{B}\) is free of exponents \((a,b)\), the exponents of the Ziegler restriction of \(\mathcal{B}\) onto \(l_{0}^{\mathcal{A}}\) should be equal to \((a,b)\), but this implies \(b\in\{d,d+1\}\), contradiction.
**Theorem 3.5**.: _Let \(\mathcal{A}\) be plus-one generated of exponents \((a,b)\) and level \(d>b\)._
1. _Assume_ \(l_{0}^{\mathcal{A}}\not\in\mathcal{A}\)_. Then the following are equivalent:_ 1. \(\mathcal{A}\) _is NT-free._ 2. \(\mathcal{A}\) _is NT-free minus._ 3. \(d=|\mathcal{A}|-|\mathcal{A}\cap l_{0}^{\mathcal{A}}|\)_._ 4. \(\mathcal{A}\cup\{l_{0}^{\mathcal{A}}\}\) _is free._
2. _Assume_ \(l_{0}^{\mathcal{A}}\in\mathcal{A}\)_. Then the following are equivalent:_ 1. \(\mathcal{A}\) _is NT-free._ 2. \(\mathcal{A}\) _is NT-free plus._ 3. \(d+1=|\mathcal{A}\cap l_{0}^{\mathcal{A}}|\)_._ 4. \(\mathcal{A}\setminus\{l_{0}^{\mathcal{A}}\}\) _is free._
Proof.: (1) _Case \(l_{0}^{\mathcal{A}}\not\in\mathcal{A}\)_
Assume \(\mathcal{A}\) is NT-free. Since, by Lemma 3.4(1), \(\mathcal{A}\) cannot be NT-free plus, it follows that \(\mathcal{A}\) is NT-free if and only if \(\mathcal{A}\) is NT-free minus (i.e. there exists a line \(H\subset\mathbb{P}^{2}\) such that \(\mathcal{B}:=\mathcal{A}\cup\{H\}\) is free). From [1, Thm. 1.11], this holds if and only if \(d=|\mathcal{A}|-|\mathcal{A}\cap H|\). Moreover, \(exp(\mathcal{B})=(a,b)\).
To conclude the proof, we only need to show that \(H=l_{0}^{\mathcal{A}}\).
Assume the contrary, \(H\neq l_{0}^{\mathcal{A}}\). Then \(l_{0}^{\mathcal{A}}\not\in\mathcal{B}\). Denote by \((a^{\mathcal{A}},b^{\mathcal{A}})\) the splitting type onto \(l_{0}^{\mathcal{A}}\) for the vector bundle associated to \(\mathcal{A}\) and by \((a^{\mathcal{B}},b^{\mathcal{B}})\) the splitting type onto \(l_{0}^{\mathcal{A}}\) for the vector bundle associated to \(\mathcal{B}\). By Proposition 3.1, we have:
\[(a^{\mathcal{B}},b^{\mathcal{B}})\in\{(a^{\mathcal{A}}+1,b^{\mathcal{A}}),\ (a^{ \mathcal{A}},b^{\mathcal{A}}+1)\}.\]
But \((a^{\mathcal{B}},b^{\mathcal{B}})=(a,b)\) and \((a^{\mathcal{A}},b^{\mathcal{A}})=(a+b-d-1,d)\), so \(b\in\{d,d+1\}\), contradiction. Then necessarily \(H=l_{0}^{\mathcal{A}}\).
(2) _Case \(l_{0}^{\mathcal{A}}\in\mathcal{A}\)_
Assume \(\mathcal{A}\) is NT-free. Since, by Lemma 3.4(2), \(\mathcal{A}\) cannot be NT-free minus, it follows that \(\mathcal{A}\) is NT-free if and only if \(\mathcal{A}\) is NT-free plus, i.e. there exists a line \(H\) such that \(\mathcal{B}:=\mathcal{A}\setminus\{H\}\) is free. According to [1, Thm. 1.11], this latter claim holds if and only if \(|\mathcal{A}^{H}|=d+1\), where \(\mathcal{A}^{H}\) is the restriction of \(\mathcal{A}\) to \(H\). In this case, \(\mathcal{B}\) is free of exponents \((a-1,b-1)\).
Assume \(H\neq l_{0}^{\mathcal{A}}\). To conclude the proof, it is enough to show that this assumption leads to a contradiction. Notice that in this assumption \(l_{0}^{\mathcal{A}}\in\mathcal{B}\). Since the exponents of the Ziegler restriction of \(\mathcal{A}\) onto \(l_{0}^{\mathcal{A}}\) are \((a+b-d-1,d)\), it follows that the exponents of the Ziegler restriction of \(\mathcal{B}\) onto \(l_{0}^{\mathcal{A}}\) should be either \((a+b-d-2,d)\) or \((a+b-d-1,d-1)\). But, since \(\mathcal{B}\) is free of exponents \((a-1,b-1)\), we get \(b\in\{d,d+1\}\), contradiction.
**Remark 3.6**.: If \(\mathcal{A}\) is a plus-one generated arrangement of exponents \((a,b)\) and level \(d\), the condition \(d>b\) ensures that \(\mathcal{A}\) cannot be simultaneously NT-free minus and NT-free plus (see Lemma 3.4). But there exist plus-one generated arrangements with \(b=d\) (i.e. nearly free) that are at the same time NT-free minus and NT-free plus. Take for instance the arrangement \(\mathcal{A}\) of equation \(xyz(x+y)(y+z)(x+2y+z)=0\). \(\mathcal{A}\) is plus-one generated of exponents \((3,3)\) and level \(3\). \(\mathcal{A}\setminus\{x=0\}\) is free of exponents \((2,2)\), so \(\mathcal{A}\) is NT-free plus, and \(\mathcal{A}\cup\{y+\frac{1}{2}z=0\}\) is free of exponents \((3,3)\), so \(\mathcal{A}\) is also NT-free minus.
**Question**: Are there any examples of plus-one generated arrangements that satisfy one of the two conditions below?
1. A plus-one generated arrangement \(\mathcal{A}\) of exponents \((a,b)\) and level \(d>b\) with \(l_{0}^{\mathcal{A}}\in\mathcal{A}\) such that \(|\mathcal{A}\cap l_{0}^{\mathcal{A}}|\neq d+1\). Notice that this latter condition is equivalent to \(|\mathcal{A}\cap l_{0}^{\mathcal{A}}|<d+1\), by Proposition 2.5.
2. A plus-one generated arrangement \(\mathcal{A}\) of exponents \((a,b)\) and level \(d>b\) with \(l_{0}^{\mathcal{A}}\notin\mathcal{A}\) such that \(|\mathcal{A}|-|\mathcal{A}\cap l_{0}^{\mathcal{A}}|\neq d\).
|
2306.00197 | SSL-CPCD: Self-supervised learning with composite pretext-class
discrimination for improved generalisability in endoscopic image analysis | Data-driven methods have shown tremendous progress in medical image analysis.
In this context, deep learning-based supervised methods are widely popular.
However, they require a large amount of training data and face issues in
generalisability to unseen datasets that hinder clinical translation.
Endoscopic imaging data incorporates large inter- and intra-patient variability
that makes these models more challenging to learn representative features for
downstream tasks. Thus, despite the publicly available datasets and datasets
that can be generated within hospitals, most supervised models still
underperform. While self-supervised learning has addressed this problem to some
extent in natural scene data, there is a considerable performance gap in the
medical image domain. In this paper, we propose to explore patch-level
instance-group discrimination and penalisation of inter-class variation using
additive angular margin within the cosine similarity metrics. Our novel
approach enables models to learn to cluster similar representative patches,
thereby improving their ability to provide better separation between different
classes. Our results demonstrate significant improvement on all metrics over
the state-of-the-art (SOTA) methods on the test set from the same and diverse
datasets. We evaluated our approach for classification, detection, and
segmentation. SSL-CPCD achieves 79.77% on Top 1 accuracy for ulcerative colitis
classification, 88.62% on mAP for polyp detection, and 82.32% on dice
similarity coefficient for segmentation tasks are nearly over 4%, 2%, and 3%,
respectively, compared to the baseline architectures. We also demonstrate that
our method generalises better than all SOTA methods to unseen datasets,
reporting nearly 7% improvement in our generalisability assessment. | Ziang Xu, Jens Rittscher, Sharib Ali | 2023-05-31T21:28:08Z | http://arxiv.org/abs/2306.00197v1 | SSL-CPCD: Self-supervised learning with composite pretext-class discrimination for improved generalisability in endoscopic image analysis
###### Abstract
Data-driven methods have shown tremendous progress in medical image analysis. In this context, deep learning-based supervised methods are widely popular. However, they require a large amount of training data and face issues in generalisability to unseen datasets that hinder clinical translation. Endoscopic imaging data incorporates large inter- and intra-patient variability that makes these models more challenging to learn representative features for downstream tasks. Thus, despite the publicly available datasets and datasets that can be generated within hospitals, most supervised models still underperform. While self-supervised learning has addressed this problem to some extent in natural scene data, there is a considerable performance gap in the medical image domain. In this paper, we propose to explore patch-level instance-group discrimination and penalisation of inter-class variation using additive angular margin within the cosine similarity metrics. Our novel approach enables models to learn to cluster similar representative patches, thereby improving their ability to provide better separation between different classes. Our results demonstrate significant improvement on all metrics over the state-of-the-art (SOTA) methods on the test set from the same and diverse datasets. We evaluated our approach for classification, detection, and segmentation. SSL-CPCD achieves 79.77% on Top 1 accuracy for ulcerative colitis classification, 88.62% on mAP for polyp detection, and 82.32% on dice similarity coefficient for segmentation tasks are nearly over 4%, 2%, and 3%, respectively, compared to the baseline architectures. We also demonstrate that our method generalizes better than all SOTA methods to unseen datasets, reporting nearly 7% improvement in our generalisability assessment.
Deep learning, contrastive loss, endoscopy data, generalisation, self-supervised learning
## I Introduction
Image classification, detection, and segmentation tasks have been extensively studied by the biomedical image analysis community [1]. Recent advances in data-driven approaches are mostly based on convolutional neural networks (CNNs) and have gained interest due to their ability to surpass traditional machine learning approaches. CNNs have been widely used for multiple tasks and different imaging modalities, including computed tomography (CT) [2], X-ray [3], magnetic resonance imaging (MRI) [4] and endoscopy [5].
Supervised learning-based approaches in machine learning (ML) are data-voracious and not generalisable on out-of-distribution datasets. Obtaining labelled data is a significant hurdle for medical image analysis as it requires clinical expertise. Additionally, it accounts for the risk of human bias proportional to the sample size [6]. Data curation challenges are thus harder to tackle, leading only to sub-optimal results in supervised learning frameworks [7]. Several studies have also found that most supervised methods lead to a huge performance drop when applied to different centre datasets [7]. Changes in patient population, the appearance of lesions, imaging modalities used, and differences in hardware all affect data variability, pose a bottleneck during training, and adversely affect model performance. We ask if we can leverage already available high-quality public datasets with and without labels to fine-tune these models on a new, small, and out-of-distribution dataset without compromising algorithmic performance but instead boosting them.
Self-supervised learning (SSL) learns more semantically meaningful features by first training an ML model on unlabelled data and then fine-tuning it on a smaller training sample with the available labelled samples for each specific downstream task. This eliminates the requirement for a large amount of labelled data during training, improves generalisation for the downstream task, and eases expansion to other out-of-distribution datasets [8]. In the medical imaging field, SSL has been used extensively for different tasks, including disease classification [3, 9], lesion region detection [4, 10] and segmentation [11, 12].
Endoscopy remains the clinical standard for diagnosing and surveying disease in hollow organs. In contrast to data obtained from other imaging modalities, the analysis of endoscopy video is extremely challenging [5] due to various factors such as internal organ deformation, light interaction with tissue at different depths, imaging artefacts such as bubbles, fluid and other floating objects, and a considerable operator dependency. Subtle and fine-grained changes often indicate the onset of disease. Building robust computer-aided techniques to detect such changes poses a significant challenge. In this work, we focus on two different lesions found in the colon and rectum, and we aim to devise a robust SSL-based approach to build automated techniques with CNN-based networks. To this end, we propose to develop SSL approaches for ulcerative colitis (UC), a chronic intestinal inflammatory disease, and polyps, which are precursor lesions for colorectal cancer. Gastroenterologists use the Mayo Endoscopic Score (MES, see Fig. 1 (on the left)), a widely accepted predictive indicator for malignant transformation in UC, as a classification task based on visual appearances. Similarly, polypectomy (i.e., the removal of polyps) is also based on visual cues. Automated classification, detection and segmentation methods can therefore help reduce missed lesions and operator variability in these procedures.
Supervised learning methods struggle to learn a feature representation that discriminates between the different categories even if trained on large, labelled datasets. The data presented in Fig. 1 illustrates this problem in the context of ulcerative colitis scoring. After supervised learning, we can still observe significant confusion between the different classes. In this work, we propose a novel self-supervised learning strategy for endoscopic image analysis, referred to as "SSL-CPCD". Our approach is based on novel ideas for combining loss functions at both the single-instance level and the group-instance level, using image frames and patch-level representations. The proposed losses are used in a pretext-invariant representation learning context, where patch-level and image-level representations are learnt at single and clustered group-level instances, amplifying the power of learning discriminative features; both are framed as Noise Contrastive Estimation (NCE) functions. Jointly, we refer to this loss as a composite pretext-class discrimination loss (CPCD). Our novel approach improves learning on the pretext task by using both image-level and patch-level discrimination. To this end, we also use memory banks to store positive and negative samples with moving weights that help to learn features that are semantically meaningful for downstream tasks. In addition, we penalise inter-class variation between positive and negative samples using an additive angular margin in our instance-level contrastive loss. By transforming images into jigsaw puzzles and computing contrastive losses between different feature embeddings, we learn a representation capable of differentiating between the subtle characteristics of the different classes.
In addition, we also explore the introduction of an attention mechanism in our network for further improvement. Key contributions of our presented work can be summarised below:
* Novel SSL-CPCD method can learn semantically meaningful features from a large amount of unlabeled data, leading to improved performance on subsequent tasks, including classification, detection, and segmentation of two different lesion types.
* Single and group-level instances are used to minimise noise contrastive estimation loss to increase the inter-class separation and minimise the intra-class distance.
* Inclusion of an additive angular margin within the cosine similarity metric in the contrastive loss to further penalise decision boundary with respect to the negative samples further increases inter-class separation.
* Evaluation of our method on four different datasets including Kvasir-SEG [13], CVC-ClinicDB [14], LIMUC [15], and our in-house dataset.
* We show that our SSL-CPCD-based method outperforms several SOTA SSL strategies by a large margin.
## II Related work
### _Deep learning in gastrointestinal endoscopy_
#### Ii-A1 Classification task
Ulcerative colitis (UC) scoring in the clinic is based on the Mayo Endoscopic Score (MES). CNN-based methods have been used in efforts to automate this scoring. For example, Stidham et al. [16] used an Inception V3 model to train and evaluate MES scores in still endoscopic frames; they used 16k UC images and obtained an accuracy of 67.6%, 64.3% and 67.9% for the three MES classes. Recently, Mokter et al. [17] proposed a method to classify UC severity in colonoscopy videos by detecting vascular (vein) patterns using three CNN networks and using a training dataset comprising over 67k frames. Similarly, Ozawa et al. [18] used a CNN for binary classification only, to alleviate the problem of poor accuracies across classes, on still frames comprising 26k training images, with the first class being normal (MES 0 and MES 1) and the second class combining moderate (MES 2) and severe (MES 3). Gutierrez et al. [19] also used a CNN model to predict only a binary version of the MES scoring.
#### Ii-A2 Detection task
The polyp detection task has been more widely researched than UC classification. Lee et al. [20] used YOLOv2 and validated the algorithm on public datasets and colonoscopy videos, demonstrating real-time capability as a key milestone. Zhang et al. [21] proposed a Single Shot MultiBox Detector (SSD) for gastric polyps. They linked the feature maps from the lower layers with the feature maps deconvolved from the upper layers and improved the mean average precision (mAP) from 88.5% to 90.4%. Qadir et al. [22] and Shin et al. [23] used Mask R-CNN and Faster R-CNN with different backbones to detect polyps, respectively. Although these methods achieve high precision, they are limited in processing speed.
#### Ii-A3 Segmentation task
The polyp segmentation task is the most widely researched topic in endoscopic image analysis. Zhou et al. [24] proposed a technique called U-Net++ based on U-Net, which fully utilises multi-scale features to obtain superior results. Fan et al. [25] proposed a parallel reverse attention based network (PraNet). PraNet employs a partial decoder to aggregate features in high-level layers and mines boundary cues using a reverse attention module. A Shallow Attention Network (SANet) was proposed by [26]. SANet used a colour swap operation to decouple image content and colour, forcing the model to pay more attention to the shape and structure of the object. Recently, Srivastava et al. [27] proposed a Multi-Scale Residual Fusion Network (MSRF-Net). MSRF-Net can exchange multi-scale features of different receptive fields using dual-scale dense fusion (DSDF) blocks.
Fig. 1: Endoscopic image analysis for ulcerative colitis scoring. (on left) Representative images for Mayo endoscopic scoring (MES) from 0 to 3, and (on right) t-SNE plots for all test samples before learning and after supervised learning using ResNet50.
### _Attention mechanism_
Attention can make the model more focused, extract the most relevant features, and ignore irrelevant information. It also overcomes the size limitation of the receptive field and can focus on the contribution of global features to the current region [28]. Attention-based models have achieved state-of-the-art performance in medical images such as skin cancer, endoscopy, CT, and X-ray (Sinha and Dolz [29], Zhao et al. [30], Kaul et al. [2], Gu et al. [31]). Zhao et al. [30] proposed an adaptive cosine similarity network with a self-attention module to automatically classify gastrointestinal endoscope images. The self-attention block replaces the conv+BN/Relu operation in traditional CNN and uses cosine-based self-adapting loss function to adjust the scale parameters automatically achieving 95.7% on average accuracy in the wireless capsule endoscopy dataset.
### _Self-supervised learning_
Self-supervised learning (SSL) uses pretext tasks to mine self-supervised information from large-scale unsupervised data, thereby learning valuable image representations for downstream tasks. By doing so, the limitation of network performance on predefined annotations is greatly reduced. In SSL, the pretext task typically applies a transformation to the input image and predicts the properties of the transformation from the transformed image. Chen _et al._[32] proposed the SimCLR model, which performs data augmentation on the input image to simulate inputs from different perspectives of the image. A contrastive loss is then used to maximise the similarity of the same object under different data augmentations and minimise the similarity between different objects. Later, the MoCo model proposed by He et al. [33] also used a contrastive loss to compare the similarity between a query and the keys of a queue to learn feature representations. The authors used a dynamic memory, rather than a static memory bank, to store feature vectors used in training. In contrast to these methods that encourage the construction of image representations covariant to the transformations, pretext-invariant representation learning (PIRL) [34] pushes the representations to be invariant under image transformations. PIRL encourages an image representation to be highly similar to the representations of its transformed versions and to have low resemblance to the representations of different images. The notion of a Jigsaw puzzle [35] was used as a pretext task for PIRL representation learning.
In recent years, self-supervised learning has also been applied in the field of medical image analysis, but much less so to endoscopic image analysis. Azizi et al. [3] used multi-instance contrastive learning based on self-supervision on medical images, followed by fully supervised fine-tuning with task-specific losses for the final classification. They improved top-1 accuracy by 6.7% and 1.1% on dermatology and chest X-ray classification, respectively. Zeng et al. [11] proposed SeSe-Net for medical image segmentation. SeSe-Net comprises two neural networks, a "worker" and a "supervisor". In the first stage, the worker learns to segment the standard dataset and a training set is generated; in the second stage, the supervisor oversees the learning process so that the worker further improves performance on the unlabelled dataset. Chen et al. [4] proposed a novel self-supervised learning strategy based on context restoration, which changes the spatial information of an image by selecting and exchanging two patches within the same image to learn sufficiently pronounced semantic representations. It was validated on 2D fetal ultrasound images, abdominal computed tomography images, and brain magnetic resonance images. Recently, Ciga et al. [12] used a residual network pre-trained with self-supervised learning to learn generalisable features and then used the pre-trained network in downstream tasks to perform multiple tasks on multiple multi-organ digital histopathology datasets.
## III Methodology
We propose a novel self-supervised approach that exploits invariant representation learning beneficial for downstream tasks by using both image-level instance-group discrimination and patch-level instance-group discrimination losses. To this end, we propose two novel components. Firstly, we exploit positive and negative samples for noise contrastive loss estimation (\(\mathcal{L}_{\text{NCE}}\)). Unlike the classically used \(\mathcal{L}_{\text{NCE}}\)[36], we integrate an additive angular margin between the negative embeddings and the learned normalised weights, which further separates representations of dissimilar samples. Secondly, we apply group-wise cross-view associations between patches and images. Our novel group-wise loss enables us to learn fine-grained features at both the patch level \(\mathcal{L}_{\text{PC}}^{k}\) and the image level \(\mathcal{L}_{\text{C}}^{k}\), enhancing local representations. For grouping the embeddings, we utilise a \(k\)-means clustering technique with the number of clusters matching the number of classes in the downstream task to provide representative clusters. Our approach uses memory banks to store all representations useful for the various loss function estimations. Below we describe each element of our approach, presented in the block diagram in Fig. 2.
### _Feature extraction (FE) block_
Let the endoscopy dataset \(\mathcal{D}\) consist of \(N\) image samples, denoted as \(\mathcal{D}=\{\mathbf{I}_{1},\mathbf{I}_{2},...,\mathbf{I}_{N}\}\). We use a transformation \(t\in\mathcal{T}\) to create and reshuffle \(m\) image patches for each image in the dataset, giving \(\mathcal{P}=\{\mathbf{I}_{1t}^{1},...,\mathbf{I}_{1t}^{m},...,\mathbf{I}_{Nt}^{1},...,\mathbf{I}_{Nt}^{m}\}\). We train a convolutional neural network (_in our case_, ResNet50) with free parameters \(\theta\) that embodies the representation \(\phi_{\theta}(\mathbf{I})\) for a given sample \(\mathbf{I}\) and \(\phi_{\theta}(\mathbf{I}_{t})\) for its patches in \(\mathcal{P}\).
**Image-level embedding**: Candidate images are fed in batches which are transformed using simple geometric (horizontal and vertical flips) and photometric (colour jitter with 0.4 for hue, saturation, contrast and brightness) transformations and fed into an encoder giving a feature representation \(\phi_{\theta}(\mathbf{I})\). We then apply a projection head \(f(.)\) to re-scale the representations to a \(128\)-dimensional feature vector.
**Patch-level embedding**: For image patches, representations of each patch constituting the image \(\mathbf{I}\) are concatenated to form \(\phi_{\theta}(\mathbf{I}_{t})\). A projection head \(g(.)\) is applied to re-scale the representations to a \(128\)-dimensional feature vector. However, in this case, we perform random cropping and shuffle of the
cropped areas into patch size of \(64\times 64\) along with the colour transforms used for the original images.
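For illustration, the sketch below shows one possible PyTorch implementation of the two embedding branches described above (ResNet50 encoder, 128-dimensional projection heads \(f(.)\) and \(g(.)\), and concatenation of the patch features before projection). The class name, the default number of patches, and the use of `torchvision` are our own assumptions and are not taken from the released code.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SSLEncoder(nn.Module):
    """Sketch of the image-level and patch-level embedding branches."""
    def __init__(self, feat_dim=128, n_patches=9):      # n_patches is an assumption
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()                      # expose the 2048-d pooled features
        self.encoder = backbone
        self.f = nn.Linear(2048, feat_dim)               # projection head f(.) for whole images
        self.g = nn.Linear(2048 * n_patches, feat_dim)   # g(.) for concatenated patch features

    def forward_image(self, images):                     # images: (B, 3, 224, 224)
        return self.f(self.encoder(images))              # (B, 128)

    def forward_patches(self, patches):                  # patches: (B, m, 3, 64, 64), shuffled jigsaw tiles
        b, m = patches.shape[:2]
        feats = self.encoder(patches.flatten(0, 1))      # (B*m, 2048)
        feats = feats.view(b, m * feats.shape[-1])       # concatenate the m patch features per image
        return self.g(feats)                             # (B, 128)
```

Each image thus yields one 128-dimensional vector per view (whole image and jigsaw-shuffled patches), which are compared through the losses described below.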
**Memory banks**: The memory bank \(\mathcal{M}\) stores all the feature representations of the dataset \(\mathcal{D}\) at the image level, computed from the original images **I**. These embedding weights are moving averages of the feature representations \(f(\phi_{\theta}(\textbf{I}))\), represented as \(\textbf{m}_{\textbf{I}}\) with assigned indexes, which help to build negative samples \(\textbf{m}_{\textbf{I}^{{}^{\prime}}}\) for each image during contrastive loss estimation. \(\mathcal{M}\) is updated at every epoch with a step size of 0.5 times the previous weight and normalised to between 0 and 1, similar to [34].
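A minimal sketch of such a memory bank is given below, assuming one 128-dimensional entry per training image, the moving-average weight of 0.5 stated above, and random sampling of negatives; the class and method names are illustrative only.

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """Stores one moving-average representation m_I per image in the dataset."""
    def __init__(self, num_images, feat_dim=128, momentum=0.5):
        self.bank = F.normalize(torch.randn(num_images, feat_dim), dim=1)
        self.momentum = momentum

    def negatives(self, indices, n_neg):
        """Sample n_neg representations m_I' belonging to images other than `indices`."""
        mask = torch.ones(self.bank.size(0), dtype=torch.bool)
        mask[indices] = False
        pool = self.bank[mask]
        choice = torch.randint(0, pool.size(0), (n_neg,))
        return pool[choice]

    @torch.no_grad()
    def update(self, indices, features):
        """Exponential moving-average update, then re-normalise to unit length."""
        new = self.momentum * self.bank[indices] + (1 - self.momentum) * features
        self.bank[indices] = F.normalize(new, dim=1)
```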
### _Contrastive loss estimation with margin (CEM) block_
Noise contrastive estimator (NCE) [34, 37] is used to measure the similarity scores \(s\). In our noise contrastive estimator, a positive sample pair (**I**, \(\textbf{I}_{t}\)) has \(n^{-}\) corresponding "negative samples", _i.e.,_ representations of each sample other than **I** (\(\textbf{I}^{{}^{\prime}}\)). Moving average representations for the positive and negative samples (\(\textbf{m}_{\textbf{I}}\), \(\textbf{m}_{\textbf{I}^{{}^{\prime}}}\)) are used from the memory bank to perform a dot-product between the normalised target feature embedding \(t^{+}\) and the normalised positive sample representations \(\textbf{m}_{\textbf{I}}\) (_i.e., a cosine similarity_). However, unlike [34, 37], we propose to add an angular margin to increase the separation between the target embedding \(t^{+}\) and the "negative samples" \(\textbf{m}_{\textbf{I}^{{}^{\prime}}}\) in our contrastive loss estimation with margin block (CEM-block). We do this by first computing the angular separation \(\psi\) between the target embedding and a negative embedding, \(\psi=\arccos<t^{+},\textbf{m}_{\textbf{I}^{{}^{\prime}}}>\), and then adding an angular margin \(m\) to the computed angle, i.e. \(\psi_{new}=\psi+m\). Finally, the cosine of \(\psi_{new}\) gives our new similarity between the target and the negative samples. The same CEM block is applied for both image-level and patch-level NCE loss computations. The NCE models the probability of the binary event given a data distribution \(\mathcal{D}\) and temperature parameter \(\tau\):
\[h(\phi_{\theta}(\textbf{I}),\phi_{\theta}(\textbf{I}_{t}))=\frac{\exp\frac{<\phi_{\theta}(\textbf{I}),\phi_{\theta}(\textbf{I}_{t})>}{\tau}}{\exp\frac{<\phi_{\theta}(\textbf{I}),\phi_{\theta}(\textbf{I}_{t})>}{\tau}+\sum_{\textbf{I}^{{}^{\prime}}\in n^{-}}\exp\frac{<\phi_{\theta}(\textbf{I}^{{}^{\prime}}),\phi_{\theta}(\textbf{I}_{t})>}{\tau}} \tag{1}\]
Adding the angular additive margin to further penalise dissimilarity between target embedding and negative samples, the above equation can be rewritten as:
\[h^{{}^{\prime}}(\phi_{\theta}(\textbf{I}),\phi_{\theta}(\textbf{I}_{t}))=\frac{\exp\frac{<\phi_{\theta}(\textbf{I}),\phi_{\theta}(\textbf{I}_{t})>}{\tau}}{\exp\frac{<\phi_{\theta}(\textbf{I}),\phi_{\theta}(\textbf{I}_{t})>}{\tau}+\sum_{\textbf{I}^{{}^{\prime}}\in n^{-}}\exp\frac{\cos(\psi_{new})}{\tau}}, \tag{2}\]
\[\text{with }\psi_{new}=\arccos\left(<\phi_{\theta}(\textbf{I}^{{}^{\prime}}),\phi_{\theta}(\textbf{I}_{t})>\right)+m \tag{3}\]
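The following snippet sketches how the similarity of Eqs. (1)–(3) could be evaluated for a single target embedding, with the additive angular margin applied only to the negative pairs. Tensor shapes, the clamping for numerical stability, and the default \(\tau\) and \(m\) values (taken from Sec. IV-A3) are our own choices.

```python
import torch

def nce_similarity(target, positive, negatives, tau=0.4, margin=0.5):
    """h'(.,.) of Eq. (2): match probability of `target` with its positive sample.

    target, positive: (D,) unit-norm embeddings; negatives: (K, D) unit-norm memory entries.
    """
    pos_term = torch.exp(torch.dot(target, positive) / tau)
    # Eq. (3): add the angular margin m to the angle between target and each negative
    cos_neg = (negatives @ target).clamp(-1 + 1e-7, 1 - 1e-7)
    psi_new = torch.arccos(cos_neg) + margin
    neg_term = torch.exp(torch.cos(psi_new) / tau).sum()
    return pos_term / (pos_term + neg_term)
```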
The total NCE loss entails minimising the joint loss function in both the image-level and patch-level configurations:
\[\mathcal{L}^{total}_{NCE}(\textbf{I},\textbf{I}_{t})=\lambda\mathcal{L}_{NCE_{I}}(\textbf{m}_{\textbf{I}},f(\phi_{\theta}(\textbf{I})))+(1-\lambda)\mathcal{L}_{NCE_{I_{t}}}(\textbf{m}_{\textbf{I}},g(\phi_{\theta}(\textbf{I}_{t}))) \tag{4}\]
where each loss component is established as a joint probability distribution between the target embedding, positive
Fig. 2: **Block diagram of our proposed self-supervised learning framework with composite pretext-class discrimination losses (SSL-CPCD).** ResNet50 encoder network is fine-tuned with original images with transformations and jigsaw puzzle patches in a self-supervised setting to enable semantically meaningful representation learning for improved generalisability and accuracy in downstream tasks. Contrastive estimator with margin (CEM-block) is separately computed for image-level and path-level instances. Further, a group-wise contrastive loss is computed by comparing the centroids at patch-level (PC\({}^{k}\)) group and at image-level (C\({}^{k}\)). Memory bank \(\mathcal{M}\) is used for storing all representations.
and negative embeddings given as:
\[\mathcal{L}_{NCE_{I}}(\textbf{m}_{\textbf{I}},f(\phi_{\theta}(\textbf{I})))=-\log[h(f(\phi_{\theta}(\textbf{I})),\textbf{m}_{\textbf{I}})]-\sum_{\textbf{I}^{{}^{\prime}}\in\mathcal{D}_{n}}\log[1-h(\textbf{m}_{\textbf{I}^{{}^{\prime}}},f(\phi_{\theta}(\textbf{I})))],\text{ and} \tag{5}\]
\[\mathcal{L}_{NCE_{I_{t}}}(\textbf{m}_{\textbf{I}},g(\phi_{\theta}(\textbf{I}_{t})))=-\log[h(g(\phi_{\theta}(\textbf{I}_{t})),\textbf{m}_{\textbf{I}})]-\sum_{\textbf{I}^{{}^{\prime}}\in\mathcal{D}_{n}}\log[1-h(\textbf{m}_{\textbf{I}^{{}^{\prime}}},g(\phi_{\theta}(\textbf{I}_{t})))]. \tag{6}\]
The configured joint loss \(\mathcal{L}_{NCE}^{total}(\textbf{I},\textbf{I}_{t})\) enables learning representations of image **I** that are closer to its transformed counterpart \(\textbf{I}_{t}\) and to the memory representation \(\textbf{m_{I}}\), which damps the parameter updates in the weights \(\phi_{\theta}\). It also further penalises the representations from other sets of images \(\textbf{I^{\prime}}\).
### \(k\)_-means feature grouping_
One important limitation of single-instance discrimination, as done in the NCE loss, is that it focuses on within-instance similarity induced by data augmentation, assuming each instance is distinctive; in downstream tasks, however, many samples can appear as similar observations of the same instance. Thus, a grouping strategy can help mitigate such limitations, as presented in this Section. **Normalised projection head:** We utilise the linear projection heads to normalise the feature embedding with the \(l_{2}\)-norm, which reduces variance from data augmentation and maps the features onto a unit hypersphere, \(\bar{f}(\phi_{\theta}(\textbf{I}))=\frac{f(\phi_{\theta}(\textbf{I}))}{\|f(\phi_{\theta}(\textbf{I}))\|}\), and \(\bar{g}(\phi_{\theta}(\textbf{I}_{t}))=\frac{g(\phi_{\theta}(\textbf{I}_{t}))}{\|g(\phi_{\theta}(\textbf{I}_{t}))\|}\).
**Feature grouping:** To overcome the limitation of the single instance approach, we have used grouping instances based on the local clusters within a batch of samples. We create \(k\) clusters where \(k\) is the number of classes (say \(n\)) in the downstream tasks and use this to define clusters at image and patch levels. Using spherical \(k\)-means clustering, we group the unit-length feature vectors. We compute the cluster centroids for each image embedding \(C^{k}\) and patch embedding \(PC^{k}\) in batch input with \(k=\{1,...,n\}\), where \(n\) is the number of cluster classes depending on the downstream task. We assign each instance in the image and patches to each of their corresponding nearest centroids, say \(C(i)=j\), meaning instance \(i\) is assigned to centroid \(j\) and so on.
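As a sketch of this grouping step, the function below runs a few iterations of spherical \(k\)-means on the \(l_{2}\)-normalised embeddings of a batch and returns the centroids and the assignment \(C(i)=j\) of each instance; the random initialisation and the fixed iteration count are our own simplifications.

```python
import torch
import torch.nn.functional as F

def spherical_kmeans(embeddings, k, n_iter=10):
    """Cluster unit-norm embeddings (N, D) into k groups on the hypersphere.

    Returns centroids (k, D) and assignments (N,) so that C(i) = j.
    """
    x = F.normalize(embeddings, dim=1)
    centroids = x[torch.randperm(x.size(0))[:k]]          # random initialisation from the batch
    for _ in range(n_iter):
        sim = x @ centroids.t()                           # cosine similarity (N, k)
        assign = sim.argmax(dim=1)
        for j in range(k):
            members = x[assign == j]
            if members.numel() > 0:                       # keep the old centroid if a cluster empties
                centroids[j] = F.normalize(members.mean(dim=0), dim=0)
    return centroids, assign
```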
### _Cross-level discrimination at image and patch-levels_
**Cross-level grouping:** Clusters could be noisy, so we applied a cross-view local group for each instance by an element-wise multiplication of the feature embedding at image-level \(\bar{f}(\phi_{\theta}(\textbf{I}))\) with the cluster centroid of image patches \(PC^{k}{}_{i}\), and at patch-level \(\bar{g}(\phi_{\theta}(\textbf{I}))\) with the cluster centroid \(C_{i}^{k}\) of the images in the batch where \(i\) is the feature embedding assigned to the cluster.
**Cross-level contrastive loss:** The noise contrastive estimation (NCE) loss across the views can be defined using a very similar expression as in Eq. (1). However, here, we will use the group cluster embeddings and centroids, and we want to assume that the group in the patch-level cluster is identical to the group in image level for that specific class. Thus, our cross-level contrastive loss at group-level can be defined as:
\[h_{g}^{f}(\bar{f}_{i},PC_{i})=-\log\frac{\exp\frac{<\bar{f}_{i},PC_{i}>}{\tau}}{\exp\frac{<\bar{f}_{i},PC_{i}>}{\tau}+\sum_{j\neq i}\exp\frac{<\bar{f}_{i},PC_{j}>}{\tau}} \tag{7}\]
Similarly, cross-level grouping of the patch-level representations can be written as:
\[h_{g}^{g}(\bar{g}_{i},C_{i})=-\log\frac{\exp\frac{<\bar{g}_{i},C_{i}>}{\tau}}{\exp\frac{<\bar{g}_{i},C_{i}>}{\tau}+\sum_{j\neq i}\exp\frac{<\bar{g}_{i},C_{j}>}{\tau}} \tag{8}\]
The final group-wise cross-level discrimination loss \(\mathcal{L}_{GCLD}\) incorporating both the image-level and the patch-level representations combined with equal weights (\(\lambda=0.5\)) can be written as:
\[\mathcal{L}_{GCLD}\!\!=\!\!\sum_{k=1}^{k=4}\sum_{i=1}^{N}\{\lambda h_{g}^{f}( \bar{f}_{i},PC_{i}^{k})+(1-\lambda)h_{g}^{g}(\bar{g}_{i},C_{i}^{k})\} \tag{9}\]
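A compact sketch of the cross-level terms in Eqs. (7)–(9) is given below: each image embedding is contrasted against the patch-level centroids, and each patch embedding against the image-level centroids. The outer sum over the cluster sets \(k\) is omitted for brevity, and the shapes and argument names are our own assumptions.

```python
import torch
import torch.nn.functional as F

def cross_level_nce(embeddings, centroids, assignments, tau=0.4):
    """Eqs. (7)/(8): -log softmax similarity of each embedding with its assigned
    centroid from the other view. embeddings: (N, D), centroids: (k, D), assignments: (N,)."""
    logits = embeddings @ centroids.t() / tau
    return F.cross_entropy(logits, assignments, reduction="none")

def gcld_loss(f_img, g_patch, img_centroids, patch_centroids,
              img_to_patch_cluster, patch_to_img_cluster, lam=0.5):
    """Eq. (9): equally weighted image-level and patch-level cross-view terms."""
    h_f = cross_level_nce(f_img, patch_centroids, img_to_patch_cluster)    # h_g^f
    h_g = cross_level_nce(g_patch, img_centroids, patch_to_img_cluster)    # h_g^g
    return (lam * h_f + (1 - lam) * h_g).sum()
```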
### _Proposed CPCD loss_
Our final novel loss function that defines single instance-level, group-level, and cross-level representations as a joint loss optimisation problem is referred to as _composite pretext-class discrimination loss_ (CPCD). In this work, the contrastive noise loss referring to single instance-level representations and the group-wise cross-level discrimination loss are equally weighted (say, \(\lambda^{{}^{\prime}}=0.5\)). Thus, the final CPCD loss \(\mathcal{L}_{CPCD}\) is given as:
\[\mathcal{L}_{CPCD}=\lambda^{{}^{\prime}}\underbrace{\sum_{k=1}^{k=4}\sum_{i=1}^{N}\{\lambda h_{g}^{f}(\bar{f}_{i},PC_{i}^{k})+(1-\lambda)h_{g}^{g}(\bar{g}_{i},C_{i}^{k})\}}_{\mathcal{L}_{GCLD}}+(1-\lambda^{{}^{\prime}})\underbrace{\{\lambda\mathcal{L}_{NCE_{I}}(\textbf{m}_{\textbf{I}},f(\phi_{\theta}(\textbf{I})))+(1-\lambda)\mathcal{L}_{NCE_{I_{t}}}(\textbf{m}_{\textbf{I}},g(\phi_{\theta}(\textbf{I}_{t})))\}}_{\mathcal{L}_{NCE}^{total}} \tag{10}\]
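Putting the two components together, Eq. (10) reduces to a weighted sum that could be computed per batch as sketched below, assuming the GCLD term and the two image/patch NCE terms have already been evaluated:

```python
def cpcd_loss(loss_gcld, loss_nce_image, loss_nce_patch, lam=0.5, lam_prime=0.5):
    """Eq. (10): composite pretext-class discrimination loss."""
    loss_nce_total = lam * loss_nce_image + (1 - lam) * loss_nce_patch   # Eq. (4)
    return lam_prime * loss_gcld + (1 - lam_prime) * loss_nce_total
```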
## IV Experiments
### _Dataset and setup_
#### Iv-A1 Dataset
We have explored various colonoscopic imaging datasets that are available publicly and in-house for three different downstream tasks. For the classification task,
LIMUC [15] and one in-house dataset (collected under universal patient consenting at the Translational Gastroenterology Unit, John Radcliffe Hospital, Oxford) are used. Kvasir-SEG [13] and CVC-ClinicDB [14] are used for the segmentation task. Similarly, Kvasir-SEG [13] is used for experiments on polyp detection as a downstream task. In the pretext task for the detection and segmentation tasks, we have used polyp samples from Kvasir-SEG [13] and non-polyp samples from the SUN [38] dataset for training our SSL model. The details about the datasets and the number of training, validation, and testing samples used are presented in Table I.
#### Iv-A2 Evaluation metrics
We have used standard top-\(k\) accuracy (percentage of samples predicted correctly; top 1 and top 2 are used), F1-score (\(=\frac{2tp}{2tp+fp+fn}\); tp: true positive, fp: false positive, fn: false negative, tn: true negative), specificity (\(=\frac{tn}{tn+fp}\)), sensitivity or recall (\(=\frac{tp}{tp+fn}\)), and Quadratic Weighted Kappa (QWK) for our classification task. For the detection task, standard computer vision metrics, including mean average precision (mAP at an IoU interval [0.25:0.05:0.75]) and AP small, medium and large, were used for our experiments. Dice similarity coefficient (DSC), which is also known as the F1-score, the F2-score (which weights recall more heavily), recall and positive predictive value (PPV, \(=\frac{tp}{tp+fp}\)) have been used for evaluating our segmentation task.
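For reference, the classification metrics above follow directly from binary confusion-matrix counts; a sketch assuming the counts tp, fp, tn, and fn are already available is shown below.

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard metrics used in this work, computed from confusion-matrix counts."""
    f1 = 2 * tp / (2 * tp + fp + fn)           # also equals the dice similarity coefficient
    specificity = tn / (tn + fp)
    recall = tp / (tp + fn)                    # sensitivity
    ppv = tp / (tp + fp)                       # positive predictive value (precision)
    return {"f1": f1, "specificity": specificity, "recall": recall, "ppv": ppv}
```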
#### Iv-A3 Implementation details
The proposed method is implemented using PyTorch [40]. All experiments were conducted on an NVIDIA Quadro RTX 6000 graphics card. For pretext tasks in self-supervised learning, we have used the batch size of 32 and trained our model for 2000 epochs in all experiments or until convergence with stopping criteria. The SGD optimiser with a learning rate of \(1e^{-3}\) was used for training. All input images were resized to \(224\times 224\) pixels.
For the downstream classification task, we have fine-tuned the model with a learning rate of \(1e^{-4}\), the SGD optimiser with a batch size of 32, and the learning rate decay of 0.9 times per 20 epochs. In Table II, we tested the effect of k-fold cross-validation on model training for different k-value settings. The experimental results show that the model achieves the highest Top-1 accuracy and QWK when k = 5. Therefore, we use 5-fold cross-validation in our experiments. For the detection task, we have used the Adam optimiser with a learning rate of 1\(e^{-5}\) and a batch size of 32 to finetune for 400 epochs. For the segmentation task, 300 epochs with a batch size of 16 and an SGD optimiser with a learning rate of \(1e^{-3}\) were used to finetune the model. All experiments used 80% of the dataset for training, 10% for validation, and the remaining held-out 10% for testing. We additionally have used out-of-centre unseen centre datasets for generalisability study. **Hyperparameters:** For group-wise cross-level discrimination loss (\(\mathcal{L}_{GCLD}\) in Eq. (9)), we set \(k=4\) for a number of clusters in classification pretext task, \(k=2\) in detection and segmentation pretext task, \(s=6\) for the re-scaling and \(m=0.5\) for an angular margin. Memory bank proposed in [34] has been used with the same hyperparameters. For Eq. (4) we use \(\lambda=0.5\) and use \(\tau=0.4\) for computing the function \(h(.,.)\) in Eq. (1, 2, 7, and 8). We used an updated weight of 0.5 for the memory bank exponential moving average representations. These values are justified in our ablation study provided in Section IV-D2.
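A minimal sketch of the downstream classification fine-tuning schedule described above (SGD, learning rate \(1e^{-4}\), batch size 32, learning-rate decay of 0.9 every 20 epochs) is shown below; the number of epochs, the loss function, and the data-loader object are placeholders and are not specified in the text.

```python
import torch

def finetune_classifier(model, train_loader, n_epochs=100, device="cuda"):
    """Downstream UC classification fine-tuning following Sec. IV-A3 (epoch count assumed)."""
    model = model.to(device)
    criterion = torch.nn.CrossEntropyLoss()
    optimiser = torch.optim.SGD(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimiser, step_size=20, gamma=0.9)
    for _ in range(n_epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimiser.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimiser.step()
        scheduler.step()
    return model
```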
### _Results_
In this section, we present the comparison of our proposed SSL-CPCD approach with other SOTA SSL methods.
#### Iv-B1 Comparison for UC classification task
ResNet50 (R50) and ResNet50 with a convolution-block attention module (R50-Att.) are first established as baseline models for supervised learning, and the same backbones are then used for comparisons with other SOTA SSL-based methods in Table III for the ulcerative colitis classification task on the LIMUC dataset. Baseline networks R50 and R50-Att. obtained 75.39% and 75.62% on top-1 accuracy, and 82.51% and 82.78% on QWK, respectively. Our proposed SSL-CPCD method yielded the best results with 79.77%, 72.79%, 90.08%, 72.59% and 87.46% on top 1 accuracy, F1 score, specificity, recall and QWK, respectively. Compared to the supervised learning-based baseline model (R50), the top 1 accuracy and QWK are improved by 4.38% and 4.95%, respectively, using our proposed SSL-CPCD with the same backbones. We also compared our proposed SSL-CPCD approach with other SOTA SSL methods, including the popular SimCLR [32], SimCLR+DCL [39], MoCoV2+CLD [33] and PIRL [34] methods. Our proposed network (R50-Att.) outperformed all these methods by margins ranging from nearly 2.4% (PIRL) to 6% (SimCLR) on top-1 accuracy. Similar improvements can be observed on other metrics as well.
#### Iv-B2 Comparison for polyp detection task
The Kvasir-SEG polyp dataset was used to evaluate the performances of SSL on detection as the downstream task in endoscopy. The quantitative results from Table IV show that our proposed SSL-CPCD approach outperforms all the other SOTA methods on all metrics. It achieves 2.29%, 2.7% and 3.3% improvement on mAP compared to SSL methods, including MoCoV2+CLD, SimCLR+DCL and SimCLR, respectively. Our method also improves 1.83% on AP50 and 1.4% on APmedium (medium polyp sizes) compared to MoCoV2+CLD, respectively. Compared to the widely used supervised technique RetinaNet, our method is better on mean average mAP but significantly improves over AP50, AP75 and size-based metrics.
#### Iv-B3 Comparison for polyp segmentation task
The Kvasir-SEG dataset was also used to assess the performance of SSL-based approaches in our experiment for segmentation as a downstream task in endoscopy. Table V compares the result of the proposed SSL-CPCD with other SOTA SSL approaches and baseline supervised model. While proposed SSL-CPCD provided an improvement of 3.12% and 3.72% on DSC and Recall, respectively, for the baseline ResNetUNet in a supervised setting, our approach also showed improvements of 2.03%, 1.28%, 2.36% and 0.58% over MoCoV2 + CLD in DSC, F2-score, recall and PPV, respectively. Higher recall
while keeping the precision (PPV) high (over 90%) indicates that our method is more medically relevant.
### _Generalisation_
To ensure the generalisation of the proposed approach, we trained our model and other methods on one dataset and then tested them on an unseen dataset from different institutions.
#### Iv-C1 Generalisibility study for UC classification
We used the UC classification model trained on the LIMUC dataset collected at Marmara University School of Medicine. We tested this model on our in-house dataset (collected at the John Radcliffe Hospital, Oxford). Table VI assesses the generalisability of our SSL-CPCD model and other SOTA approaches on the UC classification task. Our proposed SSL-CPCD obtained an acceptable Top 1 accuracy of 67.33%, F1-score of 64.69%, specificity of 86.77%, recall of 64.03% and QWK of 78.87%. While outperforming all SOTA approaches, our method achieves an improvement of 5.98% on top 1 accuracy and nearly 9% on QWK compared with MoCoV2+CLD. Table VI shows that our SSL-CPCD outperforms other SOTA methods in various evaluation metrics.
#### Iv-C2 Generalisability study for polyp segmentation
All models for both baseline and SOTA approaches were first trained on the Kvasir-SEG dataset and then tested on the CVC-ClinicDB dataset, for which the results are presented in Table VII. Our proposed SSL-CPCD drastically surpassed baseline supervised approaches (over 10% on DSC for U-Net
and over 7% on DSC with the same backbone on ResNet). In addition, our method obtained an improvement of 4.73% and 6.8%, respectively, over MoCoV2+CLD and SimCLR on DSC. Similarly, an improvement of over 5% relative to PIRL is evident in both backbone settings (R50 and R50-Att.).
### _Ablation studies_
We have conducted an extensive ablation study of our approach. First, we ablated the impact of multiple loss functions, including NCE, GCLD, and the added angular margin \(m\). Then, we conducted an ablation study experiment to further evaluate the performance of our proposed approach under different parameter settings.
#### Iv-D1 Loss functions
Table VIII shows the quantitative results of our ablation study in loss functions. Initially, our proposed method, which contains three loss functions, achieves 79.12% on top 1 accuracy and 72.09% on the F1 score for the classification task. Similarly, it has the best AP50 and mAP of 91.92% and 87.09%, respectively. On the segmentation task, the combined loss also showed improvement when combined with various strategies yielding 81.13% on DSC and 92.18% on PPV. It can be observed that compared with classically using noise contrastive loss only, our approach and modifications led to significant improvements in all downstream tasks by a larger margin (top 1 accuracy, mAP, and DSC improved respectively by 2.61%, 0.97% and 2.07%).
#### Iv-D2 Impact of hyper-parameters
The quantitative results for the ablation study of different parameter settings are shown in Table IX. We set different weight and temperature in Eq. (4-9). Weight parameter \(\lambda=\{0.1,0.25,0.5,1\}\) and temperature \(\tau=\{0.2,0.4,0.6\}\) are used for searching best parameters experimentally. As shown in Table IX, when weight and temperature parameters are 0.5 and 0.4, respectively, our method achieves the best results in classification and segmentation tasks with 79.77% on Top 1 accuracy and 82.32% on DSC, respectively. For the detection task, the best performance of our SSL-CPCD was obtained when \(\lambda=0.5\) and \(\tau=0.6\).
### _Qualitative Analysis_
In the UC classification task in Fig. 3, a t-distributed stochastic neighbour embedding (t-SNE) plot of the test image sample embeddings and the gradient-weighted class activation map (Grad-CAM) method are used to visualise model performance. It can be observed (Fig. 3 a) that the test images are randomly distributed in the raw sample distribution. After model training, images of the same class cluster in the same region. The SSL-CPCD method using the clustering loss improves more than the baseline supervised model and the SSL-based PIRL approach. Using SSL-CPCD, it can be observed that the same categories are more concentrated in the same area, and there are clear boundaries between different categories, which in other cases are not apparent. Similarly, while looking at the attention maps (Fig. 3 b), the baseline method focuses on the wrong
Fig. 3: t-SNE plot for the raw test data, baseline network (supervised) and two SSL approaches (top), and attention maps of the proposed SSL-CPCD compared to other SOTA methods for multi-class ulcerative colitis classification task (MES 0, MES 1, and MES 2 (bottom)).
location in some images (see the first and second rows in Fig. 3 b). In other SOTA SSL methods, the model notices the correct location, but the lesion location is not accurate. Our proposed SSL-CPCD can accurately identify the severely affected lesion area and shape. For the polyp detection task (Fig. 4), it can be clearly seen that baseline and other SSL methods cannot accurately locate the polyp's spatial location. Especially for the second and fourth examples in the figure, most methods have enlarged boundaries and even multiple bounding boxes. However, our SSL-CPCD approach can locate the polyp position more accurately, and the bounding boxes are closer to ground truth.
In the polyp segmentation task (Fig. 5), the baseline method incorrectly identifies non-polyp regions as polyps and over- or under-segments the area. Although other SSL methods did not misidentify the polyp region, they only segmented part of the polyp. SSL-CPCD can segment polyps more accurately, similar to the ground truth labels. Our proposed SSL-CPCD maintains the best segmentation results in all examples.
## V Discussion and conclusion
Supervised learning methods struggle to discriminate between disease-relevant changes in tissue structure, especially in endoscopic image analysis, which affects model accuracy and generalisability. We explored a self-supervised learning (SSL) approach that can learn semantically meaningful features and representations invariant to texture and illumination changes in endoscopic images, making them more robust. We show that learning these representations from a large number of unlabelled endoscopic images mitigates the risk of limited labels and provides improved results compared to widely used supervised techniques. Even though SSL-based approaches have been proposed in the past for natural scenes [32, 33, 34], to our knowledge no comprehensive study has been conducted for endoscopic image analysis. We propose a novel composite pretext-class discrimination loss (CPCD) that combines noise contrastive losses at the single-instance level and the group-instance level, showing significant improvements compared to other SSL methods. Here, instance discrimination obtains meaningful representations through instance-level contrastive learning, which can be used to reflect the apparent similarities between instances.
Instance discrimination assumes that each example is significantly different from the others and can be treated as a separate category. However, endoscopic video frames tend to have high similarity, making it extremely hard to learn reliable features. This significant similarity between training samples in conventional self-supervised learning means that the negative pairs used in contrastive learning are likely to be composed of highly similar instances, leading to a large number of false repulsions during contrastive training. We address this problem in two ways. First, we propose a patch-level instance-group discrimination (GCLD) loss, which performs \(k\)-means clustering on instances so that similar instances are clustered into the same group; the erroneous rejection of high-similarity instances is thereby alleviated in the subsequent contrastive loss. In addition, we further optimise the loss function by adding an angular margin \(m\) between positive and negative samples in contrastive learning (see ablation study results in Table VIII). Our proposed SSL-CPCD achieves significant improvement in all three representative tasks for anomalies in colonoscopy images. In the ulcerative colitis classification task, SSL-CPCD achieved the highest Top 1 accuracy of 79.77% and the highest F1 score of 72.79% on LIMUC (see Table III). Likewise, we reported the highest values of 88.62%, 94.69%, and 92.27% for mAP, AP25, and AP50 on Kvasir-SEG in the polyp detection task (see Table IV). Furthermore, we report the best DSC, recall and PPV for the polyp segmentation task on the Kvasir-SEG dataset (see Table V). In the generalisability assessment, SSL-CPCD achieves the highest Top 1 accuracy and QWK of 67.33% and 78.87% (see Table VI), and the highest DSC of 67.93% (see Table VII).
To the best of our knowledge, our proposed approach combining image-level and group-level instances in a contrastive loss-based framework for self-supervised learning in endoscopic image analysis is unique and has not been explored before. Our SSL-CPCD approach is capable of learning representative features from unlabelled images that evidently improve the downstream tasks. Our strategy of an added angular margin increases the geometric distance between positive and negative samples. Our experiments demonstrate the effectiveness and
Fig. 4: Qualitative comparison of our proposed SSL-CPCD with other SOTA methods for polyp detection task.
Fig. 5: Qualitative comparison of our proposed SSL-CPCD with other SOTA methods for polyp segmentation task.
improvement of our SSL-CPCD method over several SOTA self-supervised methods on three downstream tasks for complex colonoscopic images. Cross-dataset testing confirmed the generalisation ability of our SSL-CPCD approach, which is superior to all SOTA SSL-based methods.
|
2309.14767 | Theoretical determination of the effect of a screening gate on
plasmon-induced superconductivity in twisted bilayer graphene | The microscopic pairing mechanism for superconductivity in magic-angle
twisted bilayer graphene remains an open question. Recent experimental studies
seem to rule out a purely electronic mechanism due to the insensitivity of the
critical superconducting temperature to either a highly doped screening layer
or the proximity to a metallic screening gate. In this theoretical work, we
explore the role of external screening layers on the superconducting properties
of twisted bilayer graphene within a purely electronic mechanism. Consistent
with the experimental observations, we find that the critical temperature is
unaffected by screening unless the screening layer is closer than 3 nanometers
from the superconductor. Thus, the available transport data is not in
contradiction with a plasmon-mediated mechanism. We also investigate other
properties of this plasmon-mediated superconductivity including signatures in
the tunneling density of states as probed in spectroscopy experiments. | Liangtao Peng, Indra Yudhistira, Giovanni Vignale, Shaffique Adam | 2023-09-26T09:00:17Z | http://arxiv.org/abs/2309.14767v1 | Theoretical determination of the effect of a screening gate on plasmon-induced superconductivity in twisted bilayer graphene
###### Abstract
The microscopic pairing mechanism for superconductivity in magic-angle twisted bilayer graphene remains an open question. Recent experimental studies seem to rule out a purely electronic mechanism due to the insensitivity of the critical superconducting temperature to either a highly doped screening layer or the proximity to a metallic screening gate. In this theoretical work, we explore the role of external screening layers on the superconducting properties of twisted bilayer graphene within a purely electronic mechanism. Consistent with the experimental observations, we find that the critical temperature is unaffected by screening unless the screening layer is closer than 3 nanometers from the superconductor. Thus, the available transport data is not in contradiction with a plasmon-mediated mechanism. We also investigate other properties of this plasmon-mediated superconductivity including signatures in the tunneling density of states as probed in spectroscopy experiments.
## I Introduction
The discovery of superconductivity in magic-angle twisted bilayer graphene (MATBG) [1; 2] has attracted tremendous interest, in part due to the similarity of the observed phase diagram with the long-standing problem of high-temperature superconductivity in cuprates. At present, there is no consensus on the microscopic mechanism for superconductivity in a system as simple as two rotated sheets of carbon. A number of theoretical studies based on the Bardeen-Cooper-Schrieffer (BCS) approach have focused on a phonon-mediated mechanism [3; 4; 5]. Typically, a pure phonon mechanism gives a critical temperature \(T_{c}\sim 1\)K (slightly below what is seen in experiments). However, considering the dynamical polarizability, we have argued recently [6; 7] that the phonon deformation potential is likely to be strongly screened by the large density of states in MATBG. In contrast, as was pointed out recently by [8; 9], Umklapp processes in the reduced moire Brillouin zone might act to increase the strength of phonon pairing. On the other hand, the emergence of flat bands [10] strongly enhances the electron-electron interaction which favors plasmonic superconductivity [8; 9; 11] with larger critical temperatures \(T_{c}\sim 10\)K.
Recently, three experimental papers [12; 13; 14] have investigated the role of screening on this superconducting state. Refs. [12; 13] varied the distance of an external metal screening gate from 6.7 nm to 68 nm and found that the correlated insulating phases were killed when the screening gate was close by, but the superconductivity survived. Similarly, Ref. [14] used a nearby Bernal-stacked bilayer graphene with varying carrier density to provide external screening. They also observed that superconductivity was more robust when Coulomb interaction was weakened by screening, and the critical temperature remained roughly constant over a wide range of doping in the screening layer. One might naively expect that these experiments rule out an electronic mechanism for superconductivity in twisted bilayer graphene. To the contrary, in this theoretical work, we show that the critical superconducting temperature predicted using the plasmon pairing mechanism is unchanged in the regimes probed in these experiments. For both experimental configurations, we find that the critical temperature is only suppressed for \(l\lesssim 3\)nm, which was not the case in either of the experiments. We can understand this as the length scale at which the bare Coulomb interaction is modified by the metal gates.
In what follows we systematically investigate the robustness of the plasmon mechanism for superconductivity mediated by a screened Coulomb repulsion. We first study the unscreened case, where it was previously shown [11] that the momentum-averaged Coulomb interaction can be modeled by a Lorentzian form. In this limit, we prove that the critical temperature for the plasmonic mechanism is non-monotonic as a function of experimental parameters such as doping and twist angle. We discuss this non-monotonicity in terms of averaged plasmon frequency and the unscreened Coulomb interaction. For the screened Coulomb interaction, we find that the Lorentzian approximation no longer holds. A full numerical calculation reveals that the critical temperature is not sensitive to the external screening gate unless it is closer than \(l\approx 3\)nm. Finally, we solve the full-bandwidth Eliashberg equation and compute the tunneling density of state and find that, similar to the phonon-like mechanism, the plasmon mechanism gives a hard gap in the spectral function.
The paper is organized as follows. In Sec. II, we introduce a minimal theoretical model for plasmonic superconductivity. In Sec. III, we discuss plasmon-induced superconductivity in magic angle twisted bilayer graphene. Sec. IV focuses on superconductivity under external screening, including the hybrid double-layer structure and the metal gate structure. In Sec. V, we solve the full-bandwidth Eliashberg equation and compute the tunneling density of states. Finally, we discuss the conclusions and future directions of our work in Sec. VI. All derivations and technical details are provided in Appendices A-G.
## II Minimal model for plasmonic superconductivity
We begin by introducing a minimal theoretical model that encapsulates the essential characteristics of plasmonic superconductivity. It has been shown that superconductivity mediated by purely electronic mechanism can be studied by introducing a momentum-averaged frequency-dependent Coulomb interaction [11; 15]:
\[\lambda_{n,m}=N\left(E_{F}\right)\langle\left\langle V(i\omega_{n}-i\omega_{m })\right\rangle\rangle, \tag{1}\]
Here, \(\lambda_{n,m}\) signifies the pairing strength of Cooper pairs, \(N\left(E_{F}\right)\) represents the density of states (DOS) at the Fermi level, and \(\langle\left\langle V(i\omega_{n}-i\omega_{m})\right\rangle\rangle\) denotes the momentum-averaged Coulomb interaction:
\[\langle\left\langle V\left(i\omega_{n}\right)\right\rangle\rangle=\frac{\sum_{ \mathbf{k},\mathbf{p}}\Theta\left(\mathbf{k}_{c}-\mathbf{k}\right)\Theta\left( \mathbf{k}_{c}-\mathbf{p}\right)V\left(\mathbf{k}-\mathbf{p},i\omega_{n} \right)}{\sum_{\mathbf{k},\mathbf{p}}\Theta\left(\mathbf{k}_{c}-\mathbf{k} \right)\Theta\left(\mathbf{k}_{c}-\mathbf{p}\right)}, \tag{2}\]
where \(\mathbf{k}_{c}=2\mathbf{k}_{F}\) represents the momentum cutoff, \(\Theta\) is the Heaviside function, and \(V\left(\mathbf{k},i\omega_{n}\right)\) denotes the dynamically screened Coulomb interaction. Following the procedures proposed by Grabowski and Sham [15], the linearized isotropic Eliashberg gap equation can be formulated as
\[\Delta_{n}=-2\tilde{T}_{c}\sum_{m=-\infty}^{\infty}\frac{1}{Z_{m}\tilde{\omega}_{m}}\arctan\frac{1}{Z_{m}\tilde{\omega}_{m}}\lambda_{n,m}\Delta_{m}, \tag{3}\]
where \(\widetilde{T}_{c}=k_{B}T_{c}/E_{F}\) represents the superconducting critical temperature, and \(\widetilde{\omega}_{n}=(2n+1)\pi\tilde{T}_{c}\) denotes the dimensionless Matsubara frequency. Both quantities are scaled by the Fermi energy to be dimensionless. \(Z_{n}\) is the mass renormalization function considering self-energy corrections, and \(\Delta_{n}\) is the order parameter. Equation (3) has the form of an eigenvalue equation \(\tilde{\Delta}=\tilde{C}\tilde{\Delta}\), where the critical temperature \(\tilde{T}_{c}\) can be determined by identifying the largest eigenvalue of the matrix \(\tilde{C}\) that equals 1 (refer to Appendix D for details).
In general, \(\lambda_{n,m}\) needs to be determined numerically due to the complexity of the Coulomb interaction. To simplify the problem, we adopt the Lorentzian approximation, which has been demonstrated to hold well in 2D systems [15], allowing us to model the pairing interaction by the Lorentzian form:
\[\lambda_{n,m}=\mu\left(1-\frac{\tilde{\Omega}_{b}^{2}}{\tilde{\Omega}_{b}^{2 }+(\tilde{\omega}_{n}-\tilde{\omega}_{m})^{2}}\right), \tag{4}\]
where \(\mu\) represents the high-frequency limit of the pairing strength, set by both the bare Coulomb interaction and the density of states at the Fermi level. \(\tilde{\Omega}_{b}=\Omega_{b}/E_{F}\) represents the averaged (scaled) plasmon frequency, controlling the overall transition of the pairing strength from low to high frequency.
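To make the numerical procedure concrete, the sketch below builds the Lorentzian pairing matrix of Eq. (4) on a truncated Matsubara grid and bisects on the reduced temperature until the largest eigenvalue of the gap-equation kernel of Eq. (3) reaches unity. The frequency cutoff, the neglect of mass renormalisation (\(Z_{m}=1\)), and the bisection bounds are simplifications for illustration only.

```python
import numpy as np

def largest_eigenvalue(T, mu, omega_b, n_max=256):
    """Largest eigenvalue of the linearised gap-equation kernel (Eq. (3) with Z_m = 1)."""
    n = np.arange(-n_max, n_max)
    w = (2 * n + 1) * np.pi * T                                   # scaled Matsubara frequencies
    lam = mu * (1.0 - omega_b**2 / (omega_b**2 + (w[:, None] - w[None, :])**2))  # Eq. (4)
    weight = np.arctan(1.0 / np.abs(w)) / np.abs(w)               # (1/w) arctan(1/w), even in w
    kernel = -2.0 * T * lam * weight[None, :]
    return np.linalg.eigvals(kernel).real.max()

def critical_temperature(mu, omega_b, t_lo=1e-4, t_hi=0.2, n_bisect=40):
    """Bisection on the reduced temperature for the condition max eigenvalue = 1."""
    for _ in range(n_bisect):
        t_mid = 0.5 * (t_lo + t_hi)
        if largest_eigenvalue(t_mid, mu, omega_b) > 1.0:
            t_lo = t_mid     # instability persists below T_c, so T_c lies above t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```

Scanning such a routine over \(\mu\) and \(\tilde{\Omega}_{b}\) produces a map of the same form as Fig. 1(a).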
Figure 1(a) shows the color map of the critical temperature \(\tilde{T}_{c}(\mu,\tilde{\Omega}_{b})\) within the Lorentzian approximation. The dashed black line traces the maximum critical temperature for a fixed \(\tilde{\Omega}_{b}\), while the dashed yellow line traces the same for a fixed \(\mu\). Notably, we observe a non-monotonic behavior of the superconducting transition temperature for both fixed \(\mu\) and \(\tilde{\Omega}_{b}\) cases in Fig. 1(b) and Fig. 1(c).
Figure 1: Critical temperature predicted by the Lorentzian model (Eq. (4)) for plasmon-mediated superconductivity valid in the absence of a screening gate. Within this model, the critical temperature (scaled by Fermi energy) \(\tilde{T}_{c}\) is a function of two parameters: high-frequency limit of the Coulomb interaction \(\mu\) and averaged (scaled) plasmon frequency \(\tilde{\Omega}_{b}\). (a) Color map of \(\tilde{T}_{c}(\mu,\tilde{\Omega}_{b})\). The dashed black line traces the maximum critical temperature for fixed \(\tilde{\Omega}_{b}\), while the dashed yellow line traces the same for fixed \(\mu\). (b) \(\tilde{T}_{c}\) versus \(\tilde{\Omega}_{b}\) for a given coupling strength \(\mu\). (c) \(\tilde{T}_{c}\) versus \(\mu\) for a given averaged (scaled) plasmon frequency \(\tilde{\Omega}_{b}\). In both cases, \(\tilde{T}_{c}\) shows non-monotonic behavior that can be understood analytically (see Appendix A).
In both cases, the non-monotonic behavior can be understood analytically, as detailed in Appendix A. A similar phenomenon has also been reported in recent works [16], where non-monotonicity arises from the on-site repulsion \(U\) in bond-Peierls bipolaronic superconductors. This unique non-monotonic behavior, absent in conventional phonon mechanisms, assumes significant importance in the context of MATBG, a point we will elucidate later.
## III Plasmonic superconductivity in magic angle twisted bilayer graphene
In this section, we will focus on superconductivity in MATBG. The dynamic properties of the screened Coulomb interactions for MATBG have been widely investigated in recent years [17; 18; 19; 20; 21; 22]. Intrinsically undamped plasmon modes have been reported near the magic angle and are believed to dominate in the moire system. In this work we consider the dynamically screened Coulomb interaction
\[V(\mathbf{q},i\omega)=\frac{V(\mathbf{q})}{\epsilon(\mathbf{q},i\omega)}, \tag{5}\]
where \(V(\mathbf{q})=2\pi e^{2}/\kappa q\) represents the bare Coulomb interaction, \(\kappa\) is the background dielectric constant. We adopt \(\kappa=3.03\) to account for the background dielectric subtraction for hexagonal boron nitride (hBN). \(\epsilon(\mathbf{q},i\omega)\) is the dynamic dielectric function calculated via random phase approximation (RPA)
\[\epsilon(\mathbf{q},i\omega)=1-V(\mathbf{q})\Pi(\mathbf{q},i\omega), \tag{6}\]
and the polarization function \(\Pi(\mathbf{q},i\omega)\) is given by
\[\Pi(\mathbf{q},i\omega)=2\sum_{\mathbf{k}}\sum_{m,n}\frac{\left(f_{\mathbf{k }+\mathbf{q}}^{n}-f_{\mathbf{k}}^{m}\right)F_{\mathbf{k},\mathbf{k}+\mathbf{ q}}^{nm}}{E_{\mathbf{k}+\mathbf{q}}^{n}-E_{\mathbf{k}}^{m}-i\omega}, \tag{7}\]
where \(m,n\) are band indices, \(f_{\mathbf{k}}^{m}\) is the Fermi-Dirac distribution, and \(F_{\mathbf{k},\mathbf{k}+\mathbf{q}}^{nm}=\left|\psi_{n,\mathbf{k}+\mathbf{q}}^{\dagger}\psi_{m,\mathbf{k}}\right|^{2}\) is the form factor associated with different Bloch states. The additional factor of 2 accounts for spin degeneracy, and the momentum \(\mathbf{k}\) spans the whole Brillouin zone. Starting from the continuum model [10], we apply RPA calculations to obtain the dynamic polarizability based on rigid/relaxed band structures, enabling us to compute the dynamic Coulomb interaction and the momentum-averaged Coulomb interaction. For the realistic model, the Lorentzian approximation may not be valid; therefore, we seek the full numerical solution instead (see Appendix D for more details).
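As a schematic numerical sketch of Eqs. (5)–(7), the function below assembles the RPA dielectric function and the screened interaction on the imaginary-frequency axis, assuming that the band energies, occupations, and form factors at \(\mathbf{k}\) and \(\mathbf{k}+\mathbf{q}\) have already been obtained from the continuum model; units, prefactors, and array layouts are our own conventions.

```python
import numpy as np

def rpa_screened_interaction(q, omegas, E_k, E_kq, f_k, f_kq, form_factors, area,
                             kappa=3.03, e2=1.44):
    """Screened Coulomb interaction V(q, i*omega) within RPA (Eqs. (5)-(7)).

    E_k, E_kq, f_k, f_kq: band energies/occupations at k and k+q, shape (Nk, Nb);
    form_factors: |<psi_{n,k+q}|psi_{m,k}>|^2, shape (Nk, Nb, Nb) indexed [k, n, m];
    e2 ~ e^2/(4*pi*eps_0) in eV*nm, so V(q) = 2*pi*e2/(kappa*q) carries units of eV*nm^2.
    """
    v_q = 2.0 * np.pi * e2 / (kappa * q)                          # bare Coulomb interaction
    df = f_kq[:, :, None] - f_k[:, None, :]                       # f^n_{k+q} - f^m_k
    dE = E_kq[:, :, None] - E_k[:, None, :]                       # E^n_{k+q} - E^m_k
    v_screened = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        pol = 2.0 / area * np.sum(df * form_factors / (dE - 1j * w))   # Eq. (7), spin factor 2
        eps = 1.0 - v_q * pol.real                                # Eq. (6); real on the imaginary axis
        v_screened[i] = v_q / eps                                 # Eq. (5)
    return v_screened
```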
Figure 2(a) shows the numerical results for the plasmon-mediated superconducting critical temperature in MATBG. We found that the lattice relaxation suppresses the superconductivity across a wide range of filling factors [23]. A weak bimodal feature is observed in the relaxed structures, which is similar to our previous observations [11]. This peak behavior qualitatively agrees with the filling factor window where the angle-dependent dome feature is reported in experiment. We also investigate the twist angle dependence as shown in Fig. 2(b). The bimodal structure persists even when the lattice relaxation effects are included. Relaxation acts as a redefinition of the magic angle, with a weak suppression of the critical temperature.
The non-monotonic behavior of the critical temperature with twist-angle and filling can be understood through the Lorentzian approximation discussed in Sec. II. By fitting the frequency-dependent Coulomb interaction to the Lorentzian form (Eq (4)), we can extract the corresponding parameters, as shown in Fig. 3. The results show that the averaged plasmon frequency \(\tilde{\Omega}_{b}\) depends weakly on filling but strongly on the twist angle, while the high-frequency limit of the Coulomb interaction \(\mu\) mostly follows the density of states. The trajectory mapped in the \(\tilde{T}_{c}(\mu,\tilde{\Omega}_{b})\) phase space when changing the filling is shown in Fig. 3(c), and similarly, it is shown in Fig. 3(f) for changing the twist angle. In both cases, we observe a non-monotonic dependence of \(T_{c}\). This non-monotonic behavior primarily results from the increase in \(\mu\), which is due to the enhanced density of states near
Figure 2: Plasmon-mediated superconducting critical temperature for magic-angle twisted bilayer graphene. (a) \(T_{c}\) versus band filling factor both with and without lattice relaxation effects for \(\theta=1.05^{\circ}\). The results show a weak bimodal feature. The suppression of \(T_{c}\) with relaxation is attributed partially to the change in magic angle when relaxation effects are included. (b) \(T_{c}\) versus twist angle for fixed filling factor \(\nu=2\). The bimodal dependence on twist angle persists even when lattice relaxation effects are included.
the magic angle. Additionally, it is worth noting that the maximum value of \(\tilde{T}_{c}\) is largely set by the momentum-averaged plasmon frequency, which within the BM continuum model does not exceed the values shown in Fig. 3.
## IV The role of external screening
Now we turn to the case where external screening is involved. We start with the derivation of effective Coulomb interaction and then focus on hybrid double-layer structure and metal gate structure.
### Effective Coulomb interaction
Inspired by the graphene double-layer experiments [24; 25; 26; 27; 28; 29], we consider the hybrid structure involving MATBG and a two-dimensional (2D) material, as depicted in the inset of Fig. 4. The structure consists of MATBG and the 2D material being separated by a distance \(l\) within a homogeneous background with a dielectric constant \(\kappa_{m}\). The MATBG layer is designated as the first layer, while the 2D layer constitutes the second layer. On either side of the background, materials with dielectric constants \(\kappa_{t}\) and \(\kappa_{b}\) are placed. For simplicity, we assume that both materials are at the same distance \(d\) from the hybrid structure.
The electron-electron interaction among the charge carriers in MATBG is affected by the dielectric properties of the environment, which are encoded in the bare Coulomb interaction as introduced in Sec. III. When the second layer is introduced, the charge density in the second layer will alter the dielectric properties of the environment and the bare Coulomb interaction, thereby influencing its superconductivity. We analytically derive the effective bare Coulomb interaction and find
\[V_{\rm eff}(\mathbf{q},i\omega)=V_{11}(q)\left[1-\frac{V_{12}(q)V_{21}(q)}{V_ {11}(q)V_{22}(q)}\left(1-\frac{1}{\epsilon_{2}(\mathbf{q},i\omega)}\right) \right]. \tag{8}\]
The detailed derivation of this result can be found in Appendix G. Here, \(\epsilon_{2}=1-V_{22}\Pi_{2}\) represents the dielectric function of the 2D layer. Consequently, the dynamic
Figure 3: The non-monotonic behavior of critical temperature with twist-angle and filling can be understood by mapping the finite-temperature, finite-frequency random phase approximation calculation of the Bistritzer-MacDonald continuum model without a screening gate to the Lorentzian approximation used in this work. The averaged plasmon frequency depends weakly on filling factor (a) and strongly on twist angle (d). By contrast, the high-frequency limit of the Coulomb interaction mostly just follows the density of states, shown in (b) and (e). The trajectory mapped in the \(\tilde{T}_{c}(\mu,\tilde{\Omega}_{b})\) phase space when changing filling is shown in (c), and similarly, shown in (f) for changing the twist angle. In both cases, there is a non-monotonic dependence of \(T_{c}\). The maximum value of \(\tilde{T}_{c}\) is largely set by the momentum-averaged plasmon frequency, which within the BM continuum model does not exceed the values shown here.
screened Coulomb interaction within MATBG layer is
\[V(\mathbf{q},i\omega)=\frac{V_{\mathrm{eff}}(\mathbf{q},i\omega)}{1-V_{\mathrm{ eff}}(\mathbf{q},i\omega)\Pi_{1}(\mathbf{q},i\omega)}. \tag{9}\]
Notably, Eq. (8) enables us to determine the role of external screening on the superconducting properties of MATBG. In particular, we can calculate how the dielectric properties of the screening layer \(\epsilon_{2}(\mathbf{q},i\omega)\) impact the Coulomb interaction within the MATBG layer. We have checked that in the static limit, our result reproduces the effective screened Coulomb interaction previously used in the literature [14].
We first discuss two limiting cases, (i) where the screening layer is an insulator like bulk h-BN commonly used as a spacer in heterostructures, and (ii) where it is a metal like graphite or silicon used as gates in such structures. For simplicity, we assume \(\kappa_{t}=\kappa_{b}=\kappa_{m}=\kappa\), but this could be relaxed without any complication. In the insulator limit, the screening layer is replaced by a background insulator, leading to a zero polarizability (\(\Pi_{2}\to 0\)). The effective bare Coulomb interaction simplifies to
\[\lim_{\Pi_{2}\to 0}V_{\mathrm{eff}}(\mathbf{q},i\omega)=V_{11}(q)=\frac{2\pi e ^{2}}{\kappa q}, \tag{10}\]
which resembles the commonly used Coulomb interaction for 2D materials in a dielectric environment. In the metallic limit, on the other hand, \(\Pi_{2}\to\infty\). This results in
\[\begin{split}\lim_{\Pi_{2}\to\infty}V_{\mathrm{eff}}(\mathbf{q}, i\omega)&=V_{11}(q)\left(1-\frac{V_{12}(q)V_{21}(q)}{V_{11}(q)V_{22}(q)} \right)\\ &=\frac{2\pi e^{2}}{\kappa q}\left(1-e^{-2lq}\right),\end{split} \tag{11}\]
which again corresponds to the standard case where MATBG is screened by an external metal [30; 9; 31]. These two limits serve to benchmark the role of external screening materials. We note that the metallic limit is a lower bound for the critical temperature, and in the presence of a screening layer \(T_{c}\) should exceed this value.
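As a quick numerical illustration of Eq. (8) and of the two limits in Eqs. (10) and (11), the sketch below evaluates the effective interaction for the simple bare potentials \(V_{11}=V_{22}=2\pi e^{2}/q\) and \(V_{12}=V_{21}=V_{11}e^{-ql}\) of Appendix G; the units (\(e^{2}/\kappa=1\)) and the values of \(q\), \(l\), and \(\epsilon_{2}\) are illustrative assumptions, not results of this work.

```python
import numpy as np

def v_eff(q, l, eps2):
    """Effective bare Coulomb interaction in the active (MATBG) layer, Eq. (8)."""
    v11 = 2 * np.pi / q                  # 2*pi*e^2/q with e^2 = kappa = 1
    v12 = v11 * np.exp(-q * l)
    return v11 * (1.0 - (v12 * v12) / (v11 * v11) * (1.0 - 1.0 / eps2))

q, l = 0.15, 5.0                          # nm^-1 and nm, order-of-magnitude values
print("insulator limit (eps2 -> 1):", v_eff(q, l, 1.0),
      " expected:", 2 * np.pi / q)                               # Eq. (10)
print("metal limit (eps2 -> inf): ", v_eff(q, l, 1e12),
      " expected:", 2 * np.pi / q * (1 - np.exp(-2 * q * l)))    # Eq. (11)
```

A partially screening layer with finite \(\epsilon_{2}\) interpolates smoothly between these two benchmarks.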
### Numerical results
To make a comparison with experiments, we mainly focus on two types of structures: (i) the hybrid double-layer structure [14], where the screening layer is a 2D semiconductor such as single-layer graphene (SLG) or bilayer graphene (BLG), and (ii) the metal gate structure [12; 13], where a metal like graphite or silicon is used as gates.
In the first structure, we assume that the top and bottom layers are metals, i.e., \(\kappa_{t},\kappa_{b}\to\infty\). Figure 4 shows the critical temperature \(T_{c}\) as a function of the carrier density in the screening layer, denoted as \(n^{\mathrm{2D}}\), for different screening materials. Here, we assume that the 2D material layer is either SLG (purple line) or BLG (yellow line). Within the range of carrier densities commonly used in experiments, the critical temperature remains almost unchanged. Our results align with those of Ref. [14], where a nearly constant critical temperature is observed in this structure. We also observe that the critical temperature is slightly higher than the insulator limit (the point labeled "hBN"), indicating that the superconductivity is stabilized by the screening layer. This finding is akin to an earlier experimental study [32], where insulating tungsten diselenide (WSe2) monolayers sandwiched between hBN and TBG contributed to the stabilization of superconductivity.
In the second structure, a single metal gate is considered as the screening layer, i.e., \(\Pi_{2}\to\infty\). The corresponding bare Coulomb interaction has been discussed in Eq. (11). Figure 5(a) shows the critical temperature as a function of the filling factor \(\nu\) for various separation distances \(l\). As \(l\) decreases, the bimodal feature disappears and is replaced by a single peak near the VHS. In Fig. 5(b), a non-monotonic transition temperature is also observed at fixed filling factors. This non-monotonicity can also be understood by mapping to the Lorentzian model, as shown in Fig. 5(c) and (d). Notably, we observe that the critical temperature remains relatively constant across a wide range of separations but experiences a significant drop when \(l\lesssim 3\) nm. However, the typical value of \(l\) in experiments is around 7 nm to 68 nm, larger than the distance at which superconductivity is visibly suppressed.
The conclusion above can be understood as a result of the comparison between the size of the moire Brillouin zone and the separation distance. According to Eq. (11),
Figure 4: Critical temperature \(T_{c}\) as a function of carrier density in the screening layer \(n^{\mathrm{2D}}\) for different screening materials. Here \(\theta=1.05^{\circ}\) and \(\nu=2\) were used for the MATBG layer. Purple and yellow lines represent the 2D material layer – either single layer graphene (SLG) or bilayer graphene (BLG). Within the range of carrier densities commonly used in experiments, the critical temperature is almost unchanged. The points labeled “hBN” and “metal” represent the two limits discussed in the main text, corresponding to \(\Pi_{2}\to 0\) and \(\Pi_{2}\to\infty\). The inset shows the schematic of the MATBG-2D heterostructure.
the external metal gate will suppress the bare Coulomb interaction with a factor of \(1-e^{-2lq}\). The corresponding dynamic screened Coulomb interaction can be obtained by combining Eq. (5) and Eq. (6),
\[\begin{split} V(\mathbf{q},i\omega)&=\frac{V_{0}( \mathbf{q})\left(1-e^{-2lq}\right)}{1-V_{0}(\mathbf{q})\left(1-e^{-2lq}\right) \Pi(\mathbf{q},i\omega)}\\ &=\frac{V_{0}(\mathbf{q})}{\frac{1}{1-e^{-2lq}}-V_{0}(\mathbf{q}) \Pi(\mathbf{q},i\omega)},\end{split} \tag{12}\]
where \(V_{0}(\mathbf{q})=2\pi e^{2}/\kappa q\) represents the bare Coulomb interaction without external screening. Eq. (12) approaches zero in the limit \(l\to 0\), indicating that the Coulomb interaction is screened out when the external metal gate is close by. The dimensionless quantity \(1/\left(1-e^{-2lq}\right)\) sets the scale for when the screening due to the external metal gate becomes important. We note that, due to the large lattice constant in moire systems, a small separation distance is needed to screen out the Coulomb interaction. As an approximation, we take the Fermi momentum \(q_{F}\approx q(\Gamma\to K)/2=2\pi/(2\sqrt{3}L_{M})\approx 0.15\;\mathrm{nm}^{-1}\), which gives \(1/\left(1-e^{-2lq}\right)\approx 1.05\) for \(l=10\;\mathrm{nm}\), \(1.29\) for \(l=5\;\mathrm{nm}\), and \(1.69\) for \(l=3\;\mathrm{nm}\), indicating that significant suppression happens only when \(l\lesssim 3\) nm. Since \(l/L_{M}\) is the relevant quantity, for even smaller twist angles (with larger \(L_{M}\)) this suppression could be achieved at gate distances comparable to existing experiments. Alternatively, we can understand this conclusion in terms of the superconducting coherence length, which is known to be small in MATBG [33; 34]. According to BCS theory, the coherence length is given by \(\xi_{0}=\hbar v_{F}/(\pi\Delta_{0})\), where \(v_{F}\) is the Fermi velocity and \(\Delta_{0}\) is the quasi-particle gap. Approximating \(\hbar v_{F}\approx 32\,\mathrm{meV}\cdot\mathrm{\AA}\) near the magic angle and \(\Delta_{0}\approx 2\,\mathrm{meV}\) from the tunneling density of states results (see Sec. V), we obtain \(\xi_{0}\approx 5\,\mathrm{\AA}\). This small coherence length confirms that a small gate separation is necessary to disrupt the superconducting phase.
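The numbers quoted above can be reproduced in a few lines; \(q_{F}\), \(\hbar v_{F}\), and \(\Delta_{0}\) below are simply the values taken from the text.

```python
import numpy as np

q_F = 0.15          # nm^-1, approximately q(Gamma -> K)/2 at theta = 1.05 deg
for l in (10.0, 5.0, 3.0):                                   # gate separations in nm
    print(f"l = {l:4.1f} nm  ->  1/(1 - exp(-2*l*q_F)) = {1/(1 - np.exp(-2*l*q_F)):.2f}")

hbar_vF = 32.0      # meV * Angstrom, near the magic angle
Delta0 = 2.0        # meV, from the tunneling results of Sec. V
print(f"xi_0 = hbar*v_F/(pi*Delta_0) = {hbar_vF/(np.pi*Delta0):.1f} Angstrom")
```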
## V Spectral function and tunneling density of states
The connection between the tunneling density of states and scanning tunneling spectroscopy/microscopy (STS/STM) spectra has been understood for a long time [35; 36]. Advances in angle-resolved photoemission spectroscopy (ARPES) have made it possible to compare theory with experiments [37; 38; 39]. Recently, a series of experiments have attempted to measure the superconductivity gap in moire systems [40; 41]. In particular, Ref. [40] reported strong spectroscopic evidence of a "V"-shaped gap, suggesting an unconventional pairing. While some previous work has discussed the pairing symmetry for phonon-based superconductivity [42; 43], it remains unclear what the pairing symmetry is for the plasmonic mechanism. In this section, we will explore the signatures in the tunneling density of states as probed in spectroscopy experiments for both mechanisms.
To determine the tunneling density of states, it is necessary to obtain the superconductivity gap in real frequency, i.e., \(i\omega_{n}\rightarrow\omega+i\eta\). This process can be achieved by employing the approach introduced by Marsiglio et al. [44], where we need to solve the full-bandwidth Eliashberg equations in both the real and imaginary axes. This method provides an exact solution, but it is computationally intensive. To reduce computational demands, we first solve the full-bandwidth gap equations in the imaginary axis and then adopt the Pade approximation [45; 46] to analytically continue the results to real frequencies. The anisotropic Eliashberg gap equations consist of three com
Figure 5: Numerical results for the superconducting critical temperature with a single screening gate. (a) Critical temperature as a function of filling factor \(\nu\) for different separation distances \(l\). As \(l\) decreases, the bimodal structure disappears and is replaced by a single peak near the Van Hove singularity (VHS). (b) Critical temperature versus separation distance \(l\) at fixed filling. The dashed lines show the value for an infinite gate separation. (c) and (d) show the extracted parameters \(\mu\) and \(\tilde{\Omega}_{b}\) for the Lorentzian model at \(\nu=2\), where the dashed lines show the value in the static screening limit. All calculations were performed with \(\theta=1.05^{\circ}\).
Figure 6: Tunneling density of states for both the phonon and plasmon mechanisms, obtained by solving the full-bandwidth Eliashberg equations. Other than the asymmetry in the peaks, both phonons and plasmons show a relatively hard superconducting gap consistent with s-wave superconductivity.
ponents: the order parameter \(\phi\), the mass renormalization \(Z\), and the chemical potential shift \(\chi\) (see Appendix C for more details). To simplify the calculations, we consider only the singlet superconducting channel and set \(\chi=0\). This simplification allows us to compare the spectral functions from phonon pairing and plasmon pairing on an equal footing. After we obtain \(Z(\mathbf{k},\omega)\) and \(\phi(\mathbf{k},\omega)\), the band- and momentum-resolved spectral function can be computed via
\[A_{n}(\mathbf{k},\omega)=-\frac{1}{\pi}\operatorname{Im}\left(\left[\hat{G}_{ n}(\mathbf{k},\omega+i\delta)\right]_{11}\right), \tag{13}\]
where \(n\) is the band index. The tunneling density of states is then computed by a further summation over the momentum degrees of freedom [42; 43],
\[\frac{dI}{dV}\propto A(\omega)=\sum_{\mathbf{k},n}A_{n}(\mathbf{k},\omega). \tag{14}\]
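A minimal sketch of how Eqs. (13) and (14) are evaluated is given below. It replaces the full momentum- and frequency-dependent Eliashberg solution by a constant gap \(\Delta\) and mass renormalization \(Z\), and approximates the momentum sum by an integral over a flat normal-state band of half-width \(W\); all parameter values are illustrative assumptions rather than results of this work.

```python
import numpy as np

def tunneling_dos(omegas, Delta=2.0, Z=1.0, eta=0.05, W=50.0, nxi=4001):
    """A(omega) ~ sum_k A(k, omega), with the k-sum replaced by an integral over xi_k."""
    xi = np.linspace(-W, W, nxi)
    dxi = xi[1] - xi[0]
    dos = np.empty_like(omegas)
    for i, w in enumerate(omegas):
        wz = Z * (w + 1j * eta)                          # renormalized, broadened frequency
        G11 = (wz + xi) / (wz**2 - xi**2 - Delta**2)     # [G(k, omega + i*eta)]_{11}, Eq. (13)
        dos[i] = -np.sum(G11.imag) * dxi / np.pi         # Eq. (14), up to normalization
    return dos

omegas = np.linspace(-8.0, 8.0, 401)      # meV
dos = tunneling_dos(omegas)
peak = omegas[np.argmax(dos)]
print(f"coherence peaks near |omega| ~ {abs(peak):.2f} meV (Delta = 2 meV)")
```

In the full calculation the constant \(\Delta\) and \(Z\) are replaced by \(\phi(\mathbf{k},\omega)\) and \(Z(\mathbf{k},\omega)\) obtained from the analytically continued Eliashberg solution.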
Figure 6 shows the tunneling density of states for both the phonon and plasmon mechanisms. Other than the asymmetry in the peaks, both phonons and plasmons show a relatively hard superconducting gap consistent with s-wave superconductivity. It is worth noting that both mechanisms yield similar spectral features, making them potentially challenging to differentiate directly through STM measurements. Moreover, our calculations do not account for scattering from impurities, which has been shown to broaden the measured spectra and consequently soften the superconducting gap. In general, we observe that the superconducting gap for the plasmonic mechanism is larger than that for the phonon mechanism, partially because the electron-electron interaction is stronger than the electron-phonon interaction in this flat-band system.
## VI Conclusions
In this work, we primarily focus on superconductivity in MATBG mediated by the electron-electron interaction within the framework of Eliashberg theory. We show that in the absence of an external screening layer, the plasmonic superconductivity is well described by a Lorentzian model giving a non-monotonic dependence of the transition temperature on experimentally tunable parameters, such as doping and twist angle, consistent with some of the features observed experimentally. With external screening, this approximation no longer works, and our computationally intensive calculation shows that the critical temperature is insensitive to the external screening gate unless the gate is closer than \(l\approx 3\) nm. We qualitatively understand this result as the condition \(l\ll L_{M}\), implying that for a gate at fixed separation, its screening becomes more visible at smaller twist angles.
At present, we are unable to conclude definitively on the microscopic nature of the superconducting pairing in MATBG. The higher transition temperature and the dome features in both twist angle and filling predicted by the plasmon mechanism are favorable when compared to experimental observations. However, there is much that this mechanism gets incorrect, including a hard gap in the spectral function and a second dome-like feature at low angles that is robust to relaxation effects. Theoretically, there are also properties not included in the Eliashberg theory; for example, it has been argued that the superfluid weight should be considered as a criterion for determining the superconducting phase in the strongly correlated regime [47; 48]. Moreover, it has been shown that the geometry of the band structure may also contribute to the superfluid weight [49; 50]. This interplay between the transition temperature and the superfluid weight for the plasmon-mediated pairing mechanism is an interesting question that we leave for future work.
###### Acknowledgements.
It is a pleasure to thank Alexey Bergyudin and Gargee Sharma for helpful comments and for collaboration on related projects. We acknowledge the financial support from the Singapore National Research Foundation Investigator Award (NRF-NRFI06-2020-0003).
## Appendix A Proof of non-monotonicity for Eq. (3)
In this section, we prove that the solution of Eq. (3) is non-monotonic. We first note that the coupling strength in Eq. (1) becomes a constant in the limits (i) \(\mu\to 0\), (ii) \(\tilde{\Omega}_{b}\to 0\), and (iii) \(\tilde{\Omega}_{b}\to\infty\). We assume \(\lambda_{nm}=\lambda_{0}\neq 0\) without loss of generality, which allows us to rewrite Eq. (3) as:
\[\Delta_{n}=-2\tilde{T}_{c}\lambda_{0}\sum_{m=-\infty}^{\infty}\frac{1}{Z_{m} \tilde{\omega}_{m}}\arctan\frac{1}{Z_{m}\tilde{\omega}_{m}}\Delta_{m}. \tag{15}\]
In these limits, Eq. (15) becomes decoupled: the summation on the right-hand side is independent of \(n\), indicating that the order parameter on the left-hand side is a constant. Considering the nontrivial solution \(\Delta_{n}=\Delta_{0}\neq 0\), \(\Delta_{n}\) can be safely canceled from both sides:
\[1=-2\tilde{T}_{c}\lambda_{0}\sum_{m=-\infty}^{\infty}\frac{1}{Z_{m}\tilde{ \omega}_{m}}\arctan\frac{1}{Z_{m}\tilde{\omega}_{m}}. \tag{16}\]
However, the summation on the right-hand side is strictly positive term by term, so for a repulsive coupling \(\lambda_{0}>0\) the right-hand side is negative. To reconcile the signs on both sides, Eq. (16) only allows the trivial solution \(\tilde{T}_{c}=0\).
In the limit of (iv) \(\mu\to\infty\), we obtain the limit below:
\[\begin{split}\lim_{\mu\to\infty}\frac{1}{Z_{n}\tilde{\omega}_{n} }\to&\frac{1}{\mu\frac{\tilde{\Omega}_{b}}{\tilde{\omega}_{n}} \arctan\frac{\tilde{\omega}_{n}}{\tilde{\omega}_{n}^{2}+\tilde{\Omega}_{b} \left(1+\tilde{\Omega}_{b}\right)}}\to 0,\\ \lim_{\mu\to\infty}\frac{\mu}{Z_{n}\tilde{\omega}_{n}}\to& \frac{1}{\frac{\tilde{\Omega}_{b}}{\tilde{\omega}_{n}}\arctan \frac{\tilde{\omega}_{n}}{\tilde{\omega}_{n}^{2}+\tilde{\Omega}_{b}\left(1+ \tilde{\Omega}_{b}\right)}}>0,\end{split} \tag{17}\]
where we have used the analytical solution of \(Z_{n}\) derived in Appendix D, Eq. (48) (see also [15]). By substituting Eq. (4) into Eq. (3), we obtain:
\[\Delta_{n}=-2\tilde{T}_{c}\sum_{m=-\infty}^{\infty}\frac{\mu}{Z_{m}\tilde{\omega}_{m}}\arctan\frac{1}{Z_{m}\tilde{\omega}_{m}}\left(1-\frac{\tilde{\Omega}_{b}^{2}}{\tilde{\Omega}_{b}^{2}+(\tilde{\omega}_{n}-\tilde{\omega}_{m})^{2}}\right)\Delta_{m}. \tag{14}\]
Using the expressions listed in Eq. (17), we find that \(\lim_{\mu\to\infty}\arctan((Z_{n}\tilde{\omega}_{n})^{-1})=0\). Consequently, the summation over the index \(m\) gives a vanishing contribution in Eq. (14), leading to the trivial solution \(\tilde{T}_{c}=0\).
In summary, we have demonstrated that \(\tilde{T}_{c}\) approaches zero in the limits of \(\mu\to 0\) and \(\mu\to\infty\), as well as \(\tilde{\Omega}_{b}\to 0\) and \(\tilde{\Omega}_{b}\to\infty\). If there exists a non-zero solution for the critical temperature in \(\tilde{T}_{c}(\mu,\tilde{\Omega}_{b})\) space, the solution for the gap equation must exhibit non-monotonic behavior.
## Appendix B Details of the continuum model
Here we provide a brief review of the continuum model introduced in Ref. [10]. We begin with AA-stacked bilayer graphene with rotations \(-\theta/2\) and \(\theta/2\) for layers 1 and 2. The lattice vectors before rotation are defined as \(\mathbf{a}_{1}=a(1,0)\) and \(\mathbf{a}_{2}=a(1/2,\sqrt{3}/2)\), where \(a=2.46\) Å is the lattice constant of monolayer graphene. The corresponding reciprocal lattice vectors are \(\mathbf{a}_{1}^{*}=(2\pi/a)(1,-1/\sqrt{3})\) and \(\mathbf{a}_{2}^{*}=(2\pi/a)(0,2/\sqrt{3})\). After applying the rotation, the lattice vectors and reciprocal lattice vectors are given by \(\mathbf{a}_{i}^{(l)}=R(\mp\theta/2)\mathbf{a}_{i}\) and \(\mathbf{a}_{i}^{*(l)}=R(\mp\theta/2)\mathbf{a}_{i}^{*}\), with \(\mp\) for \(l=1,2\), respectively, where \(R(\pm\theta/2)\) is the rotation matrix. The Dirac points of the rotated layers are located at \(\mathbf{K}_{\xi}^{(l)}=-\xi\left[2\mathbf{a}_{1}^{(l)*}+\mathbf{a}_{2}^{(l)*}\right]/3\) for layer \(l\), where \(\xi=\pm 1\) is the valley index.
In the case of a small twist angle, an approximately commensurate moire structure can be defined. The reciprocal lattice vectors of the moire Brillouin zone are given by \(\mathbf{G}_{i}^{\mathrm{M}}=\mathbf{a}_{i}^{*(1)}-\mathbf{a}_{i}^{*(2)}\) for \(i=1,2\). The effective Hamiltonian of the continuum model in valley \(\xi\) takes the form
\[H^{(\xi)}=\left(\begin{array}{cc}H_{1}&U\\ U^{\dagger}&H_{2}\end{array}\right), \tag{15}\]
in the basis of \((A_{1},B_{1},A_{2},B_{2})\) site. Here \(H_{l}\) is the intralayer Hamiltonian for layer \(l\)
\[H_{l}=-\hbar v\left[R(\pm\theta/2)\left(\mathbf{k}-\mathbf{K}_{\xi}^{(l)} \right)\right]\cdot\left(\xi\sigma_{x},\sigma_{y}\right), \tag{16}\]
and \(U\) is the interlayer coupling
\[U= \left(\begin{array}{cc}w_{AA}&w_{AB}\\ w_{AB}&w_{AA}\end{array}\right)+\left(\begin{array}{cc}w_{AA}&w_{AB}\omega^{ -\xi}\\ w_{AB}\omega^{\xi}&w_{AA}\end{array}\right)e^{i\xi\mathbf{G}_{1}^{\mathrm{M}} \cdot\mathbf{r}}\] \[+\left(\begin{array}{cc}w_{AA}&w_{AB}\omega^{\xi}\\ w_{AB}\omega^{-\xi}&w_{AA}\end{array}\right)e^{i\xi\left(\mathbf{G}_{1}^{ \mathrm{M}}+\mathbf{G}_{2}^{\mathrm{M}}\right)\cdot\mathbf{r}}, \tag{17}\]
where \(\omega=e^{2i\pi/3}\) and \(L_{M}=a/[2\sin\left(\theta/2\right)]\) is the moire lattice constant in real space. In this paper, we use the parameters \(\hbar v=5250\)\(\mathrm{meV}\cdot\mathrm{\AA}\), \(w_{AA}=79.7\) meV, and \(w_{AB}=97.5\) meV to account for the lattice relaxation effect [51]. For a more detailed analysis of the origin of this Hamiltonian, we refer the reader to Refs. [51; 52]. For a given Bloch vector \(\mathbf{k}\) in the moire Brillouin zone, the states coupled to one another by the interlayer matrix \(U\) can be labeled by \(\mathbf{q}=\mathbf{k}+n\mathbf{G}_{1}^{M}+m\mathbf{G}_{2}^{M}\), where \(n\) and \(m\) are integers. To ensure convergence, we keep the states inside the circle \(\left|\mathbf{q}-\mathbf{q}_{0}\right|<q_{c}\), where \(\mathbf{q}_{0}\) is the midpoint between \(\mathbf{K}_{\xi}^{(1)}\) and \(\mathbf{K}_{\xi}^{(2)}\), and \(q_{c}\) is set to \(4G_{\mathrm{M}}\left(G_{\mathrm{M}}=\left|\mathbf{G}_{1}^{\mathrm{M}}\right|=\left|\mathbf{G}_{2}^{\mathrm{M}}\right|\right)\). The calculation is done independently for each valley.
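The geometric ingredients of this construction (rotated reciprocal vectors, moire reciprocal vectors, Dirac points, and the moire lattice constant) can be assembled as in the short sketch below; building the full plane-wave Hamiltonian with the interlayer coupling is not attempted here, and the twist angle is an illustrative choice.

```python
import numpy as np

a = 2.46                       # graphene lattice constant, Angstrom
theta = np.deg2rad(1.05)       # twist angle

def rot(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

# monolayer reciprocal vectors a1*, a2*
b1 = (2 * np.pi / a) * np.array([1.0, -1.0 / np.sqrt(3.0)])
b2 = (2 * np.pi / a) * np.array([0.0, 2.0 / np.sqrt(3.0)])

# layer 1 rotated by -theta/2, layer 2 by +theta/2
b = {1: [rot(-theta / 2) @ v for v in (b1, b2)],
     2: [rot(+theta / 2) @ v for v in (b1, b2)]}
G1M = b[1][0] - b[2][0]        # moire reciprocal vectors G^M_i = a_i*(1) - a_i*(2)
G2M = b[1][1] - b[2][1]
K = {l: -(2 * b[l][0] + b[l][1]) / 3 for l in (1, 2)}   # K point (valley xi = +1) of each layer

L_M = a / (2 * np.sin(theta / 2))
print(f"L_M = {L_M/10:.1f} nm")                         # ~13 nm at 1.05 degrees
print(f"|G^M_1| = {np.linalg.norm(G1M):.4f} 1/Angstrom")
print(f"|K^(1) - K^(2)| = {np.linalg.norm(K[1] - K[2]):.4f} 1/Angstrom")
```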
## Appendix C Migdal-Eliashberg theory
The Eliashberg theory is built within the framework of the Nambu-Gor'kov formalism [53; 54; 55; 56; 57; 58]. The two-component electron spinor in this formalism is written as
\[\psi_{\mathbf{k}}=\left(\begin{array}{c}c_{\mathbf{k}\uparrow}\\ c_{-\mathbf{k}\downarrow}^{\dagger}\end{array}\right),\quad\psi_{\mathbf{k}}^{ \dagger}=\left(\begin{array}{cc}c_{\mathbf{k}\uparrow}^{\dagger}&c_{-\mathbf{ k}\downarrow}\end{array}\right), \tag{18}\]
where the operator \(c_{\mathbf{k}\uparrow}\) (\(c_{-\mathbf{k}\downarrow}^{\dagger}\)) destroys (creates) an electron in a Bloch state with momentum \(\mathbf{k}\) (\(-\mathbf{k}\)) and spin up (down). With this definition, the electron Green function is the \(2\times 2\) matrix
\[\hat{G}(\mathbf{k},\tau)=-\left[\begin{array}{cc}\left\langle\mathcal{T}c_{ \mathbf{k}\uparrow}(\tau)c_{\mathbf{k}\uparrow}^{\dagger}(0)\right\rangle& \left\langle\mathcal{T}c_{\mathbf{k}\uparrow}(\tau)c_{-\mathbf{k}\downarrow}(0) \right\rangle\\ \left\langle\mathcal{T}c_{-\mathbf{k}\downarrow}^{\dagger}(\tau)c_{\mathbf{k} \uparrow}^{\dagger}(0)\right\rangle&\left\langle\mathcal{T}c_{-\mathbf{k} \downarrow}^{\dagger}(\tau)c_{-\mathbf{k}\downarrow}(0)\right\rangle\end{array} \right], \tag{19}\]
where \(\mathcal{T}\) is the time-ordering operator and the angle brackets denote a grand-canonical thermodynamic average. The diagonal terms are the conventional electron Green functions, while the off-diagonal terms are Gor'kov's anomalous Green functions \(F(\mathbf{k},\tau)\) and \(F^{*}(\mathbf{k},\tau)\), which describe the superconducting state. The Green function can be expanded in a Fourier series:
\[\hat{G}(\mathbf{k},\tau)=T\sum_{i\omega_{n}}e^{-i\omega_{n}\tau}\hat{G}\left( \mathbf{k},i\omega_{n}\right), \tag{20}\]
where \(T\) is the temperature and \(\omega_{n}=(2n+1)\pi T\) is the fermionic Matsubara frequency. Combining Eq. (19) and Eq. (20), we obtain the Green function in momentum space at imaginary frequencies
\[\hat{G}\left(\mathbf{k},i\omega_{n}\right)=\left[\begin{array}{cc}G\left( \mathbf{k},i\omega_{n}\right)&F\left(\mathbf{k},i\omega_{n}\right)\\ F^{*}\left(\mathbf{k},i\omega_{n}\right)&-G\left(-\mathbf{k},-i\omega_{n} \right)\end{array}\right]. \tag{21}\]
The Eliashberg theory aims to solve the generalized Green function Eq. (21) using Dyson equation
\[\hat{G}^{-1}\left(\mathbf{k},i\omega_{n}\right)=\hat{G}_{0}^{-1}\left(\mathbf{ k},i\omega_{n}\right)-\hat{\Sigma}\left(\mathbf{k},i\omega_{n}\right), \tag{22}\]
where \(\hat{G}_{0}^{-1}\left(\mathbf{k},i\omega_{n}\right)\) is non-interacting Green function given by
\[\hat{G}_{0}^{-1}\left(\mathbf{k},i\omega_{n}\right)=i\omega_{n}\hat{\tau}_{0}- \xi_{\mathbf{k}}\hat{\tau}_{3}, \tag{10}\]
where \(\xi_{\mathbf{k}}=E_{\mathbf{k}}-E_{\mathrm{F}}\) and \(\hat{\Sigma}\left(\mathbf{k},i\omega_{n}\right)\) is the self-energy. In general, it is very difficult to obtain the exact self-energy due to the complexity of the phonon propagator. However, Migdal's theorem states that the phonon vertex corrections are small [59]; it is therefore a good approximation to replace the dressed vertex by the bare phonon vertex. Within the Migdal-Eliashberg approximation, one can write the self-energy as
\[\hat{\Sigma}\left(\mathbf{k},i\omega_{n}\right)=-T\sum_{\mathbf{k}^{\prime}m}V _{\mathbf{k}-\mathbf{k}^{\prime},n-m}\hat{\tau}_{3}\hat{G}\left(\mathbf{k}^{ \prime},i\omega_{m}\right)\hat{\tau}_{3}. \tag{11}\]
The interaction \(V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}\) is defined as
\[V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}=\left\{\left|g_{\mathbf{k}\mathbf{k}^{\prime}}\right|^{2}D\left(\mathbf{k}-\mathbf{k}^{\prime},i\omega_{n}-i\omega_{m}\right)-V_{c}\left(\mathbf{k}-\mathbf{k}^{\prime},i\omega_{n}-i\omega_{m}\right)\right\}\left|M(\mathbf{k},\mathbf{k}^{\prime})\right|^{2}, \tag{12}\]
where \(D\left(\mathbf{q},i\omega_{n}\right)=2\omega_{\mathbf{q}}/\left[\left(i\omega_{n}\right)^{2}-\omega_{\mathbf{q}}^{2}\right]\) is the phonon propagator at momentum \(\mathbf{q}\) and \(\left|g_{\mathbf{k}\mathbf{k}^{\prime}}\right|^{2}=\hbar D^{2}q/(2A\rho c_{ph})\) is the electron-phonon coupling. Here \(\omega_{\mathbf{q}}=c_{ph}q\) is the acoustic phonon dispersion, \(c_{ph}\) is the phonon velocity, \(D\) is the deformation potential, \(A\) is the sample area, \(\rho\) is the mass density, and \(\left|\psi_{\mathbf{k}}\right\rangle\) is the Bloch wavefunction at momentum \(\mathbf{k}\). \(M(\mathbf{k},\mathbf{k}^{\prime})=\left\langle\psi_{\mathbf{k}}|\psi_{\mathbf{k}^{\prime}}\right\rangle\) is the form factor, which involves the projection onto the occupied bands [60; 61]. In this paper, we use \(D=25\) eV, \(c_{ph}=20000\) m/s, and \(\rho=7.6\times 10^{-8}\) g/cm\({}^{2}\) [62; 63].
Assuming the ansatz for self-energy
\[\begin{split}\hat{\Sigma}\left(\mathbf{k},i\omega_{n}\right)=& i\omega_{n}\left[1-Z_{\mathbf{k},n}\right]\hat{\tau}_{0}+\chi_{\mathbf{k},n} \hat{\tau}_{3}\\ &+\phi_{\mathbf{k},n}\hat{\tau}_{1}+\tilde{\phi}_{\mathbf{k},n} \hat{\tau}_{2},\end{split} \tag{13}\]
where \(Z_{\mathbf{k},n}\) is the mass renormalization function, \(\chi_{\mathbf{k},n}\) is the chemical potential shift, and \(\phi_{\mathbf{k},n}\) is the order parameter. If the phase of the superconducting gap is not important, one can choose the gauge \(\tilde{\phi}_{\mathbf{k},n}=0\). Combining the ansatz of Eq. (13) with Eqs. (10)-(12), we arrive at the anisotropic Eliashberg equations
\[\begin{split} i\omega_{n}\left(1-Z_{\mathbf{k},n}\right)&=T\sum_{\mathbf{k}^{\prime}m}V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}\frac{i\omega_{m}Z_{\mathbf{k}^{\prime},m}}{\Theta_{\mathbf{k}^{\prime},m}}\\ \chi_{\mathbf{k},n}&=T\sum_{\mathbf{k}^{\prime}m}V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}\frac{\xi_{\mathbf{k}^{\prime}}+\chi_{\mathbf{k}^{\prime},m}}{\Theta_{\mathbf{k}^{\prime},m}}\\ \phi_{\mathbf{k},n}&=-T\sum_{\mathbf{k}^{\prime}m}V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}\frac{\phi_{\mathbf{k}^{\prime},m}}{\Theta_{\mathbf{k}^{\prime},m}},\end{split} \tag{14}\]
where \(\Theta\) is the denominator defined as
\[\Theta_{\mathbf{k},n}=\left(\omega_{n}Z_{\mathbf{k},n}\right)^{2}+\left(\xi_{ \mathbf{k}}+\chi_{\mathbf{k},n}\right)^{2}+\phi_{\mathbf{k},n}^{2}. \tag{15}\]
## Appendix D Linearized Isotropic Gap Equation
In order to determine the critical temperature, we consider the linearized Eliashberg equation by setting \(\phi\left(\mathbf{k},i\omega_{n}\right)=0\) in the denominator of Eq. (15) and we have
\[\begin{split} i\omega_{n}\left(1-Z_{\mathbf{k},n}\right)&=T\sum_{\mathbf{k}^{\prime}m}V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}\frac{i\omega_{m}Z_{\mathbf{k}^{\prime},m}}{\omega_{m}^{2}Z_{\mathbf{k}^{\prime},m}^{2}+\left(\xi_{\mathbf{k}^{\prime}}+\chi_{\mathbf{k}^{\prime},m}\right)^{2}}\\ \chi_{\mathbf{k},n}&=T\sum_{\mathbf{k}^{\prime}m}V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}\frac{\xi_{\mathbf{k}^{\prime}}+\chi_{\mathbf{k}^{\prime},m}}{\omega_{m}^{2}Z_{\mathbf{k}^{\prime},m}^{2}+\left(\xi_{\mathbf{k}^{\prime}}+\chi_{\mathbf{k}^{\prime},m}\right)^{2}}\\ \phi_{\mathbf{k},n}&=-T\sum_{\mathbf{k}^{\prime}m}V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}\frac{\phi_{\mathbf{k}^{\prime},m}}{\omega_{m}^{2}Z_{\mathbf{k}^{\prime},m}^{2}+\left(\xi_{\mathbf{k}^{\prime}}+\chi_{\mathbf{k}^{\prime},m}\right)^{2}}.\end{split} \tag{16}\]
Combining the first two equations of Eq. (16), we get
\[\begin{split} R_{\mathbf{k},n}&=i\omega_{n}\left(1-Z_{\mathbf{k},n}\right)+\chi_{\mathbf{k},n}\\ &=T\sum_{\mathbf{k}^{\prime}m}V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}\frac{i\omega_{m}Z_{\mathbf{k}^{\prime},m}+\xi_{\mathbf{k}^{\prime}}+\chi_{\mathbf{k}^{\prime},m}}{\omega_{m}^{2}Z_{\mathbf{k}^{\prime},m}^{2}+\left(\xi_{\mathbf{k}^{\prime}}+\chi_{\mathbf{k}^{\prime},m}\right)^{2}}\\ &=-T\sum_{\mathbf{k}^{\prime}m}\frac{V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}}{i\omega_{m}Z_{\mathbf{k}^{\prime},m}-\xi_{\mathbf{k}^{\prime}}-\chi_{\mathbf{k}^{\prime},m}}\\ &=-T\sum_{\mathbf{k}^{\prime}m}\frac{V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}}{i\omega_{m}-\xi_{\mathbf{k}^{\prime}}-\left(i\omega_{m}\left(1-Z_{\mathbf{k}^{\prime},m}\right)+\chi_{\mathbf{k}^{\prime},m}\right)}\\ &=-T\sum_{\mathbf{k}^{\prime}m}\frac{V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}}{i\omega_{m}-\xi_{\mathbf{k}^{\prime}}-R_{\mathbf{k}^{\prime},m}},\end{split} \tag{17}\]
Thus Eq. (17) is a self-consistent equation containing all the information about the mass renormalization and the chemical potential shift. The first-order self-energy correction can now be evaluated by setting \(R_{\mathbf{k}^{\prime},m}=0\) on the right-hand side of Eq. (17),
\[R_{\mathbf{k},n}=-T_{c}\int_{-1}^{1}d\tilde{E}\sum_{m}\frac{\lambda_{n,m}}{i \tilde{\omega}_{m}-\tilde{E}}. \tag{18}\]
Following Refs. [15; 11], we consider the isotropic Coulomb interaction:
\[N(0)V_{\mathbf{k}-\mathbf{k}^{\prime},n-m}=\lambda_{n,m}\Theta\left(k_{c}-| \mathbf{k}|\right)\Theta\left(k_{c}-|\mathbf{k}^{\prime}|\right), \tag{19}\]
where \(\lambda_{n,m}\) is the dimensionless coupling and \(k_{c}=2k_{F}\) represents the momentum cutoff. The gap equation under the isotropic approximation takes the form:
\[\phi_{n}=-2\tilde{T}_{c}\sum_{m=-\infty}^{\infty}\frac{1}{Z_{m}\tilde{\omega}_{m }}\arctan\frac{1}{Z_{m}\tilde{\omega}_{m}}\lambda_{n,m}\phi_{m}, \tag{20}\]
which is Eq. (3) in the main text. In general, the functional form of the mass renormalization function \(Z_{n}\) strongly depends on the form of the coupling \(\lambda_{n,m}\).
For the Lorentzian coupling of Eq. (4), with the dimensionless quantity \(\tilde{\Omega}_{b}=\Omega_{b}/E_{F}\), Eq. (18) can be evaluated analytically
\[R_{\mathbf{k},n}=-\mu\left(E_{\mathrm{F}}+i\Omega_{b}\arctan\frac{\tilde{\omega} _{n}}{\tilde{\omega}_{n}^{2}+\tilde{\Omega}_{b}\left(1+\tilde{\Omega}_{b} \right)}\right), \tag{47}\]
which gives the mass renormalization function
\[Z_{n}=1+\mu\frac{\tilde{\Omega}_{b}}{\tilde{\omega}_{n}}\arctan\frac{\tilde{ \omega}_{n}}{\tilde{\omega}_{n}^{2}+\tilde{\Omega}_{b}\left(1+\tilde{\Omega}_ {b}\right)}. \tag{48}\]
The procedure to determine the critical temperature is the following: (i) Discretize the Brillouin zone and compute the band structure; we choose a \(100\times 100\) mesh to ensure convergence. (ii) For a given temperature \(T\), generate the Matsubara frequency grid by setting the energy cut-off \(E_{c}=2000\) meV. (iii) Compute the dynamically screened Coulomb interaction via Eq. (2) and the coupling strength via Eq. (1). (iv) To compute the critical temperature beyond the Lorentzian approximation, compute the mass renormalization function \(Z_{m}\) numerically by integrating Eq. (18). (v) Solve the gap equation (Eq. (3)) by the power method to reduce the computational time. (vi) Repeat the above steps and identify the critical temperature \(T_{c}\) as the temperature at which the largest eigenvalue is exactly equal to 1.
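A stripped-down version of steps (ii)-(vi) is sketched below for the Lorentzian model of Sec. II, for which the coupling \(\lambda_{n,m}=\mu[1-\tilde{\Omega}_{b}^{2}/(\tilde{\Omega}_{b}^{2}+(\tilde{\omega}_{n}-\tilde{\omega}_{m})^{2})]\) (as can be read off from Eq. (14)) and the analytic \(Z_{n}\) of Eq. (48) can be used directly. The Matsubara cutoff and the values of \(\mu\) and \(\tilde{\Omega}_{b}\) are illustrative assumptions, and for clarity the kernel is diagonalized exactly rather than by the power method used in the full calculation.

```python
import numpy as np

def gap_kernel(T, mu, Omega_b, cutoff=20.0):
    """Kernel K such that the linearized gap equation reads phi_n = sum_m K_{nm} phi_m."""
    n_max = max(8, int(cutoff / (2 * np.pi * T)))     # Matsubara cutoff ~ cutoff * E_F
    n = np.arange(-n_max, n_max)
    w = (2 * n + 1) * np.pi * T                       # dimensionless tilde{omega}_n
    Z = 1 + mu * (Omega_b / w) * np.arctan(w / (w**2 + Omega_b * (1 + Omega_b)))   # Eq. (48)
    lam = mu * (1 - Omega_b**2 / (Omega_b**2 + (w[:, None] - w[None, :])**2))      # Lorentzian coupling
    weight = np.arctan(1 / (Z * w)) / (Z * w)
    return -2 * T * lam * weight[None, :]             # cf. Eq. (20) / Eq. (3)

def largest_eigenvalue(T, mu, Omega_b):
    return np.linalg.eigvals(gap_kernel(T, mu, Omega_b)).real.max()

mu, Omega_b = 1.0, 1.0                                # illustrative coupling parameters
for T in (0.005, 0.01, 0.02, 0.05, 0.1):              # temperature in units of E_F
    print(f"T/E_F = {T:5.3f}   largest eigenvalue = {largest_eigenvalue(T, mu, Omega_b):.3f}")
# T_c is identified as the temperature at which the largest eigenvalue crosses 1 (step (vi)).
```

The full calculation replaces \(\lambda_{n,m}\) and \(Z_{n}\) by the RPA-screened Coulomb interaction evaluated on the continuum-model bands.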
## Appendix E Results for double-gate screening
Similar to the single-gate screening, we consider a double-gate structure in which the MATBG sample is placed between two external metal gates, each at the same separation \(l\) (denoted \(d_{g}\) in Eq. (49)). The bare Coulomb interaction can be obtained in the limit \(\Pi_{2},\kappa_{t},\kappa_{b}\rightarrow\infty\):
\[V(\mathbf{q})=\frac{2\pi e^{2}}{\kappa_{m}q}\tanh d_{g}q. \tag{49}\]
Figure 7(a) shows the critical temperature as a function of the filling factor \(\nu\) for various separation distances \(l\). Similar to the single-gate case, the bimodal feature disappears and is replaced by a single peak near the Van Hove singularity as \(l\) decreases. We find that double metal gates yield a lower critical temperature, owing to the extra screening from the second gate. In Fig. 7(b), a non-monotonic transition temperature is also observed at fixed filling factors. This non-monotonicity can again be understood by mapping to the Lorentzian model, as shown in Fig. 7(c) and (d). As before, the critical temperature remains relatively constant across a wide range of separations but experiences a significant drop when \(l\lesssim 3\) nm.
## Appendix F Bare Coulomb potential
Here we derive the general form of Coulomb interaction for the hybrid structure mentioned in the main text. The bare Coulomb interaction can be obtained by solving the displacement field \(\mathbf{D}\) to satisfy
\[\nabla\cdot\mathbf{D}=0 \tag{50}\]
everywhere in space. Here, the displacement field is related to the electric field by \(\mathbf{D}=\kappa\mathbf{E}\). The material in each slab divides the space into several parts
\[\begin{cases}\kappa_{1}&;\text{ for }z>d_{1}\\ \kappa_{2}&;\text{ for }0<z<d_{1}\\ \kappa_{3}&;\text{ for }-l<z<0\\ \kappa_{4}&;\text{ for }-l-d_{2}<z<-l\\ \kappa_{5}&;\text{ for }z<-l-d_{2}\end{cases}, \tag{51}\]
where we have set the 2D layer at \(z=0\) and the MATBG at \(z=-l\). The corresponding Poisson equation can be obtained by invoking the relation between electric field and electric potential \(\mathbf{E}=-\nabla\phi\)
\[\kappa_{i}\nabla^{2}\phi_{i}=0;i=1,2,3,4,5. \tag{52}\]
The solution of the Poisson equation is given by the form of
\[\begin{cases}\phi_{1}\left(\mathbf{r},z\right)=Ae^{i\mathbf{q}\cdot\mathbf{r}}e^{-qz}&;z> d_{1}\\ \phi_{2}\left(\mathbf{r},z\right)=e^{i\mathbf{q}\cdot\mathbf{r}}(Be^{qz}+Ce^{-qz})&;0<z<d_{ 1}\\ \phi_{3}\left(\mathbf{r},z\right)=e^{i\mathbf{q}\cdot\mathbf{r}}(De^{qz}+Ee^{-qz})&;-l<z<0 \\ \phi_{4}\left(\mathbf{r},z\right)=e^{i\mathbf{q}\cdot\mathbf{r}}(Fe^{qz}+Ge^{-qz})&;-l-d_{ 2}<z<-l\\ \phi_{5}\left(\mathbf{r},z\right)=He^{i\mathbf{q}\cdot\mathbf{r}}e^{qz}&;z<-l-d_{2}\end{cases}, \tag{53}\]
Figure 7: Numerical results for the superconducting critical temperature with a double screening gate. (a) Critical temperature as a function of filling factor \(\nu\) for different separation distances \(l\). As \(l\) decreases, the bimodal structure disappears and is replaced by a single peak near the Van Hove singularity (VHS). (b) Critical temperature versus separation distance \(l\) at fixed filling. The dashed lines show the value for an infinite gate separation. (c) and (d) show the extracted parameters \(\mu\) and \(\tilde{\Omega}_{b}\) for the Lorentzian model, where the dashed lines show the value in the static screening limit.
where we have used the shorthand \(\mathbf{r}=(x,y)\), \(\mathbf{q}=(q_{x},q_{y})\), and \(q=|\mathbf{q}|\). This form is chosen such that the potential does not diverge as \(z\to\pm\infty\). The undetermined coefficients are obtained by matching the electrostatic boundary conditions at the interfaces between the slabs [64]. These matching conditions can be written as a linear system
\[\begin{pmatrix}e^{-qd_{1}}&-e^{qd_{1}}&-e^{-qd_{1}}&0&0&0&0&0\\ 0&1&1&-1&-1&0&0&0\\ 0&0&0&e^{-ql}&e^{ql}&-e^{-ql}&-e^{ql}&0\\ 0&0&0&0&0&e^{-q(l+d_{2})}&e^{q(l+d_{2})}&-e^{-q(l+d_{2})}\\ -\kappa_{1}e^{-qd_{1}}&-\kappa_{2}e^{qd_{1}}&\kappa_{2}e^{-qd_{1}}&0&0&0&0\\ 0&\kappa_{2}&-\kappa_{2}&-\kappa_{3}&\kappa_{3}&0&0&0\\ 0&0&0&\kappa_{3}e^{-ql}&-\kappa_{3}e^{ql}&-\kappa_{4}e^{-ql}&\kappa_{4}e^{ql} &0\\ 0&0&0&0&\kappa_{4}e^{-q(l+d_{2})}&-\kappa_{4}e^{q(l+d_{2})}&-\kappa_{5}e^{-q(l +d_{2})}\end{pmatrix}\begin{pmatrix}A\\ B\\ C\\ D\\ E\\ F\\ G\\ H\end{pmatrix}=\begin{pmatrix}0\\ 0\\ 0\\ 0\\ 0\\ 0\end{pmatrix}, \tag{101}\]
which can be inverted to give us the undetermined coefficients. The intralayer bare Coulomb potential for the first layer is given by \(V_{11}=-e\phi_{3}\left(0,0\right)=-e(D+E)\), and the interlayer bare Coulomb potential is given by \(V_{12}=-e\phi_{3}\left(0,-l\right)=-e(De^{-ql}+Ee^{ql})\).
Due to the symmetry of the system, the intralayer bare Coulomb potential \(V_{22}\) can be obtained from \(V_{11}\) by interchanging \(\kappa_{1}\leftrightarrow\kappa_{5}\), \(\kappa_{2}\leftrightarrow\kappa_{4}\), and \(d_{1}\leftrightarrow d_{2}\). Similarly, the interlayer bare Coulomb potential \(V_{12}\) can be obtained from \(V_{21}\) using the same procedure. Finally, we obtain the bare Coulomb potential matrix elements
\[\begin{split}& V_{11}(q)=\frac{4\pi e^{2}}{c_{0}q}e^{2lq}f_{1} \left(\kappa_{3}\left(e^{2lq}+1\right)f_{2}+\kappa_{4}\left(e^{2lq}-1\right)f_ {3}\right)\\ & V_{12}(q)=V_{21}(q)=\frac{8\pi e^{2}}{c_{0}q}e^{3lq}\kappa_{3}f_{1}f_{2} \\ & V_{22}(q)=V_{11}(\kappa_{1}\leftrightarrow\kappa_{5},\kappa_{2} \leftrightarrow\kappa_{4},d_{1}\leftrightarrow d_{2}),\end{split} \tag{102}\]
where
\[\begin{split}& c_{0}=(\kappa_{1}+\kappa_{2})(\kappa_{2}-\kappa_{3 })(\kappa_{3}-\kappa_{4})(\kappa_{4}+\kappa_{5})e^{2q(d_{1}+d_{2}+l)}\\ &+(\kappa_{1}+\kappa_{2})(\kappa_{2}+\kappa_{3})(\kappa_{3}+ \kappa_{4})(\kappa_{4}+\kappa_{5})e^{2q(d_{1}+d_{2}+2l)}\\ &+(\kappa_{1}+\kappa_{2})(\kappa_{2}+\kappa_{3})(\kappa_{3}- \kappa_{4})(\kappa_{4}-\kappa_{5})e^{2q(d_{1}+2l)}\\ &+(\kappa_{1}+\kappa_{2})(\kappa_{2}-\kappa_{3})(\kappa_{3}+ \kappa_{4})(\kappa_{4}-\kappa_{5})e^{2q(d_{1}+l)}\\ &+(\kappa_{1}-\kappa_{2})(\kappa_{2}+\kappa_{3})(\kappa_{3}- \kappa_{4})(\kappa_{4}+\kappa_{5})e^{2q(d_{2}+l)}\\ &+(\kappa_{1}-\kappa_{2})(\kappa_{2}-\kappa_{3})(\kappa_{3}+ \kappa_{4})(\kappa_{4}+\kappa_{5})e^{2q(d_{2}+2l)}\\ &+(\kappa_{1}-\kappa_{2})(\kappa_{2}-\kappa_{3})(\kappa_{3}- \kappa_{4})(\kappa_{4}-\kappa_{5})e^{4lq}\\ &+(\kappa_{1}-\kappa_{2})(\kappa_{2}+\kappa_{3})(\kappa_{3}+ \kappa_{4})(\kappa_{4}-\kappa_{5})e^{2lq},\end{split} \tag{103}\]
and
\[\begin{split}& f_{1}=e^{2d_{1}q}(\kappa_{1}+\kappa_{2})-\kappa_{1 }+\kappa_{2}\\ & f_{2}=e^{2d_{2}q}(\kappa_{4}+\kappa_{5})+\kappa_{4}-\kappa_{5} \\ & f_{3}=e^{2d_{2}q}(\kappa_{4}+\kappa_{5})-\kappa_{4}+\kappa_{5}.\end{split} \tag{104}\]
In the case \(\kappa_{2}=\kappa_{3}=\kappa_{4}=\kappa_{m}\), \(\kappa_{1}\rightarrow\kappa_{t}\), \(\kappa_{5}\rightarrow\kappa_{b}\), and \(d_{1}=d_{2}=d\), we obtain the following solution
\[\begin{split}& V_{11}(q)=\frac{4\pi e^{2}}{\kappa_{m}q}\frac{ \left[\kappa_{m}\cosh(qd)+\kappa_{t}\sinh(qd)\right]\left\{\kappa_{m}\cosh[q(l +d)]+\kappa_{b}\sinh[q(l+d)]\right\}}{\left(\kappa_{t}+\kappa_{b}\right) \kappa_{m}\cosh[q(l+2d)]+\left(\kappa_{t}\kappa_{b}+\kappa_{m}^{2}\right) \sinh[q(l+2d)]}\\ & V_{12}(q)=V_{21}(q)=\frac{4\pi e^{2}}{\kappa_{m}q}\frac{\left[ \kappa_{m}\cosh(qd)+\kappa_{b}\sinh(qd)\right]\left[\kappa_{m}\cosh(qd)+ \kappa_{t}\sinh(qd)\right]}{\left(\kappa_{t}+\kappa_{b}\right)\kappa_{m} \cosh[q(l+2d)]+\left(\kappa_{t}\kappa_{b}+\kappa_{m}^{2}\right)\sinh[q(l+2d) ]}\\ & V_{22}(q)=\frac{4\pi e^{2}}{\kappa_{m}q}\frac{\left[\kappa_{m} \cosh(qd)+\kappa_{b}\sinh(qd)\right]\left\{\kappa_{m}\cosh[q(l+d)]+\kappa_{t }\sinh[q(l+d)]\right\}}{\left(\kappa_{t}+\kappa_{b}\right)\kappa_{m}\cosh[q(l +2d)]+\left(\kappa_{t}\kappa_{b}+\kappa_{m}^{2}\right)\sinh[q(l+2d)]}\end{split} \tag{105}\]
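As a sanity check of the expressions above for \(V_{11}\) and \(V_{12}\), the short sketch below verifies that they reduce to \(2\pi e^{2}/(\kappa q)\) and \(2\pi e^{2}e^{-ql}/(\kappa q)\) when all dielectric constants are equal; the numerical values of \(q\), \(l\), \(d\), and \(\kappa\) are arbitrary illustrative choices (units with \(e^{2}=1\)).

```python
import numpy as np

def V11_V12(q, l, d, kt, kb, km):
    """Bare intra- and inter-layer potentials for the symmetric case d1 = d2 = d (e^2 = 1)."""
    ch, sh = np.cosh, np.sinh
    den = (kt + kb) * km * ch(q * (l + 2 * d)) + (kt * kb + km**2) * sh(q * (l + 2 * d))
    V11 = (4 * np.pi / (km * q)) * (km * ch(q * d) + kt * sh(q * d)) \
          * (km * ch(q * (l + d)) + kb * sh(q * (l + d))) / den
    V12 = (4 * np.pi / (km * q)) * (km * ch(q * d) + kb * sh(q * d)) \
          * (km * ch(q * d) + kt * sh(q * d)) / den
    return V11, V12

q, l, d, kappa = 0.3, 5.0, 2.0, 3.0
V11, V12 = V11_V12(q, l, d, kappa, kappa, kappa)
print(np.isclose(V11, 2 * np.pi / (kappa * q)))                        # True
print(np.isclose(V12, 2 * np.pi / (kappa * q) * np.exp(-q * l)))       # True
```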
## Appendix G Effective Coulomb interaction for MATBG-2D hybrid system
The Hamiltonian describing the MATBG-2D hybrid system can be written as
\[\mathcal{H}=\mathcal{H}_{1}+\mathcal{H}_{2}+\mathcal{H}_{ee}, \tag{106}\]
where \(H_{1}\) (\(H_{2}\)) indicates the non-interacting Hamiltonian of MATBG (2D material). The coupling Hamiltonian reads
\[\mathcal{H}_{\text{ee}}=\frac{1}{2}\sum_{\mathbf{q},\ell\neq\ell^{\prime}}V_{\ell \ell^{\prime}}(q)\hat{n}_{\mathbf{q},\ell}\hat{n}_{-\mathbf{q},\ell^{\prime}}, \tag{107}\]
where \(\hat{n}_{\mathbf{q},\ell}\) is the density operator of the \(\ell\)-th layer
\[\hat{n}_{\mathbf{q},\ell}=\sum_{\mathbf{k},\alpha}\hat{\psi}^{\dagger}_{\mathbf{k}-\mathbf{q}, \alpha,\ell}\hat{\psi}_{\mathbf{k},\alpha,\ell}, \tag{11}\]
and \(V_{\ell\ell^{\prime}}(q)\) (\(\ell\neq\ell^{\prime}\)) is the 2D Fourier transform of the inter-layer Coulomb interaction.
For convenience, we introduce the matrix
\[\hat{W}=\left(\begin{array}{cc}W_{11}&W_{12}\\ W_{21}&W_{22}\end{array}\right),\hat{V}=\left(\begin{array}{cc}V_{11}&V_{12} \\ V_{21}&V_{22}\end{array}\right), \tag{12}\]
to represent the screened and bare Coulomb interaction respectively. The details about bare Coulomb interaction can be found in Appendix F. From the random phase approximation, the screened Coulomb potential matrix \(\hat{W}\) is given by
\[\begin{split}\hat{W}&=\hat{V}+\hat{V}\hat{\Pi}\hat{V}+\hat{V} \hat{\Pi}\hat{V}\hat{\Pi}\hat{V}+\dots\\ &=\hat{V}+\hat{V}\hat{\Pi}\hat{W},\end{split} \tag{13}\]
where \(\hat{\Pi}\) is the polarizability matrix. Eq. (13) can be rearranged into
\[\begin{split}\hat{W}&=\left(\hat{1}-\hat{V}\hat{\Pi} \right)^{-1}\hat{V}\\ &=\hat{\epsilon}^{-1}\hat{V},\end{split} \tag{14}\]
where \(\hat{\epsilon}\) is the dielectric matrix, whose inverse is \(\hat{\epsilon}^{-1}=(\hat{1}-\hat{V}\hat{\Pi})^{-1}\). For separations of a few nanometers, the interlayer tunneling is negligible compared with the intralayer coupling, so we assume the off-diagonal terms of \(\hat{\Pi}\) vanish, i.e., \(\Pi_{ij}=\Pi_{i}\delta_{ij}\). Under this assumption, the inverse dielectric matrix is written as
\[\begin{split}\hat{\epsilon}^{-1}&=\left(\begin{array}{cc}1-V_{11}\Pi_{1}&-V_{12}\Pi_{2}\\ -V_{21}\Pi_{1}&1-V_{22}\Pi_{2}\end{array}\right)^{-1}\\ &=\frac{1}{\epsilon_{1}\epsilon_{2}-V_{12}V_{21}\Pi_{1}\Pi_{2}}\left(\begin{array}{cc}\epsilon_{2}&V_{12}\Pi_{2}\\ V_{21}\Pi_{1}&\epsilon_{1}\end{array}\right),\end{split} \tag{15}\]
where we have defined the dielectric functions \(\epsilon_{i}=1-V_{ii}\Pi_{i}\). Combining Eq. (14) and Eq. (15), we obtain
\[\hat{W}=\frac{1}{\epsilon_{1}\epsilon_{2}-V_{12}V_{21}\Pi_{1}\Pi_{2}}\left( \begin{array}{cc}\epsilon_{2}&V_{12}\Pi_{2}\\ V_{21}\Pi_{1}&\epsilon_{1}\end{array}\right)\left(\begin{array}{cc}V_{11}& V_{12}\\ V_{21}&V_{22}\end{array}\right). \tag{16}\]
We are particularly interested in the Coulomb interaction in the active layer, which corresponds to the \(W_{11}\) element
\[V(\mathbf{q},i\omega)=W_{11}=\frac{\epsilon_{2}V_{11}+V_{12}V_{21}\Pi_{2}}{ \epsilon_{1}\epsilon_{2}-V_{12}V_{21}\Pi_{1}\Pi_{2}}. \tag{17}\]
Eq. (17) can be rewritten in a more compact form
\[V(\mathbf{q},i\omega)=\frac{V_{\mathrm{eff}}}{1-\Pi_{1}V_{\mathrm{eff}}}, \tag{18}\]
where we have defined the effective bare Coulomb interaction as
\[V_{\mathrm{eff}}(\mathbf{q},i\omega)=V_{11}\left[1-\frac{V_{12}V_{21}}{V_{11}V _{22}}\left(1-\frac{1}{\epsilon_{2}}\right)\right], \tag{19}\]
which is Eq. (8) in the main text.
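The algebra leading from Eq. (14) to Eqs. (17)-(19) can be cross-checked numerically: the sketch below inverts the \(2\times 2\) dielectric matrix directly and compares the result with the closed-form \(W_{11}\). The static polarizabilities and the values of \(q\) and \(l\) are arbitrary illustrative inputs (units with \(e^{2}=1\)).

```python
import numpy as np

q, l = 0.2, 5.0                      # nm^-1 and nm (illustrative)
Pi1, Pi2 = -0.8, -0.3                # static polarizabilities (illustrative, arbitrary units)
v0 = 2 * np.pi / q                   # 2*pi*e^2/q with e^2 = 1
V = np.array([[v0, v0 * np.exp(-q * l)],
              [v0 * np.exp(-q * l), v0]])
Pi = np.diag([Pi1, Pi2])

W = np.linalg.solve(np.eye(2) - V @ Pi, V)          # Eq. (14): W = (1 - V Pi)^{-1} V
eps1 = 1 - V[0, 0] * Pi1
eps2 = 1 - V[1, 1] * Pi2
W11 = (eps2 * V[0, 0] + V[0, 1] * V[1, 0] * Pi2) / (eps1 * eps2 - V[0, 1] * V[1, 0] * Pi1 * Pi2)
print(np.isclose(W[0, 0], W11))                     # True: matches the closed form of Eq. (17)
```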
In the case of bare Coulomb interaction
\[V_{11}=V_{22}=\frac{2\pi e^{2}}{q};V_{12}=V_{21}=\frac{2\pi e^{2}}{q}e^{-ql}, \tag{20}\]
where \(l\) is the distance between the active and passive layers. Eq. (19) then becomes
\[V_{\mathrm{eff}}=\frac{2\pi e^{2}}{q}\left[1-e^{-2ql}\left(1-\frac{1}{\epsilon_ {2}}\right)\right], \tag{21}\]
which is exactly the Coulomb interaction derived in [14].
|
2301.00130 | Accuracy-Guaranteed Collaborative DNN Inference in Industrial IoT via
Deep Reinforcement Learning | Collaboration among industrial Internet of Things (IoT) devices and edge
networks is essential to support computation-intensive deep neural network
(DNN) inference services which require low delay and high accuracy. Sampling
rate adaption which dynamically configures the sampling rates of industrial IoT
devices according to network conditions, is the key in minimizing the service
delay. In this paper, we investigate the collaborative DNN inference problem in
industrial IoT networks. To capture the channel variation and task arrival
randomness, we formulate the problem as a constrained Markov decision process
(CMDP). Specifically, sampling rate adaption, inference task offloading and
edge computing resource allocation are jointly considered to minimize the
average service delay while guaranteeing the long-term accuracy requirements of
different inference services. Since CMDP cannot be directly solved by general
reinforcement learning (RL) algorithms due to the intractable long-term
constraints, we first transform the CMDP into an MDP by leveraging the Lyapunov
optimization technique. Then, a deep RL-based algorithm is proposed to solve
the MDP. To expedite the training process, an optimization subroutine is
embedded in the proposed algorithm to directly obtain the optimal edge
computing resource allocation. Extensive simulation results are provided to
demonstrate that the proposed RL-based algorithm can significantly reduce the
average service delay while preserving long-term inference accuracy with a high
probability. | Wen Wu, Peng Yang, Weiting Zhang, Conghao Zhou, Xuemin, Shen | 2022-12-31T05:53:17Z | http://arxiv.org/abs/2301.00130v1 | # Accuracy-Guaranteed Collaborative DNN Inference in Industrial IoT via Deep Reinforcement Learning
###### Abstract
Collaboration among industrial Internet of Things (IoT) devices and edge networks is essential to support computation-intensive deep neural network (DNN) inference services which require low delay and high accuracy. Sampling rate adaption which dynamically configures the sampling rates of industrial IoT devices according to network conditions, is the key in minimizing the service delay. In this paper, we investigate the collaborative DNN inference problem in industrial IoT networks. To capture the channel variation and task arrival randomness, we formulate the problem as a constrained Markov decision process (CMDP). Specifically, sampling rate adaption, inference task offloading and edge computing resource allocation are jointly considered to minimize the average service delay while guaranteeing the long-term accuracy requirements of different inference services. Since CMDP cannot be directly solved by general reinforcement learning (RL) algorithms due to the intractable long-term constraints, we first transform the CMDP into an MDP by leveraging the Lyapunov optimization technique. Then, a deep RL-based algorithm is proposed to solve the MDP. To expedite the training process, an optimization subroutine is embedded in the proposed algorithm to directly obtain the optimal edge computing resource allocation. Extensive simulation results are provided to demonstrate that the proposed RL-based algorithm can significantly reduce the average service delay while preserving long-term inference accuracy with a high probability.
Sampling rate adaption, inference accuracy, collaborative DNN Inference, deep reinforcement learning.
## I Introduction
With the development of advanced neural network techniques and ubiquitous industrial Internet of Things (IoT) devices, deep neural networks (DNNs) are widely applied in various industrial IoT applications, such as facility monitoring and fault diagnosis [1]. Industrial IoT devices (e.g., vibration sensors) can sense the industrial operating environment and feed sensing data to a DNN, and then the DNN processes the sensing data and renders inference results, namely DNN inference. Although DNN inference can achieve high inference accuracy as compared to traditional alternatives (e.g., decision trees), executing DNN inference tasks requires extensive computation resources due to tremendous multiply-and-accumulate operations [2]. A device-only solution that purely executes DNN inference tasks at resource-constrained industrial IoT devices becomes intractable due to prohibitive energy consumption and a high service delay. For example, processing an image using AlexNet incurs up to 0.45 W of power consumption [3]. An edge-only solution which purely offloads large-volume sensing data to resource-rich edge nodes, e.g., an access point (AP), suffers from an unpredictable service delay due to time-varying wireless channels [4]. Hence, neither a device-only nor an edge-only solution can effectively support low-delay DNN inference services.
Collaborative inference, which coordinates resource-constrained industrial IoT devices and the resource-rich AP, becomes a de-facto paradigm to provide low-delay and high-accuracy inference services [5]. Within the collaborative inference, sensing data from industrial IoT devices can be either processed locally or offloaded to the AP. At industrial IoT devices, light-weight _compressed_ DNNs (i.e., neural networks are compressed without significantly decreasing their performance) are deployed due to constrained on-board computing capability, which saves computing resource at the cost of inference accuracy [6, 7]. At the AP, _uncompressed_ DNNs are deployed to provide high-accuracy inference services at the cost of network resources. Through the resource allocation (e.g., task offloading) between industrial IoT devices and the AP, the overall service performance can be enhanced.
However, the sampling rate adaption technique that dynamically configures the sampling rates of industrial IoT devices is seldom considered. By dynamically adjusting the sampling rates according to channel conditions and the AP's workload, sensing data from industrial IoT devices can be compressed, thereby reducing not only the offloaded data volume but also the task computation workload. In our experiments, we implement AlexNet to conduct bearing fault diagnosis based on the collected bearing vibration signal from dataset [8].1 As shown in Fig. 1, inference accuracy grows sub-linearly with the sampling rate. For example, when the sampling rate increases from 18 kHz to 24 kHz, the accuracy increases from 95% to 98.7%. Hence, when the channel condition is poor or the edge computation workload is heavy, decreasing the sampling rate can reduce the offloaded data volume and requested computation workload, thereby reducing the service delay at
the cost of limited inference accuracy. When channel condition is good and edge computation workload is light, increasing the sampling rate can help deliver a high-accuracy service with an acceptable service delay. Hence, sampling rate adaption can effectively reduce the service delay, which should be incorporated in the collaborative DNN inference.
Sampling rate adaption and resource allocation for collaborative DNN inference face the following challenges. Firstly, due to time-varying channel conditions and random task arrivals, the sampling rate and resource allocation should be dynamically adjusted to achieve the minimum service delay; minimizing the long-term service delay requires the stochastic information of network dynamics. Secondly, in addition to minimizing the service delay, the long-term accuracy requirements should be guaranteed for different inference services. The long-term accuracy performance is determined by the decisions of sampling rate adaption and resource allocation over time, and hence the optimal decisions require future network information. To address the above two challenges, a reinforcement learning (RL) technique is leveraged to interact with the unknown environment to capture the network dynamics, and then a Lyapunov optimization technique is utilized within the RL framework to guarantee the long-term accuracy requirements without requiring future network information.
In this paper, we investigate the collaborative DNN inference problem in industrial IoT networks. _Firstly_, we formulate the problem as a constrained Markov decision process (CMDP) to account for time-varying channel conditions and random task arrivals. Specifically, sampling rates of industrial IoT devices, task offloading, and edge computation resource allocation are optimized to minimize the average service delay while guaranteeing the long-term accuracy requirements of multiple services. _Secondly_, since traditional RL algorithms aim to optimize a long-term reward without considering policy constraints, they cannot be applied to solve CMDPs with long-term constraints. To solve the problem, we transform the CMDP into an MDP via the Lyapunov optimization technique. The core idea is to construct accuracy deficit queues to characterize the satisfaction status of the long-term accuracy constraints, thereby guiding the learning agent to meet them. _Thirdly_, to solve the MDP, a learning-based algorithm is developed based on the deep deterministic policy gradient (DDPG) algorithm. Within the learning algorithm, to reduce the training complexity, edge computing resource allocation is directly solved via an optimization subroutine based on convex optimization theory, since it only impacts one-shot delay performance according to theoretical analysis. Extensive simulations are conducted to validate the effectiveness of the proposed algorithm in reducing the average service delay while preserving the long-term accuracy requirements.
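The accuracy deficit queue and the resulting reward shaping can be illustrated with a few lines of Python; the class and function names, the weight \(V\), and the exact reward form below are illustrative assumptions rather than the formulation derived later in the paper.

```python
class AccuracyDeficitQueue:
    """Virtual queue tracking how far a service is from its long-term accuracy target."""
    def __init__(self, acc_req):
        self.acc_req = acc_req      # long-term accuracy requirement of the service
        self.q = 0.0                # accumulated accuracy deficit

    def update(self, acc_t):
        """Lyapunov virtual-queue update after observing the accuracy acc_t of slot t."""
        self.q = max(self.q + self.acc_req - acc_t, 0.0)
        return self.q

def shaped_reward(delay_t, acc_t, queue, V=10.0):
    """Drift-plus-penalty style reward used to guide the RL agent (illustrative form)."""
    reward = -(V * delay_t + queue.q * (queue.acc_req - acc_t))
    queue.update(acc_t)             # the queue backlog enters the next slot's reward
    return reward

# usage: one service with a 95% long-term accuracy target
queue = AccuracyDeficitQueue(acc_req=0.95)
print(shaped_reward(delay_t=0.12, acc_t=0.93, queue=queue))
print(shaped_reward(delay_t=0.10, acc_t=0.97, queue=queue))
```

A growing backlog makes accuracy violations increasingly costly, which is how the long-term constraint is enforced without future network information.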
Our main contributions in this paper are summarized as follows:
* We formulate the collaborative DNN inference problem as a CMDP, in which the objective is to minimize the average service delay while guaranteeing the long-term accuracy constraints;
* We transform the CMDP into an MDP via the Lyapunov optimization technique which constructs accuracy deficit queue to characterize the satisfaction status of the long-term accuracy constraints;
* We propose a deep RL-based algorithm to make the optimal sampling rate adaption and resource allocation decisions. To reduce the training complexity, an optimization subroutine is embedded in the proposed algorithm for the optimal edge computing resource allocation.
The remainder of this paper is organized as follows. Section II reviews related works. The system model and problem formulation are presented in Section III. Section IV proposes a learning-based solution. Simulation results are given in Section V. Finally, Section VI concludes this paper.
## II Related Work
DNN inference for resource-constrained industrial IoT devices has garnered much attention recently. A device-only solution aims to facilitate DNN inference services resorting to on-board computing resources. To reduce the computational complexity, DNN compression techniques are applied, such as weight pruning [6] and knowledge distillation [9]. Considering the widely-equipped energy-harvesting functionality in IoT devices, Gobieski _et al._ designed a light-weight DNN inference model, which can dynamically compress the model size in order to balance inference accuracy and energy efficiency [2]. In another line of research, edge-assisted DNN inference solutions can provide high-accuracy inference services by utilizing powerful edge computing servers. To facilitate low-delay and accurate DNN-based video analytics, Yang _et al._ proposed an online video quality and computing resource allocation strategy to maximize video analytic accuracy [10]. Another inspiring work proposed a novel device-edge collaborative inference scheme, in which the DNN model is partitioned and deployed at both the device and the edge, and intermediate results are transferred via wireless links [5]. The above works can provide possible resource allocation solutions to enhance DNN inference performance. Different from existing works, our work takes the sampling rate adaption of industrial IoT devices into account, aiming at providing accuracy-guaranteed inference services in dynamic industrial IoT networks.
Fig. 1: Inference accuracy with respect to sampling rates on the bearing vibration dataset [8].
RL algorithms have been widely applied in allocating network resources in wireless networks, such as service migration in vehicular networks [11], network slicing in cellular networks [12], content caching in edge networks [13], and task scheduling in industrial IoT networks [14]. Hence, RL algorithms are considered as plausible solutions to manage network resources for DNN inference services. However, DNN inference services require minimizing the average delay while satisfying the long-term accuracy constraints. Traditional RL algorithms, e.g., DDPG, can be applied to solve MDPs, in which learning agents seek to optimize a long-term reward without policy constraints, while they cannot deal with constrained long-term optimization problems [15, 16]. Our proposed deep RL-based algorithm can address long-term constraints within the RL framework by modifying the reward based on the Lyapunov optimization technique. In addition, an optimization subroutine is embedded in our algorithm to further reduce the training complexity.
## III System Model and Problem Formulation
### _Network Model_
As shown in Fig. 2, we consider a wireless network with one AP to serve multiple types of industrial IoT devices. The AP is in charge of collecting network information and resource orchestration within the network. Consider \(M\) types of inference services, denoted by a set \(\mathcal{M}\), such as facility fault diagnosis and facility monitoring services. Taking the facility fault diagnosis service as an example, vibration sensors installed on industrial IoT devices sense the operating conditions at a sampling rate, and feed the sensed vibration signal into a DNN, then the DNN diagnoses the facility fault type. The set of industrial IoT devices subscribed to service \(m\) is denoted by \(\mathcal{N}_{m}\), and the set of all industrial IoT devices is denoted by \(\mathcal{N}=\cup_{m\in\mathcal{M}}\mathcal{N}_{m}\). In the collaborative inference framework, two types of DNNs are deployed. One is a compressed DNN, which is deployed at industrial IoT devices. The compressed DNN can be implemented via the weight pruning technique, which prunes less-important weights to reduce computational complexity while maintaining similar inference accuracy [6]. The other is an uncompressed DNN, which is deployed at the AP. In this way, \(M\) types of uncompressed DNNs share the edge computing resource to serve different inference requests. Important notations are summarized in Table I.
The collaborative DNN inference framework operates in a time-slotted manner. Let \(t\) denote the time index, where \(t\in\mathcal{T}=\{1,2,...,T\}\). The detailed procedure is given as follows.
1. Sampling rate selection: Industrial IoT devices first select their sampling rates according to channel conditions and computation workloads. The set of candidate sampling rates is denoted by \(\mathcal{K}=\{\theta_{1},\theta_{2},...,\theta_{K}\}\), where \(\theta_{K}\) denotes the raw sampling rate. We assume the sampling rate in \(\mathcal{K}\) increases linearly with the index, i.e., \(\theta_{k}=k\theta_{K}/K\). Let \(\mathbf{X}^{t}\) denote the sampling rate decision matrix in time slot \(t\), whose element \(x_{n,k}^{t}=1\) indicates industrial IoT device \(n\in\mathcal{N}\) selects the \(k\)-th sampling rate.
2. Task processing: The sensing data from industrial IoT devices within a time slot is deemed as a computation task, which can be either offloaded to the AP or executed locally. Let \(\mathbf{o}^{t}\in\mathbb{R}^{|\mathcal{N}|\times 1}\) denote the offloading decision vector in time slot \(t\), whose element \(o_{n}^{t}=0\) indicates offloading the computation task from industrial IoT device \(n\). Otherwise, \(o_{n}^{t}=1\) indicates executing the computation task locally.
### _Service Delay Model_
A computation task can be either processed locally or offloaded to the AP. In what follows, we analyze the service delay in these two cases.
#### Iii-B1 Executing locally
Fig. 2: The collaborative DNN inference framework for industrial IoT devices.
Let \(\lambda_{n}^{t}\) denote the task arrival rate of the \(n\)-th industrial IoT device in time slot \(t\), which is assumed to follow a general random distribution. The raw data size of the generated tasks at the \(n\)-th device is denoted by \(\xi_{n}^{t}=\lambda_{n}^{t}\nu_{m},\forall n\in\mathcal{N}_{m}\), where \(\nu_{m}\) denotes the raw data size of a task for service \(m\). After the sampling rate is selected, the data size of the generated task is represented by \(\zeta\left(\mathbf{x}_{n}^{t}\right)=\sum_{k=1}^{K}x_{n,k}^{t}\xi_{n}^{t}k/K\), where \(\mathbf{x}_{n}^{t}=\{x_{n,k}^{t}\}_{k\in\mathcal{K}}\) is the sampling rate selection decision vector of the \(n\)-th device. When the inference task is processed locally by a compressed DNN, the service delay includes the queuing delay in the local computing queue and the task processing delay, which is given by
\[d_{n,l}^{t}=\frac{{{o_{n}^{t}}{\eta_{m,c}}\left({B_{n}^{t}+\zeta\left({\bf{x}}_{n}^ {t}\right)}\right)}}{{f_{n}}},\forall n\in{\cal N}_{m}, \tag{1}\]
where \(f_{n}\) is the CPU frequency of the \(n\)-th industrial IoT device, and \(\eta_{m,c}\) denotes the computation intensity of the compressed DNN for the \(m\)-th service. Here, \(B_{n}^{t}\) is the amount of backlogged computation tasks (in bits) in the local computing queue, which is updated via
\[B_{n}^{t+1}=\min\left\{{\left[{B_{n}^{t}+o_{n}^{t}\zeta\left({\bf{x}}_{n}^{t} \right)-\frac{{{f_{n}}\tau}}{{\eta_{m,c}}}}\right]^{+},B_{n}^{max}}\right\}, \tag{2}\]
where \(\left[x\right]^{+}=\max\{x,0\}\), \(B_{n}^{max}\) is the capacity of the local computing queue, and \(\tau\) is the duration of a time slot. Tasks will be dropped if the local computing queue is full. Let
\[\Psi_{b,n}^{t}=\max\left\{{B_{n}^{t}+o_{n}^{t}\zeta\left({\bf{x}}_{n}^{t} \right)-\frac{{{f_{n}}\tau}}{{\eta_{m,c}}}-B_{n}^{max},0}\right\} \tag{3}\]
denote the amount of the dropped tasks in the local computing queue of device \(n\). Here, \(\Psi_{b,n}^{t}>0\) indicates that an event of local computing queue overflow occurs at the \(n\)-th device, and the corresponding penalty will be incurred to avoid queue overflow.
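To make the local-execution model concrete, below is a minimal Python sketch of the delay in (1), the queue update in (2), and the drop amount in (3). The function name and all numeric values are illustrative assumptions introduced here, not parameters taken from the paper.

```
# Minimal sketch of the local-execution model, eqs. (1)-(3).
# All names and numeric values below are illustrative assumptions.

def local_step(B, task_bits, o, eta_mc, f_n, tau, B_max):
    """One time slot of the local computing queue of device n.

    B         : current local queue backlog B_n^t (bits)
    task_bits : task data size zeta(x_n^t) generated in this slot (bits)
    o         : offloading decision o_n^t (1 = execute locally, 0 = offload)
    eta_mc    : computation intensity of the compressed DNN (cycles per bit)
    f_n       : device CPU frequency (cycles per second)
    tau       : slot duration (seconds)
    B_max     : local queue capacity (bits)
    """
    # Local service delay, eq. (1): queuing plus processing of the admitted task.
    d_local = o * eta_mc * (B + task_bits) / f_n

    # Work (in bits) the device CPU can clear within one slot.
    served = f_n * tau / eta_mc

    # Dropped bits if the local queue overflows, eq. (3).
    dropped = max(B + o * task_bits - served - B_max, 0.0)

    # Local queue update, eq. (2).
    B_next = min(max(B + o * task_bits - served, 0.0), B_max)

    return d_local, B_next, dropped


# Example with assumed values: a 768-kilobit task executed locally.
print(local_step(B=1e5, task_bits=768e3, o=1,
                 eta_mc=50.0, f_n=1e9, tau=1.0, B_max=2e6))
```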
#### Iii-B2 Offloading to AP
When a task is offloaded to the AP, it will be processed by an uncompressed DNN. The service delay consists of task offloading delay, queuing delay in the edge computing queue, and task processing delay, which are analyzed respectively as follows.
* Task offloading delay: For the \(n\)-th industrial IoT device, the offloading delay is given by \[d_{n,o}^{t}=\frac{\left(1-o_{n}^{t}\right)\zeta\left(\mathbf{x}_{n}^{t}\right)}{R_{n}^{t}},\] (4) where the transmission rate between the \(n\)-th industrial IoT device and the AP, \(R_{n}^{t}\), is given by \[R_{n}^{t}=\frac{W}{N}\log_{2}\left(1+\frac{P_{T}G(H_{n}^{t})}{N_{f}\sigma^{2}}\right).\] (5) Here, \(W\), \(P_{T}\), \(G(H_{n}^{t})\), and \(N_{f}\) represent the system bandwidth, transmit power, channel gain, and noise figure, respectively. \(\sigma^{2}=N_{o}W/N\) denotes the background noise power, where \(N_{o}\) is the thermal noise spectral density. Channel gain \(G(H_{n}^{t})\) varies in terms of channel state \(H_{n}^{t}\). Based on extensive real-time measurements, channel state \(H_{n}^{t}\) can be modeled with a finite set of channel states \(\mathcal{H}\)[17]. The evolution of channel states is characterized by a discrete-time and ergodic Markov chain model, whose transition matrix is \(\mathbf{P}\in\mathbb{R}^{|\mathcal{H}|\times|\mathcal{H}|}\).
* Task processing delay: The tasks from all industrial IoT devices subscribed to the \(m\)-th service are placed in the edge computing queue for the \(m\)-th service. The amount of aggregated tasks is given by \(\sum_{n\in\mathcal{N}_{m}}\left({1-o_{n}^{t}}\right)\zeta\left({\bf{x}}_{n}^{ t}\right)\). The computing resource is dynamically allocated among multiple services at the AP according to service task arrivals, which can be realized via containerization techniques, such as Dockers and Kubernetes [18]. Let \({\bf{c}}^{t}\in\mathbb{R}^{M\times 1}\) denote the computing resource allocation decision vector in time slot \(t\). Each element \(0\leq c_{m}^{t}\leq 1\) denotes the portion of computing resource allocated to the \(m\)-th service. Hence, the processing delay is given by \[d_{n,p}^{t}=\frac{{{\eta_{m,u}}\left({1-o_{n}^{t}}\right)\zeta\left({\bf{x}}_{ n}^{t}\right)}}{{c_{m}^{t}f_{b}}},\forall n\in\mathcal{N}_{m},\] (6) where \(f_{b}\) is the CPU frequency of the computing server at the AP, and \(\eta_{m,u}\) denotes the computation intensity of processing the \(m\)-th service task by the uncompressed DNN. Note that \(\eta_{m,u}>\eta_{m,c}\), since the uncompressed DNN consumes more computing resource.
* Queuing delay: The queuing delay consists of two components: (i) the time taken to process backlogged tasks in the edge computing queue, which is given by \[d_{n,q}^{t}=\frac{Q_{m}^{t}\eta_{m,u}}{c_{m}^{t}f_{b}},\forall n\in\mathcal{N}_{m}.\] (7) Here, \(Q_{m}^{t}\) denotes the edge computing queue backlog for the \(m\)-th service in time slot \(t\), which is updated according to \[Q_{m}^{t+1}=\min\left\{\left[Q_{m}^{t}+a_{m}^{t}-\frac{c_{m}^{t}f_{b}\tau}{\eta_{m,u}}\right]^{+},Q_{m}^{max}\right\}.\] (8) Here, \(a_{m}^{t}=\sum_{n\in\mathcal{N}_{m}}\left(1-o_{n}^{t}\right)\zeta\left(\mathbf{x}_{n}^{t}\right)\) and \(Q_{m}^{max}\) denotes the capacity of the \(m\)-th edge computing queue. Similar to that in local computing queues, tasks will also be dropped if the edge computing queue is full, and the amount of dropped tasks for the \(m\)-th edge computing queue is given by \[\Psi_{q,m}^{t}=\max\left\{Q_{m}^{t}+a_{m}^{t}-\frac{c_{m}^{t}f_{b}\tau}{\eta_{m,u}}-Q_{m}^{max},0\right\}.\] (9) Here, \(\Psi_{q,m}^{t}>0\) indicates that an event of edge computing queue overflow occurs; and (ii) the average waiting time among all newly arrived tasks until the task of industrial IoT device \(n\) is processed, which is given by \[d_{n,w}^{t}=\frac{\eta_{m,u}\sum_{i\neq n,i\in\mathcal{N}_{m}}\left(1-o_{i}^{t}\right)\zeta\left(\mathbf{x}_{i}^{t}\right)}{2c_{m}^{t}f_{b}}.\] (10) Here, \(\sum_{i\neq n,i\in\mathcal{N}_{m}}\left(1-o_{i}^{t}\right)\zeta\left(\mathbf{x}_{i}^{t}\right)\) denotes the amount of aggregated tasks except the task of industrial IoT device \(n\).
Taking both local execution and offloading into account, the service delay in time slot \(t\) is given by
\[D\left(\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t}\right) =\sum_{n\in\mathcal{N}}\left(d_{n,l}^{t}+d_{n,o}^{t}+d_{n,p}^{t}+d_{n,q}^{t}+d_{n,w}^{t}\right)\] \[+w_{p}\left(\sum_{n\in\mathcal{N}}\mathbb{1}_{\left\{\Psi_{b,n}^{t}>0\right\}}+\sum_{m\in\mathcal{M}}\mathbb{1}_{\left\{\Psi_{q,m}^{t}>0\right\}}\right), \tag{11}\]
where \(\mathbb{1}_{\{x\}}\) is the indicator function, which equals 1 if condition \(x\) holds and 0 otherwise, and \(w_{p}>0\) is the positive unit penalty cost for queue overflow. The first term represents the experienced delay to complete all tasks in time slot \(t\). The second term represents the penalty for potential overflow events in local and edge computing queues.
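As a companion sketch, the edge-side terms (4)-(10) and the overflow penalty of (11) can be assembled as follows. The helper functions and the numeric values are assumptions made for illustration only.

```
import math

# Illustrative sketch of the edge-side delay terms, eqs. (4)-(10), and the
# overflow penalty of eq. (11). Names and values are assumptions.

def transmission_rate(W, N, P_T, gain, N_f, N_o):
    """Per-device rate R_n^t, eq. (5); bandwidth W shared among N devices."""
    sigma2 = N_o * W / N                      # background noise power
    return (W / N) * math.log2(1.0 + P_T * gain / (N_f * sigma2))

def offload_delay(bits, rate):
    """Task offloading delay, eq. (4)."""
    return bits / rate

def edge_delay_service(offload_bits, Q_m, eta_mu, c_m, f_b):
    """Processing (6), backlog queuing (7) and average waiting (10) delays,
    summed over the offloading devices of one service."""
    total = 0.0
    arrivals = sum(offload_bits)
    for b in offload_bits:
        d_p = eta_mu * b / (c_m * f_b)                   # eq. (6)
        d_q = Q_m * eta_mu / (c_m * f_b)                 # eq. (7)
        d_w = eta_mu * (arrivals - b) / (2 * c_m * f_b)  # eq. (10)
        total += d_p + d_q + d_w
    return total

def overflow_penalty(w_p, local_drops, edge_drops):
    """Penalty term of eq. (11): w_p per queue that overflowed in the slot."""
    return w_p * (sum(d > 0 for d in local_drops) + sum(d > 0 for d in edge_drops))

# Example with assumed values.
R = transmission_rate(W=20e6, N=10, P_T=0.1, gain=1e-6, N_f=3.0, N_o=4e-21)
print(offload_delay(512e3, R),
      edge_delay_service([512e3, 768e3], Q_m=1e6, eta_mu=100.0, c_m=0.5, f_b=2e9),
      overflow_penalty(2.0, [0.0, 1e4], [0.0]))
```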
### _Inference Accuracy Model_
The inference accuracy depends on the sampling rate of a task and the type of DNN that executes a task. Firstly, we characterize the relationship between the inference accuracy and the sampling rate, which is specified by accuracy function \(g(\theta_{k}),\forall\theta_{k}\in\mathcal{K}\). Specifically, we implement a DNN inference algorithm, i.e., AlexNet [19], and apply the AlexNet to diagnose facility fault type based on the collected bearing vibration signal from the dataset [8], and then measure the accuracy function values with respect to sampling rates, as shown in Fig. 1. Secondly, the relationship between the inference accuracy and the type of DNN is also characterized via experiments. Here, \(h_{m,c}\) and \(h_{m,u}\) represent the inference accuracy of the compressed DNN and the uncompressed DNN for the \(m\)-th service, respectively. Note that, \(h_{m,c}<h_{m,u}\), as an uncompressed DNN achieves higher fault diagnosis accuracy.
Since the DNN model selection (i.e., task offloading decision) and the sampling rate selection are independent, inference accuracy is the product of the accuracy value with respect to the selected sampling rate and the accuracy value with respect to the selected DNN type, i.e., \(g\left(\sum_{k\in K}x_{n,k}^{t}\theta_{k}\right)\left(o_{n}^{t}h_{m,c}+\left(1 -o_{n}^{t}\right)h_{m,u}\right)\). Hence, the average inference accuracy for the \(m\)-th service in time slot \(t\) can be given by
\[\begin{split} A_{m}\left(\mathbf{X}^{t},\mathbf{o}^{t}\right)=& \sum_{n\in\mathcal{N}_{m}}\frac{1}{|\mathcal{N}_{m}|}g\left(\sum_{k\in K}x_{ n,k}^{t}\theta_{k}\right)\cdot\\ &\left(o_{n}^{t}h_{m,c}+\left(1-o_{n}^{t}\right)h_{m,u}\right). \end{split} \tag{12}\]
Note that the model can be readily extended to cases when other inference methods are adopted, since the accuracy values with respect to sampling rates and DNN types are obtained via practical experiments.
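The accuracy model can be sketched in a few lines; the piecewise-linear interpolation used for \(g(\cdot)\) below is an assumption made for illustration, with anchor points borrowed from the measured accuracies reported later in the simulation setup.

```
import numpy as np

# Sketch of the per-service average accuracy in eq. (12). The interpolation
# of g(.) is an assumption; anchors follow the measured values in Section V.

RATE_FRACTIONS = np.array([0.25, 0.50, 0.75, 1.00])
ACCURACY = np.array([0.590, 0.884, 0.950, 0.987])

def g(rate_fraction):
    """Accuracy versus sampling rate, linearly interpolated between anchors."""
    return float(np.interp(rate_fraction, RATE_FRACTIONS, ACCURACY))

def service_accuracy(rate_fractions, offload_decisions, h_c, h_u):
    """Average accuracy A_m of eq. (12) over the devices of one service.

    rate_fractions    : selected sampling rate of each device, as k/K
    offload_decisions : o_n^t for each device (1 = local, 0 = offloaded)
    h_c, h_u          : accuracy of the compressed / uncompressed DNN
    """
    acc = [g(r) * (o * h_c + (1 - o) * h_u)
           for r, o in zip(rate_fractions, offload_decisions)]
    return sum(acc) / len(acc)

# Example: one device offloaded at full rate, one local at half rate.
print(service_accuracy([1.0, 0.5], [0, 1], h_c=0.92, h_u=0.99))
```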
### _Problem Formulation_
DNN inference services require not only minimizing service delay, but also guaranteeing their long-term accuracy requirements, which can be modeled via a CMDP. Its action, state, reward, and state transition matrix are defined as follows:
* Action: The action includes the sampling rate selection, task offloading, and edge computing resource allocation decisions, i.e., \(\hat{a}^{t}=\{\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t}\}\). Note that the components of the action should satisfy following constraints: (1) \(x_{n,k}^{t}\in\{0,1\}\) constrains the sampling rate selection decision; (2) \(o_{n}^{t}\in\{0,1\}\) requires the binary task offloading decision; and (3) \(\sum_{m\in\mathcal{M}}c_{m}^{t}\leq 1\) and \(0\leq c_{m}^{t}\leq 1\) constrain a continuous computing resource allocation decision.
* State: The state includes the local computing queue backlogs of industrial IoT devices \(B_{n}^{t}\), the edge computing queue backlogs \(Q_{m}^{t}\), the channel conditions of industrial IoT devices \(H_{n}^{t}\), and the raw data size of the generated tasks at industrial IoT devices \(\xi_{n}^{t}\), i.e., \[\hat{s}^{t}=\{\{B_{n}^{t}\}_{n\in\mathcal{N}},\{Q_{m}^{t}\}_{m\in\mathcal{M}},\{H_{n}^{t}\}_{n\in\mathcal{N}},\{\xi_{n}^{t}\}_{n\in\mathcal{N}}\}.\] (13) The queue backlogs, i.e., \(\{B_{n}^{t}\}_{n\in\mathcal{N}}\) and \(\{Q_{m}^{t}\}_{m\in\mathcal{M}}\), are measured in bits, which results in a large state space, especially for a large number of industrial IoT devices.
* Reward: The reward is designed to minimize the service delay in (11) in time slot \(t\), which is defined as \(\hat{r}^{t}=-D\left(\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t}\right).\)
* State transition probability: State transition probability is given by \[\begin{split}&\Pr\left(\hat{s}^{t+1}|\hat{s}^{t},\hat{a}^{t} \right)=\prod_{n\in\mathcal{N}}\Pr\left(B_{n}^{t+1}|B_{n}^{t},x_{n,k}^{t},o_{n} ^{t}\right)\cdot\\ &\prod_{m\in\mathcal{M}}\Pr\left(Q_{m}^{t+1}|Q_{m}^{t},\mathbf{ X}^{t},\mathbf{o}^{t}\right)\cdot\\ &\prod_{n\in\mathcal{N}}\Pr\left(H_{n}^{t+1}|H_{n}^{t}\right) \cdot\prod_{n\in\mathcal{N}}\Pr\left(\xi_{n}^{t+1}|\xi_{n}^{t}\right).\end{split}\] (14) The equality holds due to the independence of different state components. The first two components are governed by the evolution of local computing queues and edge computing queues in (2) and (8), respectively. The third component is evolved according to the discrete-time Markov chain of channel conditions, and the last component is governed by the memoryless task arrival pattern. Note that each of those state components only depends on its previous state components, which means the state transition is Markovian.
Our goal is to find a stationary policy \(\pi\in\Pi\) that dynamically configures sampling rates selection \(\mathbf{X}^{t}\), task offloading \(\mathbf{o}^{t}\), and edge computing resource allocation \(\mathbf{c}^{t}\) according to state \(\hat{s}^{t}\), to minimize the service delay while guaranteeing long-term inference accuracy requirements \(\{A_{m}^{th}\}_{m\in\mathcal{M}}\), which is formulated as the following problem:
\[\mathbf{P}_{0}:\underset{\pi\in\Pi}{\text{min}}\quad\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{\pi}\left[D\left(\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t}\right)\right]\] (15a) s.t. \[\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}A_{m}\left(\mathbf{X}^{t},\mathbf{o}^{t}\right)\geq A_{m}^{th},\forall m\in\mathcal{M}.\] (15b) Here, \(\mathbf{P}_{0}\) is a CMDP. Directly solving the above CMDP via dynamic programming solutions [15] is challenging for the following reasons. Firstly, the state transition probability is unknown due to the lack of statistical information on the channel condition variation and task arrival patterns of all industrial IoT devices. Secondly, even if the state transition probability is known, the large action and state spaces, which grow with the number of industrial IoT devices, incur an extremely high computational complexity, which makes dynamic programming solutions intractable. Hence, we propose a deep RL-based algorithm to solve the CMDP, which can be applied in large-scale networks without requiring statistical information of network dynamics.
## IV Deep RL-based Sampling Rate Adaption and Resource Allocation Algorithm
As mentioned before, a CMDP cannot be directly solved via traditional RL algorithms. We first leverage the Lyapunov optimization technique to deal with the long-term constraints and transform the problem into an MDP. Then,
we develop a deep RL-based algorithm to solve the MDP. To further reduce the training complexity, an optimization subroutine is embedded to directly obtain the optimal edge computation resource allocation.
### _Lyapunov-Based Problem Transformation_
The major challenge in solving problem \(\mathbf{P}_{0}\) is to handle the long-term constraints. We leverage the Lyapunov technique [20, 21] to address this challenge. The _core idea_ is to construct accuracy deficit queues to characterize the satisfaction status of the long-term accuracy constraints, thereby guiding the learning agent to meet the long-term accuracy constraints. The problem transformation procedure is presented as follows.
Firstly, we construct inference accuracy _deficit queues_ for all services, whose dynamics evolves as follows:
\[Z_{m}^{t+1}=\left[A_{m}^{th}-A_{m}\left(\mathbf{X}^{t},\mathbf{o}^{t}\right)+Z _{m}^{t}\right]^{+},\forall m\in\mathcal{M}. \tag{16}\]
Here, \(Z_{m}^{t}\) indicates the deviation of the achieved instantaneous accuracy from the long-term accuracy requirement, whose initial state is set to \(Z_{m}^{0}=0\). Then, a Lyapunov function is introduced to characterize the satisfaction status of the long-term accuracy constraint, which is defined as \(L\left(Z_{m}^{t}\right)=\left(Z_{m}^{t}\right)^{2}/2\)[20, 21, 22]. A smaller value of \(L\left(Z_{m}^{t}\right)\) indicates better satisfaction of the long-term accuracy constraint.
Secondly, the Lyapunov function should be consistently pushed to a low value in order to guarantee the long-term accuracy constraints. Hence, we introduce a _one-shot Lyapunov drift_ to capture the variation of the Lyapunov function across two subsequent time slots [20]. Given \(Z_{m}^{t}\), the one-shot Lyapunov drift is defined as \(\Delta\left(Z_{m}^{t}\right)=L\left(Z_{m}^{t+1}\right)-L\left(Z_{m}^{t}\right)\), which is upper bounded by
\[\Delta\left(Z_{m}^{t}\right)=\frac{1}{2}\left(\left(Z_{m}^{t+1} \right)^{2}-\left(Z_{m}^{t}\right)^{2}\right)\] \[\leq\frac{1}{2}\left(\left(Z_{m}^{t}+A_{m}^{th}-A_{m}\left( \mathbf{X}^{t},\mathbf{o}^{t}\right)\right)^{2}-\left(Z_{m}^{t}\right)^{2}\right)\] \[=\frac{1}{2}\left(A_{m}^{th}-A_{m}\left(\mathbf{X}^{t},\mathbf{o} ^{t}\right)\right)^{2}+Z_{m}^{t}\left(A_{m}^{th}-A_{m}\left(\mathbf{X}^{t}, \mathbf{o}^{t}\right)\right)\] \[\leq C_{m}+Z_{m}^{t}\left(A_{m}^{th}-A_{m}\left(\mathbf{X}^{t}, \mathbf{o}^{t}\right)\right), \tag{17}\]
where \(C_{m}=\left(A_{m}^{th}-A_{m}^{min}\right)^{2}/2\) is a constant, and \(A_{m}^{min}\) is the lowest inference accuracy that can be achieved for service \(m\). The first inequality is due to the substitution of (16), and the second inequality is because \(A_{m}\left(\mathbf{X}^{t},\mathbf{o}^{t}\right)\geq A_{m}^{min}\).
Thirdly, based on the Lyapunov optimization theory, the original CMDP minimizing the service delay while guaranteeing the long-term accuracy requirements boils down to minimizing a _drift-plus-cost_, i.e.,
\[\sum_{m\in\mathcal{M}}\Delta\left(Z_{m}^{t}\right)+V\cdot D\left( \mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t}\right) \tag{18}\] \[\leq\sum_{m\in\mathcal{M}}C_{m}+\sum_{m\in\mathcal{M}}Z_{m}^{t} \left(A_{m}^{th}-A_{m}\left(\mathbf{X}^{t},\mathbf{o}^{t}\right)\right)\] \[+V\cdot D\left(\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t} \right),\]
where the inequality is due to the upper bound in (17). Here, \(V\) is a positive parameter to adjust the tradeoff between the service delay minimization and the satisfaction status of the long-term accuracy constraints. The underlying rationale is that, if the long-term accuracy constraint is violated, i.e., \(Z_{m}^{t}>0\), satisfying the long-term constraints by improving the instantaneous inference accuracy becomes more urgent than reducing the service delay.
In this way, the CMDP is transformed into an MDP with the objective of minimizing the drift-plus-cost in each time slot.
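In code, the transformation reduces to maintaining one deficit queue per service via (16) and shaping the per-slot reward as in the drift-plus-cost bound (this is the reward later defined in (20)); the following sketch uses assumed inputs for illustration.

```
# Minimal sketch of the accuracy deficit queues, eq. (16), and of the
# drift-plus-cost reward shaping. Inputs are assumed for illustration.

def update_deficit_queues(Z, A_th, A_achieved):
    """One-slot update of Z_m, eq. (16)."""
    return [max(A_th[m] - A_achieved[m] + Z[m], 0.0) for m in range(len(Z))]

def drift_plus_cost_reward(Z, A_th, A_achieved, delay, V):
    """Reward trading off delay against accuracy deficits, cf. eq. (20)."""
    deficit_term = sum(Z[m] * (A_th[m] - A_achieved[m]) for m in range(len(Z)))
    return -V * delay - deficit_term

# Example with assumed values for two services.
Z = [0.0, 0.1]
A_th = [0.8, 0.9]
A_achieved = [0.83, 0.87]            # service 2 misses its target this slot
r = drift_plus_cost_reward(Z, A_th, A_achieved, delay=1.7, V=0.05)
Z = update_deficit_queues(Z, A_th, A_achieved)
print(r, Z)   # Z[1] grows, so low accuracy is penalised more in later slots
```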
### _Equivalent MDP_
In the equivalent MDP, the action, state, reward, and state transition matrix are modified as follows due to the incorporation of accuracy deficit queues.
* Action: The action is the same as that in the CMDP, i.e., \(a^{t}=\hat{a}^{t}=\{\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t}\}\).
* State: Compared with the state of the CMDP, the accuracy deficit queue backlog of services \(\{Z_{m}^{t}\}_{m\in\mathcal{M}}\) should be incorporated, i.e., \[s^{t}=\{\hat{s}^{t},\{Z_{m}^{t}\}_{m\in\mathcal{M}}\}.\] (19)
* Reward: The reward is modified to minimize the drift-plus-cost in (18) in time slot \(t\), i.e., \[r^{t} =-V\cdot D\left(\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t}\right)\] (20) \[-\sum_{m\in\mathcal{M}}Z_{m}^{t}\left(A_{m}^{th}-A_{m}\left( \mathbf{X}^{t},\mathbf{o}^{t}\right)\right).\] Note that the constant term \(\sum_{m\in\mathcal{M}}C_{m}\) in (18) is ignored in the reward for brevity.
* State transition probability: Since accuracy deficit queue backlogs are incorporated in the state, the state transition probability evolves according to \[\Pr\left(s^{t+1}|s^{t},a^{t}\right) =\Pr\left(\hat{s}^{t+1}|\hat{s}^{t},\hat{a}^{t}\right)\cdot\] \[\prod_{m\in\mathcal{M}}\Pr\left(Z_{m}^{t+1}|Z_{m}^{t},\mathbf{X} ^{t},\mathbf{o}^{t}\right).\] (21) where the second term is the evolution of the accuracy deficit queue backlog according to (16). Note that the overall state transition is still Markovian.
Then, problem \(\mathbf{P}_{0}\) is transformed into the following MDP problem:
\[\mathbf{P}_{1}:\underset{\pi\in\Pi}{\text{min}} \lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T} \mathbb{E}_{\pi}\left[\sum_{m\in\mathcal{M}}Z_{m}^{t}\left(A_{m}^{th}-A_{m} \left(\mathbf{X}^{t},\mathbf{o}^{t}\right)\right)\right. \tag{22}\] \[\left.+V\cdot D\left(\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t }\right)\right].\]
Similar to CMDP, solving an MDP via dynamic programming solutions also suffers from the curse of dimensionality due to large state space. Hence, we propose a deep RL-based algorithm to solve the MDP, which is detailed in Section IV-D.
### _Optimization Subroutine for Edge Computing Resource Allocation_
Although \(\mathbf{P}_{1}\) can be directly solved by RL algorithms, an inherent property on edge computing resource allocation can
be leveraged, in order to reduce the training complexity of RL algorithms. Through analysis on (22), the edge computing resource allocation is independent of the inference accuracy performance, and hence it only impacts the one-shot service delay performance. In time slot \(t\), once task offloading and sampling rate selection decisions are made, the optimal computing resource allocation decision can be obtained via solving the following optimization problem:
\[\mathbf{P}_{2}:\text{min}\limits_{\mathbf{e}^{t}} D\left(\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t}\right)\] (23a) s.t. \[\sum_{m\in\mathcal{M}}c_{m}^{t}\leq 1\] \[0\leq c_{m}^{t}\leq 1. \tag{23b}\]
A further analysis of (11) indicates that only the task processing delay and queuing delay at the AP are impacted by the edge computing resource allocation, i.e., \(\sum_{n\in\mathcal{N}}\left(d_{n,p}^{t}+d_{n,q}^{t}+d_{n,w}^{t}\right)\). In addition, the aggregated delay from the perspective of all devices is equivalent to the aggregated delay from the perspective of all services. Hence, the objective function in \(\mathbf{P}_{2}\) can be rewritten as \(\sum_{m\in\mathcal{M}}d_{m}^{t}\), where
\[d_{m}^{t} =\sum_{n\in\mathcal{N}_{m}}\left(\frac{\eta_{m,u}\left(1-o_{n}^{t }\right)\zeta\left(\mathbf{x}_{n}^{t}\right)}{c_{m}^{t}f_{b}}+\frac{Q_{m}^{t} \eta_{m,u}}{c_{m}^{t}f_{b}}\right. \tag{24}\] \[+\left.\frac{\eta_{m,u}\sum_{i\neq n,i\in\mathcal{N}_{m}}\left(1- o_{i}^{t}\right)\zeta\left(\mathbf{x}_{i}^{t}\right)}{2c_{m}^{t}f_{b}}\right)\]
denotes the experienced delay of the \(m\)-th service. By analyzing the convexity property of the problem, we have the following theorem to obtain the optimal edge computation resource allocation in each time slot.
**Theorem 1**.: _The optimal edge computing resource allocation for problem \(\mathbf{P}_{2}\) is given by_
\[c_{m}^{t,\star}=\frac{\sqrt{\Lambda_{m}^{t}}}{\sum_{m\in\mathcal{M}}\sqrt{ \Lambda_{m}^{t}}},\forall m\in\mathcal{M}, \tag{25}\]
_where_
\[\Lambda_{m}^{t}= \sum_{n\in\mathcal{N}_{m}}\left(\eta_{m,u}\left(1-o_{n}^{t} \right)\zeta\left(\mathbf{x}_{n}^{t}\right)+Q_{m}^{t}\eta_{m,u}\right. \tag{26}\] \[\left.+\frac{\eta_{m,u}}{2}\sum_{i\neq n,i\in\mathcal{N}_{m}} \left(1-o_{i}^{t}\right)\zeta\left(\mathbf{x}_{i}^{t}\right)\right).\]
Proof.: Proof is provided in Appendix A.
This optimization subroutine for the edge computing resource allocation is embedded in the following proposed deep RL-based algorithm. In this way, the training complexity can be reduced, because it is no longer necessary to train the neural networks to obtain optimal edge computing resource allocation policy.
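Concretely, the subroutine only needs to evaluate the weights \(\Lambda_{m}^{t}\) of (26) and normalise their square roots as in (25); a minimal sketch is given below, with example numbers that are assumptions.

```
import math

# Sketch of the optimization subroutine for edge computing resource
# allocation, eqs. (25)-(26). Example numbers are assumptions.

def service_weight(offload_bits, Q_m, eta_mu):
    """Lambda_m^t of eq. (26) for one service.

    offload_bits : offloaded task sizes (1 - o_n^t) * zeta(x_n^t) of its devices
    Q_m          : current edge queue backlog of the service
    eta_mu       : computation intensity of the uncompressed DNN
    """
    arrivals = sum(offload_bits)
    return sum(eta_mu * b + Q_m * eta_mu + 0.5 * eta_mu * (arrivals - b)
               for b in offload_bits)

def optimal_allocation(weights):
    """c_m^{t,*} of eq. (25): proportional to the square roots of Lambda_m^t."""
    roots = [math.sqrt(w) for w in weights]
    total = sum(roots)
    return [r / total for r in roots]

# Example with assumed values for two services.
lam = [service_weight([512e3, 768e3], Q_m=1e6, eta_mu=100.0),
       service_weight([256e3], Q_m=4e5, eta_mu=150.0)]
print(optimal_allocation(lam))   # fractions of the edge CPU, summing to one
```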
### _Deep RL-based Algorithm_
To solve problem \(\mathbf{P}_{1}\), we propose a deep RL-based algorithm, which is extended from the celebrated DDPG algorithm [23]. The main difference between DDPG and the proposed algorithm is that the above optimization subroutine for computing resource allocation is embedded to reduce the training complexity. The proposed algorithm can be deployed at the AP, which collects the entire network state information and enforces the policy to all connected industrial IoT devices.
In the algorithm, the learning agent has two parts: (a) an actor network, which is to determine the action based on the current state; and (b) a critic network, which is to evaluate the determined action based on the reward feedback from the environment. Let \(\mu(s|\phi^{\mu})\) and \(Q(s,a|\phi^{Q})\) denote the actor network and the critic network, respectively, whose neural network weights are \(\phi^{\mu}\) and \(\phi^{Q}\). As shown in Algorithm 1, the deep RL-based algorithm operates in a time-slotted manner, which consists of the following three steps.
```
Initialization: Initialize all neural networks and the experience replay memory;
for each episode do
    Reset the environment and obtain initial state \(s_{0}\);
    for time slot \(t\in\mathcal{T}\) do
        Determine the sampling rate selection and task offloading actions \(\{\mathbf{X}^{t},\mathbf{o}^{t}\}\) by the actor network according to current state \(s^{t}\);
        Determine edge computing resource allocation action \(\mathbf{c}^{t}\) by (25);
        Send joint action \(a^{t}=\{\mathbf{X}^{t},\mathbf{o}^{t},\mathbf{c}^{t}\}\) to all industrial IoT devices by the AP;
        Execute the joint action at industrial IoT devices;
        Observe reward \(r^{t}\) and new state \(s^{t+1}\);
        Store transition \(\{s^{t},a^{t},r^{t},s^{t+1}\}\) in the experience replay memory;
        Sample a random minibatch of transitions from the experience replay memory;
        Train the critic and actor networks by (27) and (28), respectively;
        Update target networks by (29);
    end for
end for
```
**Algorithm 1** Deep RL-based algorithm for sampling rate adaption and resource allocation
The first step is to obtain experience by interacting with the environment. Based on current network state \(s^{t}\), the actor network generates the sampling rate selection and task offloading actions with an additive policy exploration noise that follows Gaussian distribution \(\mathcal{N}\left(0,\sigma^{2}\right)\). The optimization subroutine generates the edge computation resource allocation action. Then, the joint action is executed at all industrial IoT devices. The corresponding reward \(r^{t}\) and the next state \(s^{t+1}\) are observed from the environment. The state transition \(\{s^{t},a^{t},r^{t},s^{t+1}\}\) is stored in the experience replay memory for actor and critic network training.
The second step is to train the actor and critic network based on the stored experience. To avoid the divergence issue caused by DNN, a minibatch of transitions are randomly sampled from the experience replay memory to break experience
correlation. The critic network is trained by minimizing the loss function
\[Loss\left(\phi^{Q}\right)=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\left(y_{i}-Q(s_{i},a_{ i}|\phi^{Q})\right)^{2}, \tag{27}\]
where \(y_{i}=r_{i}+\gamma Q^{\prime}(s_{i+1},\mu^{\prime}(s_{i+1}|\phi^{\mu^{\prime}})|\phi^{Q^{\prime}})\), and \(N_{b}\) is the minibatch size. Here, \(\mu^{\prime}(s|\phi^{\mu^{\prime}})\) and \(Q^{\prime}(s,a|\phi^{Q^{\prime}})\) represent the actor and critic target networks with weights \(\phi^{\mu^{\prime}}\) and \(\phi^{Q^{\prime}}\). The actor network is trained via the policy gradient
\[\nabla_{\phi^{\mu}}\approx\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\nabla_{a}Q(s_{i},a|\phi^{Q})|_{s=s_{i},a=\mu(s_{i})}\nabla_{\phi^{\mu}}\mu(s_{i}|\phi^{\mu})|_{s_{i}}. \tag{28}\]
The third step is to update target networks. In order to ensure network training stability, the actor and critic target networks are softly updated by
\[\phi^{Q^{\prime}}=\delta\phi^{Q}+(1-\delta)\phi^{Q^{\prime}},\phi^{\mu^{\prime }}=\delta\phi^{\mu}+(1-\delta)\phi^{\mu^{\prime}}, \tag{29}\]
where \(0<\delta\ll 1\) denotes the target network update ratio.
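For concreteness, the TD target of (27) and the soft target update of (29) can be written framework-agnostically over plain parameter arrays, as in the sketch below; this is an illustration of the update rules, not the authors' implementation.

```
import numpy as np

# Sketch of the TD target used in eq. (27) and the soft update of eq. (29),
# written over plain arrays/dictionaries for illustration.

def td_targets(rewards, next_q_values, gamma):
    """y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1})) for a minibatch."""
    return rewards + gamma * next_q_values

def critic_loss(q_values, targets):
    """Mean squared error of eq. (27)."""
    return float(np.mean((targets - q_values) ** 2))

def soft_update(target_params, online_params, delta):
    """phi' <- delta * phi + (1 - delta) * phi', eq. (29)."""
    return {k: delta * online_params[k] + (1.0 - delta) * target_params[k]
            for k in target_params}

# Example with assumed minibatch values.
r = np.array([1.0, -0.5])
q_next = np.array([2.0, 1.5])
q = np.array([2.5, 0.8])
y = td_targets(r, q_next, gamma=0.95)
print(critic_loss(q, y))
print(soft_update({"w": np.zeros(3)}, {"w": np.ones(3)}, delta=0.01)["w"])
```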
## V Simulation Results
### _Simulation Setup_
We consider a smart factory scenario in our simulation, in which industrial IoT devices, e.g., vibration sensors, are randomly scattered. The industrial IoT devices installed on industrial facilities (e.g., robot arms) sense their operating conditions. The sensing data is processed locally or offloaded to an AP in the smart factory for processing. The transmit power of an industrial IoT device is set to 20 dBm [24]. The channel condition is modeled with three states, i.e., "Good (G)", "Normal (N)", and "Bad (B)", and the corresponding transition matrix is given by [17]
\[\mathbf{P}=\begin{bmatrix}P_{GG}&P_{GN}&0\\ P_{NG}&P_{NN}&P_{NB}\\ 0&P_{BN}&P_{BB}\end{bmatrix}=\begin{bmatrix}0.3&0.7&0\\ 0.25&0.5&0.25\\ 0&0.7&0.3\end{bmatrix}. \tag{30}\]
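For illustration, the channel evolution governed by this transition matrix can be simulated as below; the mapping from channel states to channel gains is an assumption made only for the example.

```
import numpy as np

# Sketch: simulate the three-state Markov channel with the matrix in (30).
# The state-to-gain mapping is an illustrative assumption.

P = np.array([[0.30, 0.70, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.70, 0.30]])
STATES = ["G", "N", "B"]
GAIN = {"G": 1e-5, "N": 1e-6, "B": 1e-7}   # assumed channel gains

def simulate_channel(T, start=1, rng=None):
    """Sample a length-T state path starting from state index `start`."""
    rng = rng or np.random.default_rng(0)
    state, path = start, []
    for _ in range(T):
        state = rng.choice(3, p=P[state])
        path.append(STATES[state])
    return path

path = simulate_channel(10)
print(path, [GAIN[s] for s in path])
```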
Fig. 3: Performance of the proposed algorithm in the training stage.
Two types of DNN inference services are considered. _Type I service_: a facility fault diagnosis service to identify the fault type based on the collected bearing vibration signal from the dataset [8]. Since the duration of a time slot in the simulation is set to be one second, the task data size is the data volume of a one-second signal, which is a product of the raw sampling rate and the quantization bits of the signal. In the dataset, the bearing vibration signal is collected at a 48 KHz sampling rate and 16-bit quantization, and hence the corresponding task data size is 768 kilobits. The long-term accuracy requirement of the service is set to 0.8. _Type II service_: a service extended from the Type I service to diagnose facility faults based on a low-grade bearing vibration dataset while requiring a higher inference accuracy of 0.9. The low-grade dataset collects the vibration signal at a lower sampling rate of 32 KHz, and hence the task data size is 512 kilobits. For both services, the task arrival rate of each industrial IoT device at each time slot follows a uniform distribution \(\mathcal{U}(\lambda-0.5,\lambda+0.5)\), where \(\lambda\) is the average task arrival rate. We consider four candidate sampling rates for industrial IoT devices, which are 25%, 50%, 75% and 100% of the raw sampling rate. The corresponding accuracies with respect to these sampling rates are 0.59, 0.884, 0.950 and 0.987, respectively, based on extensive experiments on the bearing vibration dataset [8]. Balance parameter \(V\) is set to 0.05 based on extensive simulations. Other important simulation parameters are listed in Table II. The parameters of the proposed algorithm are given in Table III. The proposed algorithm is compared to the following benchmarks:
* **Delay myopic**: Each industrial IoT device dynamically makes sampling rate selection and task offloading decisions by maximizing the one-step reward in (20) according to the network state.
* **Static configuration**: Each industrial IoT device takes a static configuration on the sampling rate selection and task offloading decisions, which can guarantee services' accuracy requirements.
### _Performance Evaluation_
#### Iv-B1 Convergence of the proposed algorithm
The service delay performance in the training stage is shown in Fig. 3(a). We can clearly see that the average service delay gradually decreases as the number of training episodes increases, which validates the convergence of the proposed algorithm. In addition, Fig. 3(b) shows the accuracy performance for both services with respect to training episodes. The accuracy performance is poor at the beginning of the training stage, but after 1,000 episodes of training, the accuracy performance converges to the pre-determined requirements.
#### Iv-B2 Impact of task arrival rate
Once the algorithm is well trained offline, we evaluate its performance in online inference. As shown in Fig. 4, we compare the average service delay performance of the proposed algorithm with benchmark schemes in terms of task arrival rates for \(W=20\) MHz. Each simulation point is plotted with a 95% confidence interval. Several observations can be obtained from the figure. Firstly, the service delay increases with the task arrival rate due to constrained communication and computing resources in the network. Secondly, the proposed algorithm significantly outperforms benchmark schemes. The reason is that the proposed RL-based algorithm can capture network dynamics, such as the task arrival pattern and channel condition variation, by interacting with the environment. The learned knowledge is utilized to make online decisions that target the long-term performance, while benchmark schemes only focus on the short-term performance and do not adapt to network dynamics. Specifically, the proposed algorithm can reduce the average service delay by 19% and 25%, respectively, as compared with the delay myopic and static configuration schemes.
As shown in Fig. 5, boxplot accuracy distribution of two services is presented with respect to different task arrival rates. The long-term accuracy requirements for two services are 0.8 and 0.9, respectively. It can be seen that the proposed algorithm guarantees the long-term accuracy requirements of both services with a high probability. Specifically, the maximum error probability is less than 0.5%.
#### Iv-B3 Impact of communication bandwidth
Fig. 6 shows the impact of communication bandwidth on the average service delay. Firstly, we can see that the average service delay decreases with the growth of bandwidth. The reason is that the transmission delay is reduced when the communication resource becomes sufficient. In addition, the proposed algorithm achieves good performance when the bandwidth is scarce. When the system bandwidth is only 5 MHz, the proposed algorithm achieves 1.20\(\times\) and 1.42\(\times\) delay reduction compared with the delay myopic and static configuration schemes, respectively, which is larger than that when the system bandwidth is 25 MHz (1.15\(\times\) and 1.31\(\times\)). The reason is that the proposed algorithm efficiently utilizes the on-board computing resources. Simulation results show that the proposed algorithm decides to execute 47.5% of computation tasks locally with 5 MHz bandwidth, while the delay myopic benchmark only decides 17%. Due to the efficient resource orchestration among industrial IoT devices and the AP, the proposed algorithm can effectively reduce the average service delay for both services.
Fig. 4: Service delay performance with respect to task arrival rates.
Fig. 5: Inference accuracy performance with respect to task arrival rates.
Fig. 6: Service delay performance with respect to communication bandwidth.
Fig. 7: Service delay in terms of CPU frequency of the edge server.
#### V-B4 Impact of optimization subroutine
As shown in Fig. 7, we evaluate the performance of the proposed algorithm with the fixed computing resource allocation (referred to as proposed-fixed), in which the edge computing resource is allocated based on the average computing demand of two services. Compared with the proposed-fixed solution, the proposed algorithm achieves significant performance gain when the edge computing resource is constrained. Specifically, the performance gain in reducing the service delay decreases from 1.98\(\times\) at 1 GHz CPU frequency to only 1.02\(\times\) at 1.2 GHz CPU frequency. The reason is that efficient resource allocation is more important in resource-constrained scenarios, as compared to resource-rich scenarios. The results validate the effectiveness of the optimization subroutine for edge computing resource allocation. In addition to the performance gain, another merit of the optimization subroutine is to reduce the training complexity of RL algorithms.
## VI Conclusion
In this paper, we have studied the sampling rate adaption and resource allocation problem for collaborative DNN inference in industrial IoT networks. A deep RL-based algorithm has been developed to learn the channel variation and the task arrival pattern, which are then exploited to provide accuracy-guaranteed DNN inference services. The proposed algorithm can optimize service delay performance on the fly, without requiring statistical information of network dynamics. The Lyapunov-based transformation technique can be applied to other CMDPs. For future work, we will investigate the impact of device mobility on the inference performance.
### _Proof of Theorem 1_
Firstly, the problem is proved to be a convex optimization problem. For brevity of notations, we omit \(t\) in the proof. With the definition of \(\Lambda_{m}\) in (26), the objective function can be rewritten as \(\sum_{m\in\mathcal{M}}\Lambda_{m}/(c_{m}f_{b})\). The second-order derivative of the objective function shows \(2\Lambda_{m}/\left(f_{b}c_{m}^{3}\right)>0\). In addition, the inequality constraint is linear. Hence, the problem is a convex optimization problem.
Secondly, a Lagrange function for the problem without considering the inequality constraints is constructed, i.e.,
\[\mathcal{L}\left(\mathbf{c},a\right)=\sum_{m\in\mathcal{M}}\frac{\Lambda_{m}}{ c_{m}f_{b}}+a\left(\sum_{m\in\mathcal{M}}c_{m}-1\right), \tag{31}\]
where \(a\) denotes the Lagrange multiplier. Based on Karush-Kuhn-Tucker conditions [26], we have
\[\frac{\partial\mathcal{L}\left(\mathbf{c},a\right)}{\partial c_{m}}=-\frac{\Lambda_{m}}{f_{b}c_{m}^{2}}+a=0,\forall m\in\mathcal{M}. \tag{32}\]
By solving the above equation, we can obtain \(c_{m}^{\star}=\sqrt{\Lambda_{m}/(af_{b})},\forall m\in\mathcal{M}\). Substituting this result into the complementary slackness condition \(\sum_{m\in\mathcal{M}}c_{m}^{\star}-1=0\), the optimal value of \(a\) is given by \(a^{\star}=\left(\sum_{m\in\mathcal{M}}\sqrt{\Lambda_{m}}\right)^{2}/f_{b}\). Since \(a^{\star}\) takes a positive value, \(\{c_{m}^{\star}\}_{m\in\mathcal{M}}\) are positive values, which means constraint (23b), i.e., \(c_{m}^{t}\geq 0,\forall m\in\mathcal{M}\), is automatically satisfied. Substituting \(a^{\star}\) back into \(c_{m}^{\star}=\sqrt{\Lambda_{m}/(a^{\star}f_{b})}\) yields (25), which proves Theorem 1.
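As a quick numerical sanity check of the closed form, one can compare it against a brute-force search over the simplex; the sketch below uses illustrative weights \(\Lambda_{m}\) chosen only for the example.

```
import numpy as np

# Numerical sanity check of Theorem 1 with illustrative weights Lambda_m.
lam = np.array([3.0, 1.0, 0.5])
f_b = 1.0

def delay(c):
    """Objective of P2 restricted to the allocation-dependent terms."""
    return np.sum(lam / (c * f_b))

# Closed form of eq. (25).
c_star = np.sqrt(lam) / np.sum(np.sqrt(lam))

# Brute force: random candidate allocations on the simplex.
rng = np.random.default_rng(0)
samples = rng.dirichlet(np.ones(3), size=20000)
best = samples[np.argmin([delay(c) for c in samples])]

print(c_star, delay(c_star))
print(best, delay(best))   # never smaller than delay(c_star), up to sampling
```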
|
2309.10529 | Metrical properties of exponentially growing partial quotients | A fundamental challenge within the metric theory of continued fractions
involves quantifying sets of real numbers, when represented using continued
fractions, exhibit partial quotients that grow at specific rates. For any
positive function $\Phi$, Wang-Wu theorem (2008) comprehensively describes the
Hausdorff dimension of the set \begin{equation*} \EE_1(\Phi):=\left\{x\in [0,
1): a_n(x)\geq \Phi(n) \ {\rm for \ infinitely \ many} \ n\in \N\right\}.
\end{equation*} Various generalisations of this set exist, such as substituting
one partial quotient with the product of consecutive partial quotients in the
aforementioned set which has connections with the improvements to Dirichlet's
theorem, and many other sets of similar nature. Establishing the upper bound of
the Hausdorff dimension of such sets is significantly easier than proving the
lower bound. In this paper, we present a unified approach to get an optimal
lower bound for many known setups, including results by Wang-Wu [Adv. Math.,
2008], Huang-Wu-Xu [Israel J. Math. 2020], Bakhtawar-Bos-Hussain [Nonlinearity
2020], and several others, and also provide a new theorem derived as an
application of our main result. We do this by finding an exact Hausdorff
dimension of the set $$S_m(A_0,\ldots,A_{m-1}) \defeq \left\{ x\in[0,1): \, c_i
A_i^n \le a_{n+i}(x) < 2c_i A_i^n,0 \le i \le m-1 \ \text{for infinitely many }
n\in\N \right\},$$ where each partial quotient grows exponentially and the base
is given by a parameter $A_i>1$. For proper choices of $A_i$'s, this set serves
as a subset for sets under consideration, providing an optimal lower bound of
Hausdorff dimension in all of them. The crux of the proof lies in introducing
of multiple probability measures consistently distributed over the Cantor-type
subset of $S_m(A_0,\ldots,A_{m-1})$. | Mumtaz Hussain, Nikita Shulga | 2023-09-19T11:16:23Z | http://arxiv.org/abs/2309.10529v1 | # Metrical properties of exponentially growing partial quotients
###### Abstract.
A fundamental challenge within the metric theory of continued fractions involves quantifying sets of real numbers, when represented using continued fractions, exhibit partial quotients that grow at specific rates. For any positive function \(\Phi\), Wang-Wu theorem (2008) comprehensively describes the Hausdorff dimension of the set
\[\mathcal{E}_{1}(\Phi):=\left\{x\in[0,1):a_{n}(x)\geq\Phi(n)\text{ for infinitely many }n\in\mathbb{N}\right\}.\]
Various generalisations of this set exist, such as substituting one partial quotient with the product of consecutive partial quotients in the aforementioned set which has connections with the improvements to Dirichlet's theorem, and many other sets of similar nature. Establishing the upper bound of the Hausdorff dimension of such sets is significantly easier than proving the lower bound. In this paper, we present a unified approach to get an optimal lower bound for many known setups, including results by Wang-Wu [Adv. Math., 2008], Huang-Wu-Xu [Israel J. Math. 2020], Bakhtawar-Bos-Hussain [Nonlinearity 2020], and several others, and also provide a new theorem derived as an application of our main result. We do this by finding an exact Hausdorff dimension of the set
\[S_{m}(A_{0},\ldots,A_{m-1})\stackrel{{\text{def}}}{{=}}\left\{x \in[0,1):\,c_{i}A_{i}^{n}\leq a_{n+i}(x)<2c_{i}A_{i}^{n},0\leq i\leq m-1\text{ for infinitely many }n\in\mathbb{N}\right\},\]
where each partial quotient grows exponentially and the base is given by a parameter \(A_{i}>1\). For proper choices of \(A_{i}\)'s, this set serves as a subset for sets under consideration, providing an optimal lower bound of Hausdorff dimension in all of them. The crux of the proof lies in introducing of multiple probability measures consistently distributed over the Cantor-type subset of \(S_{m}(A_{0},\ldots,A_{m-1})\).
## 1. Introduction
It is well-known that every irrational number \(x\in(0,1)\) has a unique infinite continued fraction expansion. This expansion can be induced by the Gauss map \(T:[0,1)\to[0,1)\) defined by
\[T(0)=0,\ T(x)=\frac{1}{x}-\left\lfloor\frac{1}{x}\right\rfloor\text{ for }x\in(0,1),\]
where \(\lfloor x\rfloor\) denotes the integer part of \(x\). We write \(x:=[a_{1}(x),a_{2}(x),a_{3}(x),\ldots]\) for the continued fraction of \(x\), where \(a_{1}(x)=\lfloor 1/x\rfloor\), \(a_{n}(x)=a_{1}(T^{n-1}(x))\) for \(n\geq 2\) are called the partial quotients of \(x\) (or continued fraction digits of \(x\)). The metric theory of continued fractions concerns the quantitative study of properties of partial quotients for almost all \(x\in(0,1)\). This area of research is closely connected with metric Diophantine approximation, for example, fundamental theorems in this field by Khintchine (1924) and Jarnik (1931) can be represented in terms of the growth of partial quotients. The classical Borel-Bernstein theorem (1912) states that the Lebesgue measure of the set
\[\mathcal{E}_{1}(\Phi):=\left\{x\in[0,1):a_{n}(x)\geq\Phi(n)\text{ for infinitely many }n\in\mathbb{N}\right\}\]
is either zero or one depending upon the convergence or divergence of the series \(\sum_{n=1}^{\infty}\Phi(n)^{-1}\) respectively. Here and throughout \(\Phi:\mathbb{N}\to[1,\infty)\) will be an arbitrary function such that \(\Phi(n)\to\infty\) as \(n\to\infty\). For rapidly increasing functions \(\Phi\), the Borel-Bernstein (1911, 1912) theorem gives no further information than the zero Lebesgue measure of \(\mathcal{E}_{1}(\Phi)\). To distinguish between the sets of zero Lebesgue measure, the Hausdorff dimension is an appropriate tool. For an arbitrary function \(\Phi\), the dimension of \(\mathcal{E}_{1}(\Phi)\) was computed by Wang-Wu [23].
The consideration of the growth of product of consecutive partial quotients played a significant role in understanding the uniform approximation theory (improvements to Dirichlet's theorem as opposed to improvements to Dirichlet's corollary). In particular, the set
\[\mathcal{E}_{m}(\Phi):=\left\{x\in[0,1):\prod_{i=0}^{m-1}a_{n+i}(x)\geq\Phi(n) \text{ for infinitely many }n\in\mathbb{N}\right\}\]
received significant attention recently. See [12] for the Lebesgue measure of \(\mathcal{E}_{2}(\Phi)\), [4, 8] for the Hausdorff measure of \(\mathcal{E}_{2}(\Phi(q_{n}))\), [7] for the Lebesgue measure and Hausdorff dimension of \(\mathcal{E}_{m}(\Phi)\), and [1, 9] for the Hausdorff dimension of difference of sets \(\mathcal{E}_{2}(\Phi)\setminus\mathcal{E}_{1}(\Phi)\).
In this paper, we provide a unified treatment of all of these results (and some others) and retrieve them from our main theorem proved below. Given the breadth of generality, we envisage that there will be many more applications other than those listed below.
### Main result
For a fixed integer number \(m\) and for all integers \(0\leq i\leq m-1\), let \(A_{i}>1\) be a real number. Define the set
\[S_{m}=S_{m}(A_{0},\ldots,A_{m-1}):=\left\{x\in[0,1):\,c_{i}A_{i}^{n}\leq a_{n+ i}(x)<2c_{i}A_{i}^{n},\,0\leq i\leq m-1\quad\text{for infinitely many }n\in\mathbb{N}\right\},\]
where \(c_{i}\in\mathbb{R}_{>0}\). For any \(0\leq i\leq m-1\), define the quantities
\[\beta_{-1}=1,\,\,\,\beta_{i}=A_{0}\cdots A_{i}.\]
Let
\[d_{i}=\inf\{s\geq 0:P(T,-s\log|T^{\prime}(x)|-s\log\beta_{i}+(1-s)\log\beta_{i -1})\leq 0\}, \tag{1.1}\]
where \(P(T,\psi)\) is a pressure function, the definition is given in Section 2.3.
The main result of the paper is the following theorem.
**Theorem 1.1**.: \[\dim_{\rm H}S_{m}=\min_{0\leq i\leq m-1}d_{i}.\]
**Remark 1.2**.: Even though the set \(S_{m}\) depends on \(c_{0},\ldots,c_{m-1}\), the exact values of these constants do not change the Hausdorff dimension of the set.
### Applications (summary)
As applications of our theorem, we obtain optimal lower bounds for the Hausdorff dimension of the sets listed below; the detailed proofs are given in Section 4. Note that, on their own, the proofs of the lower bound of the Hausdorff dimension of these sets were the main ingredients in the papers listed next to them, excluding the last set, which is a new application.
* Wang-Wu [23]. \[F(B):=\{x\in[0,1):a_{n}(x)\geq B^{n}\,\text{ for infinitely many }n\in\mathbb{N}\}.\]
* Bakhtawar-Bos-Hussain [1]. \[\mathcal{F}(B):=\mathcal{E}_{2}(B)\setminus\mathcal{E}_{1}(B)=\left\{x\in[0, 1):\begin{array}{c}a_{n+1}(x)a_{n}(x)\geq B^{n}\text{ for infinitely many }n\in\mathbb{N}\text{ and }\\ a_{n+1}(x)<B^{n}\text{ for all sufficiently large }n\in\mathbb{N}\end{array} \right\}.\]
* Hussain-Li-Shulga [9]. \[E(A_{1},A_{2}):=\{x\in[0,1):\,c_{1}A_{1}^{n}\leq a_{n}(x)<2c_{1}A_{1}^{n},\ c_{2}A_{2}^{n}\leq a _{n+1}(x)<2c_{2}A_{2}^{n},\,\text{for infinitely many $n\in\mathbb{N}$}\}\] and \[\mathcal{F}_{B_{1},B_{2}}:=\left\{x\in[0,1):\begin{array}{rl}a_{n}(x)a_{n+1} (x)\geq B_{1}^{n}&\text{for infinitely many $n\in\mathbb{N}$}\\ a_{n+1}(x)<B_{2}^{n}&\text{for all sufficiently large $n\in\mathbb{N}$}\end{array} \right\}.\]
* Huang-Wu-Xu [7]. For \(m\geq 1\), the set \[E_{m}(B):=\{x\in[0,1):a_{n}(x)a_{n+1}(x)\cdots a_{n+m-1}(x)\geq B^{n}\text{ for infinitely many $n\in\mathbb{N}$}\}\]
* Bakhtawar-Hussain-Kleinbock-Wang [2]. \[\mathcal{D}_{2}^{\mathbf{t}}(B):=\left\{x\in[0,1):a_{n}^{t_{0}}a_{n+1}^{t_{1} }\geq B^{n}\text{ for infinitely many $n\in\mathbb{N}$}\right\}.\]
* Tan-Tian-Wang [21]. \[E(\psi):=\{x\in[0,1):\exists 1\leq k\neq l\leq n,a_{k}(x)\geq\psi(n),a_{l}(x) \geq\psi(n),\text{ for infinitely many $n\in\mathbb{N}$}\}.\]
* For \(m\geq 2\), the set (1.2) \[\mathcal{F}_{B_{1},B_{2}}^{m}:=\left\{x\in[0,1):\begin{array}{rl}a_{n}(x) \cdots a_{n+m-1}(x)\geq B_{1}^{n}&\text{for infinitely many $n\in\mathbb{N}$}\\ a_{n+1}(x)\cdots a_{n+m-1}(x)<B_{2}^{n}&\text{for all sufficiently large $n\in\mathbb{N}$}\end{array} \right\},\] which is a new application of Theorem 1.1. We also provide an upper bound of Hausdorff dimension for this set.
The proof of Theorem 1.1 will be given in Section 3. In Section 4 we apply this theorem to the sets listed above, showing that Theorem 1.1 gives an optimal lower bound in all of them.
**Acknowledgements.** This research is supported by the Australian Research Council Discovery Project (200100994). We thank Bixuan Li for useful discussions.
## 2. Preliminaries and auxiliary results
For completeness we give a brief introduction to Hausdorff measures and dimension. For further details we refer to [3, 5].
### Hausdorff measure and dimension
Let \(s\geq 0\) and \(E\subset\mathbb{R}^{n}\). Then, for any \(\rho>0\) a countable collection \(\{B_{i}\}\) of balls in \(\mathbb{R}^{n}\) with diameters \(\operatorname{diam}(B_{i})\leq\rho\) such that \(E\subset\bigcup_{i}B_{i}\) is called a \(\rho\)-cover of \(E\). Let
\[\mathcal{H}_{\rho}^{s}(E)=\inf\sum_{i}\operatorname{diam}(B_{i})^{s},\]
where the infimum is taken over all possible \(\rho\)-covers \(\{B_{i}\}\) of \(E\). It is easy to see that \(\mathcal{H}_{\rho}^{s}(E)\) increases as \(\rho\) decreases and so approaches a limit as \(\rho\to 0\). This limit could be zero or infinity, or take a finite positive value. Accordingly, the _\(s\)-Hausdorff measure_\(\mathcal{H}^{s}\) of \(E\) is defined to be
\[\mathcal{H}^{s}(E)=\lim_{\rho\to 0}\mathcal{H}_{\rho}^{s}(E).\]
It is easy to verify that Hausdorff measure is monotonic and countably sub-additive, and that \(\mathcal{H}^{s}(\varnothing)=0\). Thus it is an outer measure on \(\mathbb{R}^{n}\). For any subset \(E\) one can verify that there exists a unique critical value of \(s\) at which \(\mathcal{H}^{s}(E)\) 'jumps' from infinity to zero. The value taken by \(s\) at this discontinuity is referred to as the _Hausdorff dimension of \(E\)_ and is denoted by \(\dim_{\mathrm{H}}E\); i.e.,
\[\dim_{\mathrm{H}}E:=\inf\{s\in\mathbb{R}_{+}\ :\ \mathcal{H}^{s}(E)=0\}.\]
When \(s=n\), \(\mathcal{H}^{n}\) coincides with standard Lebesgue measure on \(\mathbb{R}^{n}\). Computing Hausdorff dimension of a set is typically accomplished in two steps: obtaining the upper and lower bounds separately. Upper bounds often can be handled by finding appropriate coverings. When dealing with a limsup set, one usually applies the Hausdorff measure version of the famous Borel-Cantelli lemma (see Lemma 3.10 of [3]):
**Proposition 2.1**.: _Let \(\{B_{i}\}_{i\geq 1}\) be a sequence of measurable sets in \(\mathbb{R}\) and suppose that,_
\[\sum_{i}\operatorname{diam}(B_{i})^{s}\,<\,\infty.\]
_Then_
\[\mathcal{H}^{s}(\limsup_{i\to\infty}B_{i})=0.\]
The main tool in establishing the lower bound for the dimension of \(S_{m}(A_{0},\ldots,A_{m-1})\) will be the following well-known mass distribution principle [5].
**Proposition 2.2** ([5], Mass Distribution Principle).: _Let \(\mu\) be a probability measure supported on a measurable set \(F\). Suppose there are positive constants \(c\) and \(r_{0}\) such that_
\[\mu(B(x,r))\leq cr^{s}\]
_for any ball \(B(x,r)\) with radius \(r\leq r_{0}\) and center \(x\in F\). Then \(\dim_{\mathrm{H}}F\geq s\)._
### Continued fractions and Diophantine approximation
Recall that the Gauss map \(T:[0,1)\to[0,1)\) is defined by
\[T(0)=0,\ T(x)=\frac{1}{x}-\left\lfloor\frac{1}{x}\right\rfloor\ \text{for}\ x\in(0,1),\]
where \(\lfloor x\rfloor\) denotes the integer part of \(x\). We write \(x:=[a_{1}(x),a_{2}(x),a_{3}(x),\ldots]\) for the continued fraction of \(x\) where \(a_{1}(x)=\lfloor 1/x\rfloor\), \(a_{n}(x)=a_{1}(T^{n-1}(x))\) for \(n\geq 2\) are called the partial quotients of \(x\). The sequences \(p_{n}=p_{n}(x)\), \(q_{n}=q_{n}(x)\), referred to as \(n^{\mathrm{th}}\) convergents, has the recursive relation
\[p_{n+1}=a_{n+1}(x)p_{n}+p_{n-1},\ \ q_{n+1}=a_{n+1}(x)q_{n}+q_{n-1},\ \ n\geq 0, \tag{2.1}\]
with the conventions \(p_{-1}=1\), \(q_{-1}=0\), \(p_{0}=0\), \(q_{0}=1\).
Thus \(p_{n}=p_{n}(x),q_{n}=q_{n}(x)\) are determined by the partial quotients \(a_{1},\ldots,a_{n}\), so we may write \(p_{n}=p_{n}(a_{1},\ldots,a_{n}),q_{n}=q_{n}(a_{1},\ldots,a_{n})\). When it is clear which partial quotients are involved, we denote them by \(p_{n},q_{n}\) for simplicity. For any integer vector \((a_{1},\ldots,a_{n})\in\mathbb{N}^{n}\) with \(n\geq 1\), write
\[I_{n}(a_{1},\ldots,a_{n}):=\{x\in[0,1):a_{1}(x)=a_{1},\ldots,a_{n}(x)=a_{n}\}\]
for the corresponding 'cylinder of order \(n\)', that is, the set of all real numbers in \([0,1)\) whose continued fraction expansions begin with \((a_{1},\ldots,a_{n}).\) We will frequently use the following well known properties of continued fraction expansions. They are explained in the standard texts [10, 11].
**Proposition 2.3**.: _For any positive integers \(a_{1},\ldots,a_{n}\), let \(p_{n}=p_{n}(a_{1},\ldots,a_{n})\) and \(q_{n}=q_{n}(a_{1},\ldots,a_{n})\) be defined recursively by (2.1). Then:_
1. _One has_ \[I_{n}(a_{1},a_{2},\ldots,a_{n})=\left\{\begin{array}{ll}\left[\frac{p_{n}} {q_{n}},\frac{p_{n}+p_{n-1}}{q_{n}+q_{n-1}}\right)&\text{if}\ \ n\ \text{is even;}\\ \left(\frac{p_{n}+p_{n-1}}{q_{n}+q_{n-1}},\frac{p_{n}}{q_{n}}\right)&\text{if} \ \ n\ \text{is odd.}\end{array}\right.\] Its length is given by \[\frac{1}{2q_{n}^{2}}\leq|I_{n}(a_{1},\ldots,a_{n})|=\frac{1}{q_{n}(q_{n}+q_{n- 1})}\leq\frac{1}{q_{n}^{2}}.\]
* _For any_ \(n\geq 1\)_,_ \(q_{n}\geq 2^{(n-1)/2}\) _and_ \[1\leq\frac{q_{n+m}(a_{1},\ldots,a_{n},b_{1},\ldots,b_{m})}{q_{n}(a_{1},\ldots,a_{ n})\cdot q_{m}(b_{1},\ldots,b_{m})}\leq 2.\]
* \[\prod_{i=1}^{n}a_{i}\leq q_{n}\leq\prod_{i=1}^{n}(a_{i}+1)\leq 2^{n}\prod_{i=1}^{ n}a_{i}.\]
* \[\frac{1}{3a_{n+1}(x)q_{n}^{2}(x)}\,<\,\Big{|}x-\frac{p_{n}(x)}{q_{n}(x)}\Big{|} =\frac{1}{q_{n}(x)(q_{n+1}(x)+T^{n+1}(x)q_{n}(x))}\,<\,\frac{1}{a_{n+1}q_{n}^{2 }(x)},\] _and for any_ \(n\geq 1\)_, the derivative of_ \(T^{n}\) _is given by_ \[(T^{n})^{\prime}(x)=\frac{(-1)^{n}}{(xq_{n-1}-p_{n-1})^{2}}.\]
* _There exists a constant_ \(K>1\) _such that for almost all_ \(x\in[0,1)\)_,_ \[q_{n}(x)\leq K^{n},\text{ for all $n$ sufficiently large}.\]
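The recursion (2.1) and the approximation bounds in Proposition 2.3 are easy to check numerically. The following short Python sketch (ours, for illustration only; floating-point arithmetic limits how many partial quotients can be computed reliably, and the test point \(\pi-3\) is arbitrary) computes partial quotients via the Gauss map, builds \(p_{n},q_{n}\) from (2.1), and verifies the bound \(\frac{1}{3a_{n+1}q_{n}^{2}}<|x-p_{n}/q_{n}|<\frac{1}{a_{n+1}q_{n}^{2}}\).

```python
# Minimal numerical check (ours) of the recursion (2.1) and Proposition 2.3(4).
from math import floor, pi

def partial_quotients(x: float, n: int) -> list:
    """First n partial quotients of x in (0,1), iterating the Gauss map T."""
    a = []
    for _ in range(n):
        a.append(floor(1.0 / x))
        x = 1.0 / x - floor(1.0 / x)          # x -> T(x)
    return a

def convergents(a: list) -> list:
    """(p_k, q_k) from (2.1) with p_{-1}=1, q_{-1}=0, p_0=0, q_0=1."""
    p_prev, q_prev, p_cur, q_cur = 1, 0, 0, 1
    out = []
    for ak in a:
        p_prev, p_cur = p_cur, ak * p_cur + p_prev
        q_prev, q_cur = q_cur, ak * q_cur + q_prev
        out.append((p_cur, q_cur))
    return out

x = pi - 3                                     # continued fraction [7, 15, 1, 292, ...]
a = partial_quotients(x, 6)
for k, (p, q) in enumerate(convergents(a)[:-1], start=1):
    err = abs(x - p / q)
    a_next = a[k]                              # this is a_{k+1}
    assert 1.0 / (3 * a_next * q * q) < err < 1.0 / (a_next * q * q)
    print(k, (p, q), err)
```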
Let \(\mu\) be the Gauss measure given by
\[d\mu=\frac{1}{(1+x)\log 2}dx.\]
It is clear that \(\mu\) is \(T\)-invariant and equivalent to Lebesgue measure \(\mathcal{L}\). The next proposition concerns the position of a cylinder in \([0,1)\).
**Proposition 2.4** ([11, Khintchine]).: _Let \(I_{n}=I_{n}(a_{1},\ldots,a_{n})\) be a cylinder of order \(n\), which is partitioned into sub-cylinders \(\{I_{n+1}(a_{1},\ldots,a_{n},a_{n+1}):a_{n+1}\in\mathbb{N}\}\). When \(n\) is odd, these sub-cylinders are positioned from left to right, as \(a_{n+1}\) increases from 1 to \(\infty\); when \(n\) is even, they are positioned from right to left._
The following result is due to Luczak [14].
**Lemma 2.5** ([14, Luczak]).: _For any \(b,c>1\), the sets_
\[\left\{x\in[0,1):a_{n}(x)\geq c^{b^{n}}\text{ for infinitely many }n\in\mathbb{N}\right\},\] \[\left\{x\in[0,1):a_{n}(x)\geq c^{b^{n}}\text{ for all }\ n\geq 1\right\},\]
_have the same Hausdorff dimension \(\frac{1}{b+1}\)._
### Pressure function
When dealing with Hausdorff dimension problems in non-linear dynamical systems, pressure functions and other tools from the thermodynamic formalism are very useful. The concept of a general pressure function was introduced by Ruelle in [20] as a generalisation of entropy which describes the exponential growth rate of ergodic sums. We are interested in a way of obtaining the Hausdorff dimension of certain sets using pressure functions. A method in [19, Theorem 2.2.1] can be used to calculate the Hausdorff dimension of self-similar sets for linear systems. As for the non-linear setting, the relation between Hausdorff dimension and pressure functions is given in [19] as the corresponding generalisation of Moran [18]. More details and context on pressure functions can be found in [15, 16, 17]. We use the fact that the pressure function with a continuous potential can be approximated by the pressure functions restricted to subsystems in continued fractions.
Let \(\mathcal{A}\subset\mathbb{N}\) be a finite or infinite set and define
\[X_{\mathcal{A}}=\{x\in[0,1):a_{n}(x)\in\mathcal{A}\text{ for all }n\geq 1\}.\]
Then \((X_{\mathcal{A}},T)\) is a subsystem of \(([0,1),T)\) where \(T\) is a Gauss map. Given any real function \(\psi:[0,1)\to\mathbb{R}\), the pressure function restricted to the system \((X_{\mathcal{A}},T)\) is defined by
\[\mathsf{P}_{\mathcal{A}}(T,\psi):=\lim_{n\to\infty}\frac{1}{n}\log\sum_{a_{1},\ldots,a_{n}\in\mathcal{A}}\sup_{x\in X_{\mathcal{A}}}e^{S_{n}\psi([a_{1},\ldots,a_{n}+x])}, \tag{2.2}\]
where \(S_{n}\psi(x)\) denotes the ergodic sum \(\psi(x)+\cdots+\psi(T^{n-1}x)\). For simplicity, we denote \(\mathsf{P}_{\mathbb{N}}(T,\psi)\) by \(\mathsf{P}(T,\psi)\) when \(\mathcal{A}=\mathbb{N}\). We note that the supremum in equation (2.2) can be removed if \(\psi\) satisfies the following continuity property. For each \(n\geq 1\), the \(n^{\text{th}}\) variation of \(\psi\) is denoted by
\[\operatorname{Var}_{n}(\psi):=\sup\Big{\{}|\psi(x)-\psi(y)|:I_{n}(x)=I_{n}(y) \Big{\}},\]
where \(I_{n}(x)\) denotes the cylinder of order \(n\) containing \(x\), defined as in Section 2.2.
The following result [13, Proposition 2.4] ensures the existence of the limit in (2.2).
**Proposition 2.6** ([13, Li-Wang-Wu-Xu]).: _Let \(\psi:[0,1)\to\mathbb{R}\) be a real function with \(\operatorname{Var}_{1}(\psi)<\infty\) and \(\operatorname{Var}_{n}(\psi)\to 0\) as \(n\to\infty\). Then the limit defining \(\mathsf{P}_{\mathcal{A}}(T,\psi)\) exists and the value of \(\mathsf{P}_{\mathcal{A}}(T,\psi)\) remains the same even without taking supremum over \(x\in X_{\mathcal{A}}\) in (2.2)._
When the system \(([0,1),T)\) is approximated by its subsystems \((X_{\mathcal{A}},T)\), the pressure function enjoys the following continuity property in the setting of continued fractions. A detailed proof can be found in [6, Proposition 2] or [13].
**Proposition 2.7** ([6, Hanus-Mauldin-Urbanski]).: _Let \(\psi:[0,1)\to\mathbb{R}\) be a real function with \(\operatorname{Var}_{1}(\psi)<\infty\) and \(\operatorname{Var}_{n}(\psi)\to 0\) as \(n\to\infty\). We have_
\[\mathsf{P}_{\mathbb{N}}(T,\psi)=\sup\{\mathsf{P}_{\mathcal{A}}(T,\psi): \mathcal{A}\text{ is a finite subset of }\mathbb{N}\}.\]
Now we consider the specific potentials,
\[\psi_{1}(x)=-s\log|T^{\prime}(x)|-s\log B\]
and
\[\psi_{2}(x)=-s\log|T^{\prime}(x)|-s\log B_{1}+(1-s)\log B_{2}\]
for some \(1<B,B_{1},B_{2}<\infty\) and \(s\geq 0\). Note that if we let \(B=\beta_{0}\) and \(B_{1}=\beta_{i},B_{2}=\beta_{i-1}\) for \(i=1,\ldots,m-1\), then we obtain the potential functions used in the formulation of the main result. It is clear that \(\psi_{1}\) and \(\psi_{2}\) satisfy the variation condition, so that Proposition 2.7 applies.
Thus, the pressure function (2.2) with potential \(\psi_{1}\) is represented by
\[\mathsf{P}_{\mathcal{A}}\Big{(}T,-s(\log B+\log|T^{\prime}(x)|) \Big{)} =\lim_{n\to\infty}\frac{1}{n}\log\sum_{a_{1},\ldots,a_{n}\in \mathcal{A}}e^{S_{n}(-s(\log B+\log|T^{\prime}(x)|))}\] \[=\lim_{n\to\infty}\frac{1}{n}\log\sum_{a_{1},\ldots,a_{n}\in \mathcal{A}}\left(\frac{1}{B^{n}q_{n}^{2}}\right)^{s},\]
where we also used Proposition 2.6. The last equality holds by
\[S_{n}(-s(\log B+\log|T^{\prime}(x)|))=-ns\log B-s\log q_{n}^{2}.\]
which is easy to check by Proposition 2.3. As before, we obtain the pressure function with potential \(\psi_{2}\) by
\[\mathsf{P}_{\mathcal{A}}(T,-s\log|T^{\prime}(x)|-s\log B_{1}+(1- s)\log B_{2}) =\lim_{n\to\infty}\frac{1}{n}\log\sum_{a_{1},\ldots,a_{n}\in \mathcal{A}}e^{S_{n}(-s\log|T^{\prime}(x)|-s\log B_{1}+(1-s)\log B_{2})}\] \[=\lim_{n\to\infty}\frac{1}{n}\log\sum_{a_{1},\ldots,a_{n}\in \mathcal{A}}\left(\frac{1}{B_{1}^{n}q_{n}^{2}}\right)^{s}B_{2}^{(1-s)n}.\]
For any \(n\geq 1\) and \(s\geq 0\), we write
\[f_{n}^{\left(1\right)}\left(s\right) :=\sum_{a_{1},\ldots,a_{n}\in\mathcal{A}}\frac{1}{\left(B^{n}q_{n} ^{2}\right)^{s}},\] \[f_{n}^{\left(2\right)}\left(s\right) :=\sum_{a_{1},\ldots,a_{n}\in\mathcal{A}}\left(\frac{1}{B_{1}^{n }q_{n}^{2}}\right)^{s}B_{2}^{\left(1-s\right)n}.\]
and denote
\[s_{n,B}\left(\mathcal{A}\right) =\inf\left\{s\geq 0:f_{n}^{\left(1\right)}\left(s\right)\leq 1 \right\}, g_{n,B_{1},B_{2}}\left(\mathcal{A}\right)=\inf\left\{s\geq 0:f_{n}^{ \left(2\right)}\left(s\right)\leq 1\right\},\] \[s_{B}(\mathcal{A}) =\inf\left\{s\geq 0:\mathsf{P}_{\mathcal{A}}(T,\psi_{1})\leq 0 \right\}, g_{B_{1},B_{2}}(\mathcal{A})=\inf\{s\geq 0:\mathsf{P}_{\mathcal{A}}(T, \psi_{2})\leq 0\},\] \[s_{B}(\mathbb{N}) =\inf\left\{s\geq 0:\mathsf{P}(T,\psi_{1})\leq 0\right\}, g_{B_{1},B_{2}}(\mathbb{N})=\inf\left\{s\geq 0:\mathsf{P}(T, \psi_{2})\leq 0\right\}.\]
If \(\mathcal{A}\subset\mathbb{N}\) is a finite set, then by [23] it is straightforward to check that both \(f_{n}^{(i)}(s)\) and \(\mathsf{P}_{\mathcal{A}}(T,\psi_{i})\) for \(i=1,2\) are monotonically decreasing and continuous with respect to \(s\). Thus, \(s_{n,B}(\mathcal{A})\), \(s_{B}(\mathcal{A})\), \(g_{n,B_{1},B_{2}}(\mathcal{A})\) and \(g_{B_{1},B_{2}}(\mathcal{A})\) are, respectively, the unique solutions of \(f_{n}^{(1)}(s)=1\), \(\mathsf{P}_{\mathcal{A}}(T,\psi_{1})=0\), \(f_{n}^{(2)}(s)=1\) and \(\mathsf{P}_{\mathcal{A}}(T,\psi_{2})=0.\) For simplicity, when \(\mathcal{A}=\{1,2,\ldots,M\}\) for some \(M>0\), we write \(s_{n,B}(M)\) for \(s_{n,B}(\mathcal{A})\), \(s_{B}(M)\) for \(s_{B}(\mathcal{A})\), \(g_{n,B_{1},B_{2}}(M)\) for \(g_{n,B_{1},B_{2}}(\mathcal{A})\) and \(g_{B_{1},B_{2}}(M)\) for \(g_{B_{1},B_{2}}(\mathcal{A})\). When \(\mathcal{A}=\mathbb{N}\), we write \(s_{n,B}\) for \(s_{n,B}(\mathbb{N})\), \(s_{B}\) for \(s_{B}(\mathbb{N})\), \(g_{n,B_{1},B_{2}}\) for \(g_{n,B_{1},B_{2}}(\mathbb{N})\) and \(g_{B_{1},B_{2}}\) for \(g_{B_{1},B_{2}}(\mathbb{N})\). As a consequence, we have
**Corollary 2.8**.: _For any integer \(M\in\mathbb{N},\)_
\[\lim_{n\to\infty}s_{n,B}(M)=s_{B}(M),\ \ \lim_{M\to\infty}s_{B}(M)=s_{B},\ \ \lim_{n\to\infty}g_{n,B_{1},B_{2}}(M)=g_{B_{1},B_{2}}(M),\ \ \lim_{M\to\infty}g_{B_{1},B_{2}}(M)=g_{B_{1},B_{2}},\]
_where \(s_{B}\) and \(g_{B_{1},B_{2}}\) are defined in (4.2) and (4.3) respectively. Note that \(s_{B}\) and \(g_{B_{1},B_{2}}\) are continuous as functions of \(B\) and of \(B_{1},B_{2}\), respectively. Moreover,_
\[\lim_{B\to 1}s_{B}=1,\ \ \lim_{B\to\infty}s_{B}=1/2.\]
Proof.: The last two equations are proved in [23, Lemma 2.6] and the others are consequences of Proposition 2.7.
As before, we will set \(B=\beta_{0}\) and \(B_{1}=\beta_{i},B_{2}=\beta_{i-1}\) for \(i=1,\ldots,m-1\) in Corollary 2.8, so that \(s_{B}\) and \(g_{B_{1},B_{2}}\) will become \(d_{0}\) and \(d_{i}\) with \(i=1,\ldots,m-1\) respectively.
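Since \(f_{n}^{(1)}\) and \(f_{n}^{(2)}\) are finite sums when \(\mathcal{A}=\{1,\ldots,M\}\), the quantities \(s_{n,B}(M)\) and \(g_{n,B_{1},B_{2}}(M)\) can be approximated numerically, and by Corollary 2.8 they approximate \(s_{B}\) and \(g_{B_{1},B_{2}}\) as \(n\) and \(M\) grow. The following Python sketch (ours; the truncation parameters \(n\), \(M\) and the values of \(B,B_{1},B_{2}\) are illustrative only) solves \(f_{n}^{(1)}(s)=1\) and \(f_{n}^{(2)}(s)=1\) by bisection.

```python
# Numerical sketch (ours, illustrative parameters): approximate s_{n,B}(M) and
# g_{n,B1,B2}(M) by bisection on the finite sums f_n^(1)(s) and f_n^(2)(s).
from itertools import product

def continuant(word):
    """q_n(a_1,...,a_n) computed from the recursion (2.1)."""
    q_prev, q_cur = 0, 1
    for a in word:
        q_prev, q_cur = q_cur, a * q_cur + q_prev
    return q_cur

def bisect_root(f, lo=0.0, hi=1.0, iters=60):
    """f is strictly decreasing on [lo, hi] with f(lo) > 1 > f(hi); solve f(s) = 1."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

B, B1, B2, M, n = 2.0, 4.0, 2.0, 8, 5
qs = [continuant(w) for w in product(range(1, M + 1), repeat=n)]

f1 = lambda s: sum((B ** n * q * q) ** (-s) for q in qs)
f2 = lambda s: sum((B1 ** n * q * q) ** (-s) * B2 ** ((1.0 - s) * n) for q in qs)

print("s_{n,B}(M)     ~", bisect_root(f1))    # approximates s_B(M), cf. Corollary 2.8
print("g_{n,B1,B2}(M) ~", bisect_root(f2))    # approximates g_{B1,B2}(M)
```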
## 3. Hausdorff dimension of \(S_{m}(A_{0},\ldots,A_{m-1})\).
The proof of Theorem 1.1 consists of two parts, the upper bound and the lower bound. For notational simplicity, we take \(c_{0}=\cdots=c_{m-1}=1\) and the other case can be done with appropriate modifications. That is, we will be dealing with the set
\[S_{m}(A_{0},\ldots,A_{m-1})=\left\{x\in\left[0,1\right):\,A_{i}^{n}\leq a_{n+i} (x)<2A_{i}^{n},0\leq i\leq m-1,\,\text{for infinitely many}\ n\in\mathbb{N}\right\}.\]
### Upper bound
First, for each \(n\geq 1\) and \((a_{1},\ldots,a_{n-1})\in\mathbb{N}^{n-1}\) define
\[F_{n} =\left\{x\in\left[0,1\right):\,A_{i}^{n}\leq a_{n+i}(x)<2A_{i}^{n},\,0\leq i\leq m-1\right\}\] \[=\bigcup_{a_{1},\ldots,a_{n-1}\in\mathbb{N}}\left\{x\in\left[0,1 \right):a_{j}(x)=a_{j},1\leq j<n,A_{i}^{n}\leq a_{n+i}(x)<2A_{i}^{n},\,0\leq i \leq m-1\right\}\] \[:=\bigcup_{a_{1},\ldots,a_{n-1}\in\mathbb{N}}F_{n}(a_{1},\ldots,a_ {n-1}).\]
Then
\[S_{m}(A_{0},\ldots,A_{m-1})=\bigcap_{N=1}^{\infty}\bigcup_{n=N}^{\infty}F_{n}= \bigcap_{N=1}^{\infty}\bigcup_{n=N}^{\infty}\bigcup_{a_{1},\ldots,a_{n-1}\in \mathbb{N}}F_{n}(a_{1},\ldots,a_{n-1}).\]
There are \(m\) potential optimal covers for \(F_{n}\) for each \(n\geq N\). Define
\[J_{n-1}(a_{1},\ldots,a_{n-1})=\bigcup_{A_{0}^{n}\leq a_{n}<2A_{0}^{n}}I_{n}(a_{ 1},\ldots,a_{n}).\]
Next, for any \(1\leq i\leq m-1\) and any \((a_{n},\ldots,a_{n-1+i})\) with \(A_{j}^{n}\leq a_{n+j}<2A_{j}^{n}\) for all \(0\leq j\leq i-1\), define
\[J_{n-1+i}(a_{1},\ldots,a_{n-1+i})=\bigcup_{A_{i}^{n}\leq a_{n+i}<2A_{i}^{n}}I_{ n+i}(a_{1},\ldots,a_{n+i}).\]
Then, by using Proposition 2.3 and Proposition 2.4 recursively, for every \(0\leq i\leq m-1\) we obtain
\[|J_{n-1+i}(a_{1},\ldots,a_{n-1+i})| =\sum_{A_{i}^{n}\leq a_{n+i}<2A_{i}^{n}}\left|\frac{p_{n+i}}{q_{n +i}}-\frac{p_{n+i}+p_{n-1+i}}{q_{n+i}+q_{n-1+i}}\right|\] \[\asymp\frac{1}{A_{i}^{n}q_{n-1+i}^{2}}\] \[\asymp\frac{1}{\beta_{i}^{n}\cdot\beta_{i-1}^{n}q_{n-1}^{2}}.\]
Therefore, using the covering by the sets \(J_{n-1}\), for any \(\varepsilon>0\) the \((d_{0}+2\varepsilon)\)-dimensional Hausdorff measure of \(S_{m}(A_{0},\ldots,A_{m-1})\) can be estimated as
\[\mathcal{H}^{d_{0}+2\varepsilon}(S_{m}(A_{0},\ldots,A_{m-1})) \leq\liminf_{N\to\infty}\sum_{n=N}^{\infty}\sum_{a_{1},\ldots,a_{n- 1}}|J_{n-1}(a_{1},\ldots,a_{n-1})|^{d_{0}+2\varepsilon}\] \[\leq\liminf_{N\to\infty}\sum_{n=N}^{\infty}\frac{1}{2^{(n-1) \varepsilon}}\sum_{a_{1},\ldots,a_{n-1}}\left(\frac{1}{\beta_{0}^{n}q_{n-1}^{ 2}}\right)^{d_{0}}\] \[\leq\liminf_{N\to\infty}\sum_{n=N}^{\infty}\frac{1}{2^{(n-1) \varepsilon}}<\infty,\]
where we used that \(\beta_{-1}=1\). Hence, from the definition of Hausdorff dimension it follows that
\[\dim_{\mathrm{H}}S_{m}\leq d_{0}. \tag{3.1}\]
For the coverings by \(J_{n-1+i}\) with \(1\leq i\leq m-1\), the \((d_{i}+2\varepsilon)\)-dimensional Hausdorff measure of \(S_{m}(A_{0},\ldots,A_{m-1})\) can be estimated as
\[\mathcal{H}^{d_{i}+2\varepsilon}(S_{m}(A_{0},\ldots,A_{m-1}))\leq\liminf_{N\to\infty}\sum_{n=N}^{\infty}\sum_{a_{1},\ldots,a_{n-1}}\sum_{\begin{subarray}{c}A_{j}^{n}\leq a_{n+j}<2A_{j}^{n}\\ \text{for all }0\leq j\leq i-1\end{subarray}}|J_{n-1+i}(a_{1},\ldots,a_{n-1+i})|^{d_{i}+2\varepsilon}\] \[\leq\liminf_{N\to\infty}\sum_{n=N}^{\infty}\sum_{a_{1},\ldots,a_{n-1}}\sum_{\begin{subarray}{c}A_{j}^{n}\leq a_{n+j}<2A_{j}^{n}\\ \text{for all }0\leq j\leq i-1\end{subarray}}\left(\frac{1}{\beta_{i}^{n}\cdot\beta_{i-1}^{n}q_{n-1}^{2}}\right)^{d_{i}+2\varepsilon}\] \[\leq\liminf_{N\to\infty}\sum_{n=N}^{\infty}\sum_{a_{1},\ldots,a_{n-1}}\beta_{i-1}^{n}\left(\frac{1}{\beta_{i}^{n}\cdot\beta_{i-1}^{n}q_{n-1}^{2}}\right)^{d_{i}+2\varepsilon}\] \[=\liminf_{N\to\infty}\sum_{n=N}^{\infty}\sum_{a_{1},\ldots,a_{n-1}}\frac{\beta_{i-1}^{(1-d_{i})n}}{(\beta_{i}^{n}q_{n-1}^{2})^{d_{i}}}\cdot\frac{1}{(\beta_{i}^{n}\cdot\beta_{i-1}^{n}q_{n-1}^{2})^{2\varepsilon}}\] \[\leq\liminf_{N\to\infty}\sum_{n=N}^{\infty}\frac{1}{2^{(n-1)\varepsilon}}<\infty.\]
Thus, the upper bound is obtained immediately by combining the latter with (3.1), so
\[\dim_{\mathrm{H}}S_{m}\leq\min_{0\leq i\leq m-1}d_{i}.\]
### Lower bound
In this subsection we will determine the lower bound for the dimension of \(S_{m}(A_{0},\ldots,A_{m-1})\) by using the mass distribution principle (Proposition 2.2).
For convenience, let us define some dimensional numbers. For any integers \(N,M\) and \(0\leq i\leq m-1\) define the dimensional number \(\mathbf{d}_{i}=\mathbf{d}_{i,N}(M)\) as the solution to
\[\sum_{1\leq a_{1},\ldots,a_{N}\leq M}\frac{\beta_{i-1}^{N}}{((\beta_{i}\beta_ {i-1})^{N}q_{N}^{2})^{\mathbf{d}_{i}}}=1. \tag{3.2}\]
More specifically, each equation has a unique solution and, by Corollary 2.8,
\[\lim_{M\to\infty}\lim_{N\to\infty}\mathbf{d}_{i,N}(M)=d_{i}.\]
Take a sequence of large sparse integers \(\{\ell_{k}\}_{k\geq 1}\), say, \(\ell_{k}\gg e^{\ell_{1}+\cdots+\ell_{k-1}}.\) For any \(\varepsilon>0\), choose integers \(N,M\) sufficiently large such that
\[\mathbf{d}_{i}>d_{i}-\varepsilon,\qquad\left(2^{(N-1)/2}\right)^{\varepsilon /2}\geq 2^{100}.\]
Let
\[n_{k}-n_{k-1}=\ell_{k}N+m,\,\forall k\geq 1, \tag{3.3}\]
such that
\[\left(2^{\ell_{k}(N-1)/2}\right)^{\frac{\varepsilon}{2}}\geq\prod_{t=1}^{k-1} (M+1)^{\ell_{t}N}(\beta_{m-1})^{\sum_{i=1}^{t}\ell_{i}N+t}.\]
At this point, define a subset of \(S_{m}(A_{0},\ldots,A_{m-1})\) as
\[E=E_{M,N}=\{x\in[0,1):\,A_{i}^{n_{k}}\leq a_{n_{k}+i}(x)<2A_{i}^{n_{k}}\text{ for all }k\geq 1,\text{ for all }0\leq i\leq m-1\\ \text{ and }a_{n}(x)\in\{1,\ldots,M\}\text{ for other }n\in\mathbb{N}\}. \tag{3.4}\]
Next we proceed to make use of a symbolic space. Define \(D_{0}=\{\varnothing\}\), and for any \(n\geq 1\), define
\[D_{n}=\Bigg{\{}(a_{1},\ldots,a_{n})\in\mathbb{N}^{n}:A_{i}^{n_{k }}\leq a_{n_{k}+i}<2A_{i}^{n_{k}},\text{ for all }0\leq i\leq m-1,k\geq 1\text{ with }n_{k}+i\leq n;\] \[\text{ and }a_{j}\in\{1,\ldots,M\},\text{ for other }j\leq n \Bigg{\}}.\]
This set is just the collection of the prefixes of the points in \(E\). Moreover, the collection of finite words of length \(N\) is denoted by
\[\mathcal{U}=\{w=(\sigma_{1},\ldots,\sigma_{N}):1\leq\sigma_{i}\leq M,1\leq i \leq N\}\]
and, for the remainder of the paper we always use \(w\) to denote an element from \(\mathcal{U}\).
#### 3.2.1. Cantor structure of \(E\)
In this subsection, we depict the structure of \(E\) with the help of symbolic space as mentioned above. For any \((a_{1},\ldots,a_{n})\in D_{n}\), define
\[J_{n}(a_{1},\ldots,a_{n})=\bigcup_{a_{n+1}:(a_{1},\ldots,a_{n},a_{n+1})\in D_{n +1}}I_{n+1}(a_{1},\ldots,a_{n},a_{n+1})\]
and call it a _basic cylinder_ of order \(n\). More precisely, for any \(k\geq 0\)
* when \(n_{k}+m-1\leq n<n_{k+1}-1\) (by viewing \(n_{0}=0\)), \[J_{n}(a_{1},\ldots,a_{n})=\bigcup_{1\leq a_{n+1}\leq M}I_{n+1}(a_{1},\ldots,a_ {n},a_{n+1}).\]
* when \(n=n_{k+1}+i-1\) for some \(0\leq i\leq m-1\), \[J_{n}(a_{1},\ldots,a_{n})=\bigcup_{A_{i}^{n_{k+1}}\leq a_{n+1}<2A_{i}^{n_{k+1}}}I_{n+1}(a_{1},\ldots,a_{n},a_{n+1}).\]
Then we define the level \(n\) of the Cantor set \(E\) as
\[\mathcal{F}_{n}=\bigcup_{(a_{1},\ldots,a_{n})\in D_{n}}J_{n}(a_{1},\ldots,a_{ n}).\]
Consequently, the Cantor structure of \(E\) is described as follows
\[E=\bigcap_{n=1}^{\infty}\mathcal{F}_{n}=\bigcap_{n=1}^{\infty}\bigcup_{(a_{1},\ldots,a_{n})\in D_{n}}J_{n}(a_{1},\ldots,a_{n}).\]
We observe that every element \(x\in E\) can be written as
\[x=[w_{1}^{(1)},\ldots,w_{\ell_{1}}^{(1)},a_{n_{1}},\ldots a_{n_{1}+m-1},w_{1} ^{(2)},\ldots,w_{\ell_{2}}^{(2)},a_{n_{2}},\ldots,a_{n_{2}+m-1},\ldots,w_{1}^{ (k)},\ldots,w_{\ell_{k}}^{(k)},a_{n_{k}},\ldots,a_{n_{k}+m-1},\ldots],\]
where
\[w_{j}^{(p)}\in\mathcal{U}\ \text{ for all }p\geq 1,\ 1\leq j\leq\ell_{p},\quad\text{and}\quad A_{i}^{n_{k}}\leq a_{n_{k}+i}<2A_{i}^{n_{k}}\ \text{ for all }k\geq 1,\ 0\leq i\leq m-1.\]
Then the length of cylinder set can be estimated as follows.
**Lemma 3.1** (Length estimation).: _Let \(x\in E\) and \(n_{k-1}+m-1\leq n<n_{k}+m-1\) for some \(k\geq 1\)._
* _for_ \(n_{k-1}+m-1\leq n<n_{k}-1=n_{k-1}+m-1+\ell_{k}N\)_,_ \[\frac{1}{8q_{n}^{2}}\leq|J_{n}(x)|\leq\frac{1}{q_{n}^{2}}.\]
* _for_ \(n=n_{k}-1+i\) _for some_ \(0\leq i\leq m-1\)_, i.e. for_ \(n_{k}-1\leq n\leq n_{k}+m-2\)_,_ \[|J_{n_{k}-1+i}(x)|\geq\frac{1}{6\cdot 4^{i}\beta_{i}^{n_{k}}\beta_{i-1}^{n_{k}}q_{n_{k}-1}^{2}}\geq\frac{1}{6\cdot 4^{i}\beta_{i}^{n_{k}}\beta_{i-1}^{n_{k}}}\left(\prod_{j=1}^{\ell_{k}}\frac{1}{q_{N}(w_{j}^{(k)})}\right)^{2(1+\varepsilon)}.\]
* _for_ \(n=n_{k}+m-1\)_,_ \[|J_{n_{k}+m-1}(x)|\geq\frac{1}{6\cdot 4^{i}A_{m-1}^{n_{k}}\beta_{m-1}^{n_{k}} \beta_{m-2}^{n_{k}}q_{n_{k}-1}^{2}}.\]
* _for each_ \(1\leq\ell<\ell_{k+1}\)_,_ (3.5) \[|J_{n_{k}+m-1+\ell N}(x)|\geq\frac{1}{2^{3}}\cdot\left(\frac{1}{2^{2\ell}}\cdot\prod_{i=1}^{\ell}\frac{1}{q_{N}^{2}(w_{i}^{(k+1)})}\right)\cdot\frac{1}{q_{n_{k}+m-1}^{2}}\geq\left(\prod_{i=1}^{\ell}\frac{1}{q_{N}^{2}(w_{i}^{(k+1)})}\right)^{1+\varepsilon}\cdot\frac{1}{q_{n_{k}+m-1}^{2}}.\]
* _for_ \(n_{k}+m-1+(\ell-1)N<n<n_{k}+m-1+\ell N\) _with_ \(1\leq\ell\leq\ell_{k+1}\)_,_ (3.6) \[|J_{n}(x)|\geq c\cdot|J_{n_{k}+m-1+(\ell-1)N}(x)|,\] _where_ \(c=c(M,N)\) _is an absolute constant._
### Mass distribution
In this subsection, we define \(m\) mass distributions along the basic cylinders \(J_{n}(x)\) containing \(x\). These mass distributions can then be extended to \(m\) probability measures supported on \(E\) by the Carathéodory extension theorem. Let us now distribute the mass by induction. For \(n\leq n_{1}+m-1\),
1. when \(n=\ell N\) for each \(1\leq\ell\leq\ell_{1}\), for \(0\leq j\leq m-1\) define \[\mu_{j}(J_{\ell N}(x))=\prod_{i=1}^{\ell}\frac{\beta_{j-1}^{N}}{q_{N}(w_{i}^{(1)})^{2\mathbf{d}_{j}}\cdot(\beta_{j}\beta_{j-1})^{\mathbf{d}_{j}N}}.\] By (3.2), these block weights sum to \(1\) over all words in \(\mathcal{U}\), so each \(\mu_{j}\) distributes unit mass (see the numerical sketch after this construction). We note that the measures \(\mu_{j}\) for \(0\leq j\leq m-1\) can be defined on all basic cylinders of order \(\ell N\) since \(x\) is arbitrary.
2. when \((\ell-1)N<n<\ell N\) for some \(1\leq\ell\leq\ell_{1}\) and for all \(0\leq j\leq m-1\), define \[\mu_{j}(J_{n}(x))=\sum_{J_{\ell N}\subset J_{n}(x)}\mu_{j}(J_{\ell N}(x)).\] The consistency requirement then determines the measure of the other basic cylinders of order less than \(n_{1}-1\).
3. when \(n=n_{1}+i\) for each \(0\leq i\leq m-1\) and \(0\leq j\leq m-1\), define \[\mu_{j}(J_{n_{1}+i}(x))=\prod_{k=0}^{i}\frac{1}{A_{k}^{n_{1}}}\cdot\mu_{j}(J_ {n_{1}-1}(x))=\frac{1}{\beta_{i}^{n_{1}}}\cdot\mu_{j}(J_{n_{1}-1}(x)).\] To make the proof more consistent, let us note that \[\mu_{j}(J_{n_{1}-1}(x))=\frac{1}{(\beta_{-1})^{n_{1}}}\mu_{j}(J_{n_{1}-1}(x)).\]
Assume that the measures of all basic cylinders of order at most \(n_{k}+m-1\) have been defined; we now define them for \(n_{k}+m-1<n\leq n_{k+1}+m-1\).
1. When \(n=n_{k}+m-1+\ell N\) for each \(1\leq\ell\leq\ell_{k+1}\), for \(0\leq j\leq m-1\) define (3.7) \[\mu_{j}(J_{n_{k}+m-1+\ell N}(x))=\prod_{i=1}^{\ell}\frac{\beta_{j-1}^{N}}{q_{N }(w_{i}^{(k+1)})^{2\mathbf{d}_{j}}\cdot(\beta_{j}\beta_{j-1})^{\mathbf{d}_{j} N}}\mu_{j}(J_{n_{k}+m-1}(x)).\]
2. When \(n_{k}+m-1+(\ell-1)N<n<n_{k}+m-1+\ell N\) for some \(1\leq\ell\leq\ell_{k+1}\) and for \(0\leq j\leq m-1\), define \[\mu_{j}(J_{n}(x))=\sum_{J_{n_{k}+m-1+\ell N}\subset J_{n}(x)}\mu_{j}(J_{n_{k}+m-1+\ell N}).\] Furthermore, for each measure, the measure of a basic cylinder of order \(n_{k}+m-1+(\ell-1)N\) and that of each of its offspring of order \(n_{k}+m-1+\ell N\) differ only by a multiplier. More precisely, for the measure \(\mu_{j}\) it is the term \[\frac{\beta_{j-1}^{N}}{q_{N}(w_{\ell}^{(k+1)})^{2\mathbf{d}_{j}}(\beta_{j}\beta_{j-1})^{\mathbf{d}_{j}N}}.\] Thus for each \(0\leq j\leq m-1\), there is an absolute constant \(\hat{c}>0\) such that \[\mu_{j}(J_{n}(x))\geq\hat{c}\cdot\mu_{j}\Big{(}J_{n_{k}+m-1+(\ell-1)N}(x)\Big{)},\] since the above multipliers are uniformly bounded.
3. when \(n=n_{k+1}+i\) for each \(0\leq i\leq m-1\) and \(0\leq j\leq m-1\), define (3.8) \[\mu_{j}(J_{n_{k+1}+i}(x))=\prod_{l=0}^{i}\frac{1}{A_{l}^{n_{k+1}}}\cdot\mu_{j}(J_{n_{k+1}-1}(x))=\frac{1}{\beta_{i}^{n_{k+1}}}\cdot\mu_{j}(J_{n_{k+1}-1}(x)).\]
4. As for the other orders, to ensure the consistency property, let the measure of a basic cylinder equal the sum of the measures of its offspring. For each integer \(n\), the relation between the measures of a basic cylinder and its predecessor is as in the case \(n_{k+1}+m-1+(\ell-1)N<n<n_{k+1}+m-1+\ell N\): for each \(0\leq j\leq m-1\), there is a constant \(\hat{c}>0\) such that (3.9) \[\mu_{j}(J_{n+1}(x))\geq\hat{c}\cdot\mu_{j}(J_{n}(x)).\]
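As a sanity check, the block weight \(\beta_{j-1}^{N}/\big(q_{N}(w)^{2\mathbf{d}_{j}}(\beta_{j}\beta_{j-1})^{\mathbf{d}_{j}N}\big)\) used above sums to \(1\) over all words \(w\in\mathcal{U}\) exactly when \(\mathbf{d}_{j}\) solves (3.2), so each \(\mu_{j}\) really distributes unit mass along the free blocks. The Python sketch below (ours; the values of \(N,M,\beta_{j},\beta_{j-1}\) are illustrative) computes \(\mathbf{d}_{j,N}(M)\) by bisection and verifies this normalization.

```python
# Sanity check (ours, illustrative parameters) of the normalization behind mu_j:
# the block weights beta_{j-1}^N / (q_N(w)^(2d) * (beta_j*beta_{j-1})^(d*N))
# sum to 1 over w in {1,...,M}^N exactly when d solves equation (3.2).
from itertools import product

def continuant(word):
    q_prev, q_cur = 0, 1
    for a in word:
        q_prev, q_cur = q_cur, a * q_cur + q_prev
    return q_cur

def weight_sum(d, N, M, beta_j, beta_jm1):
    return sum(beta_jm1 ** N / (continuant(w) ** (2 * d) * (beta_j * beta_jm1) ** (d * N))
               for w in product(range(1, M + 1), repeat=N))

def solve_d(N, M, beta_j, beta_jm1, iters=60):
    lo, hi = 0.0, 1.0                          # the sum is strictly decreasing in d
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if weight_sum(mid, N, M, beta_j, beta_jm1) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

N, M, beta_j, beta_jm1 = 4, 6, 8.0, 2.0
d = solve_d(N, M, beta_j, beta_jm1)
print("d_{j,N}(M) ~", d, "  total mass ~", weight_sum(d, N, M, beta_j, beta_jm1))
```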
### Hölder exponent of \(\mu\) for basic cylinders
We need to compare the measure and length of \(J_{n}(x)\). Recall the definition (3.2) of \(\mathbf{d}_{i}\). For each \(N,M\) we can arrange \(\mathbf{d}_{i}\)'s in the ascending order. Note that if \(\mathbf{d}_{j}\leq\mathbf{d}_{k}\), then we have
\[\sum_{1\leq a_{1},\ldots,a_{N}\leq M}\frac{1}{q_{N}^{2\mathbf{d}_{j}}}\geq \sum_{1\leq a_{1},\ldots,a_{N}\leq M}\frac{1}{q_{N}^{2\mathbf{d}_{k}}}\]
and by definition (3.2) we get
\[\frac{\beta_{j-1}^{1-\mathbf{d}_{j}}}{\beta_{j}^{\mathbf{d}_{j}}}\leq\frac{ \beta_{k-1}^{1-\mathbf{d}_{k}}}{\beta_{k}^{\mathbf{d}_{k}}}.\]
Once again using the fact that \(\mathbf{d}_{j}\leq\mathbf{d}_{k}\), we obtain
\[\frac{\beta_{j-1}^{1-\mathbf{d}_{j}}}{\beta_{j}^{\mathbf{d}_{j}}}\leq\frac{ \beta_{k-1}^{1-\mathbf{d}_{j}}}{\beta_{k}^{\mathbf{d}_{j}}}. \tag{3.10}\]
Thus, if \(\mathbf{d}_{j}=\min_{0\leq k\leq m-1}\mathbf{d}_{k}\), then (3.10) holds for any \(0\leq k\leq m-1\). For every \(0\leq j\leq m-1\), we will use the measure \(\mu_{j}\) when \(\min_{0\leq k\leq m-1}\mathbf{d}_{k}=\mathbf{d}_{j}\).
1. When \(n=n_{k}-1+i\) for \(0\leq i\leq m-1\). For every \(0\leq j\leq m-1\) and for every \(0\leq i\leq m-1\), we have \[\mu_{j}(J_{n_{k}-1+i}(x)) \leq\frac{1}{\beta_{i-1}^{n_{k}}}\cdot\frac{\beta_{j-1}^{n_{k}}} {(\beta_{j}\beta_{j-1})^{\mathbf{d}_{j}n_{k}}}\prod_{i=1}^{\ell_{k}}\frac{1}{ q_{N}(w_{i}^{(k)})^{2\mathbf{d}_{j}}}\] \[\leq\frac{1}{(\beta_{i}\beta_{i-1})^{n_{k}\mathbf{d}_{j}}}\left( \frac{1}{q_{n_{k}-1}^{2}}\right)^{\frac{\mathbf{d}_{j}}{1+\varepsilon}}\leq \left(\frac{1}{(\beta_{i}\beta_{i-1})^{n_{k}}q_{n_{k}-1}^{2}}\right)^{\frac{ \mathbf{d}_{j}}{1+\varepsilon}}\] \[\leq\hat{c}_{1}\cdot|J_{n_{k}-1+i}(x)|^{\frac{\mathbf{d}_{j}}{1 +\varepsilon}}.\]
2. When \(n=n_{k}+m-1\). For every \(0\leq j\leq m-1\) we have \[\mu_{j}(J_{n_{k}+m-1}(x)) =\frac{1}{A_{m-1}^{n_{k}}}\mu_{j}(J_{n_{k}+m-2}(x))\leq\frac{1}{A _{m-1}^{n_{k}}}\cdot\hat{c}_{1}\cdot\left(\frac{1}{A_{m-1}^{n_{k}}q_{n_{k}+m- 2}^{2}}\right)^{\frac{\mathbf{d}_{j}}{1+\varepsilon}}\] \[\leq\hat{c}_{1}\cdot\left(\frac{1}{A_{m-1}^{2n_{k}}q_{n_{k}+m-2}^ {2}}\right)^{\frac{\mathbf{d}_{j}}{1+\varepsilon}}\leq\hat{c}_{2}|J_{n_{k}+m- 1}(x)|^{\frac{\mathbf{d}_{j}}{1+\varepsilon}}\leq\hat{c}_{2}\left(\frac{1}{q_{ n_{k}+m-1}^{2}}\right)^{\frac{\mathbf{d}_{j}}{1+\varepsilon}}.\]
3. When \(n=n_{k}+m-1+\ell N\) for some \(1\leq\ell<\ell_{k+1}\). Then for each \(0\leq j\leq m-1\), \[\mu_{j}(J_{n_{k}+m-1+\ell N}(x))\leq\hat{c}_{2}\prod_{i=1}^{\ell}\frac{1}{q_{N}(w_{i}^{(k+1)})^{2\mathbf{d}_{j}}}\left(\frac{1}{q_{n_{k}+m-1}^{2}}\right)^{\frac{\mathbf{d}_{j}}{1+\varepsilon}}\leq\hat{c}_{2}|J_{n_{k}+m-1+\ell N}(x)|^{\frac{\mathbf{d}_{j}}{1+\varepsilon}}.\]
4. For other \(n\), let \(1\leq\ell\leq\ell_{k}\) be the integer such that \[n_{k}+m-1+(\ell-1)N\leq n<n_{k}+m-1+\ell N.\] Recall (3.6). Then for each \(0\leq j\leq m-1\), \[\mu_{j}(J_{n}(x))\leq\mu_{j}(J_{n_{k}+m-1+(\ell-1)N}(x))\leq\hat{c}_{2}\cdot \left|J_{n_{k}+m-1+(\ell-1)N}(x)\right|^{\frac{\mathbf{d}_{j}}{1+\varepsilon}} \leq\hat{c}_{2}\cdot c\cdot\left|J_{n}(x)\right|^{\frac{\mathbf{d}_{j}}{1+ \varepsilon}}.\]
In summary, we have shown that for some absolute constant \(\hat{c}_{3}\), for any \(n\geq 1\) and \(x\in E\),
\[\mu_{j}(J_{n}(x))\leq\hat{c}_{3}\cdot|J_{n}(x)|^{\frac{\mathbf{d}_{j}}{1+\varepsilon}}\leq\hat{c}_{3}\cdot|J_{n}(x)|^{\frac{\min_{0\leq j\leq m-1}\mathbf{d}_{j}}{1+\varepsilon}}. \tag{3.11}\]
### Hölder exponent for a general ball
For simplicity, write
\[\tau=\frac{\min_{0\leq j\leq m-1}\mathbf{d}_{j}}{1+\varepsilon}.\]
The next lemma gives a minimum gap between two adjacent fundamental cylinders.
**Lemma 3.2** (Gap estimation).: _Denote by \(G_{n}(a_{1},\ldots,a_{n})\) the gap between \(J_{n}(a_{1},\ldots,a_{n})\) and other basic cylinders of order \(n\). Then_
\[G_{n}(a_{1},\ldots,a_{n})\geq\frac{1}{M}\cdot|J_{n}(a_{1},\ldots,a_{n})|.\]
Proof.: The proof of this lemma is derived from the positions of the cylinders in Proposition 2.4. We omit the details and refer the reader to its analogous proof in [7, Lemma 5.3].
Then for any \(x\in E\) and \(r\) small enough, there is a unique integer \(n\) such that
\[G_{n+1}(x)\leq r<G_{n}(x).\]
This implies that the ball \(B(x,r)\) can only intersect one basic cylinder \(J_{n}(x)\), and so all the basic cylinders of order \(n+1\) which \(B(x,r)\) can intersect are contained in \(J_{n}(x)\). Note that \(n_{k-1}+m-1\leq n<n_{k}+m-1\) for some \(k\geq 1\).
1. For \(n_{k-1}+m-1\leq n<n_{k}-1\), by (3.9) and (3.11), it follows that for each \(0\leq j\leq m-1\), \[\mu_{j}(B(x,r)) \leq\mu_{j}(J_{n}(x))\leq c\cdot\mu_{j}(J_{n+1}(x))\leq c\cdot\hat{c}_{3}\cdot\big{|}J_{n+1}(x)\big{|}^{\tau}\] \[\leq\hat{c}\cdot\hat{c}_{3}\cdot M\cdot(G_{n+1}(x))^{\tau}\leq\hat{c}\cdot\hat{c}_{3}\cdot M\cdot r^{\tau}.\]
2. For \(n=n_{k}-1+i\) for \(0\leq i\leq m-1\), the ball \(B(x,r)\) can only intersect one basic cylinder \(J_{n_{k}-1+i}(x)\) of order \(n_{k}-1+i\). Next, the number of basic cylinders of order \(n_{k}+i\) which are contained in \(J_{n_{k}-1+i}(x)\) and intersect the ball can be calculated as follows. We write \(x=[a_{1}(x),a_{2}(x),\ldots]\) and observe that any basic cylinder \(J_{n_{k}+i}(a_{1},\ldots,a_{n_{k}+i})\) is contained in the cylinder \(I_{n_{k}+i}(a_{1},\ldots,a_{n_{k}+i})\). Since \(A_{i}^{n_{k}}\leq a_{n_{k}+i}<2A_{i}^{n_{k}}\), the length of the cylinder \(I_{n_{k}+i}\) satisfies \[\frac{1}{q_{n_{k}+i}(q_{n_{k}+i}+q_{n_{k}-1+i})}\geq\frac{1}{2^{5}}\cdot\frac{1}{A_{i}^{2n_{k}}q_{n_{k}-1+i}^{2}}.\] We also note that the radius \(r\) is sometimes too small to cover a whole cylinder of order \(n_{k}+i\), so the exposition splits into two parts. When \[r<\frac{1}{2^{5}}\frac{1}{q_{n_{k}-1+i}^{2}A_{i}^{2n_{k}}},\] the ball \(B(x,r)\) can intersect at most three cylinders \(I_{n_{k}+i}(a_{1},\ldots,a_{n_{k}+i})\) and so at most three basic cylinders \(J_{n_{k}+i}(a_{1},\ldots,a_{n_{k}+i})\). Since each measure has the same distribution on these intervals, for \(0\leq j\leq m-1\), \[\mu_{j}(B(x,r)) \leq 3\mu_{j}(J_{n_{k}+i}(x))\leq 3\cdot\hat{c}_{3}\cdot|J_{n_{k}+i}(x)|^{\tau}\] \[\leq 3\cdot\hat{c}_{3}\cdot M\cdot G_{n_{k}+i}(x)^{\tau}\leq 3\cdot\hat{c}_{3}\cdot M\cdot r^{\tau}.\] When \[r\geq\frac{1}{2^{5}}\frac{1}{q_{n_{k}-1+i}^{2}A_{i}^{2n_{k}}},\] the number of basic cylinders of order \(n_{k}+i\) which the ball \(B(x,r)\) can intersect is at most \[2^{6}r\cdot q_{n_{k}-1+i}^{2}A_{i}^{2n_{k}}+2\leq 2^{7}\cdot r\cdot q_{n_{k}-1+i}^{2}A_{i}^{2n_{k}}.\]
Thus, for \(0\leq j\leq m-1\),
\[\mu_{j}(B(x,r)) \leq\min\left\{\mu_{j}(J_{n_{k}-1+i}(x)),\ \ 2^{7}\cdot r\cdot q_{n_{k}-1+i}^{2}A_{i}^{2n_{k}}\cdot\frac{1}{A_{i}^{n_{k}}}\cdot\mu_{j}(J_{n_{k}-1+i}(x))\right\}\] \[\leq\hat{c}_{3}\cdot\left|J_{n_{k}-1+i}\right|^{\tau}\cdot\min\left\{1,2^{7}\cdot r\cdot q_{n_{k}-1+i}^{2}A_{i}^{n_{k}}\right\}\] \[\leq\hat{c}_{3}\cdot\left(\frac{1}{q_{n_{k}-1+i}^{2}A_{i}^{n_{k}}}\right)^{\tau}\cdot\left(2^{7}\cdot r\cdot q_{n_{k}-1+i}^{2}A_{i}^{n_{k}}\right)^{\tau}\] \[=\hat{c}_{4}\cdot r^{\tau}.\]
### Conclusion
Thus, by applying the mass distribution principle (Proposition 2.2), we obtain
\[\dim_{\mathrm{H}}S_{m}(A_{0},\ldots,A_{m-1})\geq\dim_{\mathrm{H}}E\geq\frac{ \min_{0\leq i\leq m-1}\mathbf{d}_{i}}{1+\varepsilon}.\]
Since \(\varepsilon>0\) is arbitrary, by letting \(N\to\infty\) and then \(M\to\infty\), we arrive at
\[\dim_{\mathrm{H}}S_{m}(A_{0},\ldots,A_{m-1})\geq\min_{0\leq i\leq m-1}d_{i}.\]
This finishes the proof.
### Remark on Theorem 1.1
Note that in order to prove a lower bound for the Hausdorff dimension of \(S_{m}(A_{0},\ldots,A_{m-1})\), we have considered a subset of this set for which the locations of the blocks of exponentially growing partial quotients are given by a large sparse integer sequence of a specific type. Namely, we required in (3.3) that the number of partial quotients between two blocks of exponentially growing partial quotients is a multiple of \(N\). However, in some applications it is useful to have a result for sequences of a less restricted type. To prove such a result, a little extra work is needed, but the main idea of the construction is the same.
Consider an arbitrary sparse integer sequence \(\{n_{k}\}_{k\geq 1}\) and express it in the form
\[n_{1}=\ell_{1}N+(N+r_{1})\text{ and }n_{k+1}-n_{k}=m+\ell_{k+1}N+(N+r_{k+1})\text{ for all }k\geq 1,\]
where \(0\leq r_{k}<N\) for all \(k\geq 1\). Denote \(m_{k}=n_{k-1}+m+\ell_{k}N\) with \(n_{0}=-m\). Consider a set
\[\hat{E}=\hat{E}_{M}^{N}(A_{0},\ldots,A_{m-1})=\{x\in[0,1)\colon A_ {i}^{n_{k}}\leq a_{n_{k}+i}(x)<2A_{i}^{n_{k}}\text{ for all }k\geq 1,\text{ for all }0\leq i\leq m-1,\] \[a_{m_{k}+1}=\cdots=a_{n_{k}-1}=2\text{ and }a_{n}(x)\in\{1,\ldots,M\} \text{ for other }n\in\mathbb{N}\}. \tag{3.12}\]
This means that we set \(N+r_{k}\) partial quotients prior to the beginning of each block of exponentially growing partial quotients to be equal to \(2\). Now for this set the proof is exactly the same as in Theorem 1.1, except we will have to deal with those new partial quotients. This can be done by defining our measures for each \(0\leq j\leq m-1\) and for all \(m_{k}<n<n_{k}\) as
\[\mu_{j}(J_{n})=\mu_{j}(J_{m_{k}}).\]
In the end, we will get
\[\dim_{\mathrm{H}}\hat{E}\geq\frac{\min_{0\leq i\leq m-1}\mathbf{d}_{i}}{1+ \varepsilon}.\]
## 4. Applications
Our main result is very helpful for obtaining lower bounds in different setups, which is usually the hardest part in determining the Hausdorff dimension of the underlying set. Here we give some examples both of known results, for which our Theorem 1.1 could have been used to derive the same result, as well as a new problem, where our set also helps to derive an optimal lower bound.
### Known results
#### 4.1.1. Wang-Wu Theorem, [23, 2008]
The most obvious example is a well-known theorem of Wang-Wu from [23]. The authors were concerned with the set
\[F(B)=\{x\in[0,1):a_{n}(x)\geq B^{n}\,\text{ for infinitely many }n\in\mathbb{N}\}.\]
Their main result about this set is
**Theorem 4.1** ([23, Wang-Wu]).: _For any \(1\leq B<\infty\),_
\[\dim_{\rm H}F(B)=s(B):=\inf\{s\geq 0:\mathsf{P}(T,-s(\log B+\log|T^{\prime}|)) \leq 0\}.\]
To get the optimal lower bound for this setup using Theorem 1.1, one can simply let \(m=1\), \(A_{0}=B\) and \(c_{0}=1\), that is to consider the set
\[S_{1}(B)=\{x\in[0,1):B^{n}\leq a_{n}(x)<2B^{n}\,\text{ for infinitely many }n\}.\]
By Theorem 1.1,
\[\dim_{\rm H}S_{1}(B) =d_{0}=\inf\{s\geq 0:P(T,-s\log|T^{\prime}|-s\log\beta_{0}+(1-s) \log\beta_{-1})\leq 0\}\] \[=\inf\{s\geq 0:P(T,-s\log|T^{\prime}|-s\log B)\leq 0\},\]
which coincides with the result from Theorem 4.1.
#### 4.1.2. Bakhtawar-Bos-Hussain Theorem, [1, 2020]
Recall that Kleinbock-Wadleigh showed that the set \(\mathcal{E}_{2}(\Phi)\) has connections with the set of Dirichlet non-improvable numbers. In [1] the authors considered the set
\[\mathcal{F}(B):=\mathcal{E}_{2}(B)\setminus\mathcal{E}_{1}(B)=\left\{x\in[0, 1):\begin{array}{c}a_{n+1}(x)a_{n}(x)\geq B^{n}\text{ for infinitely many }n\in\mathbb{N}\text{ and }\\ a_{n+1}(x)<B^{n}\text{ for all sufficiently large }n\in\mathbb{N}\end{array} \right\}.\]
They proved that the difference set \(\mathcal{F}(B)\) has positive Hausdorff dimension. More precisely they proved
**Theorem 4.2**.: (4.1) \[\dim_{\rm H}\mathcal{F}(B)=t_{B}:=\inf\{s\geq 0:P(T,-s\log|T^{\prime}|-s^{2} \log B)\leq 0\}.\]
To get a lower bound in this setup using our result, we take an arbitrary sparse sequence \(n_{k}\) and set \(m=2,A_{0}=B^{t_{B}},A_{1}=B^{1-t_{B}}\) for the set \(\hat{E}\) from Section 3.7, that is, we consider the set
\[\hat{E}(B^{t_{B}},B^{1-t_{B}})=\left\{x\in[0,1):\begin{array}{c}B^{n_{k}t_{B}}\leq a_{n_{k}}(x)<2B^{n_{k}t_{B}}\\ B^{n_{k}(1-t_{B})}\leq a_{n_{k}+1}(x)<2B^{n_{k}(1-t_{B})}\end{array}\text{ for all }k\in\mathbb{N},\quad a_{n}\leq M\text{ for other }n\right\},\]
which is clearly a subset of \(\mathcal{F}(B)\). We can see that by the choice of the parameters \(d_{0}=d_{1}\), and so by the proof of Theorem 1.1 and the remark in Section 3.7, we have
\[\dim_{\rm H}\mathcal{F}(B)\geq\dim_{\rm H}\hat{E}(B^{t_{B}},B^{1-t_{B}})\geq d _{0}=\inf\{s\geq 0:P(T,-s\log|T^{\prime}|-s^{2}\log B)\leq 0\},\]
which coincides with the lower bound from (4.1).
#### 4.1.3. Hussain-Li-Shulga Theorem, [9, 2022]
Theorem 1.1 is also a direct generalisation of Theorem 1.7 from [9], which gives the Hausdorff dimension of the set
\[E(A_{1},A_{2})\stackrel{{\rm def}}{{=}}\left\{x\in[0,1):\,c_{1}A_{ 1}^{n}\leq a_{n}(x)<2c_{1}A_{1}^{n},\ c_{2}A_{2}^{n}\leq a_{n+1}(x)<2c_{2}A_{2}^ {n},\ \text{for infinitely many}\ n\in\mathbb{N}\right\}.\]
**Theorem 4.3**.: _For any \(A_{1}>1\),_
\[\dim_{\rm H}E(A_{1},A_{2})=\min\left\{s(A_{1}),g_{(A_{1}A_{2}),A_{1}}\right\},\]
_where_
\[s(A_{1})=\inf\{s\geq 0:\mathsf{P}(T,-s(\log A_{1}+\log|T^{\prime}|))\leq 0\} \tag{4.2}\]
_and_
\[g_{(A_{1}A_{2}),A_{1}}=\inf\{s\geq 0:P(T,-s\log|T^{\prime}|-s\log A_{1}A_{2}+(1 -s)\log A_{1})\leq 0\}. \tag{4.3}\]
One can easily see that this is a special case of our Theorem 1.1 for \(m=2\). We also should note that in [9] this theorem was used to get a lower bound for the Hausdorff dimension of the set
\[\mathcal{F}_{B_{1},B_{2}}=\left\{x\in[0,1):\begin{aligned} a_{n}(x)a_{n+1}(x)& \geq B_{1}^{n}\ \ \text{for infinitely many}\ n\in\mathbb{N}\\ a_{n+1}(x)&<B_{2}^{n}\ \ \text{for all sufficiently large}\ n\in\mathbb{N}\end{aligned}\right\},\]
and as a corollary also for the set
\[\mathcal{F}(\Phi_{1},\Phi_{2})=\left\{x\in[0,1):\begin{aligned} a_{n}(x)a_{n+1}(x)& \geq\Phi_{1}(n)\ \ \text{for infinitely many}\ n\in\mathbb{N}\\ a_{n+1}(x)&<\Phi_{2}(n)\ \ \text{for all sufficiently large}\ n\in\mathbb{N}\end{aligned}\right\},\]
where \(\Phi_{i}:\mathbb{N}\to(0,\infty)\) are any functions such that \(\lim\limits_{n\to\infty}\Phi_{i}(n)=\infty\).
#### 4.1.4. Huang-Wu-Xu Theorem, [7, 2020]
Another example is the main result of Huang-Wu-Xu paper [7]. They have considered a set
\[E_{m}(B):=\{x\in[0,1):a_{n}(x)a_{n+1}(x)\cdots a_{n+m-1}(x)\geq B^{n}\ \text{for infinitely many}\ n\in\mathbb{N}\}.\]
At the heart of their paper is the following result.
**Theorem 4.4**.: _For \(1\leq B<\infty\), and any integer \(m\geq 1\),_
\[\dim_{\rm H}E_{m}(B)=t_{B}^{(m)}=\inf\{s:\mathsf{P}(T,-f_{m}(s)\log B-s\log|T^{ \prime}|)\leq 0\}, \tag{4.4}\]
_where \(f_{m}(s)\) is given by the following iterative formula:_
\[f_{1}(s)=s,\ \ f_{k+1}(s)=\frac{sf_{k}(s)}{1-s+f_{k}(s)},\ k\geq 1.\]
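The iterative formula for \(f_{m}(s)\) is straightforward to evaluate. The short sketch below (ours, for illustration) computes it and checks that \(f_{2}(s)=s^{2}\), which recovers the exponent \(-s^{2}\log B\) appearing in Theorem 4.2.

```python
# Small sketch (ours) of the iterative formula for f_m(s) from Theorem 4.4.
def f_m(s: float, m: int) -> float:
    f = s                                   # f_1(s) = s
    for _ in range(m - 1):
        f = s * f / (1.0 - s + f)           # f_{k+1}(s) = s f_k(s) / (1 - s + f_k(s))
    return f

for s in (0.3, 0.5, 0.8):
    assert abs(f_m(s, 2) - s * s) < 1e-12   # f_2(s) = s^2, as in Theorem 4.2
    print(s, [round(f_m(s, m), 6) for m in (1, 2, 3, 4)])
```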
Denote \(t=t_{B}^{(m)}\). To get a lower bound in this setup, one should take the set \(S_{m}(A_{0},\ldots,A_{m-1})\) from Theorem 1.1 and let
\[A_{i}=B^{\frac{(2t-1)t^{m-1-i}(1-t)^{i}}{t^{m}-(1-t)^{m}}},\ 0\leq i\leq m-1.\]
One can easily check that with this choice of parameters \(d_{0}=\ldots=d_{m-1}\), and so for this particular set of parameters we have
\[\dim_{\rm H}S_{m}(A_{0},\ldots,A_{m-1})=d_{0}=\inf\{s:\mathsf{P}(T,-f_{m}(s) \log B-s\log|T^{\prime}|)\leq 0\},\]
which coincides with the result from Theorem 4.4.
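The algebra behind this choice of \(A_{i}\) can be verified directly. Writing \(A_{i}=B^{e_{i}}\) and \(\beta_{k}=B^{E_{k}}\), the conditions \(d_{0}=\cdots=d_{m-1}\) amount to \(\sum_{i}e_{i}=1\) together with \(tE_{0}=tE_{i}-(1-t)E_{i-1}\) for \(1\leq i\leq m-1\), and \(d_{0}=t\) corresponds to \(f_{m}(t)=tE_{0}\). The following sketch (ours; the values of \(t\) and \(m\) are illustrative, and \(t\) plays the role of \(t_{B_{1}}^{(m)}\)) checks these identities numerically for the exponents stated above.

```python
# Numerical check (ours, illustrative parameters) of the identities behind the
# choice A_i = B^{(2t-1) t^(m-1-i) (1-t)^i / (t^m - (1-t)^m)}.
def exponents(t: float, m: int):
    D = t ** m - (1.0 - t) ** m
    e = [(2 * t - 1) * t ** (m - 1 - i) * (1 - t) ** i / D for i in range(m)]
    E, acc = [], 0.0                          # E_k = exponent of B in beta_k
    for ei in e:
        acc += ei
        E.append(acc)
    return e, E

def f_m(s: float, m: int) -> float:
    f = s
    for _ in range(m - 1):
        f = s * f / (1.0 - s + f)
    return f

t, m = 0.73, 4
e, E = exponents(t, m)
assert abs(sum(e) - 1.0) < 1e-12                              # A_0 ... A_{m-1} = B
for i in range(1, m):                                         # forces d_0 = ... = d_{m-1}
    assert abs(t * E[0] - (t * E[i] - (1 - t) * E[i - 1])) < 1e-12
assert abs(f_m(t, m) - t * E[0]) < 1e-12                      # so that d_0 = t
print("exponents e_i:", [round(x, 6) for x in e])
```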
#### 4.1.5. Bakhtawar-Hussain-Kleinbock-Wang Theorem, [2, 2022]
The same construction was also used in [2] for the set defined via a weighted product of two partial quotients. For any \(t_{0},t_{1}\in\mathbb{R}_{>0}\), consider the set
\[\mathcal{D}_{2}^{\mathbf{t}}(B):=\left\{x\in[0,1):a_{n}^{t_{0}}a_{n+1}^{t_{1}} \geq B^{n}\text{ for infinitely many }n\in\mathbb{N}\right\}.\]
The Hausdorff dimension of this set is given by
**Theorem 4.5**.: \[\dim_{\mathrm{H}}\mathcal{D}_{2}^{\mathbf{t}}(B)=\mathcal{S}=\inf\{s\geq 0:P(-s \log|T^{\prime}|-f_{t_{0},t_{1}}(s)\log B)\leq 0\},\]
_where_
\[f_{t_{0}}(s)=\frac{s}{t_{0}},\ \ f_{t_{0},t_{1}}(s)=\frac{sf_{t_{0}}(s)}{t_{1}[f _{t_{0}}(s)+\max\{0,\frac{s}{t_{1}}-\frac{2s-1}{t_{0}}\}]}.\]
As in their paper, we consider separately the case where the value of the maximum in the denominator is \(0\), that is, when
\[\frac{\mathcal{S}}{t_{1}}-\frac{2\mathcal{S}-1}{t_{0}}\leq 0,\]
one should simply consider the subset
\[\left\{x\in[0,1):a_{n+1}^{t_{1}}(x)\geq B^{n},\text{ for infinitely many }n\in\mathbb{N}\right\}\]
of \(\mathcal{D}_{2}^{\mathbf{t}}(B)\) to get the desired
\[\dim_{\mathrm{H}}\mathcal{D}_{2}^{\mathbf{t}}(B)\geq\mathcal{S}.\]
When
\[\frac{\mathcal{S}}{t_{1}}-\frac{2\mathcal{S}-1}{t_{0}}>0,\]
in a small neighborhood of \(\mathcal{S}\) we have
\[f_{t_{0},t_{1}}(s)=\frac{sf_{t_{0}}(s)}{t_{1}\big{[}f_{t_{0}}(s)+\frac{s}{t_{1 }}-\frac{2s-1}{t_{0}}\big{]}}. \tag{4.5}\]
So to get the optimal lower bound, we can set
\[m=2,\ A_{0}=B^{\frac{\mathcal{S}}{t_{1}\big{(}1-\mathcal{S}+\frac{\mathcal{S}t_{0}}{t_{1}}\big{)}}},\ A_{1}=\left(\frac{B}{A_{0}^{t_{0}}}\right)^{1/t_{1}}.\]
For this choice of parameters, we will have \(d_{0}=d_{1}\) and so by Theorem 1.1 we get
\[\dim_{\mathrm{H}}S_{2}(A_{0},A_{1}) =d_{0}=\inf\left\{s\geq 0:P\left(T,-s\log|T^{\prime}|-\frac{\mathcal{S}^{2}}{t_{1}(1-\mathcal{S}+\frac{\mathcal{S}t_{0}}{t_{1}})}\log B\right)\leq 0\right\}\] \[=\inf\{s\geq 0:P(-s\log|T^{\prime}|-f_{t_{0},t_{1}}(s)\log B)\leq 0\}\] \[=\mathcal{S}.\]
This coincides with the lower bound from Theorem 4.5.
#### 4.1.6. Tan-Tian-Wang Theorem, [21, 2022]
An optimal lower bound from a recent result by Tan, Tian and Wang [21] can also be extracted from our general theorem. Let us formulate their result. Consider a set
\[E(\psi)=\{x\in[0,1):\exists 1\leq k\neq l\leq n,a_{k}(x)\geq\psi(n),a_{l}(x) \geq\psi(n),\text{ for infinitely many }n\in\mathbb{N}\}.\]
One of the results of their paper is
**Theorem 4.6**.: _Let \(\psi:\mathbb{N}\to\mathbb{R}^{+}\) be a non-decreasing function, and_
\[\log B=\liminf_{n\to\infty}\frac{\log\psi(n)}{n},\ \log b=\liminf_{n\to\infty} \frac{\log\log\psi(n)}{n}.\]
_Then:_
* _when_ \(1\leq B<\infty\)_,_ \[\dim_{\mathrm{H}}E(\psi)=\inf\{s\geq 0:P(T,-(3s-1)\log B-s\log|T^{\prime}(x)|)\leq 0\}\] _(remarking that_ \(\dim_{\mathrm{H}}E(\psi)=1\) _if_ \(B=1\)_);_
* _when_ \(B=\infty\)_,_ \[\dim_{\mathrm{H}}E(\psi)=\frac{1}{1+b}.\]
As always, the hardest part is to prove the lower bound in the case where \(B\) is finite. However, this result can be easily extracted from Remark 3.7. By the definition of \(B\), we can find a subsequence \(\{n_{k}\}_{k\geq 1}\) of integers such that
\[\log B=\lim_{k\to\infty}\frac{\log\psi(n_{k}+2)}{n_{k}+2}\quad\text{and}\quad\psi(n_{k}+2)\leq B^{n_{k}}\text{ for all }k\geq 1.\]
Next, we set \(m=2,A_{0}=A_{1}=B\) and apply the result from Remark 3.7 for the sequence \(\{n_{k}\}_{k\geq 1}\) while letting \(N\to\infty\) and \(M\to\infty\). This leads us to
\[\dim_{\mathrm{H}}E(\psi)\geq\min\{d_{0},d_{1}\}=d_{1},\]
where
\[d_{0} =\inf\{s\geq 0:P(T,-s\log B-s\log|T^{\prime}(x)|)\leq 0\},\] \[d_{1} =\inf\{s\geq 0:P(T,-(3s-1)\log B-s\log|T^{\prime}(x)|)\leq 0\}.\]
This coincides with the lower bound from Tan-Tian-Wang result.
### New results
We present one new result in this section.
As mentioned above, in [9] the authors found the Hausdorff dimension of the set
\[\mathcal{F}_{B_{1},B_{2}}=\left\{x\in[0,1):\begin{aligned} a_{n}(x)a_{n+1}(x)& \geq B_{1}^{n}\ \text{ for infinitely many }n\in\mathbb{N}\\ a_{n+1}(x)&<B_{2}^{n}\ \text{ for all sufficiently large }n\in\mathbb{N}\end{aligned}\right\}\]
This result can be generalised to the case of a product of \(m\geq 2\) partial quotients. For \(m\geq 2\) consider the set
\[\mathcal{F}_{B_{1},B_{2}}^{m}=\left\{x\in[0,1):\begin{aligned} a_{n}(x) \cdots a_{n+m-1}(x)&\geq B_{1}^{n}\ \text{ for infinitely many }n\in\mathbb{N}\\ a_{n+1}(x)\cdots a_{n+m-1}(x)&<B_{2}^{n}\ \text{ for all sufficiently large }n\in\mathbb{N}\end{aligned}\right\}. \tag{4.6}\]
Let \(t=t_{B_{1}}^{(m)}\), where \(t_{B_{1}}^{(m)}\) is defined in (4.4), and let
\[\theta_{m}=\frac{t^{m}-t(1-t)^{m-1}}{t^{m}-(1-t)^{m}}.\]
**Theorem 4.7**.: _For any \(B_{1},B_{2}>1\),_
* _when_ \(B_{1}^{\theta_{m}}\leq B_{2}\)_,_ \[\dim_{\mathrm{H}}\mathcal{F}_{B_{1},B_{2}}^{m}=t_{B_{1}}^{(m)};\]
* _when_ \(B_{1}^{\theta_{m}}>B_{2}>B_{1}^{1/2}\)_,_ \[\dim_{\mathrm{H}}\mathcal{F}_{B_{1},B_{2}}^{m}=g_{B_{1},B_{2}};\]
* _when_ \(B_{1}^{1/2}\geq B_{2}\)_,_ \[\mathcal{F}_{B_{1},B_{2}}^{m}=\varnothing.\]
Proof.: We split the proof into two cases.
#### 4.2.1. \(B_{1}^{1/2}\geq B_{2}\)
By the definition of our set, for all sufficiently large \(n\) we have
\[a_{n+1}\cdots a_{n+m-1}<B_{2}^{n}\]
and
\[a_{n}\cdots a_{n+m-2}<B_{2}^{n-1}.\]
Multiplying these two inequalities and using \(B_{2}^{2}\leq B_{1}\), we get
\[a_{n}\cdots a_{n+m-1}\leq a_{n}(a_{n+1}\cdots a_{n+m-2})^{2}a_{n+m-1}<B_{2}^{2n-1}\leq\frac{B_{1}^{n}}{B_{2}}<B_{1}^{n}. \tag{4.7}\]
This contradicts the first condition of our set, that is \(a_{n}(x)\cdots a_{n+m-1}(x)\geq B_{1}^{n}\). Hence in this case the set \(\mathcal{F}_{B_{1},B_{2}}^{m}\) is empty.
#### 4.2.2. Case \(B_{1}^{1/2}<B_{2}\)
First, we will work out the upper bound for cases \(B_{1}^{\theta_{m}}\leq B_{2}\) and \(B_{1}^{\theta_{m}}>B_{2}>B_{1}^{1/2}\), and then we will present a unified approach to lower bounds for both of these cases.
**The upper bounds.** When \(B_{1}^{\theta_{m}}\leq B_{2}\) one can see that \(\mathcal{F}_{B_{1},B_{2}}^{m}\subset E_{m}(B_{1})\). Hence
\[\dim_{\mathrm{H}}\mathcal{F}_{B_{1},B_{2}}^{m}\leq\dim_{\mathrm{H}}E_{m}(B_{1})=t_{B_{1}}^{(m)},\]
where \(E_{m}(B_{1})\) and \(t_{B_{1}}^{(m)}\) were defined in Section 4.1.4 and in (4.4). When \(B_{1}^{\theta_{m}}>B_{2}>B_{1}^{1/2}\), consider the set
\[U=\Bigg{\{}x\in[0,1):1\leq a_{n}(x)\cdots a_{n+m-2}(x)\leq B_{2}^{n},\]
\[a_{n+m-1}(x)\geq\frac{B_{1}^{n}}{a_{n}(x)\cdots a_{n+m-2}(x)}\,\text{for infinitely many $n\in\mathbb{N}$}\Bigg{\}}.\]
Clearly, \(\mathcal{F}_{B_{1},B_{2}}^{m}\subset U\).
The limsup nature of \(U\) gives us a natural cover for it. For each \(n\geq 1\), define
\[U_{n}=\left\{x\in[0,1):1\leq a_{n}(x)\cdots a_{n+m-2}(x)\leq B_{2}^{n},a_{n+m- 1}(x)\geq\frac{B_{1}^{n}}{a_{n}(x)\cdots a_{n+m-2}(x)}\right\}.\]
Then \(U\) can be expressed as
\[U=\bigcap_{N=1}^{\infty}\bigcup_{n=N}^{\infty}U_{n}.\]
So a cover for \(U_{n}\) for each \(n\geq N\) will give a cover for \(U\). Naturally,
\[U_{n}\subseteq\bigcup_{a_{1},\ldots,a_{n-1}\in\mathbb{N}}\bigcup_{1\leq a_{n }\cdots a_{n+m-2}\leq B_{2}^{n}}J_{n+m-2}(a_{1},a_{2},\ldots,a_{n+m-2}),\]
where
\[J_{n+m-2}(a_{1},a_{2},\ldots,a_{n+m-2})=\bigcup_{a_{n+m-1}\geq\frac{B_{1}^{n}}{a_{n}\cdots a_{n+m-2}}}I_{n+m-1}(a_{1},a_{2},\ldots,a_{n+m-1}).\]
It is easy to see that
\[|J_{n+m-2}(a_{1},a_{2},\ldots,a_{n+m-2})| \asymp\frac{1}{\frac{B_{1}^{n}}{a_{n}\cdots a_{n+m-2}}q_{n+m-2}^{2}}\] \[\asymp\frac{1}{B_{1}^{n}a_{n}\cdots a_{n+m-2}q_{n-1}^{2}}.\]
Fix \(\epsilon>0\). Then there exists \(N_{0}\in\mathbb{N}\) such that for all \(n\geq N_{0}\), we have
\[\frac{(\log B_{2}^{n})^{m-1}}{(m-1)!}\leq\frac{(\log B_{1}^{n})^{m-1}}{(m-1)!} \leq B_{1}^{n\epsilon}.\]
By using this estimate with the following lemma we obtain the upper bound.
**Lemma 4.8** ([7, Lemma 4.2]).: _Let \(\beta>1\). For any integer \(k\geq 1,0<s<1\), we have_
\[\sum_{1\leq a_{n}\cdots a_{n+k-1}\leq\beta^{n}}\left(\frac{1}{a_{n}\cdots a_{n+k-1}}\right)^{s}\asymp\frac{(\log\beta^{n})^{k-1}}{(k-1)!}\beta^{n(1-s)}. \tag{4.8}\]
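The counting estimate of Lemma 4.8 is easy to test numerically. The sketch below (ours; the values of \(k\), \(s\) and the thresholds are illustrative, with \(X\) playing the role of \(\beta^{n}\)) compares the left-hand side of (4.8), computed by brute force for small parameters, with the right-hand side; the ratio stays bounded, as the \(\asymp\) in the lemma asserts.

```python
# Brute-force illustration (ours, small parameters) of the estimate (4.8).
from math import log, factorial

def lhs(X: int, k: int, s: float) -> float:
    """Sum of (a_1*...*a_k)^(-s) over k-tuples of positive integers with product <= X."""
    total = 0.0
    def rec(depth: int, prod: int) -> None:
        nonlocal total
        if depth == k:
            total += prod ** (-s)
            return
        a = 1
        while prod * a <= X:
            rec(depth + 1, prod * a)
            a += 1
    rec(0, 1)
    return total

k, s = 2, 0.6
for X in (100, 1000, 10000):
    rhs = log(X) ** (k - 1) / factorial(k - 1) * X ** (1 - s)
    print(f"X = {X:5d}:  lhs = {lhs(X, k, s):10.2f},  rhs = {rhs:10.2f},  ratio = {lhs(X, k, s) / rhs:.3f}")
```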
Thus, the \((s+\epsilon)\)-dimensional Hausdorff measure of \(U\) can be estimated as
\[\mathcal{H}^{s+\epsilon}(U) \leq\liminf_{N\to\infty}\sum_{n\geq N}\sum_{a_{1},\ldots,a_{n-1} }\sum_{1\leq a_{n}\cdots a_{n+m-2}\leq B_{2}^{n}}\left(\frac{1}{B_{1}^{n}a_{n} \cdots a_{n+m-2}q_{n-1}^{2}}\right)^{s+\epsilon}\] \[\leq\liminf_{N\to\infty}\sum_{n\geq N}\sum_{a_{1},\ldots,a_{n-1} }B_{2}^{n(1-s)}\left(\frac{1}{B_{1}^{n}q_{n-1}^{2}}\right)^{s}.\]
Hence, the last series converges whenever \(s>g_{B_{1},B_{2}}\), and therefore
\[\dim_{\rm H}\mathcal{F}_{B_{1},B_{2}}^{m}\leq\dim_{\rm H}U\leq\inf\{s\geq 0: P(T,(1-s)\log B_{2}-s\log B_{1}-s\log|T^{\prime}|)\leq 0\}=g_{B_{1},B_{2}}.\]
**The lower bounds.** For the lower bounds, we will apply Theorem 1.1. Namely, let us consider a set (3.4) from the proof of Theorem 1.1:
\[E=\{x\in[0,1):\,c_{i}A_{i}^{n_{k}}\leq a_{n_{k}+i}(x)<2c_{i}A_{i} ^{n_{k}} \text{ for all }k\geq 1,\text{ for all }0\leq i\leq m-1\] \[\text{ and }a_{n}(x)\in\{1,\ldots,M\}\text{ for other }n\in\mathbb{N}\}. \tag{4.9}\]
In Theorem 1.1 we considered this set with \(c_{0}=c_{1}=\cdots=c_{m-1}=1\). For this set to be a subset of \(\mathcal{F}_{B_{1},B_{2}}^{m}\) we need to choose suitable parameters \(A_{i},c_{i}\), where \(0\leq i\leq m-1\). Denote \(t=t_{B_{1}}^{(m)}\) and let \(A_{0}\cdots A_{m-1}=B_{1}\). Now for both of our cases, let us choose the parameters \(A_{i}\) for \(i=1,\ldots,m-2\) in such a way that the quantities \(d_{i}\) for \(i=0,\ldots,m-2\) from (1.1) are all equal. This can be done by letting \(A_{k}=A_{k-1}^{\frac{1-t}{t}}\), or \(A_{k}=A_{0}^{(\frac{1-t}{t})^{k}}\) for \(k=1,\ldots,m-2\). In particular, for this choice of parameters \(A_{i}\), for \(k=0,\ldots,m-2\) we have
\[\beta_{k}=A_{0}\cdots A_{k}=A_{0}^{\frac{t^{k+1}-(1-t)^{k+1}}{t^{k}(2t-1)}}=\beta_{0}^{\frac{t^{k+1}-(1-t)^{k+1}}{t^{k}(2t-1)}} \tag{4.10}\]
and \(\beta_{m-1}=B_{1}\) by the choice we have previously made. By (4.10) and (1.1), we have that \(s(\beta_{0})=d_{0}=\cdots=d_{m-2}\), where \(s(B)\) was defined in Theorem 4.1. Note that using (4.10), \(\beta_{0}\) can be expressed in terms of \(\beta_{m-2}\). So using the definitions of \(d_{0}=s(\beta_{0})=s\left(\beta_{m-2}^{\frac{t^{m-2}(2t-1)}{t^{m-1}-(1-t)^{m-1}}}\right)\) and \(d_{m-1}=g_{B_{1},\beta_{m-2}}\) we see that they both depend on the free parameter \(\beta_{m-2}\). Moreover, \(d_{0}\) is decreasing and \(d_{m-1}\) is increasing with respect to \(\beta_{m-2}\). One can easily check that if we let
\[\beta_{m-2}=B_{1}^{\theta_{m}},\]
we will have \(d_{0}=\cdots=d_{m-2}=d_{m-1}=t\), and so when \(\beta_{m-2}<B_{1}^{\theta_{m}}\), we have \(d_{m-1}<d_{0}\) and when \(\beta_{m-2}\geq B_{1}^{\theta_{m}}\), we have \(d_{m-1}\geq d_{0}\). By the proof of Theorem 1.1, we know that for \(M,N\to\infty\),
\[\dim_{\rm H}E\geq\min\{d_{0},d_{m-1}\}.\]
At this point we consider two cases.
**Case 1:**\(B_{1}^{\theta_{m}}\leq B_{2}\). Let \(\beta_{m-2}=B_{1}^{\theta_{m}}\) and
\[c_{0} =2^{m-1}B_{2},\] \[c_{1} =\frac{1}{B_{2}^{3}2^{2(m-1)}},\] \[c_{2} =\cdots=c_{m-2}=1,\] \[c_{m-1} =B_{2}^{2}2^{m-1}.\]
We can conclude that \(E\) is a subset of \(\mathcal{F}^{m}_{B_{1},B_{2}}\), because for our choice of \(A_{i},c_{i}\), where \(0\leq i\leq m-1\), we have
\[B_{1}^{n_{k}}=c_{0}\cdots c_{m-1}(A_{0}\cdots A_{m-1})^{n_{k}}\leq a _{n_{k}}\cdots a_{n_{k}+m-1},\] \[B_{2}^{n_{k}-1}>B_{2}^{n_{k}-2}\geq\frac{1}{B_{2}^{2}}B_{1}^{ \theta_{m}n_{k}}=2^{m-1}c_{0}\cdots c_{m-2}(A_{0}\cdots A_{m-2})^{n_{k}}\geq a _{n_{k}}\cdots a_{n_{k}+m-2},\] \[B_{2}^{n_{k}}>\frac{B_{1}^{\theta_{m}n_{k}}}{B_{2}}\geq\frac{1}{ B_{2}}(A_{0}\cdots A_{m-2})^{n_{k}}>2^{m-1}c_{1}\cdots c_{m-1}(A_{1}\cdots A_{m- 1})^{n_{k}}\geq a_{n_{k}+1}\cdots a_{n_{k}+m-1},\]
where on the last line we used that \(A_{i}<A_{i-1}\).
Hence by definition of our sets, set \(E\) is indeed a subset of \(\mathcal{F}^{m}_{B_{1},B_{2}}\). By the proof of Theorem 1.1,
\[\dim_{\rm H}\mathcal{F}^{m}_{B_{1},B_{2}}\geq\dim_{\rm H}E\geq t:=t^{(m)}_{B_{ 1}}\text{ when }M,N\to\infty.\]
Combining with the upper bound we conclude that \(\dim_{\rm H}\mathcal{F}^{m}_{B_{1},B_{2}}=t^{(m)}_{B_{1}}\) in the case \(B_{1}^{\theta_{m}}\leq B_{2}\).
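For completeness, the arithmetic behind the constants \(c_{i}\) used in Case 1 (with the factor \(2^{2(m-1)}\) as stated above) can be checked mechanically: the products appearing in the three displayed inequalities equal \(1\), \(B_{2}^{-2}\) and \(B_{2}^{-1}\) respectively. The sketch below (ours; the values of \(m\) and \(B_{2}\) are illustrative, and it assumes \(m\geq 3\) so that \(c_{1}\) and \(c_{m-1}\) are distinct) verifies this.

```python
# Sanity check (ours, illustrative parameters, m >= 3) of the constants in Case 1:
#   c_0 = 2^(m-1) B_2,  c_1 = 1/(B_2^3 2^(2(m-1))),  c_2 = ... = c_{m-2} = 1,
#   c_{m-1} = B_2^2 2^(m-1).
from math import prod

def case1_constants(m: int, B2: float) -> list:
    c = [1.0] * m
    c[0] = 2 ** (m - 1) * B2
    c[1] = 1.0 / (B2 ** 3 * 2 ** (2 * (m - 1)))
    c[m - 1] = B2 ** 2 * 2 ** (m - 1)
    return c

m, B2 = 5, 3.0
c = case1_constants(m, B2)
assert abs(prod(c) - 1.0) < 1e-9                                   # c_0 ... c_{m-1} = 1
assert abs(2 ** (m - 1) * prod(c[:m - 1]) - B2 ** (-2)) < 1e-9     # used in the 2nd inequality
assert abs(2 ** (m - 1) * prod(c[1:]) - B2 ** (-1)) < 1e-9         # used in the 3rd inequality
print("c_i =", c)
```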
**Case 2:**\(B_{1}^{\theta_{m}}>B_{2}>B_{1}^{1/2}\). Let \(\beta_{m-2}=B_{2}\). Then by what was said above, \(d_{m-1}<d_{0}\), where \(d_{m-1}=g_{B_{1},B_{2}}\). We need to make sure that \(E\) is a subset of \(\mathcal{F}^{m}_{B_{1},B_{2}}\). For this we need to check that
\[c_{0}\cdots c_{m-1}(A_{0}\cdots A_{m-1})^{n_{k}}\geq B_{1}^{n_{k}}, \tag{4.11}\] \[c_{0}\cdots c_{m-2}(A_{0}\cdots A_{m-2})^{n_{k}}<B_{2}^{n_{k}-1}, \tag{4.12}\] \[c_{1}\cdots c_{m-1}(A_{1}\cdots A_{m-1})^{n_{k}}<B_{2}^{n_{k}}. \tag{4.13}\]
We can choose values for the constants \(c_{i}\) as follows:
\[c_{0}=c_{1}=\cdots=c_{m-3}=1,\] \[c_{m-2}=\frac{1}{B_{2}^{2}},\] \[c_{m-1}=B_{2}^{2}.\]
Now with this choice of \(c_{i}\) inequalities (4.11) and (4.12) easily follow from \(\beta_{m-1}=B_{1}\) and \(\beta_{m-2}=B_{2}\). Notice that \(A_{m-1}=\frac{B_{1}}{B_{2}}\). Hence the inequality (4.13) is just \(A_{m-1}<A_{0}\), which is true. (Note that \(\frac{A_{k}}{A_{k-1}}<1\) for all \(k\).) Now by definition of our sets, set \(E\) is indeed a subset of \(\mathcal{F}^{m}_{B_{1},B_{2}}\). By the proof of Theorem 1.1,
\[\dim_{\rm H}\mathcal{F}^{m}_{B_{1},B_{2}}\geq\dim_{\rm H}E\geq g_{B_{1},B_{2}} \text{ when }M,N\to\infty.\]
Combining with the upper bound we conclude that \(\dim_{\rm H}\mathcal{F}^{m}_{B_{1},B_{2}}=g_{B_{1},B_{2}}\) in the case \(B_{1}^{\theta_{m}}>B_{2}>B_{1}^{1/2}\).
|
2301.00118 | Airborne Ultrasound Focusing Aperture with Binary Amplitude Mask Over
Planar Ultrasound Emissions | Phased arrays of airborne ultrasound transducers are widely utilized as a key
technology to achieve mid-air convergence of intense ultrasound, which is
applied to a variety of systems, such as contactless tactile presentation,
acoustic-levitation and its application, mid-air-flow acceleration, etc.
However, it requires considerably precise phase control with temporally severe
synchronization between elements, which leads to difficulty in scaling up the
entire system beyond the tabletop size as most of the current application
systems. Here, we propose a much simpler and easier scaling-up method of
airborne ultrasound convergence, where a binary amplitude mask that serves as a
Fresnel Zone Plate (FZP) is placed on the planar in-phase ultrasound sources.
We experimentally demonstrate that the FZP-based ultrasound focusing achieved
a spatial resolution that is comparable to conventional methods, based on the
use of phase-controlled transducers. The ultrasound foci created using FZPs are
sufficiently intense for most application scenarios that are currently in
practical use. We also determine favorable side effects of our method
suppressing grating lobes, which is inevitable with the conventional
phase-controlling method.
The FZPs and planar ultrasound sources are both readily implemented with
inexpensive ingredients and components. The result of our study contributes to
upsizing dimensions in which a mid-air convergent ultrasound field is
successfully generated. Accordingly, unprecedented application scenarios that
target the entire room as the workspace will be possible. | Masatake Kitano, Keisuke Hasegawa | 2022-12-31T04:27:52Z | http://arxiv.org/abs/2301.00118v2 | # Airborne Ultrasound Focusing with Amplitude Mask Over Planar Ultrasound Emissions
###### Abstract
Phased arrays of airborne ultrasound transducers are widely utilized as a key technology to achieve mid-air convergence of intense ultrasound, which is applied to a variety of systems, such as contactless tactile presentation, acoustic-levitation and its application, mid-air-flow acceleration, etc. However, it requires considerably precise phase control with temporally severe synchronization between elements, which leads to difficulty in scaling up the entire system beyond the tabletop size as most of the current application systems. Here, we propose a much simpler and easier scaling-up method of airborne ultrasound convergence, where a binary amplitude mask that serves as a Fresnel Zone Plate (FZP) is placed on the planar in-phase ultrasound sources.
We experimentally demonstrate that the FZP-based ultrasound focusing achieved a spatial resolution that is comparable to conventional methods, based on the use of phase-controlled transducers. The ultrasound foci created using FZPs are sufficiently intense for most application scenarios that are currently in practical use. We also determine favorable side effects of our method suppressing grating lobes, which is inevitable with the conventional phase-controlling method.
The FZPs and planar ultrasound sources are both readily implemented with inexpensive ingredients and components. The result of our study contributes to upsizing dimensions in which a mid-air convergent ultrasound field is successfully generated. Accordingly, unprecedented application scenarios that target the entire room as the workspace will be possible.
## I Introduction
### Prevalent use of phase-controlled airborne ultrasound transducer arrays in nonlinear mid-air ultrasound applications
The nonlinear acoustic effect of strong mid-air ultrasound has been known [1; 2; 3; 4] for more than a century. However, its practical applications had mostly been limited within underwater cases. Recently, however, the situation has been altered by the advent of wave emission control devices employed for generating spatially localized intense ultrasound fields in the air, which can be electronically steered. Such strong and localized airborne ultrasonic power fields enable the generation of nonlinear acoustic phenomena at desired locations in space. This has led to the development of various applications of mid-air convergent ultrasound in several fields over the past decade. Examples of such real-world applications include mid-air ultrasound tactile presentation [5; 6; 7; 8], acoustic levitation systems [9; 10; 11], mid-air three-dimensional displays [12; 13], utilization of mid-air ultrasound-driven acoustic flows [14; 15; 16], etc. To date, new applications have been continuously devised by many researchers.
As aforementioned, most of these applications rely on the technique of creating ultrasound fields with controlled spatial distribution, which is based on the principle of ultrasound source emission that is spatially controlled over an emission area larger than the wavelength. Currently, the most prevalently employed method is the ultrasonic phased array technique, where the coherent ultrasound emission plane is constituted by a large number of ultrasonic transducers, with their individual emission amplitudes and phase delays electronically controlled. By appropriately controlling the driving phases of the transducers, a wide variety of spatial distributions of ultrasound fields can be created. The fabrication of the first airborne ultrasound phased array device [5] triggered widespread research on its applications. Development of airborne ultrasound phased arrays has continued to date across several research groups, and most of the aforementioned application scenarios are based on this technique.
### Difficulty in upsizing current phased-array-based ultrasound manipulation scenarios
However, the phased array technique suffers from difficulties in technical implementation [17]. In particular, the need for synchronization and minute phase control of all individual transducers, requiring \(\mu\)s precision, prevents the workspace of mid-air ultrasound systems from being upscaled. Therefore, most current mid-air ultrasound applications are limited to tabletop systems in terms of their spatial extent.
As a potential solution that could address this challenge, there is a different approach to manipulating airborne ultrasound: the development of wave-manipulating elements that convert the incident ultrasound from a single fixed ultrasound source on their surface into a desired spatial distribution of the ultrasound field. These are passive wave-manipulation elements, and upsizing such devices is much easier and less expensive than upsizing the phased-array system in most cases. Among them, many phase-controlling element arrays have been studied, which serve as a phased array with fixed spatial phase delay profiles in conjunction with wave sources. One of the most intuitive is phase delay elements placed directly above the plane wave source to convert the incident waves inside them into the desired field. They operate in water [18], in vivo [19], and in the air [20; 21]. There are also a number of methods that handle incident waves coming into the elements at a distance from the wave source [22; 23]. Several reflective elements that control the phase of the incident wave have also been proposed, both in water and in air [24; 17; 25]. Such devices do not require electronic synchronization among individual elements once fabricated. At the same time, most of these elements do not allow their phase distribution to be changed once they are constructed, apart from some that can be adjusted manually [20]. Other examples of related technologies for wave control are the use and fabrication of acoustic metamaterials [26; 27; 28].
There is another approach for fabricating wave manipulation devices, which does not rely on phase control of incident waves. Instead, such devices control only the amplitude distribution of the wave emissions. There is no need for phase control of the vibrating surface, which significantly simplifies the system implementation; in addition, the required precision in element fabrication is substantially less than that of phase-controlling alternatives. It is also advantageous in terms of fabrication simplicity in that amplitude-control-based devices do not require acoustic impedance matching to efficiently transmit incident waves, whereas this is indispensable for phase-controlling devices. The main drawback of this approach is that, unlike with phase-controlling devices, a portion of the radiated wave is weakened by amplitude control. This results in the requirement of a larger ultrasound-emitting aperture in applications that require strong ultrasonic output, when compared with the phase-control scheme. However, the fabrication simplicity of such amplitude-controlling devices enables them to be made larger with less cost and effort than is required to upsize phase-controlling devices. There are several examples of acoustic convergence in air and water achieved by concentric amplitude lenses [29; 30; 31; 32; 33; 34] and concentric transducer arrays [35]. With proper design, the spatial resolution of the sound fields generated by amplitude-controlled emissions is not particularly degraded compared with the phase-controlled cases.
### Proposed technique: Large-aperture amplitude mask on planar ultrasonic source for ultrasound manipulation
The acoustic convergence using concentric amplitude emissions described above is a technique based on the Fresnel Zone Plate (FZP). A converged sound field is realized by placing the FZP at a certain distance from, or directly on, the radiating sound wave sources, to block off the ultrasound emissions that are not "in phase" at the desired region of the workspace in the air. Several airborne ultrasonic applications, including potential ones, are implemented using 40 kHz ultrasonic transducers [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78]. As pointed out by a preceding study [39], one of the reasons many related works choose 40 kHz is its low attenuation during propagation owing to its relatively long wavelength as ultrasound. In addition, good availability, inexpensiveness, high emission efficiency, and the requirement of only relatively coarse temporal timing control between the transducers are further benefits of utilizing 40 kHz ultrasound transducers.
However, to the best of the authors' knowledge, there have been no examples of the use of large-aperture FZPs for convergence of 40 kHz large-amplitude airborne ultrasonic waves in mid-air ultrasound applications. FZPs can be easily scaled up in size, and the realization of a large-area ultrasonic radiation surface using FZP is expected to significantly expand the range of applications in mid-air ultrasound research, because of the ability to create well-concentrated ultrasound foci at a great distance from the aperture. For example, a mid-air tactile display will be realized that can present tactile stimuli all over the body of users at several locations in the room, whereas the current system can only stimulate a part of the user's limb situated in front of the fixed small-aperture phased arrays. It is also expected that a wide range of aerial object manipulation using the entire room as a workspace will be achieved.
As a fundamental technology to turn these potential applications into reality, we propose a method for realizing a convergent sound field as an alternative to the phased array, by placing a thin, large-area FZP binary amplitude mask on a planar ultrasonic source. More specifically, we demonstrate the generation of an ultrasonic focus with this setup, which is often utilized in various applications including mid-air tactile presentation. The proposed FZP amplitude mask can be made from any material that has acoustic impedance sufficiently different from that of the air and blocks off emissions from the plane wave source. In this study, we utilize an acrylic plate cut with a laser cutting machine to fabricate the FZP and demonstrate that a focus can be generated with it. In the proposed sound field control method, a machining accuracy of millimeters is sufficient for 40 kHz ultrasonic waves (8.5 mm in wavelength in the air), which are commonly utilized in airborne ultrasonic research, and any of such complicated machining processes as required in fabricating metamaterials are unnecessary. The proposed technique does not have a real-time focus shifting function like the phased array. Nevertheless, the fabricated FZP mask can be larger than the area of the ultrasound radiation surface, and the focus can be shifted by translating it over the fixed radiation surface. Although not as easy as a phased array whose transducers' phases can be electronically controlled, this focus shifting strategy can be achieved with appropriate actuators.
In this study, it is assumed that a large number of ultrasonic transducers forming a large emitting aperture is utilized as the plane wave radiation surface. A reason for this assumption is the difficulty in fabricating a monolithic plane-wave
radiation surface that utilizes a single flat plate to perform exclusive normal mode vibration at ultrasonic frequency. It is much easier to construct a planar radiation surface using a large number of separate ultrasonic transducers driving in phase instead. Although the fundamental physical principle of FZP-based ultrasound field control is not affected by the source frequency, we focused on the utilization of 40 kHz ultrasound transducer array in combination with FZP amplitude masks. This is because it is currently the most reasonable solution for construction of large ultrasound emitting aperture thanks to their availability and fabrication readiness and the prevalent use of 40 kHz midair ultrasound in many current practical applications.
In this study, phased arrays that have already been developed were used as a plane wave source in the experiment by driving all their transducers with no phase differences among individuals. For actual applications, we envision the use of transducer arrays in which all elements are driven by a common driving signal. This strategy does not require synchronization control of each element, unlike the case with phased arrays, and thus can be easily applied to large scale application systems.
There is another finding in this paper that is concerned with the grating lobe issue, which is the strong and localized radiation of ultrasonic energy in an unintended direction, caused by phased arrays whose element spacing on their radiation plane is wider than half of the wavelength. In contrast, the spatial resolution of the amplitude mask fabricated in this study is finer than half of the wavelength; therefore, it is experimentally demonstrated that the afore-mentioned grating lobes do not occur when the in-phase driven transducer array is covered with the FZP mask. This feature in this study has great practical significance, in that it suppresses people's unintentional exposure to strong ultrasound in several application scenarios.
## II Physical principles
### Ultrasound focusing by phased array systems
Prior to the description of the formation principle of ultrasound foci by FZPs, we start with brief introduction of focal formation by phased arrays: it has much in common with our method, and thus would bring better comprehension of this research. Figure 1 illustrates how the two strategies, the phase-controlled-transducer-based and FZP-based methods, form an ultrasound focus.
As aforementioned, an airborne ultrasonic phased array has a radiation surface with many transducers, and the output signal of each transducer can be individually controlled. With a phased array, focus formation is achieved by electronically controlling the phase delay of each transducer so that the sound waves from all transducers are in-phase and yield strong acoustic energy spot at a desired point (the focus). Let \(\mathbf{r}_{t}\) be the position of an element in the array, \(\mathbf{r}_{f}\) be the focal point, and \(k\) be the ultrasonic wave number. Then, the driving phase \(\mathbf{\theta}(\mathbf{r}_{t})\) of the ultrasonic wave emitted by the element at \(\mathbf{r}_{t}\) should be set to compensate for the phase delay owing to the distance between the element and focal point. Therefore, the driving phase \(\mathbf{\theta}(\mathbf{r}_{t})\) is expressed as
\[\theta(\mathbf{r}_{t})=k||\mathbf{r}_{t}-\mathbf{r}_{f}||+\alpha, \tag{1}\]
where \(\alpha\) is an arbitrary constant expressing the phase indefiniteness and \(||\cdot||\) denotes the Euclidean norm of a vector \(\cdot\).
Figure 1: Schematic illustration of how the ultrasonic focus is formed by phased array transducers (Left figure) and in-phase planar wave sources under an FZP (right figure).
### Principles and designing procedures of amplitude FZP for ultrasound focusing
The FZP converges ultrasound power around a desired position by blocking off a part of the wave emission from an ultrasonic source. Let a plane wave source be constructed by a set of in-phase driven ultrasonic elements and \(\mathbf{r}_{t}\) be an element position. Then at the focal position, the observed phase delay of the sound wave emitted from each element varies with the distance between the element and focus, as in the case with a phased array. Here, we consider driving the wave source with the rule that only those elements that are in phase or nearly in-phase at the focal point are activated, whereas the rest are deactivated. It is expected that nearly-in-phase addition of sound waves will be realized at the focus.
The designing procedure of FZPs follows this principle. The amplitude distribution \(P(\mathbf{r}_{t})\) on the FZP is generated from the distribution of the driving phase \(\theta(\mathbf{r}_{t})\) of the phased array element, calculated in Eq. (1) with arbitrary spatial distributions. The most commonly used phase-to-amplitude conversion rules are as follows: 1) determine \(\alpha\) so that the phase at the point on the irradiation plane closest to the focus is zero, 2) calculate the remainder of the driving phase divided by \(2\pi\), and 3) set the amplitude of the element to ON (\(P(\mathbf{r}_{t})=1\)) when the remainder is from zero to \(\pi\) and set the amplitude OFF (\(P(\mathbf{r}_{t})=0\)) otherwise:
\[P(\mathbf{r}_{t})=\left\{\begin{array}{ll}1,&2n\pi\leq\theta(\mathbf{r}_{t})<(2n+1) \pi\\ 0,&(2n+1)\pi\leq\theta(\mathbf{r}_{t})<2(n+1)\pi.\end{array}\right.,n=0,1,2,\ldots \tag{2}\]
Rule 1) above is considered effective in creating a strong sound field because, in practice, each element cannot be regarded as completely nondirectional, and its ultrasound emission to the front is the strongest. In the following parts of the paper, we describe our investigations of the spatial properties of the ultrasound field generated according to this method via numerical and real-environment experiments.
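To make the design procedure of Eqs. (1) and (2) concrete, the following short NumPy sketch computes the driving phase over a discretized aperture and converts it into a binary on/off amplitude pattern. The grid spacing, aperture size, focal depth, and all variable names (`src`, `theta`, `mask`, etc.) are illustrative assumptions of ours, not the authors' fabrication data; the paper's own simulations were performed in MATLAB.

```python
import numpy as np

# Illustrative parameters (assumptions, not the authors' exact values)
c = 343.0                              # speed of sound in air [m/s]
f = 40e3                               # driving frequency [Hz]
k = 2.0 * np.pi * f / c                # wavenumber
focus = np.array([0.0, 0.0, 0.15])     # intended focus, 150 mm above the aperture

# 1 mm grid over a 370 mm x 290 mm aperture lying in the z = 0 plane
xs = np.arange(-0.185, 0.185, 1e-3)
ys = np.arange(-0.145, 0.145, 1e-3)
X, Y = np.meshgrid(xs, ys)
src = np.stack([X, Y, np.zeros_like(X)], axis=-1)   # source point positions r_t

# Eq. (1): phase delay to the focus; alpha chosen so that the aperture point
# closest to the focus has zero phase (conversion rule 1).
theta = k * np.linalg.norm(src - focus, axis=-1)
theta -= theta.min()

# Eq. (2): open (1) where the arrival is nearly in phase, blocked (0) otherwise.
mask = (np.mod(theta, 2.0 * np.pi) < np.pi).astype(float)
```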
## III Numerical experiments
### Calculation of acoustic wave convergence with amplitude FZP
First, we evaluated the focusing performance of FZPs attached to an in-phase planar wave source. In the real-environment experiments described in the following section, we utilized airborne ultrasonic phased arrays with each element driven in-phase as a plane wave source. Therefore, the size of the plane wave source in the numerical simulations was set to 370 mm \(\times\) 290 mm, which is approximately equivalent to that of the actual phased arrays. We assumed that each point of the amplitude pattern on the FZP was an omnidirectional point source. A free-field boundary condition and no dissipation of the medium in the calculation domain of ultrasound propagation were also assumed. Under these conditions, the ultrasound field \(P_{f}(\mathbf{r})\) was calculated using the wave superposition principle:
\[P_{f}(\mathbf{r})=\sum_{t=1}^{N}P(\mathbf{r}_{t})\frac{e^{-\mathrm{j}k\left|\left|\mathbf{r}-\mathbf{r}_{t}\right|\right|}}{\left|\left|\mathbf{r}-\mathbf{r}_{t}\right|\right|}, \tag{3}\]
where \(\mathrm{j}=\sqrt{-1}\) denotes the imaginary unit and \(N\) denotes the number of source points. Here \(t=1,\ldots,N\) denotes the index of source locations. All simulations are performed using the above equation.
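A minimal sketch of the superposition of Eq. (3) is shown below, reusing the `src`, `mask`, `k`, and `focus` arrays defined in the previous snippet. The observation line along the axis through the intended focus is an arbitrary choice for illustration; the authors' actual MATLAB simulations evaluated full planes.

```python
# Point-source superposition of Eq. (3) over the aperture grid.
pts = src.reshape(-1, 3)
amp = mask.reshape(-1)

def field(r, pts, amp, k):
    """Complex pressure at observation point r from weighted monopole sources (free field)."""
    d = np.linalg.norm(r - pts, axis=-1)
    return np.sum(amp * np.exp(-1j * k * d) / d)

# Example: amplitude profile along the axis passing through the intended focus
z_axis = np.linspace(0.05, 0.40, 200)
on_axis = np.array([abs(field(np.array([0.0, 0.0, z]), pts, amp, k)) for z in z_axis])
```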
Two types of amplitude FZPs were designed in line with the above design criteria using Eq. (2) to converge sound around a focal point located apart from the plane wave source by 150 mm and 400 mm, respectively. The spatial resolution of the wave field calculation area was also set to 1 mm. The ultrasonic frequency was set to 40 kHz, which is widely utilized in current mid-air ultrasound applications. The FZP patterns were calculated with a spatial resolution of 1 mm, less than 1/4 of the wavelength. Figure 2 illustrates a simulation result of the ultrasonic sound field including the focal point for each FZP. The left column of the figures indicates the values of \(P(\mathbf{r}_{t})\). We adopted MATLAB for all numerical simulations in this study. The sound field generated by the designed FZPs indicates successful formation of an ultrasound focus for each case. Concentric acoustic emissions outside the focal region were observed for both FZPs in the \(xy\)-plane amplitude simulations.
### Focus formation by FZP using a plane wave source comprising multiple ultrasonic transducers
As mentioned in the introduction, we consider forming the vibrating surface by arranging cylindrical aerial ultrasonic transducers in a two-dimensional grid; such transducers are commercially available and have been utilized in several preceding studies. In this second numerical simulation, we assumed that each transducer could be modeled as a set of infinitesimal point sources uniformly distributed on its vibrating surface of 10 mm diameter, which is equivalent to that of the transducers utilized in the following experiments. The arrangement of the transducers in the simulation was set identical to the real devices employed in the real-environment experiment. The FZP pattern was superimposed onto the transducer array, in which the driving signals of all transducers were identical. As with the previous simulations, the spatial resolution of the source and FZP plane was set to 1 mm. For the fidelity of the simulations with respect to the real transducer arrangement on the phased arrays, periodic gaps were created between the cylinders and some regions were set where transducers were not mounted, which corresponded to the screws in the actual devices utilized in the experiments. The transducer arrangement in the \(xy\)-plane in the numerical and real-world experiments is depicted in Fig. 3. Figure 4 illustrates the calculated source amplitude distributions and generated sound fields. The left column of the figures shows the amplitude patterns of the ultrasound source \(P(\mathbf{r}_{t})\), comprising multiple ultrasound transducers partially covered by the FZPs. In this simulation, all emissions were in the same phase and FZP was
modeled as complete ultrasound mask with no thickness. The results demonstrate that adequate focusing was achieved with a periodically perforated emission plane comprising a set of in-phase-driven transducers. Weak grid-like amplitude patterns that were superimposed on the focus in the \(xy\)-plane simulation are observed. These patterns were not observed in the amplitude distributions generated from the previous simulations, where no periodical source gaps were considered. Therefore, generation of these patterns can be ascribed to the periodical defects in the emission plane of the wave source under the FZPs. Apart from that, the generated patterns with these two simulations have great similarity to one another.
Next, we evaluated the focal formation by the conventional phase controlling method of transducer arrays, for the relative assessment of the focusing performances of the FZP-based methods. The arrangement and amplitudes of the transducers were set identical to those in the previous simulation. The output phase of each transducer was calculated in line with Eq. (1) with \(\alpha=0\), that is, \(P(\mathbf{r}_{t})\) in Eq. (3) was determined as \(P(\mathbf{r}_{t})=e^{\mathbf{j}\theta(\mathbf{r}_{t})}=e^{\mathbf{j}k||\mathbf{ r}_{t}-\mathbf{r}_{f}||}\). The output amplitudes of the sources were all identical.
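In the framework of the earlier snippets, this phase-controlled baseline changes only the source weights: each point carries a unit-amplitude complex weight \(e^{\mathrm{j}k||\mathbf{r}_{t}-\mathbf{r}_{f}||}\) instead of a binary value. The sketch below reuses `pts`, `focus`, `k`, `field`, and `z_axis` from above and is only illustrative; in particular, because its sources are 1 mm point samples rather than 10 mm transducers, it does not reproduce the grating lobes discussed next.

```python
# Phase-controlled baseline: every source point radiates with the driving phase of Eq. (1).
amp_pc = np.exp(1j * k * np.linalg.norm(pts - focus, axis=-1))
on_axis_pc = np.array([abs(field(np.array([0.0, 0.0, z]), pts, amp_pc, k)) for z in z_axis])
```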
Figure 5 illustrates the calculation results of the amplitude distribution by the phase-controlled transducer arrays. With the focal depth of 150 mm, prominent grating lobes are observed at a distance from the focal point, instead of the widespread granular amplitude patterns observed in the case with FZPs. This is owing to the fact that the spatial resolution of the phase distribution control of the radiating surface depends on the size of the transducer elements, and therefore cannot achieve a phase distribution finer than half a wavelength. The spatial distributions of FZPs we create in the experiment are finer than half the wavelength, and this mitigated the generation of the grating lobes. In the case where the focal depth is 400 mm, the grating lobes are located further from the focus, and cannot be seen in the region of simulation.
The phased array is expected to be more efficient than FZPs in forming a focal point because, unlike with FZPs, the radiating surface is not shielded. As indicated in the next section, numerical simulations and real-environment experiments both show that the ultrasound intensity at the focus was lower when using FZPs than when performing phase control over all transducers, when the output of each element was the same.
Figure 3: Transducer arrangement in the numerical simulations and measurement experiments. Yellow regions indicate ultrasound emitting areas.
Figure 2: Calculated FZP patterns (left column), normalized ultrasound amplitude fields by a continuous planar wave source under the FZPs in numerical simulations in the focal plane parallel to the \(xy\)-plane (Middle column), and that in the \(xz\)-plane (right column), for the focal depth of 150 mm and 400 mm, respectively. The coordinate system is as defined in the Fig. 8 illustrating the experiment setup.
### Lateral movement of FZPs over transducers driven in phase
We numerically investigated how the lateral movement of the FZPs over the transducers affects the focusing. In the simulations, we calculated the focal amplitudes generated by the in-phase transducers and the FZP over them laterally shifted in the \(x\)-direction by 0, 50, 100, 150, 200, and 250 mm. Figure 6 depicts an example of an amplitude pattern and its corresponding generated ultrasound field for an FZP with a focal depth of 150 mm shifted by 50 mm in the \(x\)-direction. A lateral shift of the generated focus corresponding to the lateral FZP shift is observed. The figure also includes a graph that shows the relative focal amplitude with respect to the lateral shift of the FZPs. The simulation results show that lateral shifts of less than 10 mm do not significantly affect the focal amplitude, whereas shifts greater than half the aperture dimension drastically reduce the
Figure 4: Simulation results for the case where FZPs are located on the transducer array driven with synchronized uniform phase delays. Calculated amplitude patterns (left column), normalized amplitude fields in numerical simulations in the focal plane parallel to the \(xy\)-plane (middle column), and that in the \(xz\)-plane (right column), for the focal depths of 150 mm and 400 mm, respectively.
Figure 5: Simulation results for the case where the transducer array is driven with individual element phase delays for forming an ultrasound focus. Calculated amplitude patterns (left column), normalized amplitude fields in numerical simulations in the focal plane parallel to the \(xy\)-plane (middle column) and that in the \(xz\)-plane (right column), for the focal depths of 150 mm and 400 mm.
focal amplitude. It is also observed that the focal amplitude attenuation per unit length of FZP shift is smaller for greater focal depths.
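This lateral-shift study can be reproduced, under the same illustrative assumptions as the previous snippets, by translating the plate pattern over the fixed aperture: the shifted plate presents, at each source point, the pattern originally designed around a point displaced by the same amount. The shift values and the zero-phase reference below follow that reasoning and are not taken from the authors' code.

```python
# Lateral FZP shift: evaluate the mask translated by dx and the amplitude at the shifted focus.
z_f = focus[2]
shifts = np.array([0.0, 0.05, 0.10, 0.15, 0.20, 0.25])   # lateral shifts in x [m]
focal_amp = []
for dx in shifts:
    f_shift = focus + np.array([dx, 0.0, 0.0])
    # Same zero-phase reference as the unshifted design (distance z_f directly below the focus)
    th = k * (np.linalg.norm(src - f_shift, axis=-1) - z_f)
    m = (np.mod(th, 2.0 * np.pi) < np.pi).astype(float).reshape(-1)
    focal_amp.append(abs(field(f_shift, pts, m, k)))
focal_amp = np.array(focal_amp)
focal_amp /= focal_amp[0]   # normalized by the unshifted case, as in Fig. 6
```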
## IV Physical experiments
### Ultrasound focusing by the binary hologram
We fabricated two types of FZPs made of acrylic plate with 2 mm of thickness, which have 150 mm and 400 mm of focal depth, respectively (Fig. 7). The acrylic sheets were cut out by a laser cutter (VD-60100, COMMAX, Co., Ltd., Japan) based on CAD data with the geometric positioning and size of each component defined with spatial quantization of 1 mm.
In the experiment, we utilized custom-made 40 kHz phased arrays [40] with all their transducers driven in-phase as a wave source, on which the fabricated FZP was placed to form an ultrasound focus. The custom-made phased array unit utilized in the experiment contained 249 40 kHz ultrasound transducers (T4010A1, Nippon Ceramic, Co., Ltd., Japan) arranged in a two-dimensional lattice. We employed four units of the phased arrays, forming an ultrasound emitting aperture of 374.0 mm \(\times\) 292.8 mm in the experiment, which corresponds to the spatial configuration of the numerical experiments. For ultrasound scanning measurement, a standard microphone system (1/8-in. microphone, type 4138-A-015; pre-amplifier, type 2670; conditioning amplifier, type 2690-A; all products of Hottinger, Bruel and Kjaer, Denmark) was utilized. The microphone was mounted on the tip of three-dimensional linear actuators (type ICSB3, product of IAI, Japan). The microphone on the actuators scanned the sound field to capture the acoustic pressure distributions in designated regions. The entire experimental setup and coordinate system in the experiments are illustrated in Fig. 8.
The scanning was completed with a spatial interval of 5 mm for the \(x\)- and \(y\)-axes, and 10 mm for the \(z\)-axis illustrated in Fig. 8. The ultrasound output of the arrays was adequately weakened from its possible maximum to avoid measurement saturation of the microphone. Across all the
Figure 6: Examples of simulation results for the case where FZPs shifted in \(x\)-direction are located on the transducer array driven with synchronized uniform phase delays. Calculated amplitude pattern (left), normalized amplitude field in the focal plane parallel to the \(xy\)-plane (middle), and relative focal amplitudes normalized by the case with no FZP shift (right).
Figure 7: Fabricated FZPs with focal depths of (a) 150 mm and (b) 400 mm.
measurements, the output intensity of the transducers was set identical. For each measurement, after manually setting the coordinate origin at the center of the arrays, the center of the coordinate system in the \(xy\)-plane was slightly adjusted, within a range of less than 2 mm, so that it yielded the maximum observed pressure. In this manual calibration, the origin of the \(z\)-axis was set to the surface of the phased array transducers. While keeping the transducer output power unchanged, we also sequentially measured the sound field created by the phase-controlled transducers at the corresponding focal depths for comparison of the focusing performances of the two methods.
Figure 9 illustrates the results of the sound field measurements with the FZPs. As in the simulation experiment, it was confirmed that the generation of grating lobes was suppressed by the FZP with a focal depth of 150 mm. Figure 10 illustrates spatially finer measurement results with 2 mm scanning intervals along the \(x\)- and \(z\)-axes, together with the amplitude distributions obtained by the numerical simulation, each normalized by its maximum value. The graphs show that the spatial profiles of the measured amplitude distributions agree well with those predicted by the numerical simulations. The sizes of the focus were comparable between the FZP and phase-controlled transducer cases.
At the same time, the focal amplitudes with the FZPs were clearly reduced compared with those generated by the phase-controlled transducers (Table 1). Although the focal power efficiency was reduced compared with the conventional phased array method, we consider it still sufficient for most current mid-air ultrasound uses. We confirmed that aerial vibrotactile presentation, one such prevalent application, can be achieved. By applying amplitude modulation at 150 Hz, as is often done in airborne ultrasound presentation systems, distinct pinpoint tactile stimuli could be felt by placing bare hands over the ultrasound focus generated by the FZPs at the maximum possible ultrasound output from the transducers. From this result, it is expected that FZP-based ultrasound manipulation can be applied in other scenarios that have been demonstrated by preceding studies.
### The mobility of the focus in binary holograms and multi-focusing
Furthermore, we placed an FZP with a focal depth of 150 mm on the in-phase transducer array. We shifted the center of the FZP away from the center of the arrays, as in the previous numerical simulations. We performed the sound field measurement as in the previous experiments and observed that the focus moved to the same position as the center of the FZP. Then, we placed another 150 mm-focal-depth FZP on that FZP and observed that two ultrasound foci were simultaneously generated at positions corresponding to the centers of both FZPs (Fig. 11).
As observed in the numerical simulations, the experiment also demonstrates that focal movement is realized by only a translational movement of the FZP, which is much easier than moving the entire sound source toward the desired focal position. Simultaneously, the power of the foci was approximately halved compared with the single-focus case. Regarding the case with two foci, the relative power of unwanted acoustic emission around the focal region was stronger than that observed in the single-layer-FZP case. For this case, vibrotactile stimulation was faintly felt on the palm with 150 Hz amplitude modulation applied at the maximum possible ultrasound output from the transducers.
## V Discussions
### Focusing performance for varied focal depths
In both simulations and measurements, it is observed that the main lobe width of the ultrasound foci was comparable between the cases with FZPs and phase-controlled transducers. Based on this fact, the relative focusing efficacy of the FZP compared to that of the phase-controlled transducer array can be roughly estimated from the focal amplitude ratio in Table 1, because the amplitude distributions around the foci were almost identical for both cases. Following the preceding study, which assumes that the focal sound intensity is proportional to the square of the focal amplitude [34], the relative focusing efficacy of the FZP with the focusing depth of 150 mm was 34.9 % in numerical simulation and 27.1 % in measurement, compared with that of phase-controlled transducers. With the focusing depth of 400 mm, the relative focusing efficacy of the FZP compared to the phase-controlled transducers
| Focal Depth | Amplitude Ratio (Simulation) | Amplitude Ratio (Measurement) |
|---|---|---|
| 150 mm | 59.1% | 52.1% |
| 400 mm | 38.2% | 33.3% |

Table 1: Focal amplitude ratio of the FZP case to the phase-controlled transducer case.
Figure 8: Coordinate system in the experiments and experimental setup with four units of ultrasound phased arrays as planar wave sources, covered by the FZP.
was 14.6 % in numerical simulation and 11.1 % in measurement. In the preceding studies, the phase-controlling version of the FZP, which causes no ultrasound blockage on the plate was studied [34; 41]. Such devices have a physical effect similar to the phase-controlled transducer arrays in generating ultrasound foci, showing better focusing efficacy. In that study, the focusing efficacy of the phase-controlled FZP was approximately four times as great as the amplitude-controlled FZP, which was similar to the results in our experiments with the focusing depth of 150 mm. At the same time, the past research did not handle the issue of grating lobes because no periodic amplitude gaps in the source plane were taken into account.
Figure 9: Measured ultrasound amplitude distribution when focus was formed by the FZPs (left column) or the phase controlling of transducer arrays (right column), for the focal depth of 150 mm ((a), (b), (e), and (f)) and 400 mm ((c), (d), (g), and (h)).
Thus, the relative focusing efficacy with a longer focal depth gets smaller, which can be attributed to the fact that the "rings" on the FZP get more distant from one another for a longer focal depth, resulting in a smaller number of rings existing in a finite FZP aperture. For the most extreme case where the focus is fairly far from the FZP, there may be only one open circle in the FZP. In that case, the resulting ultrasound field is equivalent to that with a windowed planar emission, where no longer proper focusing is expected. In the same situation, a driving phase distribution on the emission plane is realized with the phase-controlled transducer case, which is still capable of forming a focus.
The generation of grating lobes by the phase-controlled transducers was suppressed with the use of FZP, at the cost of unintended ultrasound emission around the focal region. This is because of the emission pattern with the FZP being finer than the wavelength, unlike that with phase-controlled transducers inevitably becoming coarser owing to the transducer size. However, such strong grating lobes are not observed in the case with the focal depth of 400 mm in the measurement area. The grating lobes exist in a region more apart from the focus with a greater focal depth. Therefore, in the cases where the grating lobes are so far from the focus that they can be neglected, the phase-controlled scheme outperforms the FZP-based method owing to less unintended acoustic emissions.
The intervals of rings on the FZP also depend on the wavelength. As the wavelength decreases, the central circle becomes smaller, and more rings exist in the FZP, which is expected to improve sound collection performance. At the same time, it becomes much more difficult to decrease the dimension of the phase-controlled transducers to realize proper focusing. Therefore, FZP-based focusing scheme will be more suitable for ultrasound emission with a higher frequency.
### Effect of FZP thickness
In the numerical experiments, the thickness of the FZPs was not considered. In practice, direct waves to the focus are expected to be partially blocked off by the thickness of the FZPs. The smaller the elevation angle from the sound source to the focus, the greater the portion of the sound waves that is blocked, because the sides of the FZP rings serve as walls. We utilized 2 mm-thick acrylic plates for fabricating FZPs in our experiments, with which the blockage percentage by the FZP walls was estimated to be several percent compared to the case of an FZP with no thickness. The error in the focal amplitude ratio of the FZP cases to the phase-controlled transducer cases
Figure 10: Measured and simulated acoustic amplitude distributions normalized by the maximum amplitude of each graph, for the focal depth of 150 mm (left column) and 400 mm (right column), on the line parallel to the \(x\)-axis at the focal depth (upper row) and on the \(z\)-axis (lower row). “PC” indicates the case with phase-controlled transducers.
between the numerical and experimental results may partially be explained by this effect.
With an experiment where two FZPs were stacked to create two individual foci, the amplitudes of the foci were less than that of the single focus created with one FZP. This is presumably because one FZP blocked the positive contribution to the focus from the other FZP, and the valid area of the planar sound source became considerably smaller. Another factor for this power reduction may be the total thickness of the two FZP layers being 4 mm, which might have caused more blocking-off effect of direct waves and complicated sound reflections between the layers.
### Variations of driving frequencies
FZP-based focusing can achieve finer spatial resolution than the conventional phased-array technique. This resolution gap between the two methods is more prominent for a higher ultrasound frequency, because of the difficulty in downsizing transducers of the corresponding resonant frequency. However, the spatial resolution of FZP patterning can be readily improved, as there have been a great bunch of minute machining techniques including the laser cutting. In addition, a higher frequency source results in more rings created in FZPs, which will contribute to a more proper focusing. Therefore, our method can be effectively implemented as a form of miniature emission plane with a higher frequency source, as well as enlarged mid-air-ultrasound workspaces.
## VI Frequency mismatches and misalignment between sources and FZPs
An FZP for generating an ultrasound focus is fabricated for a fixed driving frequency and focal depth. Therefore, the frequency of the ultrasound sources should match the one assumed in the design of the FZP for the intended focusing. We numerically investigated how a frequency mismatch between the sources and the FZP affects the focusing performance. Figure 12 shows the normalized ultrasound amplitude around the focal region. The simulation results indicate that the resulting focal depth is affected by the frequency mismatch. At the same time, the focal amplitude at the intended depth becomes highest when no frequency mismatch between the FZP and the sources occurs. On the other hand, the amplitude peak value at the focus with shifted depths
Figure 11: Measured acoustic amplitude distribution with a shifted single FZP (upper figure) and two layers of FZPs (lower figure).
Figure 12: Numerically calculated amplitude distribution on the line parallel to the \(x\)-axis at the focal depth (upper) and on the \(z\)-axis (lower) by varying the driving frequency of the transducers. The graphs correspond to the case with the FZP fabricated for 40 kHz ultrasound focusing at 400 mm apart from the FZP.
becomes higher as the driving frequency is lowered. This is presumably because the lowered driving frequencies cause the foci to be formed closer to the FZP, where the attenuation of ultrasound emissions becomes accordingly lower.
Next, we discuss the effect of vertical misalignment of the FZP from the source surface. As the FZP is located farther from the source surface, the input wave to the FZP layer ideally becomes more similar to perfect plane waves for an infinitely large source aperture. In this case, the effect of vertical misalignment just appears as a focal depth shift according to the distance between the source and the FZP when they are located parallel to each other. However, when the source aperture is limited, a smaller portion of the emitted wave travels through the FZP when they are located farther apart from each other. This can result in a lowered focal amplitude in addition to the focal depth shift.
## VII Conclusions
In this paper, a method of controlling ultrasound fields using an FZP amplitude binary mask on a plane wave source was developed. In the experiments, a 2 mm thick acrylic plate cut out by a laser cutting machine was utilized as a binary mask. We evaluated focusing performances with our proposed method, where ultrasonic convergence occurs to the same degree of spatial resolution as in the case with a conventional method using phase-controlled transducers. As a favorable side effect, FZP suppressed grating lobes observed with the conventional method. We also determined that shifting the FZP over a fixed source can move the focus. Furthermore, multi-focusing was achieved by using layers of multiple FZPs with their centers corresponding to the focal positions.
The configuration of our method is very simple. A planar source driven by a common voltage pulse and the FZPs are both inexpensive, thin, and readily implemented and upsized. Our method is suitable for mid-air ultrasound control systems with large apertures or systems using ultrasound sources with a higher frequency. Implementation of large ultrasound apertures will lead to new large-scale mid-air ultrasonic application systems that employ entire walls and ceilings as ultrasonic emission planes.
Subsequent challenges include electronic control of the amplitude distribution over the source. We consider this scenario realizable, as it only needs on-off binary control of emission patterns, instead of inter-element phase control with a temporal precision of less than one-tenth of the ultrasound period. We believe that such an amplitude-controlling mechanism requires less effort to realize than upsizing the phased array by simply consolidating a large number of array units with a tremendous number of individually phase-controlled transducers.
###### Acknowledgements.
This study was supported by JST, PRESTO Grant Number JPMJPR21R9, Japan.
## Author Declarations
The authors have no conflicts of interest.
## Author Contributions
Masatake Kitano: Conceptualization (equal); Formal Analysis (lead); Investigation (equal); Methodology (supportive); Writing - original draft (lead); Keisuke Hasegawa: Conceptualization (equal); Formal Analysis (supportive); Investigation (equal); Methodology (lead); Writing - review and editing (lead); Supervision (lead); Funding Acquisition (lead); Resources(lead); Project Administration(lead);
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2301.01206 | Speed up the inference of diffusion models via shortcut MCMC sampling | Diffusion probabilistic models have generated high quality image synthesis
recently. However, one pain point is the notorious inference to gradually
obtain clear images with thousands of steps, which is time consuming compared
to other generative models. In this paper, we present a shortcut MCMC sampling
algorithm, which balances training and inference, while keeping the generated
data's quality. In particular, we add the global fidelity constraint with
shortcut MCMC sampling to combat the local fitting from diffusion models. We do
some initial experiments and show very promising results. Our implementation is
available at https://github.com//vividitytech/diffusion-mcmc.git. | Gang Chen | 2022-12-18T07:37:26Z | http://arxiv.org/abs/2301.01206v1 | # Speed up the inference of diffusion models via shortcut MCMC sampling
###### Abstract
Diffusion probabilistic models have generated high quality image synthesis recently. However, one pain point is the notorious inference to gradually obtain clear images with thousands of steps, which is time consuming compared to other generative models. In this paper, we present a shortcut MCMC sampling algorithm, which balances training and inference, while keeping the generated data's quality. In particular, we add the global fidelity constraint with shortcut MCMC sampling to combat the local fitting from diffusion models. We do some initial experiments and show very promising results. Our implementation is available at [https://github.com/vividitytech/diffusion-mcmc](https://github.com/vividitytech/diffusion-mcmc).
## 1 Introduction
Leveraging deep generative models to generate high quality images has become the dominant approach in the machine learning community. For example, generative adversarial networks (GANs) [1], PixelCNN [2] and variational autoencoders [3] have shown impressive image and speech synthesis results. Diffusion probabilistic models [4] have recently gained popularity across a variety of applications in the computer vision and machine learning domains. They also obtain state-of-the-art Inception and FID scores [5; 6; 7] on image generation, as well as the best results on density estimation benchmarks [8]. Diffusion models are well defined under the Markov chain assumption and are efficient to train, but it is time consuming to generate high quality images, which may take thousands of steps to the best of our knowledge. This paper presents an approach to speed up the inference of diffusion models. Instead of thousands of steps to produce samples, we constrain the number of inference steps, which can be randomly sampled from these thousand steps (we call this shortcut MCMC), and then generate images to match the data. Both denoising diffusion probabilistic models (DDPMs) and variational diffusion models (VDMs) train similar denoising deep nets, which focus on local model characteristics and thus need long sampling chains to produce high quality images.
Compared to VDMs, we introduce shortcut MCMC sampling and add a fidelity term to the loss function so that the final synthesized image matches the original data. This new fidelity term acts as a global constraint and quality control while generating images in a shortcut manner. Thus, our method can balance the training and inference stages, and mitigates the inference burden significantly. We present some initial analysis and show promising results on a synthetic dataset.
## 2 Background
The diffusion models [4; 5] are composed of forward process and reverse (backward) process. Given the data \(x_{0}\sim q(x_{0})\), the forward (diffusion) process follows a Markov chain
\[q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\alpha_{t}\mathbf{x}_{0},\sigma_{t}^{2}\mathbf{I}),\quad q(\mathbf{x}_{1:T}|\mathbf{x}_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}) \tag{1}\]
where \(\alpha_{t}=\sqrt{1-\sigma_{t}^{2}}\), and \((\alpha_{t},\sigma_{t})\) is the signal and noise pair at time step \(t\). The Markov chain \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) is Gaussian
\[q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\alpha_{t|t-1}\mathbf{x}_{t-1},\sigma_{t|t-1}^{2}\mathbf{I}) \tag{2}\]
where \(\alpha_{t|t-1}=\alpha_{t}/\alpha_{t-1}\) and \(\sigma_{t|t-1}^{2}=\sigma_{t}^{2}-\alpha_{t|t-1}^{2}\sigma_{t-1}^{2}\) according to VDMs [8]. The reverse (or backward) process is to learn \(p(\mathbf{x}_{0})=\int p(\mathbf{x}_{0:T})d\mathbf{x}_{1:T}\), where \(p(\mathbf{x}_{T})\) is Gaussian \(\mathcal{N}(\mathbf{x}_{T};0,\mathbf{I})\):
\[p(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\mu_{\theta}( x_{t},t),\sigma_{\theta}(x_{t},t)),\quad p(\mathbf{x}_{0:T})=p(x_{T})\prod_{t= 1}^{T}p(\mathbf{x}_{t-1}|\mathbf{x}_{t}) \tag{3}\]
Fig 1 shows examples of increasing the noise level over the original data. By optimizing the variational lower bound, VDMs [8] choose the conditional model distributions below
\[p(\mathbf{x}_{t-1}|\mathbf{x}_{t})=q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{ x}_{0}) \tag{4}\]
which can be derived via the KL divergence. In the inference stage, we can replace \(\mathbf{x}_{0}\) with its prediction \(\hat{\mathbf{x}}_{0}(x_{t};t)\) from the denoising model.
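The forward process above is straightforward to implement. The following short NumPy sketch fixes an \((\alpha_{t},\sigma_{t})\) pair satisfying \(\alpha_{t}=\sqrt{1-\sigma_{t}^{2}}\) and draws \(\mathbf{x}_{t}\) from Eq. (1); the concrete schedule \(\sigma_{t}^{2}=t/T\) and the function names are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def noise_schedule(t, T):
    """Assumed (alpha_t, sigma_t) pair with alpha_t = sqrt(1 - sigma_t^2);
    the specific choice sigma_t^2 = t / T is only for illustration."""
    sigma = np.sqrt(t / T)
    return np.sqrt(1.0 - sigma ** 2), sigma

def q_sample(x0, t, T, rng=np.random.default_rng(0)):
    """Forward process of Eq. (1): x_t = alpha_t * x0 + sigma_t * eps."""
    alpha, sigma = noise_schedule(t, T)
    eps = rng.standard_normal(x0.shape)
    return alpha * x0 + sigma * eps, eps
```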
## 3 Model
In this section, we will introduce our approach based on the variational lower bound and the shortcut MCMC sampling to skip multiple steps to speed up inference. We consider the finite time steps and it can be easily extended to continuous scenario.
### Objective lower bound
In the case of finite \(T\) steps, we maximize the variational lower bound of marginal likelihood below
\[\mathcal{L}(\mathbf{x}_{0};\theta)=E_{q(z|\mathbf{x})}[\log p(\mathbf{x}_{0}|z)]-D_{KL}(q(\mathbf{x}_{T}|\mathbf{x}_{0})||p(\mathbf{x}_{T}))-\sum_{t=2}^{T}D_{KL}(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})||p(\mathbf{x}_{t-1}|\mathbf{x}_{t})) \tag{5}\]
where \(z=(x_{1},x_{2},...,x_{T})\); for a detailed derivation, please refer to Appendix A. Compared to VDMs, we have an additional fidelity term \(\mathbb{E}_{q}\log p(\mathbf{x}_{0}|z)\), which maps the latent (prior) Gaussian noise to the data distribution. This is similar to GANs, which can generate data from a latent distribution. However, for a diffusion model, this mapping depends on the hyperparameter \(T\) and may take thousands of steps (e.g. \(T=1000\)) to produce synthesized data. In other words, it is 3 orders of magnitude slower than GANs when both use a similar deep neural network architecture in the inference stage.
As for the diffusion loss, it leverages KL-divergence to match \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) with the forward process posterior \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},x_{0})\). Since both the forward posterior and \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) are Gaussians, with same variance assumption, then the KL loss can be minimized using the deep denoise model
\[D_{KL}(q(\mathbf{x}_{s}|\mathbf{x}_{t},\mathbf{x}_{0})||p(\mathbf{x}_{s}|\mathbf{x}_{t}))=\frac{1}{2}\left(\frac{\alpha_{s}^{2}}{\sigma_{s}^{2}}-\frac{\alpha_{t}^{2}}{\sigma_{t}^{2}}\right)||\epsilon-\hat{\epsilon}_{\theta}(\mathbf{x}_{t},t)||^{2} \tag{6}\]
Figure 1: The noised data with increasing noise level until random Gaussian distribution.
here \(0<s<t\leq T\), and \((\alpha_{s},\sigma_{s})\) and \((\alpha_{t},\sigma_{t})\) are signal and noise pairs respectively at time step \(s\) and \(t\).
In the following part, we will focus on the fidelity term \(\log p(\mathbf{x}_{0}|z)\), and we want the data generated from the latent space match the original data distribution.
### Shortcut MCMC sampling
The fidelity term \(\mathbb{E}_{q}\log p(\mathbf{x}_{0}|z)\) is hard to optimize, because its complexity is determined by the depth of the generative model and its neural network architecture. In the training stage, we always set a large \(T\), such as \(T=1000\). We use the forward posterior to match \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t})\). In other words, we have \(\mathcal{N}(\mathbf{x}_{t-1};\mu_{\theta}(x_{t},t),\sigma_{\theta}(x_{t},t))\) and need to recover the data step by step.
For any time step \(s\) and \(t\in[1,T]\) and \(s<t\), we have \(q(\mathbf{x}_{s}|\mathbf{x}_{t},\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{s}; \boldsymbol{\mu}_{\theta}(\mathbf{x}_{t};s,t),\sigma_{\theta}^{2}(s,t)\mathbf{ I})\), with mean and variance as below
\[\boldsymbol{\mu}_{\theta}(\mathbf{x}_{t};s,t)=\frac{\alpha_{t|s}\sigma_{s}^{2} }{\sigma_{t}^{2}}\mathbf{x}_{t}+\frac{\alpha_{s}\sigma_{t|s}^{2}}{\sigma_{t}^ {2}}\mathbf{x}_{0},\quad\sigma_{\theta}^{2}(s,t)=\sigma_{t|s}^{2}\sigma_{s}^{ 2}/\sigma_{t}^{2} \tag{7}\]
Using the KL divergence, \(p(\mathbf{x}_{s}|\mathbf{x}_{t})=q(\mathbf{x}_{s}|\mathbf{x}_{t},\mathbf{x}_{ 0})\), and we need to replace \(\mathbf{x}_{0}\) with its prediction \(\hat{\mathbf{x}}_{0}(\mathbf{x}_{t},t)\) in the inference. After doing some mathematical manipulations in Appendix B, we obtain the following formula
\[p(\mathbf{x}_{s})=\alpha_{s}\mathbf{x}_{0}+\sigma_{s}\epsilon \tag{8}\]
Thus, we can sample \(\mathbf{x}_{s}\) at any time step \(s\). In the best scenario, the marginal distribution \(p(\mathbf{x}_{t})\) from the reverse process matches the forward one \(q(\mathbf{x}_{t})\). Since we have \(p(\mathbf{x}_{t})\sim q(\mathbf{x}_{t})\), we approximate \(p(\mathbf{x}_{t})\) with the same formula as in Eq. 1, and we can sample \(\mathbf{x}_{s}\) from the constructed \(\mathbf{\hat{x}}_{0}\). Since the latent variable \(z=(\mathbf{x}_{1},...,\mathbf{x}_{T})\) involves all \(T\) steps, traversing it will be time-consuming. To speed up the inference, we can skip steps to produce data while using MCMC sampling. Specifically, we randomly sample \(K\) time steps \(\{t_{1},..,t_{K}\}\) from \([1,T]\). Then we use the prediction \(\hat{\mathbf{x}}_{t_{k}}\) to get the next sample \(\hat{\mathbf{x}}_{t_{k-1}}\) according to the equation above. Thus we have the fidelity loss
\[\mathbb{E}_{q}\log p(\mathbf{x}_{0}|z)=||\mathbf{x}_{0}-\hat{\mathbf{x}_{0}} ||^{2} \tag{9}\]
where \(\hat{x_{0}}\) is predicted from the shortcut MCMC sampling. By minimizing this loss, we add the global constraint to the deep denoise models, and further improve the data approximation quality.
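A minimal sketch of this shortcut chain and the resulting fidelity term of Eq. (9) is given below, reusing `noise_schedule` from the earlier snippet. The denoiser interface `denoise_fn(x_t, t) -> x0_hat`, the all-zero initial estimate, and the descending visiting order of the sampled steps are assumptions on details the text leaves unspecified, not the authors' exact Algorithm 1.

```python
def shortcut_chain(denoise_fn, eps, steps, T):
    """K-step shortcut chain: alternately re-noise the current estimate of x_0
    (as in Eq. (8)) and refine it with the denoiser."""
    x0_hat = np.zeros_like(eps)            # initial estimate (assumption)
    for t in steps:                        # visit the sampled steps from noisiest to cleanest
        alpha, sigma = noise_schedule(t, T)
        x_t = alpha * x0_hat + sigma * eps
        x0_hat = denoise_fn(x_t, t)
    return x0_hat

def fidelity_loss(denoise_fn, x0, T=1000, K=10, rng=np.random.default_rng(0)):
    """Eq. (9): squared error between the data and its K-step reconstruction."""
    eps = rng.standard_normal(x0.shape)
    steps = np.sort(rng.choice(np.arange(1, T + 1), size=K, replace=False))[::-1]
    return np.mean((x0 - shortcut_chain(denoise_fn, eps, steps, T)) ** 2)
```

In training, this term would be added to the standard weighted noise-prediction loss of Eq. (6), so the network is supervised both locally (per step) and globally (on the reconstructed data).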
### Algorithm
We summarize our approach in Algorithm 1. Compared to DDPMs and VDMs, we add the fidelity term, which imposes a global constraint on our generated samples, and use shortcut MCMC sampling to speed up the inference.
Figure 2: The forward process over \(T\) steps and the reverse process with shortcut MCMC sampling (red line).
In the inference stage, we just sample \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\), then we sample K time steps from \([1,T]\) and sample \(\mathbf{x}_{t_{k}}\sim\alpha_{t_{k}}\hat{\mathbf{x}}_{0}+\sigma_{t_{k}}\epsilon\), where \(\hat{\mathbf{x}}_{0}\) is predicted by the denoising neural network at the previous step \(t_{k-1}\). Thus, our method has the potential to speed up inference by at least an order of magnitude.
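Since generation reuses the same K-step chain as the fidelity term, a sampling sketch under the same assumptions (and reusing `shortcut_chain` and `noise_schedule` from the previous snippets) is only a thin wrapper; the placeholder denoiser in the usage line is purely illustrative.

```python
def shortcut_sample(denoise_fn, shape, T=200, K=10, rng=np.random.default_rng(0)):
    """Generate a sample with only K denoiser evaluations instead of T."""
    eps = rng.standard_normal(shape)
    steps = np.sort(rng.choice(np.arange(1, T + 1), size=K, replace=False))[::-1]
    return shortcut_chain(denoise_fn, eps, steps, T)

# Usage with a placeholder denoiser (returns its input; for illustration only)
samples = shortcut_sample(lambda x_t, t: x_t, shape=(1024, 2))
```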
## 4 Experimental results
We did initial experiments on a synthetic dataset. In these experiments, we create a swirl dataset with 1024 points, shown in Fig 1. As for the model architecture, we use a 3-layer MLP with Fourier feature expansion as the input. We set \(K=10\) for training in all the experiments below.
In the first experiment in Fig 3, we train the model with shortcut MCMC sampling. In the inference stage, we set \(T=200\) and sample \(K=10\) time steps, then generate our results with only 10 inference steps. The result in Fig 3 shows that our approach not only converges fast, but also reconstructs the data better.
In the second experiment, we train with \(K=10\), and in inference we set \(K\) to the same value as \(T\), i.e. \(K=T=200\), for a step-by-step comparison. Fig 4 indicates that with the same number of time steps, our approach converges faster and yields better results. For example, our approach recovers the data well at \(K=100\).
## 5 Conclusion
In this paper, we propose a fast approach for diffusion models in the inference stage. To this end, we add a fidelity term as a global constraint over the diffusion models, and present a shortcut MCMC sampling method to speed up the inference. The experiments show promising results in terms of both data quality and inference speed.
## Appendix A
The maximum likelihood \(x_{0}\) is
\[\log p(x_{0}) =\log\int_{z}p(x_{0},z)=\log\int_{z}p(x_{0},z)\frac{q(z|x)}{q(z|x) }=\log\int_{z}q(z|x)\frac{p(x_{0},z)}{q(z|x)}\] \[\geq\int q(z|x)\log\frac{p(x,z)}{q(z|x)}=E_{q(z|x)}[\log\frac{p(x,z)}{q(z|x)}]=E_{q(z|x)}[\log\frac{p(x|z)p(z)}{q(z|x)}]\] \[=E_{q(z|x)}[\log p(x|z)]-E_{q(z|x)}[\log\frac{p(z)}{q(z|x)}] \tag{10}\]
Figure 3: The left column is from VDMs[8], the right column is from our approach. We use \(T=200\) in the inference stage, and K=10 to sample 10 time steps. Then we compare the corresponding 5 generated images between VDMs and our method.
Figure 4: We use \(T=200\) in the inference stage, and K=200 for the full time-step comparison. We can see that our method generates very good samples and converges faster than VDMs.
where we assume the latent \(z=(x_{1},x_{2},...,x_{T})\). Overall, we want to maximize the variational lower bound. The first term is reconstruction loss, which is our fidelity term in the paper. The second term is the KL divergence between \(p(z)\) and \(q(z|x)\), which we want to minimize.
As for the second term we can do some decomposition to get KL divergence between \(p(x_{s}|x_{t})\) and \(q(x_{s}|x_{t},x_{0})\) in the following analysis:
\[\mathbb{E}_{x_{0:T}\sim q(x_{0:T})}[\log\frac{p(x_{1:T})}{q(x_{1: T}|x_{0})}]\] \[= \mathbb{E}_{x_{0:T}\sim q(x_{0:T})}[-\log q(x_{1:T}|x_{0})+\log p (x_{1:T})]\] \[= \mathbb{E}_{x_{0:T}\sim q(x_{0:T})}\bigg{[}-\log[q(x_{T}|x_{0}) \prod_{t=2}^{T}q(x_{t-1}|x_{t},x_{0})]+\log[p(x_{T})\prod_{t=2}^{T}p(x_{t-1}|x _{t})]\bigg{]}\] \[= -D_{KL}(q(x_{T}|x_{0})||p(x_{T}))-\sum_{t=2}^{T}D_{KL}(q(x_{t-1}| x_{t},x_{0})||\log p(x_{t-1}|x_{t})) \tag{11}\]
## Appendix B
\[p(x_{s}|x_{t})=q(x_{s}|x_{t},x=\hat{x}_{\theta}(x_{t};t)) \tag{12}\]
Since the reverse process is also Gaussian, we then have
\[p(x_{s}|x_{t})=\mathcal{N}(x_{s};\mathbf{\mu_{\theta}}(x_{t};s,t), \sigma_{Q}^{2}(s,t)\mathbf{I}) \tag{13}\]
\[\mathbf{\mu_{\theta}}(x_{t};s,t) =\frac{\alpha_{t|s}\sigma_{s}^{2}}{\sigma_{t}^{2}}x_{t}+\frac{ \alpha_{s}\sigma_{t|s}^{2}}{\sigma_{t}^{2}}\mathbf{\hat{x}_{\theta}}(x_{t};t)\] \[=\frac{1}{\alpha_{t|s}}x_{t}-\frac{\sigma_{t|s}^{2}}{\alpha_{t|s} \sigma_{t}}\hat{\epsilon}_{\theta}(x_{t};t)\] \[=\frac{1}{\alpha_{t|s}}(\alpha_{t}\mathbf{x}+\sigma_{t}\epsilon) -\frac{\sigma_{t|s}^{2}}{\alpha_{t|s}\sigma_{t}}\hat{\epsilon}_{\theta}(x_{t} ;t)\] \[=\alpha_{s}\mathbf{x}+\frac{1}{\alpha_{t|s}}(\sigma_{t}\epsilon- \frac{\sigma_{t|s}^{2}}{\sigma_{t}}\hat{\epsilon}_{\theta}(x_{t};t))\] \[=\alpha_{s}\mathbf{x}+\frac{1}{\alpha_{t|s}\sigma_{t}}(\sigma_{t} ^{2}\epsilon-\sigma_{t|s}^{2}\hat{\epsilon}_{\theta}(x_{t};t))\]
When the network prediction is exact, i.e. \(\hat{\mathbf{x}}_{\theta}(x_{t};t)=\mathbf{x}_{0}\), the mean of \(p(x_{s}|x_{t})\) becomes
\[\mathbf{\mu_{\theta}}(x_{t};s,t) =\frac{\alpha_{t|s}\sigma_{s}^{2}}{\sigma_{t}^{2}}\mathbf{x}_{t}+\frac{\alpha_{s}\sigma_{t|s}^{2}}{\sigma_{t}^{2}}\mathbf{x}_{0}\] \[=\frac{\alpha_{t|s}\sigma_{s}^{2}}{\sigma_{t}^{2}}(\alpha_{t}\mathbf{x}_{0}+\sigma_{t}\epsilon_{t})+\frac{\alpha_{s}\sigma_{t|s}^{2}}{\sigma_{t}^{2}}\mathbf{x}_{0}\] \[=\frac{\alpha_{t|s}\alpha_{t}\sigma_{s}^{2}}{\sigma_{t}^{2}}\mathbf{x}_{0}+\frac{\alpha_{t|s}\sigma_{s}^{2}}{\sigma_{t}}\epsilon_{t}+\frac{\alpha_{s}\sigma_{t|s}^{2}}{\sigma_{t}^{2}}\mathbf{x}_{0}\] \[=\alpha_{s}\mathbf{x}_{0}+\frac{\alpha_{t|s}\sigma_{s}^{2}}{\sigma_{t}}\epsilon_{t} \tag{15}\]
We know that the variance at time \(s\) is \(\sigma_{\theta}^{2}(s,t)=\sigma_{t|s}^{2}\sigma_{s}^{2}/\sigma_{t}^{2}\); then, sampling from \(p(x_{s}|x_{t})=\mathcal{N}(x_{s};\boldsymbol{\mu_{\theta}}(x_{t};s,t),\sigma_{\theta}^{2}(s,t)\mathbf{I})\) gives
\[\mathbf{x_{s}} =\boldsymbol{\mu_{\theta}}(x_{t};s,t)+\sigma_{\theta}(s,t)\epsilon _{s}\] \[=\alpha_{s}\mathbf{x}_{0}+\frac{\alpha_{t|s}\sigma_{s}^{2}}{ \sigma_{t}}\epsilon_{t}+\sigma_{\theta}(s,t)\epsilon_{s}\] \[=\alpha_{s}\mathbf{x}_{0}+\frac{\alpha_{t|s}\sigma_{s}^{2}}{ \sigma_{t}}\epsilon_{t}+\frac{\sigma_{t|s}\sigma_{s}}{\sigma_{t}}\epsilon_{s}\]
Since \(\epsilon_{t}\) and \(\epsilon_{s}\) are independent Gaussian noises, when we reduce the number of steps we can merge these two independent Gaussian terms; the new variance can be formulated as:
\[(\frac{\alpha_{t|s}\sigma_{s}^{2}}{\sigma_{t}})^{2}+(\frac{ \sigma_{t|s}\sigma_{s}}{\sigma_{t}})^{2}\] \[= \frac{\alpha_{t|s}^{2}\sigma_{s}^{4}}{\sigma_{t}^{2}}+\frac{ \sigma_{t|s}^{2}\sigma_{s}^{2}}{\sigma_{t}^{2}}\] \[= \frac{\sigma_{s}^{2}}{\sigma_{t}^{2}}(\alpha_{t|s}^{2}\sigma_{s} ^{2}+\sigma_{t|s}^{2})\] \[= \sigma_{s}^{2} \tag{17}\]
We can see that \(\mathbf{x_{s}}\sim\alpha_{s}\mathbf{x}_{0}+\sigma_{s}\epsilon\).
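The identity in (17) can also be checked numerically; the schedule values below are arbitrary and only assume a variance-preserving parametrization \(\sigma^{2}=1-\alpha^{2}\).

```python
import numpy as np

# Sanity check of Eq. (17): merging the two independent Gaussian terms in x_s
# reproduces exactly the marginal variance sigma_s^2.
alpha_s, alpha_t = 0.9, 0.6
sigma_s, sigma_t = np.sqrt(1 - alpha_s**2), np.sqrt(1 - alpha_t**2)
alpha_ts = alpha_t / alpha_s
sigma_ts_sq = sigma_t**2 - alpha_ts**2 * sigma_s**2            # sigma_{t|s}^2
merged_var = (alpha_ts * sigma_s**2 / sigma_t)**2 + sigma_ts_sq * (sigma_s / sigma_t)**2
assert np.isclose(merged_var, sigma_s**2)
```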
So the most important step is to estimate \(\mathbf{x}_{0}\) accurately in the inference stage. We borrow the idea from signal decomposition. The forward process of a diffusion model adds noise to the original signal until it approximates a Gaussian distribution, while the backward process denoises the signal to recover the original data. When the data is heavily noised, the recovered \(\hat{\mathbf{x}}\) is inaccurate, but it improves with more denoising steps.
|
2310.20589 | Increasing The Performance of Cognitively Inspired Data-Efficient
Language Models via Implicit Structure Building | In this paper, we describe our submission to the BabyLM Challenge 2023 shared
task on data-efficient language model (LM) pretraining (Warstadt et al., 2023).
We train transformer-based masked language models that incorporate unsupervised
predictions about hierarchical sentence structure into the model architecture.
Concretely, we use the Structformer architecture (Shen et al., 2021) and
variants thereof. StructFormer models have been shown to perform well on
unsupervised syntactic induction based on limited pretraining data, and to
yield performance improvements over a vanilla transformer architecture (Shen et
al., 2021). Evaluation of our models on 39 tasks provided by the BabyLM
challenge shows promising improvements of models that integrate a hierarchical
bias into the architecture at some particular tasks, even though they fail to
consistently outperform the RoBERTa baseline model provided by the shared task
organizers on all tasks. | Omar Momen, David Arps, Laura Kallmeyer | 2023-10-31T16:26:36Z | http://arxiv.org/abs/2310.20589v1 | # Increasing The Performance of Cognitively Inspired Data-Efficient
###### Abstract
In this paper, we describe our submission to the BabyLM Challenge 2023 shared task on data-efficient language model (LM) pretraining Warstadt et al. (2023). We train transformer-based masked language models that incorporate unsupervised predictions about hierarchical sentence structure into the model architecture. Concretely, we use the Structformer architecture Shen et al. (2021) and variants thereof. StructFormer models have been shown to perform well on unsupervised syntactic induction based on limited pretraining data and to yield performance improvements over a vanilla transformer architecture Shen et al. (2021). Evaluation of our models on 39 tasks provided by the BabyLM challenge shows promising improvements of models that integrate a hierarchical bias into the architecture at some particular tasks, even though they fail to consistently outperform the baseline model on all tasks.1
Footnote 1: Implementation and models checkpoints can be found here: [https://github.com/OmarMomen14/structformer-babylm](https://github.com/OmarMomen14/structformer-babylm)
## 1 Introduction
Transformer-based Language Model (LM) performance is heavily influenced by three scaling factors: the number of model parameters, the pretraining dataset size, and the amount of computing. For optimal performance, all three factors must be simultaneously scaled up Kaplan et al. (2020). This scaling law has introduced several challenges in advancing research on neural language modeling. One major obstacle lies in the unequal distribution of resources across languages. Consequently, the current approach of transformer-based models falls short of achieving equally high-performance levels for models dedicated to different languages Choudhury and Deshpande (2021).
Moreover, we see a considerable difference when comparing the way LMs learn with the way humans acquire language. One difference concerns the data that is input to learning: LMs such as BERT Devlin et al. (2019), RoBERTa Liu et al. (2019) or GPT-3 Brown et al. (2020) are exposed to billions of tokens during training, far surpassing what an individual human is exposed to when learning a language Warstadt and Bowman (2022). This fundamental discrepancy raises important considerations when drawing parallels between language learning in machines and humans.
To improve the data-efficiency of LMs, one direction is to adapt the model architecture. An effective approach in this endeavor involves incorporating an inductive bias into the models' architectures, which could potentially facilitate acquiring more knowledge from the same amount of data compared to standard models. However, the specific type of inductive bias to be added is still under exploration. Recently, there have been efforts to investigate the use of syntactic hierarchical inductive biases as a potential improvement Mulligan et al. (2021); Papadimitriou and Jurafsky (2023).2
Footnote 2: Note that we don’t want to claim that humans integrate such an inductive bias and therefore can learn language with less data, compared to large LMs.
One of these potential solutions is the StructFormer architecture Shen et al. (2021), a transformer that is trained on the masked language modeling task. An additional convolutional neural network (CNN) component produces unlabeled dependency and constituency trees as a byproduct and influences the self-attention mechanism of the transformer layers. The model has demonstrated competitive results in structure induction evaluations and a decrease in perplexity over a vanilla transformer baseline Vaswani et al. (2017). However, it is an open question whether the inductive bias learned in this architecture enhances performance on downstream NLP tasks.
We pretrain the StructFormer architecture on a dataset from a different domain that had not been tested on that model before. Moreover, we use a
more sophisticated tokenizer in comparison to the most frequent words dictionary used to train the models in the original experiment. Additionally, we modify the model architecture to investigate whether injecting a hierarchical bias in the middle layers of the transformer architecture (rather than after the embedding layer) leads to improved downstream performance. Eventually, we evaluate seven model variants through the evaluation pipeline of the shared task and submit our best-performing model to the shared task challenge.
### The BabyLM Challenge
The BabyLM Challenge is a shared task with the aim of data-efficient language modeling for English. Participants pretrain a LM from scratch on data that corresponds to the amount of linguistic data available to a child. The task is a great setting for conducting our experiments. It provides us with a pretraining dataset, a thorough evaluation pipeline, and, furthermore, an environment where we can compare our models' performance to other interesting architectures from the systems participating in the shared task.
**Dataset.** The shared task is conducted in two tracks with different dataset sizes: a 100M words corpus, and a 10M words corpus as a sample of the larger corpus. The size is inspired by the assumption that children are exposed to 2M-7M words per year (Gilkerson et al., 2017). To account for the fact that children mostly interact with spoken rather than written language data, the datasets include a high proportion of transcribed data from different domains. For more details regarding the source domains, please refer to Warstadt et al. (2023).
**Evaluation.** A thorough evaluation pipeline that comprises 39 different tasks is used to evaluate every model participating in the shared task. These tasks are supposed to represent a model's performance with respect to efficiency and applied NLP, as well as cognitive science and linguistics. A group of 17 tasks, named _BLiMP_ (Warstadt et al., 2020), are performed via zero-shot predictions, while the other two groups of tasks, _SuperGLUE_ (11 tasks, Wang et al., 2019) and _MSGS_ (11 tasks, Warstadt et al., 2020), need finetuning of the submitted models for classification. Refer to Appendix A for the complete list of tasks.
## 2 Language Modeling and Hierarchical Information
Transformer LMs use syntactic information in their predictions. This has been shown by work on interpreting their internal representations as well as by investigating the grammatical correctness of their predictions (Mahowald et al., 2023; Kulmizev and Nivre, 2022). However, the vanilla transformer architecture that underlies both encoder and decoder-based LMs does not encode hierarchical information explicitly. Rather, objectives such as masked language modeling and next-token prediction are based on linear relationships between tokens. This has inspired two lines of work that incorporate hierarchical knowledge into LMs. The first group of papers introduces models in which the training objective involves syntactic labels explicitly (e.g. Dyer et al., 2016; Sartran et al., 2022), The second group introduces models in which hierarchical information is encoded implicitly as a byproduct of a language modeling task (Shen et al., 2018, 2021; Li et al., 2019; Kim et al., 2019; Choi et al., 2018; Williams et al., 2018). We consider the second group of models more relevant for this shared task since it allows us to train models with a hierarchical architecture bias on raw text data. In particular, we use the StructFormer model (Shen et al., 2021), a transformer in which one architecture component, the parser network, predicts the position of each token in the hierarchical structure of the sentence. The prediction of the parser network puts soft constraints on the attention mask of the transformer layers. The model is pretrained on the masked language modeling task, and we view two experimental contributions of Shen et al. (2021) as most relevant for using this model: First, they show that a StructFormer achieves lower perplexity on limited training data than a transformer that replaces the parser network with standard self-attention. Second, the induced hierarchical structure corresponds to unlabeled dependency trees. Concretely, evaluation on the Penn Treebank (PTB) shows that 61.6% of the undirected dependency edges are recovered. We further implement a variant of the model in which the parser network predicts hierarchical information based on hidden states that are contextualized with classical transformer layers, rather than using uncontextualized token embeddings as direct input to the parser network (Sec. 3.2.4).
## 3 Experiment
This section introduces the objectives of our experiment, a description of the model architectures, and the technical aspects of the pretraining and evaluation process.
### Objectives
In this work, we aim to validate the claim that the performance of LMs, in particular on syntax-sensitive tasks, can be improved through the implicit integration of an inductive bias into the model's architecture that yields a hierarchical structure of the tokens. Concretely, we conduct experiments towards pursuing the following three primary objectives:
1. Assess the robustness of the finding that LM performance is enhanced through the utilization of a linguistically informed model architecture (Shen et al., 2021).
2. Investigate whether the claim that transformer architectures better represent syntactic information in their middle attention layers is supported in a practical use case (Vig and Belinkov, 2019; Arps et al., 2022; Muller-Eberstein et al., 2022).
3. Develop models that surpass the performance of the baseline models offered by the organizers of the shared task.
### Methodology
In order to address the questions posed by the experiment's objectives, we train a tokenizer, develop several model variants, and perform iterations of model pretraining, finetuning, and evaluation. Due to limited resources, we only conducted our experiments on the 10M words dataset. Furthermore, from the model architectures provided by the shared task, we chose the encoder-type models due to their adaptability for integrating a hierarchical bias in the model architecture.
#### 3.2.1 Tokenizer
We use the same tokenizer across all variations of our models. Specifically, we train a Byte Pair Encoding (BPE) tokenizer (Sennrich et al., 2016; Gage, 1994) from scratch on the 10M BabyLM corpus. Since BPE tokenizers require specifying the vocabulary size as a hyperparameter before training on the corpus, we carefully determined an appropriate size. Our goal was to obtain a tokenizer that accurately represents tokens in our relatively small dataset while adhering to best practices for LMs. To achieve this, we train the tokenizer on the same corpus with different vocabulary sizes. We then observed the resulting vocabularies and identified the least frequent tokens within each (Table 1).
Based on our analysis, a vocabulary size of 32K tokens provides a fair representation relative to the corpus size for the least frequent tokens. Additionally, Geiping and Goldstein (2022) found that a BPE tokenizer with 32K tokens yielded the best results.
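As a rough illustration, the following sketch shows how such a 32k BPE tokenizer can be trained from scratch with the HuggingFace `tokenizers` library; the file name and the special-token inventory are assumptions rather than the exact configuration used for the submitted models.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Train a byte-level BPE tokenizer with a 32k vocabulary on the 10M-word corpus.
tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.BpeTrainer(
    vocab_size=32_000,
    special_tokens=["<s>", "</s>", "<pad>", "<unk>", "<mask>"],
)
tokenizer.train(files=["babylm_10M.txt"], trainer=trainer)  # hypothetical file name
tokenizer.save("babylm-bpe-32k.json")
```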
#### 3.2.2 Baseline model
To achieve objective 1, we pretrained a standard transformer architecture that we call _transformerbase_, using our custom-trained tokenizer and following the same model and training hyperparameters to minimize any effects due to uncontrolled variables.
#### 3.2.3 Hyperparameters
Due to resource limitations, and to assure fair comparisons between models, we use one set of pretraining and finetuning hyperparameters: We chose the default hyperparameters settings that were used to pretrain the shared task baseline models (Warstadt et al., 2023). In order to speed up the evaluation of finetuning tasks, we made modifications to the finetuning hyperparameters that were used to evaluate the baseline models. Our main hyperparameters are reported in Appendix B. We pretrain all models with the same batch size and the same number of steps. We use the training pipeline that Warstadt et al. (2023) introduced to train their baseline modes to minimize any effects due to uncontrolled variables.
However, one variable that could not be fixed during the experiment is the number of trainable parameters in each model. When adding a convolution parser network to a particular model, the increase in the number of parameters in that model is inevitable (parameter counts are listed in Appendix B). We are aware that this can have misleading effects on the results and conclusions, however, we
\begin{table}
\begin{tabular}{l r r} \hline \hline Vocabulary Size & Least Frequent Tokens & Frequency \\ \hline
8k & sought, arts, stolen, ATOR & 230 \\
10k & accounts, seated, lemm, feathers & 165 \\
12k & salus, gons, reun, tritres & 126 \\
16k & sophisticated, oleyball, AMES, poorly & 80 \\
32k & jets, estsu, isesselin, UCLA, mannik & 26 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Tokenizer Vocabulary Size Experiments
still think that the experiment in its current setting can show interesting behaviors that may encourage further investigation in a fully controlled experiment.
#### 3.2.4 Model Architectures
We develop two primary variants of model architectures for our experiment.
**StructFormer.** This variant (Figure 1) closely follows the architecture in Shen et al. (2021). In brief, it incorporates a parser network that consists of 4 convolution layers. The input to the parser network is token embeddings, and the output is probability distributions for dependencies between tokens. These distributions are then integrated into the multi-head self-attention mechanism of a standard transformer model. For a complete description of the architecture, we refer readers to Shen et al. (2021). We name models of this variant by the prefix _structformer_.
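The following PyTorch sketch conveys the flavor of this mechanism; it is a heavily simplified illustration, not the authors' implementation. The layer sizes, the bilinear way the parser output is turned into a dependency matrix, and the way that matrix rescales the attention weights are assumptions made purely for readability.

```python
import torch
import torch.nn as nn

class ParserNet(nn.Module):
    """4-layer CNN over token embeddings producing a soft dependency matrix.
    (The actual StructFormer parametrization is more involved.)"""
    def __init__(self, d):
        super().__init__()
        layers = []
        for _ in range(4):
            layers += [nn.Conv1d(d, d, kernel_size=3, padding=1), nn.GELU()]
        self.conv = nn.Sequential(*layers)

    def forward(self, h):                                   # h: (B, L, d)
        z = self.conv(h.transpose(1, 2)).transpose(1, 2)
        return torch.softmax(z @ z.transpose(1, 2), dim=-1)  # (B, L, L)

class SoftStructAttention(nn.Module):
    """Single-head self-attention whose weights are rescaled by the parser's
    dependency probabilities, i.e. a soft structural constraint on attention."""
    def __init__(self, d):
        super().__init__()
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)

    def forward(self, h, dep):                              # dep: (B, L, L)
        scores = self.q(h) @ self.k(h).transpose(1, 2) / h.size(-1) ** 0.5
        att = torch.softmax(scores, dim=-1) * dep           # constrain by dependencies
        att = att / att.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        return att @ self.v(h)
```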
**StructRoBERTa.** The second variant (Figure 1) is similar to the StructFormer, but instead of employing a standard transformer, it utilizes a base RoBERTa encoder (Liu et al., 2019). We modify the HuggingFace (Wolf et al., 2020) implementation, which has a few differences from the vanilla transformer implementation, mainly adding normalization and dropout layers after the embeddings layer, and also adding an additional intermediate block within each layer. The models following this architecture will be identified with the prefix _structroberta_.
**Vanilla transformer.** For transformers without parser networks, we reuse the implementation by Shen et al. (2021) which follows the standard transformer introduced by Vaswani et al. (2017), except that a layer normalization is added in front of each layer.
**Variants.** Subsequently, for each of the main variants, _structformer_ and _structroberta_, we create two sub-variants to explore a different placement of the parser network within the architecture (Figure 2). This decision is based on insights from previous experiments, which indicate that syntactic information tends to be better represented in the middle layers of the transformer (Liu et al., 2019; Vig and Belinkov, 2019; Arps et al., 2022).
In our approach, we divide the initial \(n_{context}\) layers of either the transformer or RoBERTa component in _structformer_ or _structroberta_ respectively. We label these \(n_{context}\) layers as the Front Attention Layers, while the remaining attention layers are labeled as Rear Attention Layers. The input embeddings pass through the Front component, generating embeddings that are subsequently fed into the parser network. The parser network, in turn, outputs dependency distributions that are integrated into the Rear component of the architecture.
Figure 1: StructFormer and StructRoBERTa Architectures (\(s_{1}\))
Figure 2: In-between Parser Architectures (\(s_{2}\)), dotted lines indicate intervening the encoder layers at two positions, where the parser network connects the two split parts of the encoder
To distinguish between the two sub-variants, we append the suffix \(s_{1}\) to models with the parser network before the attention layers (Figure 1), and the suffix \(s_{2}\) to models with the parser network in-between the middle attention layers (Figure 2).
To achieve objective 3, we introduce two additional models, _structroberta\({}_{s1^{\prime}}\)_ and _structroberta\({}_{s2^{\prime}}\)_, to enhance the evaluation scores so we could submit the best attainable results to the shared task. These two models are basically an upgrade in the number of convolution layers (from 4 to 6) of the parser network in _structroberta\({}_{s1}\)_ and _structroberta\({}_{s2}\)_ respectively.
## 4 Results
After completing the pretraining process, a comprehensive linguistic evaluation is conducted for the seven models under study. The shared task evaluation pipeline is used for this purpose. Detailed evaluation results are presented in Tables 2, 3, 4, and 5. We compare the scores of the following models: _transformer-base_ (TF\({}_{base}\)), _structformer\({}_{s1}\)_ (SF\({}_{s1}\)), _structformer\({}_{s2}\)_ (SF\({}_{s2}\)), _structroberta\({}_{s1}\)_ (SR\({}_{s1}\)), _structroberta\({}_{s2}\)_ (SR\({}_{s2}\)), _structroberta\({}_{s1^{\prime}}\)_ (SR\({}_{s1^{\prime}}\)) and _structroberta\({}_{s2^{\prime}}\)_ (SR\({}_{s2^{\prime}}\)). We are particularly interested in assessing to what extent the introduction of a hierarchical bias improves a model's performance on a specific task. Therefore, in addition to the scores of the individual models, we also report the differences in scores as follows:
* \(\Delta_{SFs_{1}}=Score(SF_{s1})-Score(TF_{base})\)
* \(\Delta_{SFs_{2}}=Score(SF_{s2})-Score(TF_{base})\)
* \(\Delta_{SRs_{12}}=Score(SR_{s1})-Score(SR_{s2})\)
* \(\Delta_{SRs_{1^{\prime}}}=Score(SR_{s1^{\prime}})-Score(SR_{s1})\)
* \(\Delta_{SRs_{2^{\prime}}}=Score(SR_{s2^{\prime}})-Score(SR_{s2})\)
All numerical values in the result tables are measures of accuracy unless explicitly stated otherwise.
### Pseudo-perplexity
We report the corpus-level pseudo-perplexity (\(PPPL\), Salazar et al., 2020) on the test split of the BabyLM shared task dataset3 (Table 2). \(PPPL\) is computed by masking out each token in turn and collecting the log-likelihoods. This evaluation contributes to objective 1 in our experiment. Shen et al. (2021) found that _structformer_ models incorporating hierarchical inductive bias achieve lower \(PPPL\) than their baseline _transformer_ model. We want to assess this finding on the BabyLM dataset with our custom-trained tokenizer. SF\({}_{s1}\) shows lower \(PPPL\) compared to TF\({}_{base}\), which follows the previous findings. However, the model with a parser network within the middle layers shows a higher \(PPPL\) than the baseline TF\({}_{base}\). The addition of more convolution layers at the parser network shows an improvement at SR\({}_{s2^{\prime}}\) but surprisingly shows a deterioration at SR\({}_{s1^{\prime}}\).
Footnote 3: We use Kauf and Ivanova (2023)’s implementation for computing \(PPPL\) scores and remove the 100 longest sentences from the dataset to reduce the computation time.
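The evaluation relies on Kauf and Ivanova (2023)'s implementation; the following sketch only illustrates the masking procedure just described for an arbitrary HuggingFace masked-LM/tokenizer pair, without batching or the length filtering mentioned in the footnote.

```python
import math
import torch

def pseudo_perplexity(model, tokenizer, sentences, device="cpu"):
    """Corpus-level pseudo-perplexity: mask each token in turn, accumulate the
    log-likelihood of the true token, and exponentiate the negative average."""
    model.eval()
    total_logprob, total_tokens = 0.0, 0
    for sent in sentences:
        ids = tokenizer(sent, return_tensors="pt")["input_ids"][0].to(device)
        for i in range(1, ids.size(0) - 1):            # skip special tokens
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            with torch.no_grad():
                logits = model(masked.unsqueeze(0)).logits[0, i]
            total_logprob += torch.log_softmax(logits, dim=-1)[ids[i]].item()
            total_tokens += 1
    return math.exp(-total_logprob / total_tokens)
```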
### BLiMP
BLiMP is a challenging benchmark comprising a set of tests designed to evaluate the linguistic knowledge of LMs with a specific focus on linguistic phenomena encompassing syntax, morphology, and semantics Warstadt et al. (2020). Originally, the benchmark consisted of 12 tasks (see Appendix A). Additionally, in the shared task Warstadt et al. (2023), 5 more tasks were added to BLiMP as heldout tasks, aiming to assess the generalization capabilities of the submitted models. The random chance accuracy for all original BLiMP tasks is 50, while chance was not reported for the additional 5 supplement tasks.
According to the BLiMP scores in Table 3, within the _Set A_ models, the models incorporating hierarchical inductive bias (SF\({}_{s1}\) and SF\({}_{s2}\)) do not show consistent outperformance or underperformance in comparison to the baseline model TF\({}_{base}\).
However, on average, the SF\({}_{s1}\) model is on par with and occasionally outperforms the TF\({}_{base}\) model. In particular, SF\({}_{s1}\) excels in the following tests: Argument Structure, Determiner Noun Agreement, Filler Gap, Irregular Forms, Quantifiers, and Subj. Verb Agreement. Conversely, SF\({}_{s1}\) underperforms the TF\({}_{base}\) in the tasks of QA Congruence Easy, Subject Aux Inversion and Turn Taking. We hypothesize that this is because syntactic knowledge is helpful for the former list of tasks, but to a lesser degree for the latter, for example, Turn Taking, which focuses on knowledge of discourse and dialogue structure, in particular of referential properties of NPs, which is not reflected in the syntactic structure. A sample pair from this data set is _"Should you quit?" - "No, I shouldn't."_ (good) versus _"Should she quit?" - "No, I shouldn't."_ (bad). The negative and the positive data points have the same syntactic structure and the dependents are
perfectly fine as argument fillers.
While the model with a parser network in-between the middle layers, SF\({}_{s2}\), underperforms TF\({}_{base}\) on average, it interestingly demonstrates a noteworthy improvement in the specific task of Irregular Forms. Remarkably, similar to SF\({}_{s1}\), SF\({}_{s2}\) significantly outperforms TF\({}_{base}\) in this particular task. The task of Irregular Forms involves aspects of lexical decisions, but syntax of course also plays a role.
Within the RoBERTa model variations in Set B, again the model with a parser network in-between the middle layers SR\({}_{s2}\) fails to improve over the one with a parser network ahead of the encoder layers SR\({}_{s1}\) in most of the tasks. It even gets worse with the upgrade in the number of convolution layers within the parser network at SR\({}_{s2^{\prime}}\). On the other hand, the upgrade in the number of convolution layers at SR\({}_{s1^{\prime}}\) shows also an upgrade in accuracies over SR\({}_{s1}\). Generally, SR\({}_{s1^{\prime}}\) achieves the best results among all the investigated models on average.
Moreover, the Set B models exhibit improvements over Set A models in the tests of Binding, Det. Noun Agreement, Subject Verb Agreement, and QA Congruence Easy.
It is not so clear how to interpret the results of the two Question Answering (QA) Congruence tasks, where the baselines achieve only very low scores. For the QA Congruence Easy task, which tests for detecting selectional preference violations on object fillers in answers (e.g., _"What did you sell? - A chair."_ (good) versus _"What did you sell? - Sarah."_ (bad)), knowing about the syntactic structure of the first sentence probably helps to apply selectional restrictions and thereby assess the quality of the second as a possible reply. This might be the reason why we see an improvement in model performance in the SR models when adding implicit hierarchical information that reflects syntactic dependencies. The QA Congruence Tricky task is similar, except that the selectional preference that is violated in the negative data points does not refer to the direct object. Furthermore, the object is dropped in most examples and sometimes the (incorrect) argument filler would be a plausible direct object (e.g., _"Who ate? - Sarah ate."_ (good) versus _"Who ate? - Pasta ate."_ (bad)). This is why the task is tricky. In this context, it is important to keep in mind that our StructFormer models learn only unlabeled dependencies and therefore cannot distinguish between object and subject. This means that for _Pasta ate_, a structure would be implicitly predicted where _pasta_ is a dependent of _ate_, which is perfectly fine semantically (as a direct object). This might be a reason why the structformer models struggle with this test, which partly leads to a decrease in the performance compared to our baseline, since the unlabeled dependency tree actually licenses the negative data points.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c|}{Set A} & \multicolumn{4}{c}{Set B} \\ \hline & **TF\({}_{base}\)** & **SF\({}_{s1}\)** & **SF\({}_{s2}\)** & **SR\({}_{s1}\)** & **SR\({}_{s2}\)** & **SR\({}_{s1^{\prime}}\)** & **SR\({}_{s2^{\prime}}\)** \\ \hline Perplexity & 32.84 & 26.48 & 38.26 & **21.15** & 23.15 & 37.11 & 22.48 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Perplexity Results
\begin{table}
\begin{tabular}{l r r r r|r r r r|r r r|r r} \hline \hline & \multicolumn{4}{c|}{**Set A**} & \multicolumn{4}{c}{**Set B**} \\ \hline & \multicolumn{1}{c|}{**TF\({}_{base}\)**} & \multicolumn{1}{c|}{**SF\({}_{s1}\)**} & \multicolumn{1}{c|}{**SF\({}_{s2}\)**} & \multicolumn{1}{c|}{**SF\({}_{s1}\)**} & \multicolumn{1}{c|}{**SR\({}_{s1}\)**} & \multicolumn{1}{c|}{**SR\({}_{s2}\)**} & \multicolumn{1}{c|}{**SR\({}_{s1^{\prime}}\)**} & \multicolumn{1}{c|}{**SR\({}_{s2^{\prime}}\)**} & \multicolumn{1}{c|}{**SR\({}_{s1^{\prime}}\)**} & \multicolumn{1}{c|}{**SR\({}_{s2^{\prime}}\)**} \\ \hline \multicolumn{1}{l}{Anghor Agreement} & 88 & 88 & 74 & 0 & -14 & 89 & 87 & 90 & 87 & -2 & 1 & 0 \\ Argument Structure & 68 & 69 & 68 & 1 & 0 & 69 & 72 & 73 & 68 & 3 & 4 & -4 \\ Binding & 68 & 68 & 66 & 0 & -2 & 72 & 70 & 70 & 67 & -2 & -2 & -3 \\ Control Raising & 66 & 66 & 64 & 0 & -2 & 69 & 70 & 68 & 63 & 1 & -1 & -7 \\ Det. Noun Agreement & 87 & 90 & 86 & 3 & -1 & 92 & 93 & 93 & 88 & 1 & 1 & -5 \\ Ellipsis & 79 & 79 & 72 & 0 & -7 & 70 & 71 & 77 & 70 & 1 & 7 & -1 \\ Filter Gap & 63 & 70 & 63 & 7 & 0 & 69 & 67 & 74 & 64 & -2 & 5 & -3 \\ Irregular Forms & 76 & 90 & 86 & 14 & 10 & 83 & 92 & 85 & 84 & 9 & 2 & -8 \\ Island Effects & 44 & 44 & 37 & 0 & -7 & 49 & 45 & 52 & 43 & -4 & 3 & -2 \\ NPI Licensing & 58 & 58 & 55 & 0 & -3 & 55 & 59 & 68 & 53 & 4 & 13 & -6 \\ Quantifiers & 73 & 78 & 73 & 5 & 0 & 71 & 68 & 68 & 71 & -3 & -3 & 3 \\ Suly. Verb Agreement & 64 & 70 & 60 & 6 & -4 & 75 & 75 & 76 & 66 & 0 & 1 & -9 \\ Hypersym & 50 & 50 & 50 & 0 & 0 & 48 & 48 & 50 & 49 & 0 & 2 & 1 \\ QA Congruence Easy & 59 & 56 & 56 & -3 & -3 & 64 & 69 & 66 & 64 & 5 & 2 & -5 \\ QA Congruence Tricky & 38 & 35 & 35 & -3 & -3 & 28 & 34 & 28 & 28 & 6 & 0 & -6 \\ Subject Aux Inversion & 82 & 78 & 81 & -4 & -1 & 70 & 71 & 76 & 70 & 1 & 6 & -1 \\ Turn Taking & 67 & 65 & 55 & -2 & -12 & 61 & 59 & 60 & 61 & -2 & -1 & 2 \\ \hline \hline \multicolumn{1}{l}{**Average**} & \multicolumn{1}{c|}{**66.55**} & \multicolumn{1}{c|}{**61.9**} & \multicolumn{1}{c|}{**63.6**} & \multicolumn{1}{c|}{**14.1**} & \multicolumn{1}{c|}{**52.9**} & \multicolumn{1}{c|}{**66.7**} & \multicolumn{1}{c|}{**61.7**} & \multicolumn{1}{c|}{**64.1**} & \multicolumn{1}{c|}{**60.9**} & \multicolumn{1}{c|}{**24.4**} & \multicolumn{1}{c|}{**32.2**} \\ \hline \hline \end{tabular}
\end{table}
Table 3: BLiMP Results
### SuperGLUE
SuperGLUE consists of eleven diverse tasks (see Appendix A) which evaluate various performance aspects. These tasks include sentiment analysis, linguistic acceptability judgments, entailment detection, and semantic similarity evaluations of words within contexts, among others [20].
The scores (see Table 4) in most of the tasks fall in a narrow range across all the investigated models. The incorporation of hierarchical inductive bias does not show clear improvements in most of the tasks. A noticeable result that is observed for the models with a parser network within the middle layers _(s2)_ is the result of the MRPC task, where _s2_ models consistently outperform the _s1_ models in both sets for this particular task. The upgrade in the number of convolution layers also does not show a clear improvement in most of the tasks for both SR\({}_{s1^{\prime}}\) and SR\({}_{s2^{\prime}}\) models.
Notably, in the case of the WSC task, we observe that all models' predictions heavily favored one specific class. This raises concerns about the success of the finetuning process for this particular task.
### MSGS
The MSGS tasks, listed in Appendix A, were introduced by the shared task as held-out tests specifically designed to evaluate generalization capabilities. Detailed information and further insights about these tasks are expected to be disclosed in an upcoming publication. MSGS tasks are measured using the Matthews correlation coefficient (MCC). MCC is used in machine learning as a measure of the quality of binary (two-class) classifications, introduced by Matthews (1975).
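For reference, MCC can be computed directly with scikit-learn; the labels below are made up purely to illustrate the metric (+1 perfect agreement, 0 chance level, -1 perfect disagreement).

```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))  # -> 0.5
```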
The MSGS results (Table 5) resemble the SuperGLUE results. The models incorporating hierarchical inductive bias show contradicting behavior across the different tasks. For some tasks, e.g. Control Raising (Control), Relative Position (Control), and Syntactic Category (Relative Position), SF\({}_{s1}\) and SF\({}_{s2}\) strengthen the correlation in comparison to the baseline model, but for other tasks, e.g. Lexical Content (Control), Main Verb (Lexical Content) and Syntactic Category (Lexical Content), SF\({}_{s1}\) and SF\({}_{s2}\) weaken the correlation.
### Aggregation
Indeed, analyzing the performance changes across 39 tasks for 7 different models is a complex process. To simplify the assessment and present a concise summary of each model's overall performance, we report an aggregate score of all the 39 scores for each model (Table 6). This aggregation approach was internally computed by the shared task submission platform to represent each model with a single score, providing a more straightforward evaluation of the overall performance. Subsequently, we select the model with the best aggregate score SR\({}_{s1^{\prime}}\) to represent our submission in the shared task.
## 5 Discussion
Although the evaluation pipeline of the shared task was meticulously designed to encompass a comprehensive analysis of pretrained LMs, covering aspects of efficiency, applied NLP standards, cognitive science, linguistics, and language acquisition [23], it was discussed in Warstadt et al. (2020) that some tasks that involve semantic phenomena such as Island Effects and NPI Licensing are very difficult for LMs in general. Consequently, the consistently low performance observed across all models on these tests can be attributed to this matter. As a result, we refrain from considering the aggregate score as a single definitive metric for representing how a model's performance compares to another. Instead, we advocate for a thorough investigation of individual tests while considering the test's objectives, dataset, and evaluation strategy.
Overall, the models incorporating hierarchical inductive bias did not show significant improvement in the scores of the BabyLM evaluation tasks; however, the evaluation tasks on which the _structformer_ and _structroberta_ models do show improved scores encourage a deeper investigation for patterns in the output predictions that might lead to a different conclusion. Namely, the tasks that we think are worth more investigation are: _Argument Structure, Determiner Noun Agreement, Filler Gap, Irregular Forms, Quantifiers, Subj. Verb Agreement, Control Raising (Control), Relative Position (Control) and Syntactic Category (Relative Position)_.
Contrary to our expectations, the modification of placing the parser in-between the middle attention layers has not demonstrated notable improvements but rather a decline in performance compared to the models with the parser placed right after the input
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c|}{Set A} & \multicolumn{4}{c}{Set B} \\ \hline & \(\mathbf{TF}_{base}\) & \(\mathbf{SF}_{s1}\) & \(\mathbf{SF}_{s2}\) & \(\mathbf{SR}_{s1}\) & \(\mathbf{SR}_{s2}\) & \(\mathbf{SR}_{s1^{\prime}}\) & \(\mathbf{SR}_{s2^{\prime}}\) \\ \hline Aggregate Score & 0.52 & 0.53 & 0.52 & 0.53 & 0.54 & **0.55** & 0.52 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Shared Task Leaderboard Results
\begin{table}
\begin{tabular}{l r r r r|r r r r r r r r} \hline \hline & \multicolumn{3}{c|}{Set A} & \multicolumn{3}{c}{Set B} \\ \hline & \multicolumn{1}{c}{\(\mathbf{TF}_{base}\)} & \multicolumn{1}{c}{\(\mathbf{SF}_{s1}\)} & \multicolumn{1}{c}{\(\mathbf{SF}_{s2}\)} & \multicolumn{1}{c}{\(\Delta_{SF_{s1}}\)} & \multicolumn{1}{c}{\(\Delta_{SF_{s2}}\)} & \multicolumn{1}{c}{\(\mathbf{SR}_{s1}\)} & \multicolumn{1}{c}{\(\mathbf{SR}_{s2}\)} & \multicolumn{1}{c}{\(\mathbf{SR}_{s1^{\prime}}\)} & \multicolumn{1}{c}{\(\mathbf{SR}_{s1^{\prime}}\)} & \multicolumn{1}{c}{\(\Delta_{SF_{s2^{\prime}}}\)} \\ \hline \hline BooIQ & 63 & 61 & 62 & -2 & -1 & 66 & 66 & 64 & 65 & 0 & -2 & -1 \\ COLA (MCC) & 0.16 & 0.19 & 0.14 & — & — & 0.23 & 0.23 & 0.19 & 0.26 & — & — & — \\ MNLI & 71 & 71 & 70 & 0 & -1 & 72 & 72 & 69 & 72 & 0 & -3 & 0 \\ MNLI-MM & 72 & 73 & 72 & 1 & 0 & 73 & 73 & 70 & 73 & 0 & -3 & 0 \\ MRPC (F1) & 75 & 75 & 79 & 0 & 4 & 76 & 81 & 77 & 75 & 5 & 1 & -6 \\ MultiRC & 61 & 58 & 62 & -3 & 1 & 62 & 59 & 59 & 54 & -3 & -3 & -5 \\ QNL1 & 81 & 77 & 78 & -4 & -3 & 71 & 72 & 66 & 74 & 1 & -5 & 2 \\ QQP (F1) & 81 & 82 & 81 & 1 & 0 & 82 & 82 & 80 & 81 & 0 & -2 & -1 \\ RTE & 48 & 42 & 47 & -6 & -1 & 46 & 57 & 53 & 56 & 11 & 7 & -1 \\ SST2 & 87 & 85 & 82 & -2 & -5 & 87 & 82 & 86 & 83 & -5 & -1 & 1 \\ WSC & 61 & 61 & 61 & 0 & 0 & 61 & 59 & 61 & 61 & -2 & 0 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: (Super)GLUE Results. Values are not aggregated across each model due to the presence of different metrics (Accuracy, F1 score, and MCC)
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c|}{Set A} & \multicolumn{4}{c}{Set B} \\ \hline & \(\mathbf{TF}_{base}\) & \(\mathbf{SF}_{s1}\) & \(\mathbf{SF}_{s2}\) & \(\mathbf{SR}_{s1}\) & \(\mathbf{SR}_{s2}\) & \(\mathbf{SR}_{s1^{\prime}}\) & \(\mathbf{SR}_{s2^{\prime}}\) \\ \hline Control Raising (Control) & 0.54 & 0.56 & 0.69 & 0.57 & 0.56 & 0.69 & 0.56 \\ Control Raising (Lexical Content) & -0.45 & -0.04 & -0.02 & -0.03 & -0.07 & -0.36 & -0.14 \\ Control Raising (Relative Position) & -0.94 & -0.89 & -0.92 & -1.00 & -0.98 & -0.77 & -0.98 \\ Lexical Content (Control) & 1.00 & 0.88 & 0.6 & 1.00 & 0.98 & 1.00 & 0.78 \\ Main Verb (Control) & 0.93 & 0.96 & 0.84 & 0.85 & 0.98 & 0.96 & 0.98 \\ Main Verb (Lexical Content) & -1.00 & -0.79 & -0.84 & -1.00 & -1.00 & -0.99 & -1.00 \\ Main Verb (Relative Position) & -0.87 & -0.78 & -0.89 & -0.98 & -0.93 & -0.83 & -0.95 \\ Relative Position (Control) & 0.67 & 0.81 & 0.78 & 0.86 & 0.95 & 0.97 & 1.00 \\ Syntactic Category (Control) & 0.62 & 0.23 & 0.47 & 0.80 & 0.73 & 0.66 & 0.87 \\ Syntactic Category (Lexical Content) & -0.61 & -0.17 & -0.17 & -0.42 & -0.59 & -0.26 & -0.76 \\ Syntactic Category (Relative Position) & -0.32 & -0.57 & -0.44 & -0.47 & -0.47 & -0.63 & -0.52 \\ \hline \hline \end{tabular}
\end{table}
Table 5: MSGS Results
embedding layer. We can only speculate about why this is so. It might be that it is an advantage to push the model very early towards identifying structural relations between words. More precisely to do so at a stage where the contributions of the single tokens are still separated from each other. The parsing network placed between the middle layers acts at a moment where single token contributions are already blurred.
To understand the effect of placing the parser network within the middle layers, we propose probing the layers of the Front and Rear modules and comparing them to the corresponding layers in the model where the parser network is placed ahead of the attention layers. Such a comparative analysis can provide valuable insights and either support or contradict our hypothesis regarding the learning of syntactic features in the middle layers of transformer models.
Regarding the aim of achieving competitive scores on the shared task challenge, the best score we could get was from the model \(\textit{structroberta}_{s1^{\prime}}\); this model is an upscaling of the \(\textit{structroberta}_{s1}\).
## 6 Conclusion
In this paper, we extend the work of Shen et al. (2021) to explore the capabilities of the StructFormer architecture as an example of employing hierarchical bias in addressing the challenges posed by relatively small LM pretraining datasets. Furthermore, we modify the StructFormer architecture to examine whether integrating the hierarchical bias within the middle attention layers leads to performance improvements. To accomplish these objectives, we pretrain seven model variants using the same dataset and configuration settings. We evaluate these models on 39 different tasks. The evaluation outcomes reveal varying behavior across the models, exhibiting inconsistencies in performance. We could not show strong evidence that models incorporating hierarchical bias are performing better in the context of this shared task, nor could we show practical evidence for the claim that syntactic information is better represented in the middle attention layers within the scope of our experiment. We have noted substantial enhancements in certain tasks when models incorporate hierarchical bias in their architectural designs. Nonetheless, to ensure the reliability of our findings and to eliminate potential confounding factors related to the varying number of parameters in each model, as well as the distinct objectives and complexities of individual tasks, we intend to carry out an in-depth analysis of each model's performance on a task-by-task basis.
## Acknowledgements
We thank the authors of the StructFormer model Shen et al. (2021) for providing their implementation, which played an important role in the completion of this work. Additionally, we acknowledge the invaluable support received from the BabyLM shared task organizers, who provided the datasets, evaluation pipeline, and codes for pretraining and finetuning LMs. Their contributions enabled us to conduct a comprehensive and successful study. Furthermore, we are grateful for the comments of our reviewers that helped improve the paper. Lastly, we thank Hassan Sajjad and Younes Samih for fruitful discussions on hierarchical information in language models.
|
2309.15451 | On a fully nonlinear elliptic equation with differential forms | We introduce a fully nonlinear PDE with a differential form $\Lambda$, which
unifies several important equations in K\"ahler geometry including
Monge-Amp\`ere equations, J-equations, inverse $\sigma_{k}$ equations, and the
deformed Hermitian Yang-Mills (dHYM) equation. We pose some natural positivity
conditions on $\Lambda$, and prove analytical and algebraic criterions for the
solvability of the equation. Our results generalize previous works of G.Chen,
J.Song, Datar-Pingali and others. As an application, we prove a conjecture of
Collins-Jacob-Yau for the dHYM equation with small global phase. | Hao Fang, Biao Ma | 2023-09-27T07:37:05Z | http://arxiv.org/abs/2309.15451v1 | # On a fully nonlinear elliptic equation with differential forms
###### Abstract.
We introduce a fully nonlinear PDE with a differential form, which unifies several important equations in Kahler geometry including Monge-Ampere equations, \(J\)-equations, inverse \(\sigma_{k}\) equations, and the deformed Hermitian Yang-Mills (dHYM) equation. We pose some natural positivity conditions on \(\Lambda\), and prove analytical and algebraic criterions for the solvability of the equation. Our results generalize previous works of G.Chen, J.Song, Datar-Pingali and others. As an application, we prove a conjecture of Collins-Jacob-Yau for the dHYM equation with small global phase.
###### Contents
* 1 Introduction
* Part 1 Analytical Criterion
* 2 Assumptions and main results
* 3 Preliminary set-up
* 4 Ellipticity and convexity
* 5 Cone condition
* 6 Equations with almost positive volume forms
* 7 Continuity method
## 1. Introduction
In this paper, we study a general form of fully non-linear partial differential equations of Monge-Ampere type on Kahler manifolds.
Let \((M,\omega_{0})\) be a Kahler manifold of dimension \(n\), and let \([\omega_{0}]\) be the corresponding Kahler class. We fix a closed differential form of the following format:
\[\Lambda=\sum_{k=1}^{n}\Lambda^{[k]},\]
where for each \(1\leq k\leq n\), \(\Lambda^{[k]}\) is a real \((k,k)\)-form. Hereby we use \(\alpha^{[k]}\) to denote the \((k,k)\) component of a differential form \(\alpha\). Let \(\omega=\omega_{0}+i\partial\bar{\partial}u\in[\omega_{0}]\) be a Kahler metric. We denote
\[\Omega=\Omega(\omega)=\exp\omega=\sum_{k=0}^{n}\frac{\omega^{k}}{k!}.\]
In this paper, we study the following partial differential equation of function \(u\):
\[\kappa\Omega^{[n]}=(\Lambda\wedge\Omega)^{[n]}, \tag{1.1}\]
where \(\kappa\) is a positive constant such that \(\kappa\int_{M}\exp\omega_{0}=\int_{M}\Lambda\wedge\exp\omega_{0}\).
When \(\Lambda\) has only a positive \((n,n)\) component, it is a volume form on \(M\), and (1.1) is the well-known complex Monge-Ampere equation, first completely solved by Yau [38].
For future convenience, we write \(\Lambda^{[n]}\) as \(\frac{f\rho^{n}}{n!}\), where \(f\) is a smooth function on \(M\) and \(\rho\) is a fixed background Kahler metric. Then (1.1) can be expanded as:
\[\kappa\frac{\omega^{n}}{n!}=\sum_{k=1}^{n-1}\Lambda^{[k]}\wedge\frac{\omega^{ n-k}}{(n-k)!}+f\frac{\rho^{n}}{n!}. \tag{1.2}\]
In addition to the original Monge-Ampere equation, other special cases of equation (1.2) include:
1. When \(\Lambda=\rho\) is a Kahler form, (1.1) is the \(J\)-equation, which was first introduced by Donaldson [15] and has been extensively studied by many authors. See [15, 6, 37, 33, 10, 4, 32] and references therein.
2. When \(\Lambda\) is \(\rho^{k}\) or \(\sum_{i=k}^{n-1}c_{k}\rho^{k}\), where \(\rho\) is a Kahler form and \(c_{k}\geq 0\), (1.1) is the inverse \(\sigma_{k}\)-equation, which was first raised in Fang-Lai-Ma [19]. An incomplete list of works on these equations includes [19, 17, 18, 10, 35, 12].
3. Given a specific global phase \(\theta>0\) and \(\Lambda=\sin\theta\cos\rho-\cos\theta\sin\rho\), where \(\rho\) is a Kahler form, (1.1) is the deformed Hermitian Yang-Mills (dHYM) equation. dHYM equation was introduced in Marino-Minasian-Moore-Strominger [26] and Leung-Yau-Zaslow [23] and has been extensively studied; See [21, 8, 29, 4, 7, 11, 16, 20, 24] and references therein. See also Section 12 for details.
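For concreteness, in the \(J\)-equation case (1) above, i.e. \(\Lambda=\rho\), equation (1.2) reduces to the following standard form (an illustrative reformulation, not stated this way in the text); writing \(\lambda_{1},\dots,\lambda_{n}\) for the eigenvalues of \(\omega\) with respect to \(\rho\), it is the condition that the reciprocal eigenvalues sum to \(\kappa\):

\[\kappa\,\frac{\omega^{n}}{n!}=\frac{\rho\wedge\omega^{n-1}}{(n-1)!},\qquad\text{equivalently}\qquad\kappa\,\sigma_{n}(\lambda)=\sigma_{n-1}(\lambda),\quad\text{i.e.}\quad\sum_{i=1}^{n}\frac{1}{\lambda_{i}}=\kappa.\]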
PDEs in the general form of (1.1) are inspired by above-mentioned works, which have roots in rich geometric and mathematical physics background, and are important in the field of fully non-linear PDEs with significant geometric implications. Known results indicate that the existence of unique smooth solutions can be interpreted as analytical and algebraic properties of the underlying manifold. Therefore, it is our intention to systematically explore more general forms of such PDEs. In particular, we would like to study necessary and sufficient conditions that ensure the existence of solutions.
We realize from past works that, aside from the unknown Kahler metric to be solved, given geometric data often involve another Kahler form or its powers. The exact format of (1.1) suggests some point-wise positivity requirement for the differential form \(\Lambda\). In addition, the important fact of ellipticity of Monge-Ampere type equations poses similar requirements. One of our main goals is to find natural positivity conditions under which necessary a priori estimates can be established and deep connections between PDE and geometry can be reconstructed. Inspired by Gao Chen's work [4], we also consider (1.1) in a bit more general form, in which we allow \(\Lambda^{[n]}\) to be slightly negative at some points.
For convenience of further discussion, we decompose \(\Lambda\) as follows
\[\Lambda=\mathring{\Lambda}+\Lambda^{[n]},\]
and discuss constraints on \(\mathring{\Lambda}\) and \(\Lambda^{[n]}\) separately.
**Definition 1.1**.: We call a real even closed differential form \(\mathring{\Lambda}\) on \(M\) _\(k\)-uniformly positive_ (\(k\)-UP) if there exists an integer \(1\leq k\leq n\), a reference Kahler metric \(\rho\) of \(M\), and a uniform constant \(m>0\) such that \(\mathring{\Lambda}^{[l]}=0\) for \(l<k\); and
\[\mathring{\Lambda}-m\frac{\rho^{k}}{k!}\geq 0, \tag{1.3}\]
where (1.3) means that each \((l,l)\)-component of the left hand side is a positive form for \(1\leq l\leq n\). See Definition 3.1.
Notice that Definition 1.1 is independent of the choice of reference metric \(\rho\). An \(n\)-UP form \(\Lambda\) is a positive volume form, which is required for solving the classic Monge-Ampere equation. \(J\)-equation is a special case where \(\Lambda\) is 1-UP. Furthermore, positive linear combination of powers of a given Kahler form is uniformly positive. Therefore, inverse \(\sigma_{k}\) equations also fall into this category. However, as uniform positivity is an open condition, it is far more general than these special cases. In future sections, we will also give more general and technical conditions under which various conclusions hold.
Since the classic Monge-Ampere equation is well known in the field, we will discuss only cases when \(\mathring{\Lambda}\) is \(k\)-UP for some \(1\leq k\leq n-1\). When we consider \(\Lambda^{[n]}\), in spirit of the work of G. Chen [4], we pose a slightly weaker condition.
**Definition 1.2**.: Given \(1\leq k_{0}\leq n-1\), \(m>0\), and a reference Kahler metric \(\rho\), we call \((n,n)\)-form \(\alpha\)_an almost positive volume form_ with respect to \((k_{0},m,\rho)\), if \(\int_{M}\alpha\geq 0\) and there exists \(\epsilon=\epsilon(n,m,\kappa,k_{0},\omega_{0},\rho)>0\) such that
\[\frac{\alpha}{\rho^{n}/n!}>-\epsilon. \tag{1.4}\]
_Remark 1.3_.: We will choose a specific constant \(\epsilon<\min\left\{\frac{m}{4n+2}\gamma_{\min}(\frac{2\kappa}{m},1,n,k_{0}), \frac{\kappa\int_{M}\omega_{0}^{n}}{2\int_{M}\rho^{n}}\right\},\) where \(\gamma_{\min}\) is a positive number defined later in (6.2).
We state the following hypothesis on \(\Lambda\), which will be a key assumption for our paper.
**Definition 1.4**.: \(\Lambda=\mathring{\Lambda}+\Lambda^{[n]}\) is said to satisfy the condition **H1** if and only if \(\mathring{\Lambda}\) is \(k_{0}\)-uniformly positive for some \(1\leq k_{0}\leq n-1\) with a reference metric \(\rho\) and a uniform constant \(m>0\), and \(\Lambda^{[n]}\) is almost positive with respect to a specific \(\epsilon\) (as in Remark 1.3) and \((k_{0},m,\rho)\).
In order to state our main analytical result, we define the following concept based on previous works ([33, 19, 17, 35] ).
**Definition 1.5**.: A Kahler form \(\omega\in[\omega_{0}]\) is said to satisfy the _cone condition_ for equation (1.2) if and only if it satisfies the following pointwise:
\[\kappa(\exp\omega)^{[n-1]}-(\Lambda\wedge\exp\omega)^{[n-1]}>0; \tag{1.5}\]
Or equivalently,
\[\frac{\kappa\omega^{n-1}}{(n-1)!}-\sum_{k=1}^{n-1}\frac{\Lambda^{[k]}\wedge \omega^{n-k-1}}{(n-k-1)!}>0, \tag{1.6}\]
which means that the left hand side of (1.5) or (1.6) is a strictly positive \((n-1,n-1)\) form. We also call \(\omega\) a _subsolution_ of the equation (1.1) or (1.2) if (1.5) holds everywhere.
In order to state our main geometrical result, inspired by the works of [22, 35], we propose the following numerical positivity conditions:
**Definition 1.6**.: Let \([\Lambda]\) be the cohomology class of \(\Lambda\). Let \(\kappa>0\) be a constant. Let \([\alpha]\) be a Kahler class. We call \([\alpha]\) a _\(([\Lambda],\kappa)\)-positive_ class if
\[[\exp\alpha]\cdot[\kappa-\Lambda]\cdot[M]\geq 0, \tag{1.7}\]
and for any subvariety \(Y\) of \(M\) with \(\dim Y<n\) it holds that
\[[\exp\alpha]\cdot[\kappa-\Lambda]\cdot[Y]>0. \tag{1.8}\]
With key definitions ready, we state our main theorems, which consist of two parts. The first is an analytic criterion for the existence of a solution to (1.2).
**Theorem 1.7**.: _Let \(M\) be a connected compact Kahler manifold of dimension \(n\) with a fixed Kahler class \([\omega_{0}]\). Suppose that \(\Lambda\) is a closed real differential form satisfying **H1**. Then, there exists a smooth solution of (1.2) if and only if there exists a smooth subsolution of (1.2)._
Special cases of Theorem 1.7 have been established by Song-Weinkove [33], Fang-La-Ma [19], Guan [17], Guan-Sun [18], Szekelyhidi [35], Chen [4], Datar-Pingali [12].
_Remark 1.8_.: We also establish the uniqueness result regarding (1.2). See Appendix A. In fact, solutions will be shown to be the unique minimizer of a global functional \(\mathcal{F}\) defined in (A.7). We will further explore the variational structure of equation (1.2) in the Appendix A and some future works.
_Remark 1.9_.: Following works of [17, 18, 35], it is likely that Theorem 1.7 may be extended to Hermitian manifolds, when similar subsolution definition may be established.
Our second main theorem relates the solvability of equation (1.2) to Definition 1.6.
**Theorem 1.10**.: _Let \(M\) be a connected compact Kahler manifold of dimension \(n\) with a fixed Kahler class \([\omega_{0}]\). Suppose that \(\Lambda\) is a closed real differential form satisfying **H1**. Then \([\omega_{0}]\) is \(([\Lambda],\kappa)\)-positive if and only if there exists a smooth Kahler metric \(\omega\) solves (1.2)._
Theorem 1.10 generalizes several existing works. In the context of \(J\)-equation, G. Chen [4] proved the equivalence of \(J\)-uniformly positivity and the solvability of the \(J\)-equation; Later, J. Song [32] reduced the \(J\)-uniformly positivity to only \(J\)-positivity. In the context of inverse \(\sigma_{k}\) type equations, using different methods, similar numerical results have been proved by Collins-Szekelyhidi [10] for toric manifolds and Datar-Pingali [12] for projective manifolds.
While many of past works are raised from strong geometric background, our work aims at a general form of non-linear PDEs of Monge-Ampere type on Kahler manifolds. In particular, our equation is one of the few fully non-linear PDEs that involve general differential forms of different degrees. It is hopeful that future works will reveal more geometry of positive differential forms on complex manifolds, which has not been studied much from the non-linear PDE point of view.
As an application of our main results, we study supercritical deformed Hermitian Yang-Mills equations and obtain the following
**Theorem 1.11**.: _Let \(M\) be a connected compact Kahler manifold of dimension \(n\). Let \([\omega_{0}]\) be a real \((1,1)\)-cohomology class and let \(\rho\) be a Kahler form. Let_
\[\cot\theta=\frac{\int_{M}\text{Re}(\omega+\sqrt{-1}\rho)^{n}}{\int_{M}\text{ Im}(\omega+\sqrt{-1}\rho)^{n}}.\]
_If \(\theta\in(0,\frac{\pi}{n-1}]\), then there exists a smooth solution to the supercritical dHYM equation_
\[\text{Re}(\omega+\sqrt{-1}\rho)^{n}=\cot\theta\text{Im}(\omega+\sqrt{-1}\rho)^{n}, \tag{1.9}\]
_if and only if \([\omega_{0}-\cot\theta\rho]\) is Kahler and for any analytic subvariety \(V\) of dimension \(d\leq n-1\), we have_
\[\int_{V}\Big{(}\text{Re}(\omega_{0}+\sqrt{-1}\rho)^{d}-\cot\theta\text{Im}( \omega_{0}+\sqrt{-1}\rho)^{d}\Big{)}>0. \tag{1.10}\]
For supercritical dHYM equations, the equivalence of the existence of a smooth solution to (1.9) and the numerical condition (1.10) was conjectured by Collins-Jacob-Yau [8] and confirmed by the work of G. Chen [4], Chu-Lee-Takahashi [7], and A. Ballal [1] under various stronger assumptions. See more details in Section 12. The original conjecture in [8] in its full generality does not hold due to counterexamples constructed by Zhang [39]. However, Theorem 1.11 confirms the original conjecture when \(\theta\in(0,\frac{\pi}{n-1}]\). Notice that for Zhang's examples[39], \(\theta>\frac{\pi}{n-1}\). It is therefore interesting to find the sharp range of \(\theta\) in which the conjecture of Collins-Jacob-Yau holds.
We make some comments on our proofs.
We have utilized numerous ideas and techniques of existing results, many of which have been cited above. On the analytic side: we employ Yau's continuity method, which has been successfully used by many to attack Monge-Ampere type and other types of equations. We follow works of many [33, 19, 17, 10, 35, 4, 12] to derive _a priori_ estimates. We use Szekelyhidi's argument to derive the \(C^{0}\) estimate [35]. We use arguments in G. Chen [4] and Datar-Pingali [12] to treat almost positive volume forms. The definition of subsolution is inspired by similar concepts in Song-Weinkove [33], Fang-Lai-Ma [19], Guan [17], and Szekelyhidi [35]. On the algebraic side: the numerical condition in Definition 1.6 is inspired by [22, 35]. We follow G. Chen's induction method and overall approach in [4] to tackle the numerical criterion. We make use of Demailly-Paun's mass concentration technique [14]. We follow J. Song's treatment of singular subvarieties [32].
We have made a conscious effort to examine existing methods and explore their maximal applicability, which has led us to the current formulation of general equations and proper positivity conditions. As mentioned earlier, our equation includes many existing important special cases. We apply a new set of local notations and carry out detailed multi-linear algebraic computations involving inverse \(\sigma_{k}\) type quantities.
One key component of our proof is the definition of the uniform positivity and the subsequent **H1** condition, which ensure that the important concept of subsolutions can be defined and key a priori estimates can be carried out. We would like to remark that the uniform positivity condition cannot be used directly to prove Theorem 1.10. From the algebraic point of view, proper positivity conditions are required to be preserved
under two important geometric procedures which are used to prove corresponding numerical results: the first is a product manifold construction; the second is passing from a Kahler manifold to its sub-varieties. The uniform positivity condition can be preserved under the second procedure, but unfortunately it is not preserved under the product manifold procedure. In order to overcome this difficulty, we introduce a more technical positivity condition, called **H2**, which is a similar but more general version of the **H1** condition and will be stated in later sections. **H2** works under the product manifold procedure, but does not descend to sub-varieties. We carefully establish all necessary algebraic and analytic results that are needed in our proofs under these technical assumptions by fully exploring the algebraic structure of differential forms. The final write-up is technical but self-contained for the convenience of readers. We believe that our main theorems, Theorems 1.7 and 1.10, are cleanest in format, still incorporate all known special cases and will be of most use in future applications.
Another key component of our proof is a new geometric PDE (10.8) on the product manifold, which is simpler and more flexible than the one used by Chen [4], even in the J-equation case. Under our general setting, the new equation is compatible with proper cone conditions, and still carries essential algebraic information. See Example 10.5 for further elaboration in a simple yet illuminating case.
There are several future research directions of interest.
First of all, our results may be extended further. The general form of dHYM equations does not satisfy any of our positivity assumptions, while its special cases have been studied successfully. In [25], Lin also studied the convexity of inverse \(\sigma_{k}\) type equations with some negative coefficients. This may indicate that some weaker positivity conditions may lead to existence results.
Similar to the J-equation and dHYM equations ([15, 9, 11]), there is also a proper moment map interpretation of our equation, which involves an infinite dimensional symplectic space. Such a point of view is helpful for considering further generalizations and applications. We would like to address this topic in a future work.
From the PDE point of view, our current approach, as well as several important previous works, heavily depends on the continuity method. It is interesting to explore corresponding geometric flows, which have been successfully used in the setting of the J-equation and dHYM equations but are missing in the more general setting. See [6, 37, 33, 19] and [8, 16, 4, 7, 20, 24, 29, 11, 36] for an incomplete list. Overall, the parabolic approach will expose finer geometric information.
From the geometry point of view, as mentioned earlier, one may explore the geometry of properly defined positive differential forms using non-linear PDEs, which will be a new research direction. Also as mentioned earlier, some of our results are likely to extend to the Hermitian setting. This may further open applications in the fields of special geometry and mathematical physics. From another perspective, indicated by works of Datar-Pingali [12], Chu-Lee-Takahashi [7] and a recent lecture note of G. Chen [5], one may explore the more restrictive setting where \(M\) is projective. Studies of the generalized Hodge conjecture explore the fine distinction between Kahler and projective manifolds. Our general form of PDE may be of help.
The rest of the paper is organized as follows: In Part 1, we establish the main analytical results of this paper; In Part 2, we prove our main algebraic result. We give more details: In Section 2, we state our main technical analytical results; In Section 3, we describe our local setup and establish proper notations; In Section 4, when \(\Lambda^{[n]}\) is non-negative, we compute variations of our non-linear differential operators and derive their ellipticity and concavity; In Section 5, we discuss the cone condition and related properties; In Section 6, we extend results of previous sections to include almost positive \(\Lambda^{[n]}\); In Section 7, we establish proper a priori estimates and use the continuity method to prove our main analytical results; In Section 8, we set up notations and make technical preparations for Part 2; In Section 9, we initiate the induction argument to prove Theorem 1.10; In Section 10, we prove the key mass concentration theorem; In Section 11, we complete the proof of Theorem 1.10; In Section 12, we study dHYM equations and prove Theorem 1.11.
### Acknowledgements
We thank Jian Song and Jianchun Chu for helpful discussions. We thank Gang Tian and Xiaohua Zhu for comments and suggestions. We thank Vamsi Pritham Pingali and Ved Datar for their interest and helpful comments.
### Part 1. Analytical Criterion
## 2. Assumptions and main results
In this part we prove Theorem 1.7, the first of our main theorems, by establishing a stronger result.
We begin by introducing a positivity assumption, called **H2**, which is weaker than **H1** and allows us to work with differential forms that do not satisfy \(k\)-uniform positivity. In particular, this new positivity condition can be applied to product manifolds, which will be crucial in later proofs.
We start by defining a pointwise structure on \(M\). Let \(\mathcal{T}_{p}M\) be the holomorphic tangent space of \(M\) at \(p.\) We fix a reference Kahler metric \(\rho\) and, for future convenience, define
\[P:=\exp\rho=\sum_{k=0}^{n}\frac{\rho^{k}}{k!}. \tag{2.1}\]
As before, we define \(\mathring{\Lambda}=\Lambda-\Lambda^{[n]}.\)
**Definition 2.1**.: _A labeled orthogonal splitting_ at \(p\in M\) with respect to \(\rho\) is a tuple
\[\mathcal{O}_{p}=(n_{p},\mathbf{d}_{p},\mathbf{V}_{p},\mathbf{k}_{p}),\]
where \(n_{p}\in\mathbb{Z}_{+}\), \(\mathbf{d}_{p}=(d_{1},\cdots,d_{n_{p}})\in\mathbb{Z}_{\geq 0}^{n_{p}}\), \(\mathbf{k}_{p}=(k_{1},\cdots,k_{n_{p}})\in\mathbb{Z}_{\geq 0}^{n_{p}}\) and \(\mathbf{V}_{p}=\{\mathcal{V}_{i}:\) linear subspaces in \(\mathcal{T}_{p}M\}\) such that
1. \(d_{i}=\dim_{\mathbb{C}}\mathcal{V}_{i}\) and \(1\leq k_{i}\leq d_{i}-1\);
2. \(\mathcal{V}_{i}\) are mutually orthogonal with respect to \(\rho\) and \(\mathcal{T}_{p}M=\oplus_{i=1}^{n_{p}}\mathcal{V}_{i}\).
Given \(\mathcal{O}_{p}\) at \(p\), we let \(\iota_{i},\pi_{i}\) denote the standard embedding and orthogonal projection for each \(\mathcal{V}_{i}\), respectively. Denote
\[\rho_{i}=\pi_{i}^{*}\iota_{i}^{*}\rho. \tag{2.2}\]
If at each \(p\in M\), there is a labeled orthogonal splitting \(\mathcal{O}_{p}\), we call \(\mathcal{O}:=\{\mathcal{O}_{p}:p\in M\}\) a labeled orthogonal splitting on \(M\).
We propose the following
**Definition 2.2**.: Suppose that there exists a labeled orthogonal splitting \(\mathcal{O}=\{(n_{p},\mathbf{d}_{p},\mathbf{V}_{p},\mathbf{k}_{p}):\)\(p\in M\}\) with respect to some reference Kahler metric \(\rho\). \(\mathring{\Lambda}\) is called \(\mathcal{O}\)-_uniformly positive_ (\(\mathcal{O}\)-UP) if there exists a uniform constant \(m>0\) such that at each point \(p\), we have
\[\mathring{\Lambda}-m\sum_{i=1}^{n_{p}}\frac{\rho_{i}^{k_{i}}}{k_{i}!}\geq 0, \tag{2.3}\]
which means the \((l,l)\)-component of the left hand side is positive (See Definition 3.1) for \(l=1,\cdots,n-1\).
\(\mathcal{O}\)-uniform positivity is a weaker condition compared to \(k\)-UP, and the latter corresponds to \(\{(1,n,\mathcal{T}_{p}M,k)\}\)-uniform positivity. However, as \(\mathcal{O}\) depends on the choice of \(\rho\), the \(\mathcal{O}\)-UP property also depends on the choice of \(\rho\). Moreover, the \(\mathcal{O}\)-UP property is not preserved when descending to a smooth subvariety, which is why we assume \(k\)-UP in Theorem 1.10.
For \((n,n)\)-form \(\Lambda^{[n]}\), as discussed in the introduction, we allow it to be slightly negative:
**Definition 2.3**.: Given a labeled orthogonal splitting \(\mathcal{O}=\{(n_{p},\mathbf{d}_{p},\mathbf{V}_{p},\mathbf{k}_{p}):p\in M\}\) with respect to \(\rho\), we call an \((n,n)\)-form \(\alpha\) _an almost positive volume form_, or _almost positive_, with respect to \((\mathcal{O},m,\rho)\), if \(\int_{M}\alpha\geq 0\) and there exists \(\epsilon=\epsilon(n,m,\kappa,n_{p},\mathbf{d}_{p},\mathbf{k}_{p},\omega_{0},\rho)>0\) such that
\[\frac{\alpha}{\rho^{n}/n!}|_{p}>-\epsilon. \tag{2.4}\]
_Remark 2.4_.: We will choose the precise number \(\epsilon=\min\left\{\frac{m}{4n+2}\gamma_{\min}(\frac{2\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p}),\frac{\kappa\int_{M}\omega_{0}^{n}}{2\int_{M}\rho^{n}}\right\}\), where \(\gamma_{\min}\) is a positive number defined in (6.2). Notice that \(\epsilon\) can be chosen independent of \(p\), since the variables \(\mathbf{d}_{p},\mathbf{k}_{p}\) in \(\gamma_{\min}\) only take finitely many values over all possible local labeled orthogonal splittings.
We impose the following hypothesis on \(\Lambda\), which can be decomposed into \(\mathring{\Lambda}+\Lambda^{[n]}\).
**Definition 2.5**.: We say that \(\Lambda=\mathring{\Lambda}+\Lambda^{[n]}\) satisfies condition **H2** if \(\mathring{\Lambda}\) is \(\mathcal{O}\)-uniformly positive for some uniform constant \(m\), and \(\Lambda^{[n]}\) is almost positive with respect to a specific \(\epsilon\) and \((\mathcal{O},m,\rho)\); furthermore, there exists a constant \(C_{H2}=C_{H2}(\Lambda,M)\) such that for any \(p\in M\), \(\xi\in\mathcal{T}_{p}M\) with \(\|\xi\|_{\rho}\leq 1\), and each \(1\leq k\leq n\), we have
\[-C_{H2}\left(\Lambda^{[k]}+\sum_{l\in\mathbf{l}_{k}}\rho_{1}^{l_{1}}\cdots\rho_{n_{p}}^{l_{n_{p}}}\right)\leq\mathrm{Re}\left(\nabla_{\xi}\Lambda^{[k]}\right)\leq C_{H2}\left(\Lambda^{[k]}+\sum_{l\in\mathbf{l}_{k}}\rho_{1}^{l_{1}}\cdots\rho_{n_{p}}^{l_{n_{p}}}\right), \tag{2.5}\] \[-C_{H2}\left(\Lambda^{[k]}+\sum_{l\in\mathbf{l}_{k}}\rho_{1}^{l_{1}}\cdots\rho_{n_{p}}^{l_{n_{p}}}\right)\leq\nabla_{\xi\bar{\xi}}^{2}\Lambda^{[k]}\leq C_{H2}\left(\Lambda^{[k]}+\sum_{l\in\mathbf{l}_{k}}\rho_{1}^{l_{1}}\cdots\rho_{n_{p}}^{l_{n_{p}}}\right), \tag{2.6}\]
where \(\mathbf{l}_{k}=\{(l_{1},\cdots,l_{n_{p}}):\sum_{i}l_{i}=k,\ l_{i}=0\ \text{or}\ l_{i}\geq k_{i}\}\) and \(\nabla\) is the Levi-Civita connection of \(\rho\).
_Remark 2.6_.: If \(k\geq\max_{1\leq i\leq n_{p}}\{n-d_{i}+k_{i}\}\), then (2.5) and (2.6) hold automatically, since \(\rho^{k}\leq c_{0}(n_{p},k)\sum_{l\in\mathbf{l}_{k}}\rho_{1}^{l_{1}}\cdots\rho_{n_{p}}^{l_{n_{p}}}\) (see the proof of Lemma 6.3) and we may choose \(C_{H2}=(\|\Lambda\|_{C^{2}}+1)\cdot\sup_{p}c_{0}(n_{p},k)\).
_Remark 2.7_.: **H1** is a stronger condition comparing to **H2**. First, the \(k_{0}\)-UP condition is equivalent to the \(\mathcal{O}\)-UP condition for \(\mathcal{O}=\{(1,n,\mathcal{T}_{p}M,k_{0}):p\in M\}\). Second, since \(\Lambda^{[k]}=0\) for \(k<k_{0}\), (2.5) and (2.6) hold for \(k<k_{0}\). By Remark 2.6, (2.5) and (2.6) hold for \(k\geq k_{0}\).
Compared to **H1**, condition **H2** allows more general choices of geometric data. However, condition **H2** may be more difficult to verify. A crucial fact is the following:
**Example 2.8**.: Let \(\{(M_{i},\rho_{i},\mathcal{O}_{i})\}_{i=1}^{l}\) be several Kahler manifolds, each with labeled orthogonal splitting \(\mathcal{O}_{i}\). Let \(\Lambda_{i}\) be positive differential forms on \(M_{i}\) satisfying **H2** for some uniform constant \(m\), respectively. Let \(\mathcal{M}:=\prod_{i=1}^{l}M_{i}\) be the product manifold and \(\mathrm{pr}_{i}\) be the canonical projection from \(\mathcal{M}\) to \(M_{i}\). Let \(\rho=\sum_{i=1}^{l}\mathrm{pr}_{i}^{*}\rho_{i}\). Then \(\mathcal{M}\) has a natural labeled orthogonal splitting at each \(\mathbf{p}=(p_{1},\cdots,p_{l})\in\mathcal{M}\) as
\[\mathcal{O}_{(p_{1},\cdots,p_{l})}=\left(\sum_{i=1}^{l}n_{p_{i}},(\mathbf{d}_{p _{1}},\cdots,\mathbf{d}_{p_{l}}),\cup_{i=1}^{l}\mathbf{V}_{p_{i}},(\mathbf{k}_ {p_{1}},\cdots,\mathbf{k}_{p_{l}})\right).\]
Let
\[\mathbf{\Lambda}:=\sum_{i=1}^{l}\operatorname{pr}_{i}^{*}\Lambda_{i}.\]
Then \(\mathbf{\Lambda}\) satisfies \(\mathcal{O}\)-uniform positivity. Note that \(\nabla_{\xi}\mathrm{pr}_{i}^{*}\Lambda_{i}=\nabla_{\xi^{\top}}\mathrm{pr}_{i}^{*}\Lambda_{i}\), where \(\xi^{\top}\) is the tangential part of \(\xi\) to \(M_{i}\). Thus, \(\mathrm{pr}_{i}^{*}\Lambda_{i}\) satisfies (2.5) and (2.6), and so does \(\mathbf{\Lambda}\) (with \(C_{H2}=\max\{C_{H2}(\Lambda_{i},M_{i})\}\)). On the other hand, if each \(\Lambda_{i}\) is \(k_{0i}\)-uniformly positive for some \(k_{0i}\geq 2\), i.e. \(\Lambda_{i}^{[k_{0i}]}\geq m\rho_{i}^{k_{0i}}/k_{0i}!\) on \(M_{i}\), one does not expect \(\mathbf{\Lambda}\geq m\rho^{k}/k!\) for some \(k\geq 2\). Therefore, \(\mathbf{H1}\) does not necessarily hold on \(\mathcal{M}\).
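For a concrete instance of the last point (recorded only as an illustration), take \(l=2\), \(\dim_{\mathbb{C}}M_{i}=3\), \(k_{i}=2\) and \(\Lambda_{i}=\rho_{i}^{2}/2\). Then \(\mathbf{\Lambda}=\mathrm{pr}_{1}^{*}\rho_{1}^{2}/2+\mathrm{pr}_{2}^{*}\rho_{2}^{2}/2\), while \(\rho^{2}/2=\mathrm{pr}_{1}^{*}\rho_{1}^{2}/2+\mathrm{pr}_{1}^{*}\rho_{1}\wedge\mathrm{pr}_{2}^{*}\rho_{2}+\mathrm{pr}_{2}^{*}\rho_{2}^{2}/2\). On any complex \(2\)-plane \(W\subset\mathcal{T}_{\mathbf{p}}\mathcal{M}\) spanned by one direction from each factor, \(\mathbf{\Lambda}\) restricts to zero, whereas the mixed term \(\mathrm{pr}_{1}^{*}\rho_{1}\wedge\mathrm{pr}_{2}^{*}\rho_{2}\) restricts to a positive volume form on \(W\); hence \(\mathbf{\Lambda}\geq m\rho^{2}/2\) fails for every \(m>0\).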
We are now ready to state the main theorem of this part.
**Theorem 2.9**.: _Let \(M\) be a connected compact Kahler manifold of dimension \(n\) with a fixed Kahler class \([\omega_{0}]\). Suppose \(\Lambda\) is a closed real differential form satisfying **H2**. Then, there is a smooth solution of (1.2) if and only if there is a smooth subsolution of (1.2)._
From the discussions above, it is clear that Theorem 1.7 is a direct consequence of Theorem 2.9. The rest of this part is devoted to proving Theorem 2.9. It is organized as follows. In Section 3, we introduce necessary notations and definitions. In Section 4, we compute variations of the local functional and verify its ellipticity and convexity. In Section 5, we study the cone condition and state several equivalent expressions. In Section 6, we extend results of previous sections to PDEs with almost positive volume forms. In Section 7, we use the continuity method to prove Theorem 2.9 by establishing key a priori estimates.
## 3. Preliminary set-up
In this section, we introduce some notations and discuss some basic properties of our equation (1.2).
### Positivity of differential forms
We recall several definitions of positivity for \((k,k)\)-forms from Demailly [13], III,1,A. Our presentation is slightly different due to our choice of notations.
**Definition 3.1**.: \(u\in\bigwedge^{k,k}T_{p}^{*}M\) (resp. \(\bigwedge^{k,k}T_{p}M\)) is said to be a _strongly positive form (resp. vector)_ if it is of the form
\[u=\sum_{s\in I}\lambda_{s}\sqrt{-1}^{k}\alpha_{s,1}\wedge\overline{\alpha}_{s,1}\wedge\cdots\wedge\alpha_{s,k}\wedge\overline{\alpha}_{s,k},\]
where \(\alpha_{s,i}\in\bigwedge^{1,0}T_{p}^{*}M\) (resp. \(\bigwedge^{0,1}T_{p}M\)) and \(\lambda_{s}\geq 0\).
An element \(u\in\bigwedge^{k,k}T_{p}^{*}M\) (resp. \(\bigwedge^{k,k}T_{p}M\)) is a _positive form (resp. vector)_ if
\[\langle u,v\rangle\geq 0\]
for any strongly positive \(v\in\bigwedge^{k,k}T_{p}M\) (resp. \(\bigwedge^{k,k}T_{p}^{*}M\)). Here \(\langle\cdot,\cdot\rangle\) is the canonical pairing (complex linear) between \(\bigwedge^{*}T_{p}M\) and \(\bigwedge^{*}T_{p}^{*}M\). We denote \(\alpha\geq\beta\) (resp. \(\alpha\leq\beta\)) if for each \(k=0,\cdots,n\), \(\alpha^{[k]}-\beta^{[k]}\) (resp. \(\beta^{[k]}-\alpha^{[k]}\)) is a positive \((k,k)\)-form/vector.
We denote \(\alpha\geq_{s}\beta\) (resp. \(\alpha\leq_{s}\beta\)) if \(\alpha-\beta\) (resp. \(\beta-\alpha\)) is a strongly positive form/vector.
We list several direct conclusions from Definition 3.1 and compare these to our uniform positivity concept. A strongly positive form (vector) is also positive. The converse is true for \(k=0,1,n-1,n\); however, it is false for \(k=2,\cdots,n-2\) if \(n\geq 4.\) Also, the cone of positive \((k,k)\)-forms is the dual cone of strongly positive \((k,k)\)-vectors. It is obvious that both positivity and strong positivity are invariant under coordinate changes. If \(\rho\) is a Kahler form, then \(\rho^{k}\) is a strongly positive \((k,k)\)-form. Moreover, \(\rho^{k}\) is \(k\)-uniformly positive by 1.3. A uniformly positive form is positive, but not necessarily strongly positive. For instance, pick a weakly positive but not strongly positive \((k,k)\)-form \(\alpha\). Then \(\rho^{k}+\alpha\) is uniformly positive but not strongly positive.
The following lemma shows that \(\exp\omega\) is strongly positive if \(\omega\) is a non-negative \((1,1)\)-form. The same argument applies to non-negative \((1,1)\)-tangent vectors.
**Lemma 3.2**.: _Let \(A\) be a non-negative Hermitian matrix. Let \(\omega=A_{i\bar{j}}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{j}\). Then \(\exp\omega\) is a strongly positive form. Moreover, if both \(A\), \(B\) are non-negative Hermitian matrices, \(\omega^{\prime}=B_{i\bar{j}}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{j}\) and \(A\geq B\), then \(\exp\omega\geq_{s}\exp\omega^{\prime}\), i.e. \(\exp\omega-\exp\omega^{\prime}\) is a strongly positive form._
Proof.: We may assume that \(A\) is non-singular; otherwise, we just restrict to the column space of \(A\). Since strong positivity is invariant under linear transformations, we may assume that \(A\) is the identity matrix. Furthermore, we may assume that \(B=\sum_{i=1}^{n}\lambda_{i}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{i}\). Then, clearly, \(A\geq B\) implies \(0\leq\lambda_{i}\leq 1\). We then compute
\[\exp\omega=1+\sum_{k=1}^{n}\sum_{|I|=k}\prod_{i\in I}\frac{\sqrt{-1}}{2}dz^{i }\wedge d\bar{z}^{i},\]
\[\exp\omega-\exp\omega^{\prime}=\sum_{I}(1-\prod_{i\in I}\lambda_{i})\prod_{i \in I}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{i},\]
where \(I\) runs over all ordered subsets of \(\{1,\cdots,n\}\). It is clear that all the coefficients above are non-negative, which proves that \(\exp\omega\) is strongly positive and \(\exp\omega\geq_{s}\exp\omega^{\prime}\).
### Point-wise setup
We use \(\Gamma_{n\times n},\Gamma_{n\times n}^{+},\overline{\Gamma_{n\times n}^{+}}\) to denote the set of all Hermitian matrices, the set of positive definite Hermitian matrices, and the set of non-negative Hermitian matrices, respectively.
For \(p\in M,\) we pick a local normal coordinate near \(p\) with respect to \(\rho.\) Therefore, \(\rho=\frac{\sqrt{-1}}{2}\sum_{i,j}\delta_{i\bar{j}}dz^{i}\wedge d\bar{z}^{j}\) at \(p\). We may write
\[\omega=\frac{\sqrt{-1}}{2}\sum_{i,j}A_{i\bar{j}}dz^{i}\wedge d\bar{z}^{j},\]
where \(A=(A_{i\bar{j}})\) is a positive definite Hermitian matrix. Let \((A^{\bar{j}i})\) be the inverse matrix of \(A\); i.e. \(A_{i\bar{j}}A^{\bar{j}k}=\delta_{i}^{k}.\) We define the following local functional
\[F(A)=F(A,\Lambda):=\frac{(\Lambda\wedge\Omega)^{[n]}}{\Omega^{[n]}}. \tag{3.1}\]
A coordinate change preserves \(\frac{(\Lambda\wedge\Omega)^{[n]}}{\Omega^{[n]}},\) and hence \(F\) is invariant under a coordinate change. Equation (1.1) can be re-written as
\[F(A)=\kappa. \tag{3.2}\]
**Definition 3.3**.: Notations as above. We define a canonical \((1,1)\)-vector \(\chi\in\bigwedge^{1,1}T_{p}M\) as
\[\chi=2\sqrt{-1}\sum_{i,j}A^{\bar{j}i}\frac{\partial}{\partial\bar{z}^{j}} \wedge\frac{\partial}{\partial z^{i}}. \tag{3.3}\]
\(\chi\) is a strongly positive \((1,1)\)-vector which induces a Hermitian metric on \(T^{*}M\) that is dual to \(\omega.\)
Let \(\langle\cdot,\cdot\rangle\) be the complex bi-linear pairing \(\bigwedge^{*}(T_{p}^{*}M)\times\bigwedge^{*}(T_{p}M)\to\mathbb{C}.\) A direct computation shows that
\[\langle\frac{\omega^{k}}{k!},\frac{\chi^{k}}{k!}\rangle=\frac{n!}{k!(n-k)!}, \ \ \langle\frac{\rho^{k}}{k!},\frac{\chi^{k}}{k!}\rangle=\sigma_{k}(A^{-1}), \tag{3.4}\]
where \(\sigma_{k}(\cdot)\) is the \(k\)-th elementary symmetric function of eigenvalues of a Hermitian matrix.
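For instance, for \(k=1\) one checks directly from (3.3) that \(\langle\omega,\chi\rangle=\sum_{i,j}A_{i\bar{j}}A^{\bar{j}i}=n\) and \(\langle\rho,\chi\rangle=\sum_{i}A^{\bar{i}i}=\sigma_{1}(A^{-1})\), in agreement with (3.4).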
With a labeled orthogonal splitting \(\mathcal{O}_{p}=\{n_{p},\mathbf{d}_{p},\{\mathcal{V}_{i}\},\mathbf{k}_{p}\}\) at \(p\), we first fix a normal coordinate \(\{z^{i}\}\) at \(p\) of \(\rho\) such that \(\{\sqrt{2}\frac{\partial}{\partial z^{i}}\}\) restricts to a unitary basis on each \(\mathcal{V}_{i}\). Suppose that \(\rho_{i}=\pi_{i}^{*}\iota_{i}^{*}\rho\) with respect to \(\mathcal{O}_{p}\). We may write under this frame
\[A^{-1}=\left(\begin{array}{cccc}A^{-1}|\mathcal{V}_{1}&*&*&*\\ *&A^{-1}|\mathcal{V}_{2}&*&*\\ *&*&\ddots&*\\ *&*&*&A^{-1}|\mathcal{V}_{n_{p}}\end{array}\right),\]
where each diagonal block \(A^{-1}|\mathcal{V}_{i}\) belongs to \(\Gamma_{d_{i}\times d_{i}}\). Then
\[\langle\frac{\rho_{i}^{k}}{k!},\frac{\chi^{k}}{k!}\rangle=\langle\iota_{i}^{*}\frac{\rho^{k}}{k!},(\pi_{i})_{*}\frac{\chi^{k}}{k!}\rangle=\sigma_{k}(A^{-1}|\mathcal{V}_{i}). \tag{3.5}\]
For future convenience, we define, for \(k\leq n\),
\[F_{k}(A)=\frac{n!\Lambda^{[k]}\wedge\omega^{n-k}}{(n-k)!\omega^{n}}. \tag{3.6}\]
Denote \(\exp\chi=\sum_{k=0}^{n}\frac{\chi^{k}}{k!}.\) The following lemma shows that \(F_{k}\) and \(F\) can be represented by \(\chi\).
**Lemma 3.4**.: _Notations as above. We have_
\[F_{k}(A)=\langle\Lambda^{[k]},\frac{\chi^{k}}{k!}\rangle. \tag{3.7}\]
\[F(A)=\langle\Lambda,\exp\chi\rangle. \tag{3.8}\]
Proof.: It is enough to prove (3.7) for \(k\leq n-1\). For two ordered index sets \(I=\{i_{1}<i_{2}<\cdots<i_{k}\},\ J=\{j_{1}<\cdots<j_{k}\}\), we denote
\[\sqrt{-1}^{k^{2}}dz^{I}\wedge d\bar{z}^{J} =\sqrt{-1}^{k}dz^{i_{1}}\wedge d\bar{z}^{j_{1}}\wedge\cdots \wedge dz^{i_{k}}\wedge d\bar{z}^{j_{k}},\] \[\sqrt{-1}^{k^{2}}\frac{\partial}{\partial\bar{z}^{J}}\wedge \frac{\partial}{\partial z^{I}} =\sqrt{-1}^{k}\frac{\partial}{\partial\bar{z}^{j_{1}}}\wedge \frac{\partial}{\partial z^{i_{1}}}\wedge\cdots\wedge\frac{\partial}{ \partial\bar{z}^{j_{k}}}\wedge\frac{\partial}{\partial z^{i_{k}}}.\]
Thus, we have the following
\[\Lambda^{[k]}=\frac{\sqrt{-1}^{k^{2}}}{2^{k}}\sum_{|I|=|J|=k}\Lambda_{I,J}dz^{ I}\wedge d\bar{z}^{J}, \tag{3.9}\]
and
\[\frac{\omega^{k}}{k!}=\frac{\sqrt{-1}^{k^{2}}}{2^{k}}\sum_{|I|=|J|=k}A_{I,J}dz ^{I}\wedge d\bar{z}^{J},\ \frac{\chi^{k}}{k!}=2^{k}\sqrt{-1}^{k^{2}}\sum_{|I|=|J|=k}A^{\bar{J},I}\frac{ \partial}{\partial\bar{z}^{J}}\wedge\frac{\partial}{\partial z^{I}}, \tag{3.10}\]
where
\[A_{I,\bar{J}}:=\det\left(\begin{array}{cccc}A_{i_{1}\overline{j_{1}}}&A_{i_{ 1}\overline{j_{2}}}&\cdots&A_{i_{1}\overline{j_{k}}}\\ A_{i_{2}\overline{j_{1}}}&A_{i_{2}\overline{j_{2}}}&\cdots&A_{i_{2}\overline{j_{ k}}}\\ \vdots&\vdots&\ddots&\vdots\\ A_{i_{k}\overline{j_{1}}}&A_{i_{k}\overline{j_{2}}}&\ldots&A_{i_{k}\overline{j_{ k}}}\end{array}\right),\ A^{\bar{J},I}:=\det\left(\begin{array}{cccc}A^{\overline{j_{1}}i_{1}}&A^{ \overline{j_{1}}i_{2}}&\cdots&A^{\overline{j_{1}}i_{k}}\\ A^{\overline{j_{2}}i_{1}}&A^{\overline{j_{2}}i_{2}}&\cdots&A^{\overline{j_{2}}i_ {k}}\\ \vdots&\vdots&\ddots&\vdots\\ A^{\overline{j_{k}}i_{1}}&A^{\overline{j_{k}}i_{2}}&\ldots&A^{\overline{j_{k}} i_{k}}\end{array}\right).\]
As a consequence of (3.9) and (3.10), we have
\[\Lambda^{[k]}\wedge\frac{\omega^{n-k}}{(n-k)!}=\sum_{|I|=|J|=k}\epsilon^{I,I^{ c}}_{J,J^{c}}\Lambda_{I,\bar{J}}A_{I^{c},\bar{J}^{c}}\frac{\rho^{n}}{n!}\]
where the index sets \(I^{c}\), \(J^{c}\) are the ordered complements of \(I\) and \(J\); and \(\epsilon^{I,I^{c}}_{J,J^{c}}=\frac{\sqrt{-1}^{n^{2}}dz^{I}\wedge dz^{I^{c}}\wedge d\bar{z}^{J}\wedge d\bar{z}^{J^{c}}}{2^{n}\rho^{n}/n!}.\)
We make the following claim:
\[\frac{A_{I^{c},J^{c}}}{\det A}\epsilon^{I,I^{c}}_{J,J^{c}}=A^{\bar{J},I}. \tag{3.11}\]
After proper row and column permutations, we may assume without loss of generality that \(I=\{1,2,\cdots,k\}\) and \(J=\{1,\cdots,k\}\). We decompose as follows
\[A=\left(\begin{array}{cc}A_{1}&C^{\prime}\\ C&A_{2}\end{array}\right),\ A^{-1}=\left(\begin{array}{cc}B_{1}&D^{\prime}\\ D&B_{2}\end{array}\right),\]
where \(A_{1}\) is an \((n-k)\times(n-k)\) matrix and \(B_{2}\) is a \(k\times k\) matrix. Since the sign change of the permutation is \(\epsilon_{J,J^{c}}^{I,I^{c}}\), in order to prove (3.11), it suffices to show \(\det(A_{1})=\det A\cdot\det B_{2}\). Suppose first that \(A_{1}\) is nonsingular. Let \(E=-CA_{1}^{-1}\), \(E^{\prime}=-A_{1}^{-1}C^{\prime}\). Since
\[\left(\begin{array}{cc}I&0\\ E&I\end{array}\right)A\left(\begin{array}{cc}I&E^{\prime}\\ 0&I\end{array}\right)=\left(\begin{array}{cc}A_{1}&0\\ 0&A_{2}^{\prime}\end{array}\right),\quad\text{where }A_{2}^{\prime}=A_{2}-CA_{1}^{-1}C^{\prime},\]
we may write
\[\left(\begin{array}{cc}I&-E^{\prime}\\ 0&I\end{array}\right)A^{-1}\left(\begin{array}{cc}I&0\\ -E&I\end{array}\right)=\left(\begin{array}{cc}\tilde{B_{1}}&\tilde{D}^{ \prime}\\ \tilde{D}&B_{2}\end{array}\right).\]
Thus
\[\left(\begin{array}{cc}\tilde{B_{1}}&\tilde{D}^{\prime}\\ \tilde{D}&B_{2}\end{array}\right)\left(\begin{array}{cc}A_{1}&0\\ 0&A_{2}^{\prime}\end{array}\right)=\left(\begin{array}{cc}I&0\\ 0&I\end{array}\right). \tag{3.12}\]
(3.12) implies
\[B_{2}A_{2}^{\prime}=I.\]
Hence \(\det B_{2}\det A_{2}^{\prime}=1\). Notice that \(\det A=\det A_{1}\det A_{2}^{\prime}\). Therefore, \(\det(A_{1})=\det(A)\det(B_{2})\). We have proved the claim (3.11) for non-singular \(A_{1}\). If \(A_{1}\) is singular, we consider the perturbation \(A_{\epsilon}=A+\epsilon I\) for suitable small \(\epsilon\) such that both the corresponding block \(A_{1}+\epsilon I\) and \(A_{\epsilon}\) are non-singular. Then the claim follows from the continuity of the inversion and determinant functions. We have now established (3.11).
To finish the proof of our lemma, we observe that from (3.9)
\[\left\langle\Lambda^{[k]},\left(2^{k}\sqrt{-1}^{k^{2}}\frac{\partial}{ \partial\bar{z}^{J}}\wedge\frac{\partial}{\partial z^{I}}\right)\right\rangle= \Lambda_{I,\bar{J}}. \tag{3.13}\]
Thus, by (3.11),
\[\langle\Lambda^{[k]},\chi^{k}/k!\rangle =\langle\Lambda^{[k]},\sum_{|I|=|J|=k}A^{\bar{J},I}2^{k}\sqrt{-1}^{k^{2}}\frac{\partial}{\partial\bar{z}^{J}}\wedge\frac{\partial}{\partial z^{I}}\rangle\] \[=\sum_{|I|=|J|=k}\Lambda_{I,\bar{J}}A^{\bar{J},I}\] \[=\sum_{|I|=|J|=k}\Lambda_{I,\bar{J}}\frac{A_{I^{c},J^{c}}}{\det A}\epsilon_{J,J^{c}}^{I,I^{c}}\] \[=F_{k}(A). \tag{3.14}\]
We have finished the proof.
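As an illustration of Lemma 3.4, if \(\Lambda=\sum_{k=1}^{n}c_{k}\frac{\rho^{k}}{k!}\) with constants \(c_{k}\geq 0\), then (3.4) and (3.8) give \(F(A)=\sum_{k=1}^{n}c_{k}\sigma_{k}(A^{-1})\), so (3.2) recovers the inverse \(\sigma_{k}\) type equations; in particular, \(\Lambda=\rho\) corresponds to \(F(A)=\sigma_{1}(A^{-1})\), the operator of the \(J\)-equation.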
For future use, we record the following linear algebraic fact.
**Lemma 3.5**.: _Given Hermitian matrices \(V,H\) and \(A=\left(\begin{array}{cc}H&D\\ D^{\dagger}&V\end{array}\right)\), all of which are invertible, we have_
\[A^{-1}=\left(\begin{array}{cc}\hat{H}^{-1}&-\hat{H}^{-1}DV^{-1}\\ -V^{-1}D^{\dagger}\hat{H}^{-1}&V^{-1}+V^{-1}D^{\dagger}\hat{H}^{-1}DV^{-1} \end{array}\right), \tag{3.15}\]
_where \(\hat{H}=\left(H-DV^{-1}D^{\dagger}\right)\), \(\hat{V}=V-D^{\dagger}H^{-1}D\), and \(V^{-1}+V^{-1}D^{\dagger}\hat{H}^{-1}DV^{-1}=\hat{V}^{-1}\). Furthermore, if \(\hat{H}>0\) then_
\[A^{-1}\geq\left(\begin{array}{cc}0&0\\ 0&V^{-1}\end{array}\right). \tag{3.16}\]
Proof.: (3.15) can be proved by column and row operations as in the proof of Lemma 3.4. By symmetry, we also have the following
\[A^{-1}=\left(\begin{array}{cc}H^{-1}+H^{-1}D\hat{V}^{-1}D^{\dagger}H^{-1}&- H^{-1}D\hat{V}^{-1}\\ -\hat{V}^{-1}D^{\dagger}H^{-1}&\hat{V}^{-1}\end{array}\right).\]
For (3.16), let \(C=-\hat{H}^{-1}DV^{-1}=-H^{-1}D\hat{V}^{-1}\). We have
\[A^{-1}-\left(\begin{array}{cc}0&0\\ 0&V^{-1}\end{array}\right) =\left(\begin{array}{cc}\hat{H}^{-1}&C\\ C^{\dagger}&C^{\dagger}\hat{H}C\end{array}\right)\] \[=\left(\begin{array}{cc}I&0\\ C^{\dagger}\hat{H}&I\end{array}\right)\left(\begin{array}{cc}\hat{H}^{-1}&0\\ 0&0\end{array}\right)\left(\begin{array}{cc}I&\hat{H}C\\ 0&I\end{array}\right).\]
Since \(\left(\begin{array}{cc}\hat{H}^{-1}&0\\ 0&0\end{array}\right)\geq 0\), \(A^{-1}\geq\left(\begin{array}{cc}0&0\\ 0&V^{-1}\end{array}\right)\).
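As a quick numerical illustration of Lemma 3.5 (with all blocks of size one), take \(H=2\), \(V=1\), \(D=1\), so \(A=\left(\begin{array}{cc}2&1\\ 1&1\end{array}\right)\) and \(\hat{H}=1\). Formula (3.15) gives \(A^{-1}=\left(\begin{array}{cc}1&-1\\ -1&2\end{array}\right)\), and \(A^{-1}-\left(\begin{array}{cc}0&0\\ 0&V^{-1}\end{array}\right)=\left(\begin{array}{cc}1&-1\\ -1&1\end{array}\right)\geq 0\), consistent with (3.16).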
## 4. Ellipticity and convexity
In this section, we discuss the monotonicity and convexity of \(F\), when certain positivity condition on \(\Lambda\) holds. First, we compute the first and second variations of \(F(A)\) using notations set earlier. We use abbreviations \(F^{i\bar{j}}\), \(F^{i\bar{j},r\bar{s}}\), \(F^{i\bar{j}}_{k}\), and \(F^{i\bar{j},r\bar{s}}_{k}\) to represent \(\frac{\partial F}{\partial A^{i\bar{j}}}\), \(\frac{\partial^{2}F}{\partial A^{i\bar{j}}\partial A^{r\bar{s}}}\), \(\frac{\partial F_{k}}{\partial A^{i\bar{j}}}\), and \(\frac{\partial^{2}F_{k}}{\partial A^{i\bar{j}}\partial A^{r\bar{s}}}\), respectively.
**Lemma 4.1**.: _Notations as above._
\[F^{i\bar{j}}_{k}=-\sum_{ab}A^{\bar{a}i}A^{\bar{j}b}\langle\Lambda^{[k]},\frac{ \chi^{k-1}}{(k-1)!}\wedge 2\sqrt{-1}\frac{\partial}{\partial\bar{z}^{a}}\wedge \frac{\partial}{\partial z^{b}}\rangle. \tag{4.1}\]
\[F_{k}^{i\bar{j},r\bar{s}} =\sum_{a,b,c,d}A^{\bar{a}i}A^{\bar{j}b}A^{\bar{c}r}A^{\bar{s}d} \langle\Lambda^{[k]},\left(2\sqrt{-1}\right)^{2}\frac{\chi^{k-2}}{(k-2)!}\wedge \frac{\partial}{\partial\bar{z}^{a}}\wedge\frac{\partial}{\partial z^{b}} \wedge\frac{\partial}{\partial\bar{z}^{c}}\wedge\frac{\partial}{\partial z^{d}}\rangle\] \[+\sum_{a,b,c,d}\left(A^{\bar{a}r}A^{\bar{s}i}A^{\bar{j}b}+A^{ \bar{a}i}A^{\bar{j}r}A^{\bar{s}b}\right)\langle\Lambda^{[k]},\left(2\sqrt{-1} \frac{\chi^{k-1}}{(k-1)!}\frac{\partial}{\partial\bar{z}^{a}}\wedge\frac{ \partial}{\partial z^{b}}\right)\rangle. \tag{4.2}\]
Proof.: Recall \(\chi=2\sqrt{-1}A^{\bar{j}i}\frac{\partial}{\partial\bar{z}^{j}}\wedge\frac{ \partial}{\partial z^{i}}\). We use Lemma 3.4 to find
\[\frac{\partial F_{k}}{\partial A^{\bar{j}i}}=\langle\Lambda^{[k]},\frac{\chi ^{k-1}}{(k-1)!}\wedge 2\sqrt{-1}\frac{\partial}{\partial\bar{z}^{j}}\wedge \frac{\partial}{\partial z^{i}}\rangle.\]
Therefore
\[F_{k}^{i\bar{j}} =\frac{\partial A^{\bar{a}b}}{\partial A_{i\bar{j}}}\cdot\frac{ \partial F_{k}}{\partial A^{\bar{a}b}}\] \[=-\sum_{a,b}A^{\bar{a}i}A^{\bar{j}b}\langle\Lambda^{[k]},\frac{ \chi^{k-1}}{(k-1)!}\wedge 2\sqrt{-1}\frac{\partial}{\partial\bar{z}^{a}}\wedge \frac{\partial}{\partial z^{b}}\rangle. \tag{4.3}\]
Furthermore,
\[F_{k}^{i\bar{j},r\bar{s}} =\sum_{a,b,c,d}A^{\bar{a}i}A^{\bar{j}b}A^{\bar{c}r}A^{\bar{s}d} \langle\Lambda^{[k]},\left(\frac{1}{k!}\frac{\partial^{2}\chi^{k}}{\partial A ^{\bar{c}d}\partial A^{\bar{a}b}}\right)\rangle\] \[\quad+\sum_{a,b,c,d}\left(A^{\bar{a}r}A^{\bar{s}i}A^{\bar{j}b}+A^ {\bar{a}i}A^{\bar{j}r}A^{\bar{s}b}\right)\langle\Lambda^{[k]},\left(\frac{ \partial}{\partial A^{\bar{a}b}}\left(\frac{\chi^{k}}{k!}\right)\right)\rangle\] \[=\sum_{a,b,c,d}A^{\bar{a}i}A^{\bar{j}b}A^{\bar{c}r}A^{\bar{s}d} \langle\Lambda^{[k]},\left(\left(2\sqrt{-1}\right)^{2}\frac{\chi^{k-2}}{(k-2)! }\right)\frac{\partial}{\partial\bar{z}^{a}}\wedge\frac{\partial}{\partial z ^{b}}\wedge\frac{\partial}{\partial\bar{z}^{c}}\wedge\frac{\partial}{\partial z ^{d}}\rangle\] \[\quad+\sum_{a,b,c,d}\left(A^{\bar{a}r}A^{\bar{s}i}A^{\bar{j}b}+A^ {\bar{a}i}A^{\bar{j}r}A^{\bar{s}b}\right)\langle\Lambda^{[k]},2\sqrt{-1}\frac {\chi^{k-1}}{(k-1)!}\wedge\frac{\partial}{\partial\bar{z}^{a}}\wedge\frac{ \partial}{\partial z^{b}}\rangle. \tag{4.4}\]
**Lemma 4.2**.: _Notations as above. We assume that \(\Lambda\) satisfies condition **H2**. For any covector \(b=b_{i}dz^{i}\neq 0\), we have_
\[-F_{k}^{i\bar{j}}b_{i}\overline{b_{j}}\geq 0. \tag{4.5}\]
_Moreover,_
\[-\sum_{k=1}^{n-1}F_{k}^{i\bar{j}}b_{i}\overline{b_{j}}>0. \tag{4.6}\]
Proof.: Let \(b^{\sharp}=\sum_{i,a}\bar{b}_{i}A^{\bar{i}a}\frac{\partial}{\partial z^{a}}\) be the dual of \(b\) raised by \(\omega\). Then by (4.1)
\[-F_{k}^{i\bar{j}}b_{i}\overline{b_{j}}=\langle\Lambda^{[k]},2\sqrt{-1}\,\overline{b^{\sharp}}\wedge b^{\sharp}\wedge\exp\chi\rangle\geq 0. \tag{4.7}\]
We have used the fact that \(2\sqrt{-1}\overline{b^{\sharp}}\wedge b^{\sharp}\wedge\exp\chi\) is strongly positive.
By condition **H2**, we may choose a subspace \(\mathcal{V}_{i}\subset\mathcal{T}_{p}M\) such that \((\pi_{i})_{*}(b^{\sharp})\neq 0\) and \(\Lambda^{[k_{i}]}\geq m\frac{(\rho_{i})^{k_{i}}}{k_{i}!}\). Then
\[\Lambda^{[k_{i}]}\geq m\frac{\rho_{i}^{k_{i}}}{k_{i}!}\geq m^{\prime}(p)\frac{ (\pi_{i}^{*}\iota_{i}^{*}\omega)^{k_{i}}}{k_{i}!}, \tag{4.8}\]
where \(m^{\prime}(p)\) is a positive constant at \(p\). Then
\[\langle\Lambda^{[k_{i}]},2\sqrt{-1}\frac{\overline{b^{\sharp}}\wedge b^{ \sharp}\wedge\chi^{k_{i}-1}}{(k_{i}-1)!}\rangle\geq m^{\prime}(p)\langle\frac{ (\pi_{i}^{*}\iota_{i}^{*}\omega)^{k_{i}}}{k_{i}!},2\sqrt{-1}\frac{\overline{b^ {\sharp}}\wedge b^{\sharp}\wedge\chi^{k_{i}-1}}{(k_{i}-1)!}\rangle>0. \tag{4.9}\]
**Proposition 4.3**.: _Notations as above. If \(\Lambda\) satisfies condition **H2,** for any local complex \(n\times n\) matrix \(B_{i\bar{j}}\),_
\[\sum_{i,j,r,s}\left(\frac{\partial^{2}F_{k}(A)}{\partial A_{i\bar{j}} \partial A_{r\bar{s}}}+\frac{\partial F_{k}(A)}{\partial A_{i\bar{s}}}A^{\bar{ j}r}\right)B_{i\bar{j}}\overline{B_{s\bar{r}}}\geq 0. \tag{4.10}\]
Proof.: Define the matrix \(C\) and the corresponding \((1,1)\)-vector \(\zeta\) by
\[C^{\bar{a}b}:=\sum_{i,j}A^{\bar{a}i}B_{i\bar{j}}A^{\bar{j}b}, \tag{4.11}\] \[\zeta:=2\sqrt{-1}\sum_{a,b}C^{\bar{a}b}\frac{\partial}{\partial\bar{z}^{a}}\wedge\frac{\partial}{\partial z^{b}}. \tag{4.12}\]
In addition, we use \(C^{\dagger}\) to denote the adjoint matrix \(\left(C^{\dagger}\right)^{\bar{a}b}:=\overline{C^{\bar{b}a}}\). Define
\[\left(D\right)^{\bar{a}b}:=\sum_{i,j,r,s}A^{\bar{a}i}A^{\bar{s}b}A^{\bar{j}r}B _{i\bar{j}}\overline{B_{s\bar{r}}}=\sum_{r,s}C^{\bar{a}r}A_{r\bar{s}}\overline {C^{\bar{b}s}}. \tag{4.13}\]
Then,
\[\sum_{i,j,r,s}A^{\bar{a}r}A^{\bar{s}i}A^{\bar{j}b}B_{i\bar{j}}\overline{B_{s\bar{r}}}=\overline{C^{\bar{r}a}}A_{r\bar{s}}C^{\bar{s}b}=(D^{\dagger})^{\bar{a}b}, \tag{4.14}\] \[\sum_{i,j,r,s}A^{\bar{a}i}A^{\bar{j}b}A^{\bar{c}r}A^{\bar{s}d}B_{i\bar{j}}\overline{B_{s\bar{r}}}=C^{\bar{a}b}\overline{C^{\bar{d}c}}. \tag{4.15}\]
We define
\[\xi:=2\sqrt{-1}\sum_{a,b}D^{\bar{a}b}\frac{\partial}{\partial\bar{z}^{a}}\wedge\frac{\partial}{\partial z^{b}}. \tag{4.16}\]
Therefore
\[\sum_{i,j,r,s}\left(F_{k}^{i\bar{j},r\bar{s}}+\frac{\partial F_{k}}{ \partial A_{i\bar{s}}}A^{\bar{j}r}\right)B_{i\bar{j}}\overline{B_{s\bar{r}}} =\langle\Lambda^{[k]},\frac{\chi^{k-2}}{(k-2)!}\wedge\bar{\zeta} \wedge\zeta\rangle\] \[+\langle\Lambda^{[k]},\frac{\chi^{k-1}}{(k-1)!}\wedge\xi\rangle. \tag{4.17}\]
Define
\[\Theta_{k}(B,B)=\frac{\chi^{k-2}}{(k-2)!}\wedge(2\sqrt{-1}\bar{\zeta}\wedge \zeta)+\frac{\chi^{k-1}}{(k-1)!}\wedge\xi. \tag{4.18}\]
From (4.17), we see that
\[\sum_{i,j,r,s}\left(F_{k}^{i\bar{j},r\bar{s}}+\frac{\partial F_{k}}{\partial A _{i\bar{s}}}A^{\bar{j}r}\right)B_{i\bar{j}}\overline{B_{s\bar{r}}}=\langle \Lambda^{[k]},\Theta_{k}(B,B)\rangle. \tag{4.19}\]
Now, it suffices to check the strong positivity of \(\Theta_{k}(B,B)\). Since positivity is invariant under coordinate changes, we may change the local coordinates so that at the point \(p\),
\[\chi=2\sqrt{-1}\sum_{i}^{n}\frac{\partial}{\partial\bar{z}^{i}}\wedge\frac{ \partial}{\partial z^{i}},\ \ \zeta=2\sqrt{-1}\sum_{i=1}^{n}a_{i}\frac{\partial}{\partial\bar{z}^{i}}\wedge \frac{\partial}{\partial z^{i}}. \tag{4.20}\]
Therefore, by (4.20),
\[\frac{\chi^{k-2}}{(k-2)!}\wedge\bar{\zeta}\wedge\zeta=\sum_{|J|=k-2}\sum_{i,j\not\in J}a_{i}a_{j}2^{k}\sqrt{-1}^{k^{2}}\frac{\partial}{\partial\bar{z}^{J}}\wedge\frac{\partial}{\partial\bar{z}^{i}}\wedge\frac{\partial}{\partial\bar{z}^{j}}\wedge\frac{\partial}{\partial z^{J}}\wedge\frac{\partial}{\partial z^{i}}\wedge\frac{\partial}{\partial z^{j}}, \tag{4.21}\]
\[\frac{\chi^{k-1}}{(k-1)!}\wedge(\xi)=\sum_{|J|=k-1}\sum_{i\not\in J}a_{i}^{2} 2^{k}\sqrt{-1}^{k^{2}}\frac{\partial}{\partial\bar{z}^{J}}\wedge\frac{ \partial}{\partial\bar{z}^{i}}\wedge\frac{\partial}{\partial z^{J}}\wedge \frac{\partial}{\partial z^{i}}. \tag{4.22}\]
Let \(J\) be any ordered subset of \(\{1,\cdots,n\}\) such that \(|J|=k\). The coefficient of \(\sqrt{-1}^{k^{2}}2^{k}\frac{\partial}{\partial\bar{z}^{J}}\wedge\frac{\partial }{\partial z^{J}}\) in \(\Theta_{k}(B,B)\) is given by
\[\sum_{i\in J}a_{i}^{2}+\sum_{i\in J}\sum_{j\in J,\,j\neq i}a_{i}a_{j}=\left(\sum_{i\in J}a_{i}\right)^{2}.\]
Since each \(\sqrt{-1}^{k^{2}}2^{k}\frac{\partial}{\partial\bar{z}^{J}}\wedge\frac{ \partial}{\partial z^{J}}\) is strongly positive, we conclude that \(\Theta_{k}(B,B)\) is strongly positive.
For any general complex matrix \(B\), we consider the matrix decomposition
\[B=B^{R}+\sqrt{-1}B^{I},\]
where
\[B^{R}=\frac{B+B^{\dagger}}{2},\ B^{I}=\frac{B-B^{\dagger}}{2\sqrt{-1}}.\]
Thus, \(B^{R}\) and \(B^{I}\) are both Hermitian. The Hermitian bilinear form \(h(B,B)=\langle\Lambda^{[k]},\Theta_{k}(B,B)\rangle\) is given by
\[h(B,B) =h(B^{R},B^{R})+h(B^{I},B^{I})+\sqrt{-1}\left(h(B^{I},B^{R})-h(B^{ R},B^{I})\right)\] \[=h(B^{R},B^{R})+h(B^{I},B^{I}).\]
Hence \(h\) is positive semi-definite, and (4.10) follows. Therefore, we have finished the proof.
Now, we establish the ellipticity and convexity of \(F\). In this section, we consider the simpler case when \(\Lambda^{[n]}=f\frac{\rho^{n}}{n!}\geq 0.\)
**Corollary 4.4**.: _Notations as above. Suppose that \(\Lambda^{[n]}\geq 0\) at some point \(p\). Then at \(p\), \(F(A)\) is a strictly decreasing function in \(A\), i.e._
\[F(A+B)<F(A) \tag{4.23}\]
_for any non-zero semi-positive Hermitian matrix \(B\). Furthermore, for any complex matrix \(B_{i\bar{j}}\), we have_
\[\sum_{i,j,r,s}\left(F^{i\bar{j},r\bar{s}}+F^{i\bar{s}}A^{\bar{j}r}\right)B_{i \bar{j}}\overline{B_{s\bar{r}}}\geq 0.\]
_In particular, \(F\) is a strictly convex function in \(\Gamma^{+}_{n\times n}.\)_
Proof.: Notice that
\[F^{i\bar{j}}=\sum_{k=1}^{n-1}F_{k}^{i\bar{j}}-\frac{f}{\det A}A^{\bar{j}i}\]
Thus, (4.23) follows from Lemma 4.2 and assumption **H2**.
For convexity, by Proposition 4.3
\[\sum\left(F^{i\bar{j},r\bar{s}}+F^{i\bar{s}}A^{\bar{j}r}\right)B_ {i\bar{j}}\overline{B_{s\bar{r}}} =\sum_{k=1}^{n-1}\langle\Lambda^{[k]},\Theta_{k}(B,B)\rangle+ \frac{f}{\det A}A^{\bar{s}r}A^{\bar{j}i}B_{i\bar{j}}\overline{B_{s\bar{r}}}\] \[\geq 0.\]
Here, we have used that \(f\geq 0\). By Lemma 4.2 and assumption **H2**, \(-F^{i\bar{s}}A^{\bar{j}r}B_{i\bar{j}}\overline{B_{s\bar{r}}}>0\) for non-zero \(B\). Thus \(F\) is strictly convex.
## 5. Cone condition
We have introduced the concept of cone condition/subsolution in (1.5). In this section, we state more criteria for subsolutions and prove some properties that will be used later. We will focus on the local cone condition near a point \(p\in M\).
**Definition 5.1**.: Let
\[\mathcal{C}^{\kappa}_{\Lambda}:=\{\omega:\omega\text{ is Kahler};\ (1.5)\text{ holds on }M\}.\]
At a point \(p\in M\), let
\[\mathcal{C}^{\kappa}_{\Lambda}(p):=\{\omega:\omega\text{ is a positive }(1,1)\text{-form};\ (1.5)\text{ holds at }p\}.\]
In local coordinates at \(p\), where \(\omega=\frac{\sqrt{-1}}{2}\sum A_{i\bar{j}}dz^{i}\wedge d\bar{z}^{j},\) we say that the matrix \(A\in\mathcal{C}^{\kappa}_{\Lambda}(p)\) if \(\omega\) satisfies (1.5) at \(p\). By (1.5), \(\mathcal{C}^{\kappa}_{\Lambda}(p)\) can be viewed as an open set in \(\Gamma^{+}_{n\times n}.\)
The cone condition (1.5) in the study of \(J\)-equation was first explored in Song-Weinkove [33]. Later Fang-Lai-Ma [19] extended the discussion to inverse \(\sigma_{k}\) type equations. The notion of subsolution for a class of fully nonlinear equations was introduced by Guan [17]. See also Szekelyhidi [35].
We have the following criteria for subsolutions. Here we do not assume the sign of \(\Lambda^{[n]}.\)
**Proposition 5.2**.: _Notations as above. Suppose that \(\underline{A}\) is a positive Hermitian matrix. The following are equivalent_
1. \(\underline{A}\in\mathcal{C}^{\kappa}_{\Lambda}(p).\)__
2. _There is a constant_ \(R=R(\underline{A},\Lambda,n,|f(p)|)\) _s.t. if_ \(B\) _is a non-negative Hermitian matrix satisfying_ \[F(\underline{A}+B)=\kappa,\] _then_ (5.1) \[|B|\leq R.\] _Here_ \(|B|=\left(\sum_{i,j}|B_{i\bar{j}}|^{2}\right)^{\frac{1}{2}}.\)__
3. _For any non-zero semi-positive Hermitian matrix_ \(B\) _the following holds_ (5.2) \[\lim_{t\to\infty}F(\underline{A}+tB)<\kappa.\]
Proof.: Pick a local unit covector \(b=\sum_{i=1}^{n}b_{i}dz^{i}\) such that \(\|b\|_{\rho}=1.\) We may identify \(b\) with a \(1\times n\) matrix \((b_{1},\cdots,b_{n})\). Define \(\beta=\frac{\sqrt{-1}}{2}\sum_{i,j}b_{i}\bar{b}_{j}dz^{i}\wedge d\bar{z}^{j}\). We have the following identity
\[\frac{(\omega+t\beta)^{n}}{n!}\left(\kappa-F(\underline{A}+tb^{\dagger}b) \right)=t\left(\kappa\Omega-\Lambda\wedge\Omega\right)^{[n-1]}\wedge\beta+ \left(\kappa-F(\underline{A})\right)\frac{\omega^{n}}{n!}. \tag{5.3}\]
\((1)\Rightarrow(2)\). Suppose that \(\underline{A}\in\mathcal{C}^{\kappa}_{\Lambda}\). Then there is a positive \(\delta=\delta(\underline{A},\Lambda)\) s.t.
\[(\kappa\Omega-\Lambda\wedge\Omega)^{[n-1]}\wedge\beta\geq\delta P^{[n-1]}\wedge\beta.\]
Then from (5.3),
\[\frac{(\omega+t\beta)^{n}}{n!}\left(\kappa-F(\underline{A}+tb^{\dagger}b) \right)\geq t\delta\|b\|_{\rho}^{2}P^{[n]}+O(1). \tag{5.4}\]
Here \(O(1)\) represents an \((n,n)\)-form with bounded norm with respect to \(\rho\), depending on \(\underline{A}\) and \(\Lambda\) but not on \(b\). As
\[F(\underline{A}+tb^{\dagger}b)=\sum_{k=1}^{n-1}F_{k}(\underline{A}+tb^{\dagger }b)+\frac{f(p)}{\det(\underline{A}+tb^{\dagger}b)}, \tag{5.5}\]
by Lemma 4.2, the first term on the right hand side of (5.5) is decreasing in \(t\) and non-negative, and the second term is bounded and approaches \(0\) as \(t\to\infty\). Thus \(\lim_{t\to\infty}F(\underline{A}+tb^{\dagger}b)\) exists. Hence from (5.4),
\[t\delta P^{[n]}+O(1) \leq\left(t\frac{\omega^{n-1}}{(n-1)!}\wedge\beta\right)\left( \kappa-F(\underline{A}+tb^{\dagger}b)\right)+O(1)\] \[=t\|b\|_{\omega}\det\underline{A}\left(\kappa-F(\underline{A}+ tb^{\dagger}b)\right)P^{[n]}+O(1)\] \[\leq t\lambda_{max}\det\underline{A}\left(\kappa-F(\underline{A }+tb^{\dagger}b)\right)P^{[n]}+O(1).\]
Here \(\lambda_{max}\) is the maximal eigenvalue of \(\underline{A}\). Therefore, there is an \(R=R(\underline{A},\Lambda)\) such that if \(t>R\), then
\[\kappa-F(\underline{A}+tb^{\dagger}b)\geq\frac{\delta}{2\lambda_{max}\det \underline{A}}. \tag{5.6}\]
For any non-zero semi-positive Hermitian matrix \(B\) with \(\|B\|=\sqrt{\sum_{i,j}|B_{i\bar{j}}|^{2}}=N\), let \(\lambda\) be the biggest non-zero eigenvalue of \(B\) and \(b^{\prime}=(b^{\prime}_{1},\cdots,b^{\prime}_{n})\) be the unit eigenvector corresponding to \(\lambda\). Notice \(\lambda\geq\frac{N}{n}\) and
\[B-\frac{N}{2n}(b^{\prime})^{\dagger}b^{\prime}\geq 0.\]
Suppose that \(N/(2n)\geq R\).
\[\kappa-F(\underline{A}+\frac{N}{2n}(b^{\prime})^{\dagger}b^{\prime})\geq\frac{ \delta}{2\lambda_{max}\det\underline{A}}. \tag{5.7}\]
Now
\[F(\underline{A}+B) =\sum_{k=1}^{n-1}F_{k}(\underline{A}+B)+\frac{f(p)}{\det( \underline{A}+B)}\] \[\leq F(\underline{A}+\frac{N}{2n}(b^{\prime})^{\dagger}b^{\prime} )+|f(p)|\cdot\left|\frac{1}{\det(\underline{A}+B)}-\frac{1}{\det(\underline{ A}+\frac{N}{2n}(b^{\prime})^{\dagger}b^{\prime})}\right|. \tag{5.8}\]
Take \(N\) large enough so that \(\det(\underline{A}+\frac{N}{2n}(b^{\prime})^{\dagger}b^{\prime})^{-1}<\frac{ \delta}{4\lambda_{max}(\det\underline{A})(|f(p)|+1)}\). Then by (5.7) and (5.8),
\[\kappa-F(\underline{A}+B)>\frac{\delta}{4\lambda_{max}\det\underline{A}}.\]
We then have a contradiction. We have proved (5.1).
\((2)\Rightarrow(3)\) is obvious since \(F_{k}(\underline{A}+tB)\) is strictly decreasing in \(t\) and \(\frac{f}{\det(\underline{A}+tB)}\) tends to \(0\) as \(t\to\infty\).
\((3)\Rightarrow(1)\). If \(\lim_{t\to\infty}F(\underline{A}+tB)<\kappa\) for every non-zero semi-positive \(B\), we can test \(B=b^{\dagger}b\) for each unit covector \(b\) in (5.3) to see that \((\kappa\Omega-\Lambda\wedge\Omega)^{[n-1]}\wedge\beta\) is positive for every \(\beta=\frac{\sqrt{-1}}{2}\sum_{i,j}b_{i}\bar{b}_{j}dz^{i}\wedge d\bar{z}^{j}\). Thus, \((\kappa\Omega-\Lambda\wedge\Omega)^{[n-1]}\) is positive, which implies \(\underline{A}\in\mathcal{C}_{\Lambda}^{\kappa}(p)\).
_Remark 5.3_.: The criterion (2) in Proposition 5.2 is the definition of subsolution given by Szekelyhidi [35]. The equivalence of (2) and (3) is suggested in [35] Remark 8.
**Definition 5.4**.: Let \(B\) be a non-zero semi-positive Hermitian matrix. We define
\[F_{\Lambda}(A:B):=\lim_{t\to\infty}F(A+tB,\Lambda), \tag{5.9}\]
\[\mathcal{P}_{\Lambda}(A):=\max_{B\in\overline{\Gamma_{n\times n}^{+}},\|B\|=1 }\lim_{t\to+\infty}F(A+tB,\Lambda). \tag{5.10}\]
Clearly, \(A\in\mathcal{C}_{\Lambda}^{\kappa}(p)\) if and only if for any non-zero \(B\in\overline{\Gamma_{n\times n}^{+}}\), \(F_{\Lambda}(A:B)<\kappa\). Equivalently, \(A\in\mathcal{C}_{\Lambda}^{\kappa}(p)\) if and only if
\[\mathcal{P}_{\Lambda}(A)<\kappa.\]
We use the notation \(\mathcal{P}_{\Lambda}(\omega)=\mathcal{P}_{\Lambda}(A)\) if \(\omega\) is represented by the matrix \(A\) in local coordinates. There is another perspective where \(F_{\Lambda}(A:B)\) and \(\mathcal{P}_{\Lambda}(A)\) are given by restrictions of \(F\) to subspaces, which relies on the generalized inverse matrix in the sense of Moore-Penrose [27, 28].
**Definition 5.5**.: Suppose that \(A\) is an \(n\times n\) matrix. The Moore-Penrose inverse \(A^{-1}\) is the unique matrix that satisfies the following conditions:
1. \(AA^{-1}A=A\), \(A^{-1}AA^{-1}=A^{-1}\).
2. Both \(AA^{-1}\) and \(A^{-1}A\) are Hermitian matrices.
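For instance, for \(A=\mathrm{diag}(\lambda,0,\cdots,0)\) with \(\lambda\neq 0\), the Moore-Penrose inverse is \(A^{-1}=\mathrm{diag}(\lambda^{-1},0,\cdots,0)\): both products \(AA^{-1}=A^{-1}A=\mathrm{diag}(1,0,\cdots,0)\) are Hermitian and conditions (1)-(2) are readily checked.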
For any matrix \(A\), the Moore-Penrose inverse of \(A\) exists and can be constructed via the singular value decomposition. Lemma 5.6 below will be used to relate \(F_{\Lambda}(A:B)\) with a Moore-Penrose inverse.
**Lemma 5.6**.: _Let \(V\geq 0\) be a Hermitian matrix of rank \(r\). Let \(\mathcal{V}\) be the linear space spanned by the eigenvectors of \(V\) with non-zero eigenvalues and let \(\mathcal{H}\) be the orthogonal complement of \(\mathcal{V}\). Let \(\Pi_{\mathcal{H}}\) be the orthogonal projection matrix to the subspace \(\mathcal{H}\subset\mathcal{T}_{p}M\). Denote_
\[(A|_{\mathcal{H}})^{-1}:=\left(\Pi_{\mathcal{H}}A\Pi_{\mathcal{H}}\right)^{-1}. \tag{5.11}\]
_Then_
\[\lim_{t\to+\infty}\left(A+tV\right)^{-1}=\left(A|_{\mathcal{H}}\right)^{-1}, \tag{5.12}\]
_Remark 5.7_.: The inverse on the right hand side of (5.12) is in the sense of Moore-Penrose and we denote it as \((A|_{\mathcal{H}})^{-1}\).
Proof.: Since the Moore-Penrose inverse is compatible with unitary changes of basis, we may choose a unitary basis \(\{e_{i}\}\) so that \(e_{1},\cdots,e_{r}\) span \(\mathcal{V}\). Then
\[V=\sum_{i=1}^{r}V_{i\bar{i}}e_{i}e_{i}^{\dagger},\ A=\sum_{i,j}A_{i\bar{j}}e_{i }e_{j}^{\dagger}.\]
Then, under this basis, we have
\[V=\left(\begin{array}{cc}v&0\\ 0&0\end{array}\right),\ \Pi_{\mathcal{H}}=\left(\begin{array}{cc}0&0\\ 0&I_{n-r}\end{array}\right)\]
where \(v\) is an \(r\times r\) positive definite matrix and \(I_{n-r}\) is an identity matrix. We may write
\[A+tV=\left(\begin{array}{cc}A_{1}+tv&C\\ C^{\dagger}&A_{2}\end{array}\right).\]
The inverse is given by
\[(A+tV)^{-1}=\left(\begin{array}{cc}(\hat{A}_{1}(t))^{-1}&-(\hat{A}_{1}(t))^ {-1}CA_{2}^{-1}\\ -A_{2}^{-1}C^{\dagger}(\hat{A}_{1}(t))^{-1}&A_{2}^{-1}+A_{2}^{-1}C^{\dagger}( \hat{A}_{1}(t))^{-1}CA_{2}^{-1}\end{array}\right)\]
where \(\hat{A}_{1}(t)=A_{1}+tv-CA_{2}^{-1}C^{\dagger}\). As \(t\to+\infty\), \(\left(\hat{A}_{1}(t)\right)^{-1}\to 0\) uniformly. Thus,
\[\lim_{t\to+\infty}(A+tV)^{-1}=\left(\begin{array}{cc}0&0\\ 0&A_{2}^{-1}\end{array}\right).\]
Notice that \(\Pi_{\mathcal{H}}A\Pi_{\mathcal{H}}=\left(\begin{array}{cc}0&0\\ 0&A_{2}\end{array}\right)\). Hence, we easily see that \((\Pi_{\mathcal{H}}A\Pi_{\mathcal{H}})^{-1}=\lim_{t\to+\infty}(A+tV)^{-1}\).
Given an orthogonal splitting \(\mathcal{T}_{p}M=\mathcal{H}\oplus\mathcal{H}^{\perp}\) with respect to \(\rho\). Following the notation in Lemma 5.6, we denote
\[\chi_{\mathcal{H}}:=\sum_{i,j\geq d+1}(A|_{\mathcal{H}})^{\bar{j}i}2\sqrt{-1} \frac{\partial}{\partial\bar{z}^{j}}\wedge\frac{\partial}{\partial z^{i}}. \tag{5.13}\]
Then \(\chi_{\mathcal{H}}\in\bigwedge^{1,1}\mathcal{H}\) is dual to \(\omega|_{\mathcal{H}}\). Now we characterize \(\mathcal{P}_{\Lambda}\) using subspaces.
**Lemma 5.8**.: _Notations as above, the following statements are true:_
1. _If_ \(B\geq 0\) _and its all the non-zero eigenvectors span_ \(\mathcal{H}^{\perp}\)_, then_ \[F_{\Lambda}(A:B)=\langle\Lambda,\exp\chi_{\mathcal{H}}\rangle.\]
2. \[\mathcal{P}_{\Lambda}(A)=\max_{\mathcal{H}\subset\mathcal{T}_{p}M:\dim\mathcal{H}=d \leq n-1}\langle\Lambda,\exp\chi_{\mathcal{H}}\rangle.\]
3. \[\mathcal{P}_{\Lambda}(A)=\max_{\mathcal{H}\subset\mathcal{T}_{p}M:\dim\mathcal{ H}=n-1}\langle\Lambda,\exp\chi_{\mathcal{H}}\rangle.\]
4. _If_ \(\omega\in\mathcal{C}_{\Lambda}^{\kappa}(p)\)_, then for any_ \(d\)_-dimensional subspace_ \(\mathcal{H}\) _of_ \(\mathcal{T}_{p}M\) _with_ \(d\leq n-1\)_, we have_ \[((\kappa-\Lambda)\wedge\exp\omega)^{[d]}\,|_{\mathcal{H}}>0,\] _as a positive_ \((d,d)\)_-form._
Proof.: (1) follows from the definition of \(F(A:B)\) and Lemma 5.6.
(2) follows immediately from (1).
(3) follows from the monotonicity of \(F\). In fact, \(F(A)\) is monotonic in \(A\), which implies \(\lim_{t\to+\infty}F(A+tB,\Lambda)\leq\lim_{t\to\infty}F(A+tb^{\dagger}b)\) where \(b\) is a unit eigenvector of \(B\) with positive eigenvalue. Thus, we may restrict to rank-one Hermitian matrices when computing \(\mathcal{P}_{\Lambda}(A)\) in (5.10).
(4) follows from (2) immediately.
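To illustrate (5.13) and Lemma 5.8, suppose that \(A=\mathrm{diag}(\lambda_{1},\cdots,\lambda_{n})\) in the chosen normal coordinates and that \(\mathcal{H}\) is the coordinate hyperplane spanned by \(\{\frac{\partial}{\partial z^{i}}\}_{i\neq i_{0}}\). Then \((A|_{\mathcal{H}})^{-1}=\mathrm{diag}(\lambda_{i}^{-1})_{i\neq i_{0}}\) (extended by zero), and in the special case \(\Lambda=\rho\) one gets \(\langle\Lambda,\exp\chi_{\mathcal{H}}\rangle=\sum_{i\neq i_{0}}\lambda_{i}^{-1}\); such coordinate hyperplanes already give the lower bound \(\mathcal{P}_{\rho}(A)\geq\max_{i_{0}}\sum_{i\neq i_{0}}\lambda_{i}^{-1}\).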
The following corollary gives the easy part of Theorem 1.10.
**Corollary 5.9**.: _Notations as above. If \(\omega\in\mathcal{C}_{\Lambda}^{\kappa}\), for any \(d\)-dimensional subvariety \(Z\subset M\) and \(d\leq n-1\), we have_
\[\int_{Z}\left(\kappa-\Lambda\right)\wedge\exp\omega>0. \tag{5.14}\]
_Furthermore, if \(\omega\in[\omega_{0}]\) solves (1.2), then \([\omega_{0}]\) is \(([\Lambda],\kappa)\)-positive._
Proof.: For any \(d\)-dimensional subvariety \(Z\subset M\) with regular part \(Z_{\rm reg}\), \(Z\backslash Z_{\rm reg}\) is of measure zero. Then by Lemma 5.8 (4), \(((\kappa-\Lambda|_{Z})\wedge\exp\left(\omega|_{Z}\right))^{[d]}>0\) at any point \(p\in Z_{\rm reg}\). Hence (5.14) is established.
We will prove that any solution \(\omega\) to (1.2) is a subsolution in Lemma 6.1. Thus, by (5.14), we conclude that \([\omega]=[\omega_{0}]\) is \(([\Lambda],\kappa)\)-positive.
For future use, we collect some properties of \(\mathcal{P}_{\Lambda}(A)\) in the following lemma.
**Lemma 5.10**.: _Notations as above. The following properties of \(\mathcal{P}_{\Lambda}(A)\) hold:_
1. \(\mathcal{P}_{\Lambda}(A)\) _is a continuous convex function in_ \(\Gamma_{n\times n}^{+}\)_._
2. \(\mathcal{P}_{\Lambda}(A)\) _is decreasing in_ \(A\)_, i.e._ \(\mathcal{P}_{\Lambda}(A^{\prime})\leq\mathcal{P}_{\Lambda}(A)\) _if_ \(A^{\prime}-A\geq 0\)_._
3. \(\mathcal{P}_{\Lambda}(A)\) _is continuous in_ \(\Lambda\)_._
4. \(\mathcal{P}_{\Lambda}(A)\) _is increasing in_ \(\Lambda\)_, i.e._ \(\mathcal{P}_{\Lambda}(A)\leq\mathcal{P}_{\Lambda^{\prime}}(A)\) _if_ \(\Lambda^{\prime}-\Lambda\geq 0\)_._
5. \(\mathcal{P}_{\Lambda}(\omega)\) _as a function of_ \(p\) _on_ \(M\) _is continuous._
6. \(\mathcal{P}_{\Lambda}(A)\) _is sub-linear in_ \(\Lambda\)_, i.e._ \(\mathcal{P}_{\Lambda+\Lambda^{\prime}}(A)\leq\mathcal{P}_{\Lambda}(A)+ \mathcal{P}_{\Lambda^{\prime}}(A)\)_._
7. \(\mathcal{P}_{\Lambda}(A)=\mathcal{P}_{\mathring{\Lambda}}(A)\)_, where_ \(\mathring{\Lambda}=\Lambda-\Lambda^{[n]}\)_._
8. _Suppose that \(\Lambda\) satisfies **H2**. If \(\{A_{l}\}\subset\Gamma_{n\times n}^{+}\) and \(A_{l}\to A_{\infty}\in\partial\Gamma_{n\times n}^{+}\) as \(l\to\infty\), then \(\lim_{l\to\infty}\mathcal{P}_{\Lambda}(A_{l})=\infty\)._
Proof.: (1) follows from the convexity of \(F\) on \(\Gamma_{n\times n}^{+}\). (2) follows from the monotonicity of \(F\). See Corollary 4.4.
For (3), let \(\Lambda\) and \(\Lambda^{\prime}\) be two differential forms. We may choose \(\epsilon^{n}=n!\|\Lambda-\Lambda^{\prime}\|_{\rho}\) so that
\[-\exp\epsilon\rho+1\leq\Lambda-\Lambda^{\prime}\leq\exp\epsilon\rho-1.\]
By Lemma 5.8 (3), we have
\[|\mathcal{P}_{\Lambda}(A)-\mathcal{P}_{\Lambda^{\prime}}(A)| \leq\max_{\mathcal{H}\subset T^{1,0}M:\dim\mathcal{H}=n-1}\left| \langle\Lambda-\Lambda^{\prime},\exp\chi_{\mathcal{H}}\rangle\right|\] \[\leq\max_{\mathcal{H}\subset T^{1,0}M:\dim\mathcal{H}=n-1} \langle\exp\left(\epsilon\rho\right)-1,\exp\left(\chi_{\mathcal{H}}\right)\rangle\] \[\leq\langle\exp\left(\epsilon\rho\right)-1,\exp\chi\rangle. \tag{5.15}\]
Then, as \(\|\Lambda-\Lambda^{\prime}\|_{\rho}\to 0\), \(\epsilon\to 0\), it holds that \(|\mathcal{P}_{\Lambda}(A)-\mathcal{P}_{\Lambda^{\prime}}(A)|\to 0\) as well. Hence, \(\mathcal{P}_{\Lambda}(A)\) is continuous in \(\Lambda\).
(4) follows from Lemma 5.8 (1). (5) follows from (1) and (3).
For (6), we have the following
\[\mathcal{P}_{\Lambda+\Lambda^{\prime}}(A) =\max_{\mathcal{H}\subset T^{1,0}M:\dim\mathcal{H}=n-1}\langle \Lambda+\Lambda^{\prime},\exp\chi_{\mathcal{H}}\rangle\] \[\leq\mathcal{P}_{\Lambda}(A)+\mathcal{P}_{\Lambda^{\prime}}(A).\]
For (7), we assume that \(\Lambda^{[n]}(p)=f(p)P^{[n]}\). Then (7) follows from the fact that \(\lim_{t\to\infty}\frac{f(p)}{\det(A+tB)}=0\) for any \(B\geq 0\) and \(B\neq 0\).
For (8), note that we may assume \(\Lambda\geq 0\), since \(\mathcal{P}_{\mathring{\Lambda}}(A)=\mathcal{P}_{\Lambda}(A)\) for \(\mathring{\Lambda}=\Lambda-\Lambda^{[n]}\) by (7). Let \(\chi_{A_{l}}=(A_{l})^{\bar{j}i}2\sqrt{-1}\frac{\partial}{\partial\bar{z}^{j}}\wedge\frac{\partial}{\partial z^{i}}\). We first prove that
\[\lim_{l\to\infty}\langle\Lambda,\exp\chi_{A_{l}}\rangle=\infty \tag{5.16}\]
as \(A_{l}\to A_{\infty}\in\partial\Gamma_{n\times n}^{+}\). By **H2**,
\[\langle\Lambda,\exp\chi_{A_{l}}\rangle \geq m\langle\sum_{i=1}^{n_{p}}\frac{\rho_{i}^{k_{i}}}{k_{i}!},\exp\chi_{A_{l}}\rangle\] \[\geq m\sum_{i=1}^{n_{p}}\sigma_{k_{i}}(A_{l}^{-1}|\mathcal{V}_{i})\] \[\geq mC(n,\mathbf{d}_{p},\mathbf{k}_{p})\det(A_{l}^{-1})^{\frac{1}{\sum_{i=1}^{n_{p}}\frac{d_{i}}{k_{i}}}}. \tag{5.17}\]
We have used the Newton-Maclaurin inequality and the mean value inequality in the last inequality. (5.16) follows from the fact that \(\lim_{l\to\infty}\det(A_{l}^{-1})=\infty.\) To prove (8), we choose a hyperplane \(\mathcal{H}\) which contains the kernel of \(A_{\infty}\). Then, Lemma 5.8 (3) and (5.16) imply (8).
## 6. Equations with almost positive volume forms
With the cone condition in place, we extend results of Section 4 to include cases where \(\Lambda\) contains an almost positive volume form. Many ideas are from works of Chen [4] and Datar-Pingali [12]. For simplicity, we write
\[\Lambda^{[n]}=fP^{[n]}. \tag{6.1}\]
In this section, we allow \(f\) to be almost positive.
The following lemma implies that a solution to (1.2) is also a subsolution if **H2** is satisfied.
**Lemma 6.1**.: _Assume that \(\mathring{\Lambda}\) is \(\mathcal{O}\)-uniformly positive with constant \(m\). Suppose that for some \(p_{0}\in M\), \(f(p_{0})\geq 0\). For any \(p\in M\), let_
\[\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p}):=\frac{ \min_{i}\left\{\binom{d_{i}}{k_{i}-1}\binom{d_{i}}{k_{i}}^{\frac{1}{k_{i}}-1} \right\}\prod_{i=1}^{n_{p}}\binom{d_{i}}{k_{i}}^{\frac{d_{i}}{k_{i}}}}{\max_{ i}\{d_{i}\binom{d_{i}}{k_{i}}^{\frac{1}{k_{i}}}\}(\frac{\kappa}{m})^{\sum_{i=1}^{n_ {p}}\frac{d_{i}}{k_{i}}}\cdot\left(\sum_{i=1}^{n_{p}}\left(\frac{\kappa}{m} \right)^{\frac{1}{k_{i}}}\right)}. \tag{6.2}\]
_If \(\omega\) is a solution to (1.2), and for all \(p\in M\), \(f(p)>-m\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p})\), then \(\omega\in\mathcal{C}_{\Lambda}^{\kappa}\)._
We first state some lemmas before proving Lemma 6.1.
**Lemma 6.2**.: _Notations as above. Let \(A\) be a positive Hermitian matrix. Then the following statements hold._
1. _If_ \(k\geq l\geq 0\)_,_ \(r\geq s\geq 0\)_, the function_ (6.3) \[A\mapsto\frac{\sigma_{l}(A)}{\sigma_{k}(A)}\] _is decreasing in_ \(A\)_._
2. _Let_ \(\mathcal{H}\) _be a linear subspace with codimension_ \(r\)_. Then_ (6.4) \[\frac{\sigma_{l-r}(A|_{\mathcal{H}})}{\sigma_{k-r}(A|_{\mathcal{H}})}\leq\frac {\sigma_{l}(A)}{\sigma_{k}(A)}.\]
3. _Let_ \(T_{k-1}(A)\) _be the linearized operator of_ \(\sigma_{k}(A)\)_, i.e._ \(\langle T_{k-1}(A),B\rangle=\frac{d}{dt}\sigma_{k}(A+tB)|_{t=0}\)_. Then_ (6.5) \[\sigma_{r-1}(A)T_{k-1}(A)\geq\sigma_{k-1}(A)T_{r-1}(A),\]
_if_ \(r\geq k\)_._
Proof.: (1) is well known in the literature of quotient Hessian equations. See for instance, [34].
To prove (2), we may assume that \(\mathcal{H}=\mathrm{span}\{\frac{\partial}{\partial z^{1}},\cdots,\frac{ \partial}{\partial z^{n-r}}\}\). We define
\[A_{t}=A+t\left(\begin{array}{cc}0&0\\ 0&I_{r}\end{array}\right)=\left(\begin{array}{cc}A|_{\mathcal{H}}&C\\ C^{\dagger}&A|_{\mathcal{H}^{\perp}}+tI_{r}\end{array}\right). \tag{6.6}\]
Then \(A_{t}\geq A\) and
\[\sigma_{l}(A_{t})=t^{r}\sigma_{l-r}(A|_{\mathcal{H}})+o(t^{r}),\ \sigma_{k}(A_{t})=t^{r} \sigma_{k-r}(A|_{\mathcal{H}})+o(t^{r}). \tag{6.7}\]
Hence,
\[\frac{\sigma_{l}(A)}{\sigma_{k}(A)}\geq\lim_{t\to\infty}\frac{\sigma_{l}(A_{t })}{\sigma_{k}(A_{t})}=\frac{\sigma_{l-r}(A|_{\mathcal{H}})}{\sigma_{k-r}(A|_ {\mathcal{H}})}. \tag{6.8}\]
To prove (3), we may assume that \(A\) is diagonal after a unitary transform. Write \(A=\mathrm{diag}\{\lambda_{1},\cdots,\lambda_{n}\}\). Then
\[(T_{k-1}(A))^{i\bar{j}}=\sigma_{k-1}(A|i)\delta^{i\bar{j}}. \tag{6.9}\]
Here, \((A|i)\) denotes the matrix obtained by deleting the \(i\)-th row and the \(i\)-th column of \(A\). Thus, (6.5) is equivalent to
\[\sigma_{r-1}(A)\sigma_{k-1}(A|i)\geq\sigma_{k-1}(A)\sigma_{r-1}(A|i), \tag{6.10}\]
for each \(i\). If \(\sigma_{r-1}(A|i)=0\), then (6.10) holds trivially. Otherwise (6.10) is equivalent to
\[\frac{\sigma_{k-1}(A|i)}{\sigma_{r-1}(A|i)}\geq\frac{\sigma_{k-1}(A)}{\sigma_ {r-1}(A)}. \tag{6.11}\]
By (1), \(\frac{\sigma_{k-1}(A)}{\sigma_{r-1}(A)}\) is decreasing in \(A\). Thus (6.11) holds since \((A|i)\leq A\).
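As a quick sanity check of (1) in the simplest non-trivial case, take \(d=2\), \(l=1\), \(k=2\), and \(A=\mathrm{diag}\{\lambda_{1},\lambda_{2}\}\) with \(\lambda_{1},\lambda_{2}>0\); then
\[\frac{\sigma_{1}(A)}{\sigma_{2}(A)}=\frac{\lambda_{1}+\lambda_{2}}{\lambda_{1}\lambda_{2}}=\frac{1}{\lambda_{1}}+\frac{1}{\lambda_{2}},\]
which is manifestly decreasing in each eigenvalue.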
With a labeled orthogonal splitting \(\mathcal{O}_{p}=\{n_{p},\mathbf{d}_{p},\{\mathcal{V}_{i}\},\mathbf{k}_{p}\}\) at \(p\), we fix a normal coordinate \(\{z^{i}\}\) of \(\rho\) at \(p\) such that \(\{\sqrt{2}\frac{\partial}{\partial z^{i}}\}\) restricts to a unitary frame on each \(\mathcal{V}_{i}\). For any non-negative Hermitian matrix \(A\), we may write under this frame
\[A=\left(\begin{array}{cccc}A_{1}&*&*&*\\ *&A_{2}&*&*\\ *&*&\ddots&*\\ *&*&*&A_{n_{p}}\end{array}\right), \tag{6.12}\]
where \(A_{i}\in\Gamma_{d_{i}\times d_{i}}\).
**Lemma 6.3**.: _Notations as above. For any non-negative \(A\), Let_
\[A^{\prime}=\left(\begin{array}{cccc}A_{1}&0&0&0\\ 0&A_{2}&0&0\\ 0&0&\ddots&0\\ 0&0&0&A_{n_{p}}\end{array}\right). \tag{6.13}\]
_Let \(\omega^{\prime}=(A^{\prime})_{i\bar{j}}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z }^{j}.\) Then_
1. \[\det(A)\leq\det(A^{\prime})=\prod_{i=1}^{n_{p}}\det A_{i};\]
2. \[\sigma_{k}(A)\leq\sigma_{k}(A^{\prime})=\sum_{l\in\mathbf{l}_{k}}\prod_{j=1}^{n_{p}}\sigma_{l_{j}}(A_{j}), \tag{6.14}\] _where_ \(\mathbf{l}_{k}=\{(l_{1},\cdots,l_{n_{p}})\in\mathbb{N}^{n_{p}}:\sum_{j=1}^{n_{p}}l_{j}=k\}\)_;_
3. \[T_{k-1}(A)\leq 2^{(k-1)(n_{p}-1)}T_{k-1}(A^{\prime})=2^{(k-1)(n_{p}-1)}\text{diag}(T_{1},\cdots,T_{n_{p}}),\] _where_ \[T_{i}=\sum_{l\in\mathbf{l}_{k,i}}\prod_{j\neq i}\sigma_{l_{j}}(A_{j})T_{l_{i}-1}(A_{i}), \tag{6.16}\] _and_ \(\mathbf{l}_{k,i}=\{(l_{1},\cdots,l_{n_{p}})\in\mathbb{N}^{n_{p}}:\sum_{j=1}^{n_{p}}l_{j}=k,\ l_{i}\geq 1\}\)_._
Proof.: For claim (1), we first prove the case when \(n_{p}=2\). We may write
\[A=\left(\begin{array}{cc}A_{1}&C\\ C^{\dagger}&A_{2}\end{array}\right).\]
Therefore \(\det(A)=\det\left(A_{1}-CA_{2}^{-1}C^{\dagger}\right)\det A_{2}\leq\det A_{1}\det A_{2}\) if \(A_{2}\) is invertible, since \(0\leq A_{1}-CA_{2}^{-1}C^{\dagger}\leq A_{1}\). If \(A_{2}\) is not invertible, we replace \(A\) with \(A_{\epsilon}=A+\epsilon I\) and let \(\epsilon\to 0\); then (1) follows from the continuity of the determinant. The general case follows by induction.
For claim (2), for any \(1\leq i_{1}<\cdots<i_{k}\leq n\), by (1), we have
\[\sigma_{k}(A) =\sum_{1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n}A\left(\begin{array}[] {cccc}i_{1}&i_{2}&\cdots&i_{k}\\ i_{1}&i_{2}&\cdots&i_{k}\end{array}\right)\] \[\leq\sum_{1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n}A^{\prime}\left( \begin{array}{cccc}i_{1}&i_{2}&\cdots&i_{k}\\ i_{1}&i_{2}&\cdots&i_{k}\end{array}\right)\] \[=\sigma_{k}(A^{\prime}). \tag{6.17}\]
Thus the inequality in (6.14) holds. The equality in (6.14) can be calculated directly.
For claim (3), if \(n_{p}=2\), we have
\[2A^{\prime}-A=\left(\begin{array}{cc}2A_{1}&0\\ 0&2A_{2}\end{array}\right)-\left(\begin{array}{cc}A_{1}&C\\ C^{\dagger}&A_{2}\end{array}\right)=\left(\begin{array}{cc}A_{1}&-C\\ -C^{\dagger}&A_{2}\end{array}\right)\geq 0. \tag{6.18}\]
If \(n_{p}\geq 2\), by iteration, we have
\[A\leq 2^{n_{p}-1}A^{\prime},\ \omega\leq_{s}2^{n_{p}-1}\omega^{\prime},\ \frac{\omega^{k}}{k!}\leq_{s}2^{k(n_{p}-1)}\frac{(\omega^{\prime})^{k}}{k!}. \tag{6.19}\]
Now, for any \(b=b_{i}dz^{i}\), by (6.19), we have
\[(T_{k-1}(A))^{i\bar{j}}\,b_{i}\bar{b}_{j}\frac{\rho^{n}}{n!} =\frac{\rho^{n-k}}{(n-k)!}\wedge\frac{\omega^{k-1}}{(k-1)!}\wedge \frac{\sqrt{-1}}{2}b\wedge\bar{b}\] \[\leq 2^{(k-1)(n_{p}-1)}\frac{\rho^{n-k}}{(n-k)!}\wedge\frac{( \omega^{\prime})^{k-1}}{(k-1)!}\wedge\frac{\sqrt{-1}}{2}b\wedge\bar{b}\] \[=2^{(k-1)(n_{p}-1)}T_{k-1}(A^{\prime})^{i\bar{j}}b_{i}\bar{b}_{j} \frac{\rho^{n}}{n!}. \tag{6.20}\]
Thus, by (6.20),
\[T_{k-1}(A)\leq 2^{(k-1)(n_{p}-1)}T_{k-1}(A^{\prime}).\]
(6.16) can be verified by direct calculation.
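As an elementary illustration of (1) and (6.19), take \(n_{p}=2\) and \(d_{1}=d_{2}=1\), so that \(A=\left(\begin{array}{cc}a&c\\ \bar{c}&b\end{array}\right)\geq 0\) and \(A^{\prime}=\mathrm{diag}\{a,b\}\). Then
\[\det A=ab-|c|^{2}\leq ab=\det A^{\prime},\qquad 2A^{\prime}-A=\left(\begin{array}{cc}a&-c\\ -\bar{c}&b\end{array}\right)\geq 0,\]
the latter precisely because \(ab\geq|c|^{2}\); that is, \(A\leq 2A^{\prime}\).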
**Lemma 6.4**.: _Suppose \(\mathring{\Lambda}\) satisfies the \(\mathcal{O}\)-UP condition. Let \(b\) be a covector in \(\mathcal{T}_{p}^{*}M\). Let \(\mathcal{B}=\{\zeta\in\mathcal{T}_{p}M:\langle b,\zeta\rangle=0\}\) be a complex hyperplane. Denote_
\[\chi_{\mathcal{B}}:=(A|_{\mathcal{B}})^{\bar{j}i}2\sqrt{-1}\frac{\partial}{ \partial\bar{z}^{j}}\wedge\frac{\partial}{\partial z^{i}}. \tag{6.21}\]
_If at \(p\), \(\mathcal{P}_{\Lambda}(A)\leq\kappa\), then we have_
\[\sum_{i=1}^{n_{p}}\langle\frac{\rho_{i}^{k_{i}}}{k_{i}!},\exp\chi_{\mathcal{B }}\rangle\leq\frac{\kappa}{m}. \tag{6.22}\]
_Furthermore,_
\[\sum_{i=1}^{n_{p}}\langle\frac{\rho_{i}^{k_{i}}}{k_{i}!},\exp\chi\rangle\leq \frac{n\kappa}{m}. \tag{6.23}\]
Proof.: Let \(\Lambda^{\prime}=\sum_{i=1}^{n_{p}}(\exp\rho_{i})^{[k_{i}]}\). By the \(\mathcal{O}\)-UP condition, at \(p\),
\[\mathring{\Lambda}\geq m\Lambda^{\prime}.\]
As a result, we have
\[\mathcal{P}_{\Lambda^{\prime}}(A)\leq\frac{\kappa}{m}. \tag{6.24}\]
Let \(B_{i\bar{j}}=b_{i}\bar{b}_{j}\). Then,
\[F_{\Lambda^{\prime}}(A:B)\leq\mathcal{P}_{\Lambda^{\prime}}(A)\leq\frac{\kappa}{m}. \tag{6.25}\]
By Lemma 5.8, (6.25) implies (6.22).
Choose a normal coordinate of \(\omega\) at \(p\). Let \(\mathcal{B}_{i}=\{\xi:dz^{i}(\xi)=0\}\). Then
\[\chi=\sum_{i=1}^{n}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{i},\ \chi_{\mathcal{B}_{j}}= \sum_{i\neq j}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{i}.\]
Then, for \(k\leq n-1\),
\[\frac{\chi^{k}}{k!}\leq_{s}\sum_{j=1}^{n}\frac{\chi_{\mathcal{B}_{j}}^{k}}{k!}.\]
Thus,
\[\sum_{i=1}^{n_{p}}\langle\frac{\rho_{i}^{k_{i}}}{k_{i}!},\exp\chi\rangle\leq\sum_{j=1}^{n}\sum_{i=1}^{n_{p}}\langle\frac{\rho_{i}^{k_{i}}}{k_{i}!},\exp\chi_{\mathcal{B}_{j}}\rangle\leq\frac{n\kappa}{m}. \tag{6.26}\]
We have proved (6.23).
The following lemma gives an explicit estimate of \(-F^{i\bar{j}}\) when the cone condition holds.
**Lemma 6.5**.: _Suppose \(\mathring{\Lambda}\) satisfies \(\mathcal{O}\)-UP and for all \(p\in M\), \(f(p)>-m\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p})\). Let \(b\) be a covector in \(\mathcal{T}_{p}^{*}M\). Let \(\mathcal{B}\) and \(\chi_{\mathcal{B}}\) be given as in Lemma 6.4. Let \(\xi\in\mathcal{T}_{p}M\). If at \(p\), \(\mathcal{P}_{\Lambda}(A)\leq\kappa\), then we have_
\[\langle\mathring{\Lambda},2\sqrt{-1}\xi\wedge\bar{\xi}\wedge\exp\chi_{\mathcal{B}}\rangle\geq m\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p})\frac{|\langle\xi,b\rangle|^{2}}{(\det A)\,\|b\|_{\omega}^{2}}. \tag{6.27}\]
Proof.: If \(\langle b,\xi\rangle=0\) then (6.27) holds trivially. Otherwise, we may assume by rescaling that \(\|b\|_{\rho}^{2}=2\) and \(|\langle b,\xi\rangle|=1\). We may choose a normal coordinate \(\{z^{i}\}\) of \(\rho\) at \(p\) such that
1. \(dz^{1}|_{p}=b\);
2. \(\{\sqrt{2}e_{j}^{i}\}_{j=1}^{d_{i}}\) is an orthonormal frame of \(\mathcal{V}_{i}\) with respect to \(\rho_{i}\);
3. \(\frac{\partial}{\partial z^{\sum_{l=1}^{i-1}(d_{l}-1)+n_{p}+j}}=e_{j+1}^{i}\) for \(j=1,\cdots,d_{i}-1\);
4. \(e_{1}^{i}=\sum_{j=1}^{n_{p}}\alpha^{ij}\frac{\partial}{\partial z^{j}}\) and \(\alpha^{-1}=(\alpha^{ij})\) is a unitary matrix of dimension \(n_{p}\).
The construction of the coordinate can be done as follows: First, we construct the unitary frame. Take \(dz^{1}|_{p}=b\). In each \(\mathcal{V}_{i}\), let \(\tilde{e}_{1}^{i}=(\pi_{i})_{*}\frac{\partial}{\partial z^{1}}|_{p}\). If \(\tilde{e}_{1}^{i}=0\), we pick any vector of norm \(\sqrt{1/2}\) in \(\mathcal{V}_{i}\) to be \(e_{1}^{i}\); if \(\tilde{e}_{1}^{i}\neq 0\), let \(e_{1}^{i}=\frac{\tilde{e}_{1}^{i}}{\sqrt{2}\|\tilde{e}_{1}^{i}\|_{\rho}}\). We then pick the remaining vectors so that \(\{\sqrt{2}e_{j}^{i}\}_{j=1}^{d_{i}}\) is a unitary frame of \(\mathcal{V}_{i}\) with respect to \(\rho_{i}\). At \(p\), we choose \(\frac{\partial}{\partial z^{2}}|_{p},\cdots,\frac{\partial}{\partial z^{n_{p}}}|_{p}\) together with \(\frac{\partial}{\partial z^{1}}|_{p}\) to span the space \(\text{span}\{e_{1}^{i}\}_{i=1}^{n_{p}}\). Let
\(\frac{\partial}{\partial z^{\sum_{l=1}^{i-1}(d_{l}-1)+n_{p}+j}}|_{p}=e^{i}_{j+1}\) for \(j=1,\cdots,d_{i}-1\). Then we extend \(\{z^{i}\}\) to be a normal coordinate.
By our choice of coordinates, \(\mathcal{B}=\text{span}\{\frac{\partial}{\partial z^{2}},\cdots,\frac{\partial}{\partial z^{n}}\}\). Use \((A|1)\) to denote the matrix obtained by deleting the first row and the first column of \(A\). We may write
\[\chi=2\sqrt{-1}\sum_{i,j}A^{\bar{j}i}\frac{\partial}{\partial\bar{z}^{j}} \wedge\frac{\partial}{\partial z^{i}},\;\chi_{\mathcal{B}}=2\sqrt{-1}\sum_{i, j>1}(A|1)^{\bar{j}i}\frac{\partial}{\partial\bar{z}^{j}}\wedge\frac{\partial}{ \partial z^{i}}, \tag{6.28}\]
where \(((A|1)^{\bar{j}i})\) is the inverse matrix of \((A|1)\). Let \(\{\check{e}^{k}_{j}\}\subset\bigwedge^{1,0}T^{*}_{p}M\) be the dual frame of \(\{e^{k}_{j}\}\), \(k=1,2,\cdots,n_{p}\). Therefore, we have the following decompositions:
\[A=\left(\begin{array}{cc}a&q\\ q^{\dagger}&(A|1)\end{array}\right),A^{-1}=\left(\begin{array}{cc}\hat{a}^{- 1}&\hat{q}\\ \hat{q}^{\dagger}&(A|1)^{-1}+\hat{q}^{\dagger}\hat{a}\hat{q}\end{array}\right),\]
where \(a=A_{1\bar{1}}\),
\[\hat{a}=a-q(A|1)^{-1}q^{\dagger}=\frac{1}{A^{\bar{1}1}}=\frac{1}{\|b\|_{\omega }^{2}}, \tag{6.29}\]
and \(\hat{q}=-\hat{a}^{-1}q(A|1)^{-1}\). We may write
\[(A|1)^{-1}=\left(\begin{array}{ccccc}V&Q_{01}&Q_{02}&\ldots&Q_{1n_{p}}\\ Q_{01}^{\dagger}&\hat{A}_{1}^{-1}&Q_{12}&\ldots&Q_{1n_{p}}\\ Q_{02}^{\dagger}&Q_{12}^{\dagger}&\hat{A}_{2}^{-1}&\ldots&Q_{2n_{p}}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ Q_{0n_{p}}^{\dagger}&Q_{1n_{p}}^{\dagger}&Q_{2n_{p}}^{\dagger}&\ldots&\hat{A}_{ n_{p}}^{-1}\end{array}\right), \tag{6.30}\]
where \(V\) is a matrix of dimension \(n_{p}-1\). For \(l=1,2,\cdots,d_{1}\), we have
\[\frac{\rho_{1}^{l}}{l!} =\sum_{|I|=l,I\subset\{n_{p}+1,\cdots,n_{p}+d_{1}-1\}}\frac{\sqrt{-1}^{l^{2}}}{2^{l}}dz^{I}\wedge d\bar{z}^{I}\] \[+\frac{\sqrt{-1}}{2}\check{e}^{1}_{1}\wedge\overline{\check{e}^{1}_{1}}\wedge\sum_{|I|=l-1,I\subset\{n_{p}+1,\cdots,n_{p}+d_{1}-1\}}\frac{\sqrt{-1}^{(l-1)^{2}}}{2^{l-1}}dz^{I}\wedge d\bar{z}^{I}. \tag{6.31}\]
Let \(\alpha_{ij}=\overline{\alpha^{ji}}\). Define \(\alpha^{\prime}_{i}:=(\alpha_{2i},\cdots,\alpha_{n_{p}i})\), and
\[\tilde{A}_{i}^{-1}:=\left(\begin{array}{ccccc}0&0&0\\ 0&(\alpha^{\prime}_{i})^{\dagger}\,V\alpha^{\prime}_{i}&(\alpha^{\prime}_{i})^{ \dagger}Q_{0i}\\ 0&Q_{0i}^{\dagger}\alpha^{\prime}_{i}&\hat{A}_{i}^{-1}\end{array}\right). \tag{6.32}\]
Notice
\[\check{e}^{1}_{i}\wedge\overline{\check{e}^{1}_{i}}=\sum_{l,j=2}^{n_{p}}\alpha_{li}\overline{\alpha_{ji}}dz^{l}\wedge d\bar{z}^{j}+\text{terms with }dz^{1}\text{ or }d\bar{z}^{1}. \tag{6.33}\]
Thus by (6.31), (6.33), (6.32), and (3.4), \(\langle\frac{\rho_{1}^{l}}{l!},\frac{\chi_{\mathcal{B}}^{l}}{l!}\rangle=\sigma_{l }(\tilde{A}_{1}^{-1})\) and similarly
\[\langle\frac{\rho_{i}^{l}}{l!},\frac{\chi_{\mathcal{B}}^{l}}{l!}\rangle=\sigma_ {l}(\tilde{A}_{i}^{-1}). \tag{6.34}\]
Let
\[\tilde{\xi}=\langle\check{e}_{1}^{1},\xi\rangle e_{1}^{1}+\sum_{i=n_{p}+1}^{n_{p}+d_{1}-1}\xi^{i}\frac{\partial}{\partial z^{i}}. \tag{6.35}\]
Then, by (6.5), we have
\[\langle\frac{\rho_{1}^{k_{1}}}{k_{1}!},2\sqrt{-1}\xi\wedge\bar{ \xi}\wedge\exp\chi_{\mathcal{B}}\rangle =\langle T_{k_{1}-1}(\tilde{A}_{1}^{-1}),\tilde{\xi}^{\dagger} \tilde{\xi}\rangle\] \[\geq\frac{\sigma_{k_{1}-1}(\tilde{A}_{1}^{-1})}{\sigma_{d_{1}-1 }(\tilde{A}_{1}^{-1})}\langle T_{d_{1}-1}(\tilde{A}_{1}^{-1}),\tilde{\xi}^{ \dagger}\tilde{\xi}\rangle. \tag{6.36}\]
Notice that
\[T_{d_{1}-1}(\tilde{A}_{1}^{-1})=\left(\begin{array}{cc}\sigma_{d_{1}-1}( \tilde{A}_{1}^{-1})&0\\ 0&T^{\prime}\end{array}\right), \tag{6.37}\]
where \(T^{\prime}\geq 0\) is a non-negative Hermitian matrix. Thus, from (6.36) and (6.37), we have
\[\langle\frac{\rho_{1}^{k_{1}}}{k_{1}!},2\sqrt{-1}\xi\wedge\bar{\xi}\wedge\exp\chi_{\mathcal{B}}\rangle \geq\frac{\sigma_{k_{1}-1}(\tilde{A}_{1}^{-1})}{\sigma_{d_{1}-1}(\tilde{A}_{1}^{-1})}\sigma_{d_{1}-1}(\tilde{A}_{1}^{-1})|\langle\check{e}_{1}^{1},\xi\rangle|^{2}\] \[=\sigma_{k_{1}-1}(\tilde{A}_{1}^{-1})|\langle\check{e}_{1}^{1},\xi\rangle|^{2}. \tag{6.38}\]
Apply the same argument on each \(\mathcal{V}_{i}\) to obtain
\[\langle\Lambda^{\prime},2\sqrt{-1}\xi\wedge\bar{\xi}\wedge\exp\chi_{ \mathcal{B}}\rangle\geq\sum_{i=1}^{n_{p}}\sigma_{k_{i}-1}(\tilde{A}_{i}^{-1}) |\langle\tilde{e}_{1}^{i},\xi\rangle|^{2}, \tag{6.39}\]
where \(\Lambda^{\prime}=\sum_{i=1}^{n_{p}}(\exp\rho_{i})^{[k_{i}]}.\) On the other hand, by applying Lemma 6.3 (2), we have
\[\sigma_{n-1}\left((A|1)^{-1}\right)\leq\sum_{i=1}^{n_{p}}\sigma_{d_{i}-1}(\tilde{A}_{i}^{-1})\prod_{j\neq i}\sigma_{d_{j}}(\tilde{A}_{j}^{-1}). \tag{6.40}\]
Let \(s_{i}=\sigma_{k_{i}}(\tilde{A}_{i}^{-1})^{\frac{1}{k_{i}}}\). Then by Newton-Maclaurin inequality,
\[\sigma_{k_{i}-1}(\tilde{A}_{i}^{-1})\geq\binom{d_{i}}{k_{i}-1}\binom{d_{i}}{k _{i}}^{\frac{1}{k_{i}}-1}s_{i}^{k_{i}-1}, \tag{6.41}\]
\[\sum_{i=1}^{n_{p}}\sigma_{d_{i}-1}(\tilde{A}_{i}^{-1})\prod_{j\neq i}\sigma_{d_{j}}(\tilde{A}_{j}^{-1})\leq\frac{\max_{i}\{d_{i}\binom{d_{i}}{k_{i}}^{\frac{1}{k_{i}}}\}}{\prod_{i=1}^{n_{p}}\binom{d_{i}}{k_{i}}^{\frac{d_{i}}{k_{i}}}}\prod_{i=1}^{n_{p}}s_{i}^{d_{i}}\sum_{j=1}^{n_{p}}s_{j}^{-1}. \tag{6.42}\]
Thus, by (6.39), (6.40), (6.41), and (6.42), we get
\[\langle\Lambda^{\prime},2\sqrt{-1}\xi\wedge\bar{\xi}\wedge\exp\chi_{\mathcal{B}}\rangle \geq\sigma_{n-1}\left((A|1)^{-1}\right)\cdot\frac{\sum_{i=1}^{n_{p}}\sigma_{k_{i}-1}(\tilde{A}_{i}^{-1})|\langle\check{e}_{1}^{i},\xi\rangle|^{2}}{\sum_{i=1}^{n_{p}}\sigma_{d_{i}-1}(\tilde{A}_{i}^{-1})\prod_{j\neq i}\sigma_{d_{j}}(\tilde{A}_{j}^{-1})}\] \[=\det(A)^{-1}\hat{a}\cdot\frac{\sum_{i=1}^{n_{p}}\sigma_{k_{i}-1}(\tilde{A}_{i}^{-1})|\langle\check{e}_{1}^{i},\xi\rangle|^{2}}{\sum_{i=1}^{n_{p}}\sigma_{d_{i}-1}(\tilde{A}_{i}^{-1})\prod_{j\neq i}\sigma_{d_{j}}(\tilde{A}_{j}^{-1})}\] \[\geq c_{0}(\mathcal{O}_{p})\cdot\hat{a}\cdot\det(A)^{-1}\cdot\frac{\sum_{i=1}^{n_{p}}s_{i}^{k_{i}-1}|\langle\check{e}_{1}^{i},\xi\rangle|^{2}}{\prod_{i=1}^{n_{p}}s_{i}^{d_{i}}\sum_{i=1}^{n_{p}}s_{i}^{-1}}, \tag{6.43}\]
where
\[c_{0}(\mathcal{O}_{p})=\frac{\min_{i}\left\{\binom{d_{i}}{k_{i}-1}\binom{d_{i }}{k_{i}}^{\frac{1}{k_{i}}-1}\right\}\prod_{i=1}^{n_{p}}\binom{d_{i}}{k_{i}}^{ \frac{d_{i}}{k_{i}}}}{\max_{i}\{d_{i}\binom{d_{i}}{k_{i}}^{\frac{1}{k_{i}}}\}}. \tag{6.44}\]
On the other hand, by the cone condition and Lemma 6.4, we have \(s_{i}\leq(\frac{\kappa}{m})^{\frac{1}{k_{i}}}\). Since \(\langle\xi,b\rangle=1\) and \(b\in\operatorname{span}\{\check{e}_{1}^{i}\}_{i=1}^{n_{p}}\), we have \(\sum_{i=1}^{n_{p}}|\langle\check{e}_{1}^{i},\xi\rangle|^{2}\geq 1\). Define a function
\[\gamma(s_{1},\cdots,s_{n_{p}}):=c_{0}(\mathcal{O}_{p})\frac{\sum_{i=1}^{n_{p} }s_{i}^{k_{i}-1}|\langle\check{e}_{1}^{i},\xi\rangle|^{2}}{\prod_{i=1}^{n_{p} }s_{i}^{d_{i}}\sum_{i=1}^{n_{p}}s_{i}^{-1}}. \tag{6.45}\]
Then \(\gamma\) is decreasing in each \(s_{i}\). Since \(s_{i}\leq(\frac{\kappa}{m})^{\frac{1}{k_{i}}}\), we have
\[\gamma \geq c_{0}\frac{\sum_{i=1}^{n_{p}}\left(\frac{\kappa}{m}\right)^ {1-\frac{1}{k_{i}}}|\langle\check{e}_{1}^{i},\xi\rangle|^{2}}{(\frac{\kappa}{ m})^{\sum_{i=1}^{n_{p}}\frac{d_{i}}{k_{i}}\sum_{i=1}^{n_{p}}\left(\frac{ \kappa}{m}\right)^{-\frac{1}{k_{i}}}}}\] \[\geq c_{0}\frac{\min\{\left(\frac{\kappa}{m}\right)^{1-\frac{1}{k _{i}}}\}}{(\frac{\kappa}{m})^{\sum_{i=1}^{n_{p}}\frac{d_{i}}{k_{i}}\sum_{i=1}^ {n_{p}}\left(\frac{\kappa}{m}\right)^{-\frac{1}{k_{i}}}}}\] \[=\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{ p}). \tag{6.46}\]
Thus, by (6.43) and (6.46), we have
\[\langle\Lambda^{\prime},2\sqrt{-1}\xi\wedge\bar{\xi}\wedge\exp\chi_{\mathcal{B }}\rangle\geq\hat{a}\cdot\det A^{-1}\cdot\gamma_{\min}. \tag{6.47}\]
Notice \(m\Lambda^{\prime}\leq\mathring{\Lambda}\) and \(\hat{a}=\frac{1}{\|b\|_{\omega}^{2}}.\) Hence, (6.27) follows from (6.47) immediately.
An immediate consequence of Lemma 6.5 is the monotonicity of \(F(A)\).
**Lemma 6.6**.: _Suppose \(\mathring{\Lambda}\) satisfies \(\mathcal{O}\)-UP and for all \(p\in M\), \(f(p)>-m\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p})\). If \(\omega\) satisfies the cone condition (1.5) at a point \(p\), then_
\[-F^{i\bar{j}}(A)b_{i}\bar{b_{j}}\geq-\frac{1}{2}\sum_{k=1}^{n-1}F_{k}^{i\bar{j }}b_{i}\bar{b}_{j}>0, \tag{6.48}\]
_for any non-zero covector \(b=b_{i}dz^{i}\). As a result, \(F(A)\) is strictly decreasing in \(\mathcal{C}_{\Lambda}^{\kappa}\)._
Proof.: From (4.1), we have
\[-F_{k}^{i\bar{j}}(A)b_{i}\bar{b}_{j}=\langle\Lambda^{[k]},2\sqrt{-1}b^{\sharp} \wedge\bar{b^{\sharp}}\wedge\exp\chi\rangle. \tag{6.49}\]
Let \(\mathcal{B}=\{\xi\in\mathcal{T}_{p}M:b(\xi)=0\}\). Since \(A^{-1}|_{\mathcal{B}}\geq(A|_{\mathcal{B}})^{-1}\) by Lemmas 3.5 and 3.2, \(\exp\chi\geq_{s}\exp\chi_{\mathcal{B}}\). Thus, if we take \(\xi=b^{\sharp}\) in (6.27), then we have
\[-\sum_{k=1}^{n-1}F_{k}^{i\bar{j}}b_{i}\bar{b}_{j}\geq\langle\mathring{\Lambda},2\sqrt{-1}b^{\sharp}\wedge\bar{b^{\sharp}}\wedge\exp\chi_{\mathcal{B}}\rangle\geq m\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p})\frac{\|b\|_{\omega}^{2}}{\det A}. \tag{6.50}\]
Direct computation shows that
\[-F^{i\bar{j}}b_{i}\bar{b}_{j}=-\sum_{k=1}^{n-1}F_{k}^{i\bar{j}}b_{i}\bar{b}_{ j}+\frac{f(p)}{\det A}A^{\bar{j}i}b_{i}\overline{b_{j}}\]
Then, from Lemma 6.5, we have
\[-F^{i\bar{j}}b_{i}\bar{b}_{j} \geq-\frac{1}{2}\sum_{k=1}^{n-1}F_{k}^{i\bar{j}}b_{i}\bar{b}_{j}+ \left(\frac{m}{2}\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k }_{p})+f(p)\right)\frac{\|b\|_{\omega}^{2}}{\det A}\] \[\geq-\frac{1}{2}\sum_{k=1}^{n-1}F_{k}^{i\bar{j}}b_{i}\bar{b}_{j}>0. \tag{6.51}\]
Now we prove Lemma 6.1.
Proof of Lemma 6.1.: We consider a generic point \(p_{1}\in M\). If \(f(p_{1})\geq 0\), the cone condition holds automatically. Thus, we may assume \(f(p_{1})<0.\) Let \(\gamma(s)\), \(s\in[0,1]\), be a curve connecting \(p_{0}\) and \(p_{1}\). Let \(s_{0}=\min\{s\in(0,1]:\omega(\gamma(s))\notin\mathcal{C}_{\Lambda}^{\kappa}\}\) and let \(p=\gamma(s_{0})\). By Lemma 5.8, the degeneracy of the cone condition implies that we can find a rank \(1\) Hermitian matrix \(B=b^{\dagger}b\) such that
\[F_{\Lambda}(A:B)=\mathcal{P}_{\Lambda}(A)=\kappa. \tag{6.52}\]
At \(p\), we pick a normal coordinate of \(\rho\) as in Lemma 6.5. Let
\[\chi=2\sqrt{-1}\sum_{i,j}A^{\bar{j}i}\frac{\partial}{\partial\bar{z}^{j}} \wedge\frac{\partial}{\partial z^{i}},\ \chi_{\infty}=2\sqrt{-1}\sum_{i,j>1}(A|1)^{\bar{j}i}\frac{\partial}{\partial \bar{z}^{j}}\wedge\frac{\partial}{\partial z^{i}}, \tag{6.53}\]
where \(((A|1)^{\bar{j}i})\) is the inverse matrix of \((A|1)\). From (6.52), \(\langle\Lambda,\exp(\chi_{\infty})\rangle=\kappa.\) From equation (1.2), \(\langle\Lambda,\exp(\chi)\rangle=\kappa.\) Thus, we have
\[\sum_{k=1}^{n-1}\frac{\langle\Lambda^{[k]},\left(\chi^{k}-\chi_{\infty}^{k} \right)\rangle}{k!}+f(p)\langle\frac{\rho^{n}}{n!},\frac{\chi^{n}}{n!}\rangle=0.\]
Hence
\[-f(p)\langle\frac{\rho^{n}}{n!},\frac{\chi^{n}}{n!}\rangle=\sum_{k=1}^{n-1}\frac{ \langle\Lambda^{[k]},\left(\chi^{k}-\chi_{\infty}^{k}\right)\rangle}{k!}. \tag{6.54}\]
We may write
\[A=\left(\begin{array}{cc}a&Q\\ Q^{\dagger}&(A|1)\end{array}\right),A^{-1}=\left(\begin{array}{cc}\hat{a}^{- 1}&\hat{Q}\\ \hat{Q}^{\dagger}&(A|1)^{-1}+\hat{Q}^{\dagger}\hat{a}\hat{Q}\end{array}\right), \tag{6.55}\]
where \(\hat{a}=a-Q(A|1)^{-1}Q^{\dagger}=1/\|dz^{1}\|_{\omega}^{2}\) and \(\hat{Q}=-\hat{a}^{-1}Q(A|1)^{-1}\). Let
\[\xi=\frac{1}{\sqrt{\hat{a}}}\frac{\partial}{\partial z^{1}}+\sqrt{\hat{a}} \sum_{i=2}^{n}\hat{Q}_{i}\frac{\partial}{\partial z^{i}}. \tag{6.56}\]
Then
\[\chi=2\sqrt{-1}\left(\xi\wedge\bar{\xi}+(A|1)^{\bar{j}i}\frac{\partial}{\partial\bar{z}^{j}}\wedge\frac{\partial}{\partial z^{i}}\right). \tag{6.57}\]
Thus, we have
\[\exp\chi-\exp\chi_{\infty}=2\sqrt{-1}\xi\wedge\bar{\xi}\wedge\exp\chi_{\infty}. \tag{6.58}\]
By (6.54), (6.58), and Lemma 6.5, we have
\[-\frac{f(p)}{\det A} \geq\frac{1}{\det A}\frac{|\langle\xi,dz^{1}\rangle|^{2}}{\|dz^{ 1}\|_{\omega}^{2}}m\gamma_{\min}\] \[=\frac{1}{\det A}\cdot m\gamma_{\min}. \tag{6.59}\]
However, this is impossible since \(|f(p)|<m\gamma_{\min}\). This finishes the proof.
Finally, we show that \(F\) is strictly convex in \(\mathcal{C}_{\Lambda}^{\kappa}(p)\).
**Lemma 6.7**.: _Notations as above. If \(\mathring{\Lambda}\) satisfies \(\mathcal{O}\)-UP then_
\[\sum_{r,s,i,j}\left(F^{i\bar{j},r\bar{s}}(A)+F^{i\bar{s}}(A)A^{\bar{j}r}\right)B_{i\bar{j}}\overline{B_{s\bar{r}}}\geq\frac{f(p)}{\det A}A^{\bar{s}r}A^{\bar{j}i}B_{i\bar{j}}\overline{B_{s\bar{r}}}. \tag{6.60}\]
_Suppose further that for all \(p\in M\),\(f(p)>-\frac{m}{2n+1}\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p}, \mathbf{k}_{p}).\) Then, \(F\) is a strictly convex function in \(\mathcal{C}_{\Lambda}^{\kappa}\)._
Proof.: In a local normal coordinate of \(\rho\) at \(p\), we may assume that \(A=\operatorname{diag}\{\lambda_{1},\cdots,\lambda_{n}\}\). By Proposition 4.3,
\[\sum_{i,j,r,s}\left(F^{i\bar{j},r\bar{s}}+F^{i\bar{s}}A^{\bar{j}r} \right)B_{i\bar{j}}\overline{B_{s\bar{r}}} \geq\frac{f(p)}{\det A}\left(A^{\bar{s}r}A^{\bar{j}i}\right)B_{i \bar{j}}\overline{B_{s\bar{r}}}\] \[=\frac{f(p)}{\det A}\left|\sum_{j}\frac{B_{j\bar{j}}}{\lambda_{j} }\right|^{2}. \tag{6.61}\]
By (6.51), we have
\[-\sum_{i,j,r,s}F^{i\bar{s}}(A)A^{\bar{j}r}B_{i\bar{j}}\overline{B_{s\bar{r}}}\geq \frac{1}{\det A}\left(\frac{m}{2}\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d }_{p},\mathbf{k}_{p})+f(p)\right)\sum_{i,j}\frac{|B_{i\bar{j}}|^{2}}{\lambda_{i} \lambda_{j}}. \tag{6.62}\]
Then by (6.62), Cauchy inequality, and the assumption on \(f\), we have
\[\sum_{i,j,r,s}\left(F^{i\bar{j},r\bar{s}}+\frac{1}{2}F^{i\bar{s}} A^{\bar{j}r}\right)B_{i\bar{j}}\overline{B_{s\bar{r}}}\] \[\geq\frac{1}{\det A}\sum_{j}\frac{|B_{j\bar{j}}|^{2}}{\lambda_{j} ^{2}}\left(\frac{m}{2}\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p}, \mathbf{k}_{p})-(n+\frac{1}{2})|f(p)|\right)\] \[\geq 0. \tag{6.63}\]
Since \(F^{i\bar{j}}\) is strictly monotone in \(\mathcal{C}_{\Lambda}^{\kappa}\) by Lemma 6.6, by (6.63) \(F^{i\bar{j},r\bar{s}}\) is positive definite in \(\mathcal{C}_{\Lambda}^{\kappa}\). Thus, \(F\) is strictly convex in \(\mathcal{C}_{\Lambda}^{\kappa}\).
## 7. Continuity method
In this section, we prove the existence of a solution to (1.2) assuming the existence of a subsolution \(\omega_{\text{sub}}\in[\omega_{0}].\) The main result of this section is the following
**Theorem 7.1**.: _Let \(M\) be a connected compact Kahler manifold. Suppose \(\Lambda\) satisfies **H2**. If there is a Kahler metric \(\omega_{\text{sub}}\in[\omega_{0}]\) satisfying the cone condition (1.5), then there exists a unique smooth solution to equation (1.2)._
_Remark 7.2_.: Since a solution itself is a subsolution by Lemma 6.1, Theorem 1.7 and Theorem 2.9 are immediate corollaries of Theorem 7.1. The uniqueness will be addressed in Appendix A.
To prove Theorem 7.1, we use the continuity method. Consider the following continuity path depending on the parameter \(t\in[0,1]\): let \(\Omega_{t}=\exp\omega_{t}\) solve the PDE
\[\kappa(\Omega_{t})^{[n]}=\left(t\mathring{\Lambda}\wedge\Omega_{t}\right)^{[n]}+(tf+(1-t)\,\kappa_{0})P^{[n]}, \tag{7.1}\]
where the constant \(\kappa_{0}\) is chosen such that
\[\kappa[\omega_{0}]^{n}=\kappa_{0}[\rho]^{n}.\]
Denote \(f_{t}(p):=tf(p)+(1-t)\kappa_{0}\) and \(\Lambda_{t}=t\mathring{\Lambda}+f_{t}P^{[n]}\); then we may rewrite (7.1) as
\[\kappa(\Omega_{t})^{[n]}=(\Lambda_{t}\wedge\Omega_{t})^{[n]}\,. \tag{7.2}\]
Let
\[\mathbf{I}=\{t\in[0,1]:\text{(7.1) has a smooth solution}\}.\]
Then, from Yau's theorem [38], there is a smooth Kahler metric \(\hat{\omega}_{0}\) which solves (7.2) at \(t=0\). Without confusion, we replace \(\omega_{0}\) by \(\hat{\omega}_{0}\) and consider \(\omega_{t}=\omega_{0}+i\partial\bar{\partial}\varphi_{t}\) which solves (7.1) for \(t\in\mathbf{I}\subset[0,1]\). Notice the linearization of (7.1) is
\[(\kappa\Omega_{t}-t\Lambda\wedge\Omega_{t})^{[n-1]}\wedge i\partial\bar{ \partial}u=(t\mathring{\Lambda}\wedge\Omega_{t})^{[n]}+(f(p)-\kappa_{0})\,P^{[ n]}. \tag{7.3}\]
By Lemma 6.1, \((\kappa\Omega_{t}-t\Lambda\wedge\Omega_{t})^{[n-1]}>0\); therefore, (7.3) is strictly elliptic, which implies the openness of \(\mathbf{I}\). If \(\Lambda\) satisfies \(\mathbf{H2}\), then for \(t>0\), \(\Lambda_{t}\) satisfies \(\mathbf{H2}\) with respect to suitable positive constants. In fact, \(f_{t}(p)\geq 0\) if \(t\leq\frac{1}{2}\), and since \(m_{t}\geq\frac{m}{2}\) for \(t\geq\frac{1}{2}\), we have
\[f_{t}(p)\geq-\min\left\{\frac{m_{t}}{2n+1}\gamma_{\min}(\frac{\kappa}{m_{t}},n _{p},\mathbf{d}_{p},\mathbf{k}_{p}),\frac{\kappa_{0}}{2}\right\}.\]
Thus, if \(\Lambda\) satisfies \(\mathbf{H2}\), \(\Lambda_{t}\) satisfies the following \(\mathbf{H2}\)' condition.
**Definition 7.3**.: We say \(\Lambda\) satisfies \(\mathbf{H2}\)' if \(\mathring{\Lambda}\) satisfies \(\mathbf{H2}\) with some uniform constant \(m>0\), and \(\Lambda^{[n]}\) is almost positive with respect to \((\mathcal{O},2m,\rho)\), i.e. for any \(p\in M\),
\[\frac{\Lambda^{[n]}}{P^{[n]}}(p)\geq-\min\left\{\frac{m}{2n+1}\gamma_{\min}( \frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p}),\frac{\kappa_{0}}{2} \right\}.\]
_Remark 7.4_.: Readers may check that all arguments in section 6 are valid if condition \(\mathbf{H2}\)' is assumed.
In the following, by replacing \(\Lambda\) with \(\Lambda_{t}\), we will suppress the subscript \(t\) and derive a priori estimates for equation (1.2) assuming \(\mathbf{H2}\)'.
We proceed to prove an a priori \(C^{0}\)-estimate for equation (1.2). The idea of using Alexandroff-Bakelman-Pucci type estimates, based on [2], follows [35]. Let
\[\omega=\omega_{0}+i\partial\bar{\partial}\varphi \tag{7.4}\]
be a solution to (1.2). Suppose that in the Kahler class \([\omega_{0}]\) there is a \(\omega_{\mathrm{sub}}\in\mathcal{C}_{\Lambda}^{\kappa}\). We denote
\[\omega_{\mathrm{sub}}=\omega_{0}+i\partial\bar{\partial}\varphi_{\mathrm{sub}} \tag{7.5}\]
Let \(u=\varphi-\varphi_{\mathrm{sub}}\).
**Proposition 7.5**.: _Suppose that \(\Lambda\) satisfies \(\mathbf{H2}\)'. If \(\omega=\omega_{sub}+i\partial\bar{\partial}u\) is a solution to (1.2) and \(\sup u=0\), then there is a constant \(C\) depending on \(n\), \(M\), \(\Lambda\), \(\omega_{sub}\), \(\rho\) such that_
\[\sup_{M}|u|<C.\]
Suppose that \(\underline{A}\in\mathcal{C}_{\Lambda}^{\kappa}(p)\). Since \(\mathcal{C}_{\Lambda}^{\kappa}(p)\) is open, \(\mathrm{dist}(\underline{A},\partial\mathcal{C}_{\Lambda}^{\kappa}(p))>0\). Thus there is \(0<r<\mathrm{dist}(\underline{A},\partial\mathcal{C}_{\Lambda}^{\kappa}(p))\) such that the ball \(\mathcal{B}_{r}(\underline{A})\) of radius \(r\) in the space of Hermitian matrices is contained in \(\mathcal{C}_{\Lambda}^{\kappa}(p)\). As a result, we have
**Lemma 7.6**.: _If \(\underline{A}\in\mathcal{C}_{\Lambda}^{\kappa}(p)\), then there is a constant \(r=r(\underline{A},\Lambda)\) such that \(\underline{A}-r\,\mathrm{Id}\in\mathcal{C}_{\Lambda}^{\kappa}(p)\)._
Proof of Proposition 7.5.: Since \(\omega\) is positive, we have
\[\Delta_{\rho}u>-\mathrm{tr}_{\rho}\omega_{\mathrm{sub}}>-C(\omega_{\mathrm{sub}}, \rho). \tag{7.6}\]
Thus, we can use the Green's function representation to obtain \(\|u\|_{L^{1}(\rho^{n})}<C(\omega_{\mathrm{sub}},\rho)\).
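For the reader's convenience, here is a sketch of this standard step. Let \(G(x,y)\) be the Green's function of \(\Delta_{\rho}\), normalized by \(\Delta_{\rho,y}G(x,y)=\frac{1}{V}-\delta_{x}(y)\), \(\int_{M}G(x,\cdot)\,\frac{\rho^{n}}{n!}=0\), and \(G\geq-\Gamma(M,\rho)\), where \(V=\int_{M}\frac{\rho^{n}}{n!}\). Then
\[u(x)=\frac{1}{V}\int_{M}u\,\frac{\rho^{n}}{n!}-\int_{M}G(x,y)\Delta_{\rho}u(y)\,\frac{\rho^{n}(y)}{n!}.\]
Evaluating at a maximum point of \(u\) (where \(u=0\)) and using \(\Delta_{\rho}u\geq-C\), \(G+\Gamma\geq 0\), and \(\int_{M}G(x,\cdot)\,\frac{\rho^{n}}{n!}=0\), we obtain \(\frac{1}{V}\int_{M}u\,\frac{\rho^{n}}{n!}\geq-C\Gamma V\); since \(u\leq 0\), this yields \(\|u\|_{L^{1}(\rho^{n})}\leq C\Gamma V^{2}\).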
Let \(L=-\inf_{M}u\) and assume that \(L\) is achieved at \(x_{0}\). Pick a normal coordinate of \(\rho\) at \(x_{0}\). After a proper rescaling of \(\rho\), we may assume that the chosen coordinate exists in the unit ball \(B_{1}(0)\subset\mathbb{C}^{n}\). For \(x\in B_{1}(0)\), we pick a uniform \(r=r(\omega_{\mathrm{sub}},\Lambda)\) s.t. \(\omega_{\mathrm{sub}}-r\sqrt{-1}\partial\bar{\partial}|x|^{2}\) belongs to \(\mathcal{C}_{\Lambda}^{\kappa}(x)\) for all \(x\in B_{1}(0)\). Let \(a>0\) and \(a<r/2\). Let \(w=u+a|x|^{2}\). Note \(w>-L+a\) on \(\partial B_{1}(0)\). Define the following set:
\[W=\{x\in B_{1}(0):|Dw(x)|<a,\ w(y)\geq w(x)+Dw(x)\cdot(y-x)\}.\]
We use Alexandroff-Bakelman-Pucci maximum principle (Gilbarg-Trudinger Lemma 9.2) to claim that
\[B_{a}(0)\subset Dw(W).\]
In \(W\), we have \(D^{2}w\geq 0\) and hence
\[c(n)a^{2n}\leq\int_{W}\det(D^{2}w)\leq 2^{2n}\int_{W}\left(\det w_{i\bar{j}} \right)^{2}. \tag{7.7}\]
As \(D^{2}w\geq 0\) in \(W\), \(D^{2}u\geq-2a\mathrm{Id}_{2n}\) which implies that \(u_{i\bar{j}}+a\delta_{i\bar{j}}\geq 0\) as a Hermitian matrix. Since \(a<r/2\), \(\omega_{\mathrm{sub}}-a\sqrt{-1}\partial\bar{\partial}|x|^{2}\in\mathcal{C}_{ \Lambda}^{\kappa}(x)\), we apply Proposition 5.2 to \(\omega_{\mathrm{sub}}-a\sqrt{-1}\partial\bar{\partial}|x|^{2}\) to conclude that \(|w_{i\bar{j}}|<R(\Lambda,\omega_{\mathrm{sub}})\). By (7.7), we have
\[c(n)a^{2n}\leq C(\Lambda,\omega_{\mathrm{sub}},n)\mathrm{vol}(W). \tag{7.8}\]
On the other hand, it is obvious that
\[\mathrm{vol}(W)\leq\frac{\|w\|_{L^{1}}}{|L-a|}\leq\frac{C(\omega_{\mathrm{sub }},\rho)}{|L-a|}. \tag{7.9}\]
Thus by (7.8) and (7.9),
\[L<C(\Lambda,\omega_{\mathrm{sub}},\rho,n)\left(\frac{1}{a^{2n}}+1\right)<C( \Lambda,\omega_{\mathrm{sub}},\rho,n).\]
Next, we state a \(C^{2}\)-estimate for solutions to (7.1).
**Proposition 7.7**.: _Let \(\omega_{\mathit{sub}}\in\mathcal{C}_{\Lambda}^{\kappa}\). Let \(u=\varphi-\varphi_{\mathit{sub}}\), where \(\varphi,\varphi_{\mathit{sub}}\) are given in (7.4), (7.5), and \(\omega=\omega_{\mathit{sub}}+\sqrt{-1}\partial\bar{\partial}u\) solves (1.2). If \(\Lambda\) satisfies **H2'**, and \(F^{i\bar{j},r\bar{s}},F^{i\bar{j}}\) satisfy inequality (6.60), then it holds that_
\[|\partial\bar{\partial}u|_{\rho}<C,\]
_where \(C\) depends on \(\|u\|_{C^{0}}\), \(M\), \(\Lambda\), \(\omega_{\mathit{sub}}\), \(n\), \(m\), \(\kappa\), \(C_{H2}\), and \(\rho\)._
We first prove some technical lemmas.
**Lemma 7.8**.: _Notations as above. We have_
\[-F^{i\bar{j}}A_{i\bar{j}}\leq nF(A).\]
Proof.: Since each \(F_{k}\) is homogeneous of degree \(-k\), we have
\[-\frac{\partial F(A)}{\partial A_{i\bar{j}}}A_{i\bar{j}} =\sum_{k=1}^{n-1}kF_{k}(A)+\frac{nf}{\det A}\] \[\leq\sum_{k=1}^{n-1}nF_{k}(A)+\frac{nf}{\det A}\] \[=nF(A).\]
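For clarity, the first line above is an instance of Euler's identity for homogeneous functions: since \(F_{k}(tA)=t^{-k}F_{k}(A)\) and \(\det(tA)^{-1}=t^{-n}\det(A)^{-1}\) for \(t>0\), differentiating in \(t\) at \(t=1\) gives
\[\sum_{i,j}\frac{\partial F_{k}}{\partial A_{i\bar{j}}}A_{i\bar{j}}=-kF_{k}(A),\qquad\sum_{i,j}\frac{\partial}{\partial A_{i\bar{j}}}\left(\frac{f}{\det A}\right)A_{i\bar{j}}=-\frac{nf}{\det A}.\]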
Let \(\{z^{i}\}\) be a normal coordinate at \(p\). Pick the direction \(\frac{\partial}{\partial z^{1}}\) and denote \(\Lambda_{,1},\Lambda_{,1\bar{1}}\) to be the corresponding covariant derivatives of \(\Lambda\) with respect to the Chern connection of \(\rho\). We denote
\[F_{,1\bar{1}} =F(A,\Lambda_{,1\bar{1}}),\ F_{,\bar{1}}^{i\bar{j}}=\frac{ \partial F\left(A,\Lambda_{,\bar{1}}\right)}{\partial A_{i\bar{j}}}, \tag{7.11}\] \[F_{k,1\bar{1}} =F_{k}(A,\Lambda_{,1\bar{1}}),\ F_{k,\bar{1}}^{i\bar{j}}=\frac{ \partial F_{k}\left(A,\Lambda_{,\bar{1}}\right)}{\partial A_{i\bar{j}}}. \tag{7.10}\]
**Lemma 7.9**.: _Notations as above. If \(\Lambda\) satisfies **H2'**, and \(F(A)\geq\kappa\), we have_
\[\left|F_{,1\bar{1}}\right|<C_{7.9}F(A), \tag{7.12}\]
\[\left|2\text{Re}\left(F_{,\bar{1}}^{i\bar{j}}B_{i\bar{j}}\right)\right|\leq\frac{C_{7.9}}{\epsilon}F(A)-C_{7.9}\epsilon F^{i\bar{s}}A^{\bar{j}r}B_{i\bar{j}}\overline{B_{s\bar{r}}}, \tag{7.13}\]
_for some \(C_{7.9}=C_{7.9}(\kappa,n,m,C_{H2})\)._
Proof.: From the cone condition, Lemma 6.4, and Newton-Maclaurin inequality, we have
\[\langle\frac{\rho_{i}^{l_{i}}}{l_{i}!},\exp\chi\rangle\leq\binom{d_{i}}{l_{i} }\binom{d_{i}}{k_{i}}^{-\frac{l_{i}}{k_{i}}}(\frac{n\kappa}{m})^{\frac{l_{i}} {k_{i}}}. \tag{7.14}\]
By **H2'**, there is a constant \(C_{H2}\) s.t. for \(k=1,\cdots,n-1\), it holds
\[-C_{H2}\left(\Lambda^{[k]}+\sum_{l\in\mathfrak{l}_{k}}\rho_{1}^{l_{1}}\cdots \rho_{n_{p}}^{l_{n_{p}}}\right)\leq\left(\Lambda^{[k]}\right)_{,1\bar{1}} \leq C_{H2}\left(\Lambda^{[k]}+\sum_{l\in\mathfrak{l}_{k}}\rho_{1}^{l_{1}} \cdots\rho_{n_{p}}^{l_{n_{p}}}\right),\]
where \(\mathbf{l}_{k}=\{(l_{1},\cdots,l_{n_{p}}):\sum_{i}l_{i}=k,\ l_{i}=0\ \text{or}\ l_{i}\geq k_{i}\}\). Hence
\[|F_{k,1\bar{1}}| \leq C_{H2}\left(F_{k}+\sum_{l\in\mathbf{l}_{k}}\prod_{i=1}^{n_{p}}\langle\frac{\rho_{i}^{l_{i}}}{l_{i}!},\exp\chi\rangle\right)\] \[\leq C_{H2}\left(F_{k}+\sum_{l\in\mathbf{l}_{k}}C_{1}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p})\right)\] \[\leq C_{H2}\left(F_{k}+C_{2}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p})\right). \tag{7.15}\]
By the cone condition and Lemma 6.4,
\[\det A^{-1}\leq C_{3}(m,\kappa,n_{p},\mathbf{d}_{p},\mathbf{k}_{p}). \tag{7.16}\]
By (7.15) and (7.16), we have
\[\left|F_{,1\bar{1}}\right| <\sum_{k}|F_{k,1\bar{1}}|+|f_{,1\bar{1}}|\frac{1}{\det A}\] \[<C_{4}F(A), \tag{7.17}\]
for some \(C_{4}=C_{4}(\kappa,m,n_{p},\mathbf{d}_{p},\mathbf{k}_{p},C_{H2})\).
We use **H2'** to deduce that
\[\left|\operatorname{Re}\left(F_{k,\bar{1}}^{i\bar{j}}b_{i}\bar{b}_{j}\right) \right|\leq C_{H2}\left(-F_{k}^{i\bar{j}}b_{i}\bar{b}_{j}+\langle\sum_{l\in\mathbf{ l}_{k}}\rho_{1}^{l_{1}}\cdots\rho_{n_{p}}^{l_{n_{p}}},\exp\chi\wedge 2\sqrt{-1b^{ \sharp}}\wedge b^{\sharp}\rangle\right). \tag{7.18}\]
From (6.16) in Lemma 6.3, we have
\[\langle\sum_{l\in\mathbf{l}_{k}}\rho_{1}^{l_{1}}\cdots\rho_{n_{p}}^{l_{n_{p}}},\exp\chi\wedge 2\sqrt{-1}\,b^{\sharp}\wedge\overline{b^{\sharp}}\rangle=\sum_{i=1}^{n_{p}}\sum_{l\in\mathbf{l}_{k,i}}\left(\prod_{j\neq i}\langle\frac{\rho_{j}^{l_{j}}}{l_{j}!},e^{\chi}\rangle\right)\langle\frac{\rho_{i}^{l_{i}}}{l_{i}!},e^{\chi}\wedge 2\sqrt{-1}\,b^{\sharp}\wedge\overline{b^{\sharp}}\rangle, \tag{7.19}\]
where \(\mathbf{l}_{k,i}=\{(l_{1},\cdots,l_{n_{p}})\in\mathbf{l}_{k}:l_{i}\geq 1\}\). By (6.5) in Lemma 6.2 and Newton-Maclaurin inequality, for \(l\geq k_{i}\) and any positive \(d_{i}\times d_{i}\) Hermitian matrix \(D\), we have
\[T_{l-1}(D) \leq\frac{\sigma_{l-1}(D)}{\sigma_{k_{i}-1}(D)}T_{k_{i}-1}(D)\] \[\leq C(d_{i},l,k_{i})\frac{\sigma_{k_{i}-1}(D)(\sigma_{k_{i}}(D)) ^{\frac{l-k_{i}}{k_{i}}}}{\sigma_{k_{i}-1}(D)}T_{k_{i}-1}(D)\] \[=C(d_{i},l,k_{i})(\sigma_{k_{i}}(D))^{\frac{l-k_{i}}{k_{i}}}T_{k_ {i}-1}(D). \tag{7.20}\]
We apply (7.20) to \(A^{-1}|_{\mathcal{V}_{i}}\) and use (7.14) to obtain
\[\langle\frac{\rho_{i}^{l-1}}{(l-1)!},\exp\chi\wedge 2\sqrt{-1}\,b^{\sharp}\wedge\overline{b^{\sharp}}\rangle\leq C_{5}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p},l)\langle\frac{\rho_{i}^{k_{i}}}{k_{i}!},\exp\chi\wedge 2\sqrt{-1}\,b^{\sharp}\wedge\overline{b^{\sharp}}\rangle. \tag{7.21}\]
Thus, by (7.18), (7.19), and (7.21), we have
\[\left|\operatorname{Re}\left(F_{k,\bar{1}}^{i\bar{j}}b_{i}\bar{b }_{j}\right)\right| \leq C_{H2}\left(-F_{k}^{i\bar{j}}b_{i}\bar{b}_{j}+C_{6}(\frac{ \kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p})\sum_{i=1}^{n_{p}}\langle\frac{ \rho_{i}^{k_{i}}}{k_{i}!},\exp\chi\wedge 2\sqrt{-1b^{\sharp}}\wedge b^{\sharp} \rangle\right)\] \[\leq-C_{7}F^{i\bar{j}}b_{i}\bar{b}_{j}, \tag{7.22}\]
where \(C_{7}=C_{7}(\kappa,m,n_{p},\mathbf{d}_{p},\mathbf{k}_{p},C_{H2})\).
Now for matrix \(B\), by (7.22) and mean value inequality,
\[\left|\operatorname{Re}\left(F_{,\bar{1}}^{i\bar{s}}A^{\bar{j}r}(A_{i\bar{j}}+\epsilon B_{i\bar{j}})\left(\overline{A_{s\bar{r}}+\epsilon B_{s\bar{r}}}\right)\right)\right| \leq-C_{9}F^{i\bar{s}}A^{\bar{j}r}(A_{i\bar{j}}+\epsilon B_{i\bar{j}})\left(\overline{A_{s\bar{r}}+\epsilon B_{s\bar{r}}}\right)\] \[\leq 2C_{9}\left(F(A)-\epsilon^{2}F^{i\bar{s}}A^{\bar{j}r}B_{i\bar{j}}\overline{B_{s\bar{r}}}\right), \tag{7.23}\]
where \(C_{9}=C_{9}(n,\kappa,m,n_{p},\mathbf{d}_{p},\mathbf{k}_{p},C_{H2})\). As a result of (7.23), we have
\[\left|2\operatorname{Re}\left(F_{,\bar{1}}^{i\bar{j}}B_{i\bar{j}}\right)\right|\leq\frac{C_{10}}{\epsilon}F(A)-C_{10}\epsilon F^{i\bar{s}}A^{\bar{j}r}B_{i\bar{j}}\overline{B_{s\bar{r}}},\]
where \(C_{10}=C_{10}(n,\kappa,m,n_{p},\mathbf{d}_{p},\mathbf{k}_{p},C_{H2})\).
Finally, we choose
\[C_{7.9}=\max\{C_{2},C_{4},C_{10}:p\in M\}.\]
Notice that for a fixed labeled orthogonal splitting \(\mathcal{O}\), the set \(\{(n_{p},\mathbf{d}_{p},\mathbf{k}_{p}):p\in M\}\) is finite. Thus \(C_{7.9}\) has a uniform upper bound which only depends on \(\kappa,m,n\) and \(C_{H2}\).
**Lemma 7.10**.: _Notations as above. If \(\Lambda\) satisfies **H2'**, and \(F(A)\geq\kappa\), there is a constant \(C_{7.10}\) depending on \(C_{7.9}\) and the bisectional curvature \(\text{Rm}_{\rho}\) of \(\rho\) such that for any \(\epsilon>0\)_
\[\partial_{1\bar{1}}F(A) \geq A_{1\bar{1}}F^{i\bar{j}}\left(\log A_{1\bar{1}}\right)_{,i \bar{j}}+A^{\bar{1}1}F^{i\bar{j}}A_{i\bar{1},1}\overline{A_{j\bar{1},1}}+F^{i \bar{j},r\bar{s}}A_{i\bar{j},1}A_{r\bar{s},\bar{1}}\] \[+\epsilon F^{i\bar{s}}A^{\bar{j}r}A_{i\bar{j},\bar{1}}\overline{A _{s\bar{r},1}}-\frac{C_{7.10}}{\epsilon}F(A)+C_{7.10}A_{1\bar{1}}\sum_{i}F^{i \bar{i}}.\]
Proof.: We have
\[\partial_{1\bar{1}}F(A)=F^{i\bar{j}}A_{i\bar{j},1\bar{1}}+F^{i\bar{j},r\bar{s} }A_{i\bar{j},1}A_{r\bar{s},\bar{1}}+2\text{Re}\left(F_{,\bar{1}}^{i\bar{j}}A_{ i\bar{j},1}\right)+F_{1\bar{1}}. \tag{7.24}\]
Note
\[F^{i\bar{j}}A_{i\bar{j},1\bar{1}}=F^{i\bar{j}}A_{1\bar{1},i\bar{j}}+F^{i\bar{j }}(\rho^{a\bar{b}}A_{a\bar{j}}R_{1\bar{1},i\bar{b}}-\rho^{a\bar{b}}A_{a\bar{1}} R_{i\bar{j},1\bar{b}}), \tag{7.25}\]
where \(R_{i\bar{j},k\bar{l}}\) is the bisectional curvature of \(\rho\). By (7.24) and (7.25),
\[\partial_{1\bar{1}}F(A) =F^{i\bar{j}}A_{1\bar{1},i\bar{j}}+F^{i\bar{j}}\left(\rho^{a\bar{b }}A_{a\bar{j}}R_{1\bar{1},i\bar{b}}-\rho^{a\bar{b}}A_{a\bar{1}}R_{i\bar{j},1 \bar{b}}\right)\] \[+F^{i\bar{j},r\bar{s}}A_{i\bar{j},1}A_{r\bar{s},\bar{1}}+2\text{ Re}\left(F^{i\bar{j}}_{,\bar{1}}A_{i\bar{j},1}\right)+F_{1\bar{1}}. \tag{7.26}\]
The first term in (7.26) is
\[F^{i\bar{j}}A_{1\bar{1},i\bar{j}}=A_{1\bar{1}}F^{i\bar{j}}\left(\log A_{1\bar {1}}\right)_{,i\bar{j}}+A^{\bar{1}1}F^{i\bar{j}}A_{i\bar{1},1}\overline{A_{j \bar{1},1}}. \tag{7.27}\]
The second term in (7.26) is controlled by
\[F^{i\bar{j}}\left(\rho^{a\bar{b}}A_{a\bar{j}}R_{1\bar{1},i\bar{ b}}-\rho^{a\bar{b}}A_{a\bar{1}}R_{i\bar{j},1\bar{b}}\right) \geq C_{1}F^{i\bar{j}}A_{i\bar{j}}+C_{1}A_{1\bar{1}}\sum_{i}F^{i \bar{i}}\] \[\geq-C_{2}F(A)+C_{2}A_{1\bar{1}}\sum_{i}F^{i\bar{i}}, \tag{7.28}\]
where the constants \(C_{1},C_{2}\) depend on a bound of the bisectional curvature \(|\text{Rm}_{\rho}|\). Apply Lemma 7.9 to (7.26) to obtain
\[\partial_{1\bar{1}}F(A) \geq A_{1\bar{1}}F^{i\bar{j}}\left(\log A_{1\bar{1}}\right)_{,i \bar{j}}+A^{\bar{1}1}F^{i\bar{j}}A_{i\bar{1},1}\overline{A_{j\bar{1},1}}+F^{i \bar{j},r\bar{s}}A_{i\bar{j},1}A_{r\bar{s},\bar{1}}\] \[+\epsilon F^{i\bar{s}}A^{\bar{j}r}A_{i\bar{j},1}\overline{A_{s \bar{r},1}}-C_{3}\left(1+\frac{1}{\epsilon}\right)F(A)+C_{3}A_{1\bar{1}}\sum_ {i}F^{i\bar{i}}.\]
We have proved the claim.
The following proposition is a key ingredient in the proof of \(C^{2}\) estimate. It has been proved in several context. See Song-Weinkove [33], Fang-Lai-Ma [19], Guan [17], Guan-Sun [18], Collins-Szekelyhidi [10], Szekelyhidi [35], Datar-Pingali [12]. The current form of Proposition 7.11 is adapted from Fang-Lai-Ma [19] and Datar-Pingali [12]. We will put the proof in the appendix.
**Proposition 7.11**.: _Suppose that \(\Lambda\) satisfies **H2'**. Let \(\omega_{sub}\in\mathcal{C}_{\Lambda}^{\kappa}\) and \(\omega=\omega_{sub}+i\partial\bar{\partial}u\) be a solution to (1.2). Then there is a \(N=N(\omega_{sub},\Lambda,M)\) and \(\mu=\mu(\omega_{sub},\Lambda,M)>0\) s.t. if \(|\partial\bar{\partial}u|_{\rho}>N\),_
\[F^{i\bar{j}}(A)\left(u_{i\bar{j}}\right)\geq\mu\left(1-\sum_{i}F^{i\bar{i}}(A )\right).\]
Now with all the preparations, we prove the \(C^{2}\)-estimate.
Proof of Proposition 7.7.: We use the maximum principle to deduce Proposition 7.7. Let \(u=\varphi-\varphi_{\text{sub}}\) and let \(g\) be the metric tensor of \(\omega\). We may assume that \(\inf_{M}u=0\) and \(|\partial\bar{\partial}u|>N\), so that we can apply Proposition 7.11. Pick the test function
\[G(x,\xi)=\log(g_{i\bar{j}}\xi^{i}\xi^{\bar{j}})-\phi(u), \tag{7.29}\]
where \(\xi\in T_{x}^{1,0}M,\rho_{i\bar{j}}\xi^{i}\xi^{\bar{j}}=1\). \(\phi:\mathbb{R}_{\geq 0}\to\mathbb{R}\) is a smooth function:
\[\phi(x)=2Lx-\frac{L\tau}{2}x^{2}. \tag{7.30}\]
The choice of \(\tau\) relies on \(\sup u\) so that
\[L\leq\phi^{\prime}\leq 2L,\ \phi^{\prime\prime}=-L\tau. \tag{7.31}\]
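Indeed, as long as \(\tau\sup u\leq 1\) (which holds for the choice of \(\tau\) below), for \(x\in[0,\sup u]\) we have
\[\phi^{\prime}(x)=2L-L\tau x\in[L,2L],\qquad\phi^{\prime\prime}(x)=-L\tau,\]
which is exactly (7.31).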
For instance, we may start by assuming that \(\tau=\frac{1}{\sup u+1}\). Suppose that \(G(x,\xi)\) achieves maximum at \((p,\xi_{0})\). We choose a normal coordinate \(\{z^{i}\}\) of \(\rho\) at \(p\) so that \(\xi_{0}\) is along the direction of \(\frac{\partial}{\partial z^{1}}\) and \(\omega\) is diagonal at \(p\). Locally
\[H=\log g_{1\bar{1}}-\phi(u) \tag{7.32}\]
also achieves maximum at \(p\). At \(p\), we have
\[0=H_{,i}=g_{1\bar{1}}^{-1}g_{1\bar{1},i}-\phi^{\prime}u_{,i}, \tag{7.33}\]
\[0\leq F^{i\bar{j}}H_{,i\bar{j}}. \tag{7.34}\]
We have \(g_{1\bar{1},i}=g_{i\bar{1},1}\), and (7.33) implies that
\[u_{,i}=\frac{g^{\bar{1}1}}{\phi^{\prime}}g_{i\bar{1},1}. \tag{7.36}\]
Now
\[H_{,i\bar{j}}=\left(\log g_{1\bar{1}}\right)_{,i\bar{j}}-\phi^{\prime\prime}u_ {,i}u_{,\bar{j}}-\phi^{\prime}u_{,i\bar{j}}. \tag{7.37}\]
We have \(g_{1\bar{1},i}=g_{i\bar{1},1}\). Thus, by Lemma 7.10,
\[0 \leq F^{i\bar{j}}H_{,i\bar{j}}=F^{i\bar{j}}\left(\left(\log g_{1 \bar{1}}\right)_{,i\bar{j}}-\phi^{\prime\prime}u_{,i}u_{,\bar{j}}-\phi^{ \prime}u_{,i\bar{j}}\right)\] \[\leq-g^{\bar{1}1}\left(F^{i\bar{j},r\bar{s}}g_{i\bar{j},1}g_{r \bar{s},\bar{1}}+F^{i\bar{j}}g^{\bar{1}1}g_{i\bar{1},1}\overline{g_{j\bar{1}, 1}}+\epsilon F^{i\bar{j}}g^{\bar{l}l}g_{i\bar{l},1}\overline{g_{j\bar{l},1}} \right)-\frac{C_{7.10}}{\epsilon}F(A)\] \[-C_{7.10}\sum_{i}F^{i\bar{i}}-F^{i\bar{j}}\left(\phi^{\prime \prime}u_{,i}u_{,\bar{j}}+\phi^{\prime}u_{,i\bar{j}}\right). \tag{7.38}\]
We apply (6.60) to (7.38) to get
\[0 \leq g^{\bar{1}1}\left((1-\epsilon)F^{i\bar{j}}g^{\bar{l}l}g_{i \bar{l},1}\overline{g_{j\bar{l},1}}-\frac{f(p)}{\det g}\left|\sum_{j}g^{\bar{j }j}g_{j\bar{j},1}\right|^{2}-F^{i\bar{j}}g^{\bar{1}1}g_{i\bar{1},1}\overline{ g_{j\bar{1},1}}\right)\] \[+g^{\bar{1}1}\frac{C_{7.10}}{\epsilon}F-C_{7.10}\sum_{i}F^{i\bar{ i}}-F^{i\bar{j}}\left(\phi^{\prime\prime}u_{,i}u_{,\bar{j}}+\phi^{\prime}u_{,i \bar{j}}\right). \tag{7.39}\]
Substitute \(u_{,i}=\frac{g^{\bar{1}1}}{\phi^{\prime}}g_{i\bar{1},1}\) into (7.39) to get
\[0 \leq(1-\epsilon)g^{\bar{1}1}F^{i\bar{j}}g^{\bar{l}l}g_{i\bar{l},1} \overline{g_{j\bar{l},1}}-\frac{f(p)}{\det g}\left|\sum_{j}g^{\bar{j}j}g_{j\bar {j},1}\right|^{2}-F^{i\bar{j}}(g^{\bar{1}1})^{2}g_{i\bar{1},1}\overline{g_{j \bar{1},1}}\] \[+g^{\bar{1}1}\frac{C_{7.10}}{\epsilon}F(A)-\frac{\phi^{\prime \prime}}{(\phi^{\prime})^{2}}F^{i\bar{j}}\left(g^{\bar{1}1}\right)^{2}g_{i\bar {1},1}\overline{g_{j\bar{1},1}}-C_{7.10}\sum_{i}F^{i\bar{i}}-\phi^{\prime}F^{ i\bar{j}}u_{,i\bar{j}}\] \[=g^{\bar{1}1}\left[(1-\epsilon)F^{i\bar{j}}g^{\bar{l}l}g_{i\bar{l },1}\overline{g_{j\bar{l},1}}-\frac{f(p)}{\det g}\left|\sum_{j}g^{\bar{j}j}g_{ j\bar{j},1}\right|^{2}-\left(1+\frac{\phi^{\prime\prime}}{(\phi^{\prime})^{2}} \right)F^{i\bar{j}}g^{\bar{1}1}g_{i\bar{1},1}\overline{g_{j\bar{1},1}}\right]\] \[+g^{\bar{1}1}\frac{C_{7.10}}{\epsilon}F-C_{7.10}\sum_{i}F^{i\bar{ i}}-\phi^{\prime}F^{i\bar{j}}u_{,i\bar{j}}. \tag{7.40}\]
Take \(\epsilon\leq\min\{-\frac{\phi^{\prime\prime}}{(\phi^{\prime})^{2}},\frac{1}{2}\}\), then we have
\[(1-\epsilon)F^{i\bar{j}}g^{\bar{l}l}g_{i\bar{l},1}\overline{g_{j\bar{l},1}}- \left(1+\frac{\phi^{\prime\prime}}{(\phi^{\prime})^{2}}\right)F^{i\bar{j}}g^{ \bar{1}1}g_{i\bar{1},1}\overline{g_{j\bar{1},1}}\leq\frac{1}{2}\sum_{l\geq 2}F ^{i\bar{j}}g^{\bar{l}}g_{i\bar{l},1}\overline{g_{j\bar{l},1}}. \tag{7.41}\]
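To verify (7.41): the \(l=1\) term of the first sum combines with the second term on the left into
\[-\left(\epsilon+\frac{\phi^{\prime\prime}}{(\phi^{\prime})^{2}}\right)F^{i\bar{j}}g^{\bar{1}1}g_{i\bar{1},1}\overline{g_{j\bar{1},1}}\leq 0,\]
since \(\epsilon\leq-\frac{\phi^{\prime\prime}}{(\phi^{\prime})^{2}}\) and \(F^{i\bar{j}}\) is negative definite by Lemma 6.6, while for each \(l\geq 2\) we have \((1-\epsilon)F^{i\bar{j}}g^{\bar{l}l}g_{i\bar{l},1}\overline{g_{j\bar{l},1}}\leq\frac{1}{2}F^{i\bar{j}}g^{\bar{l}l}g_{i\bar{l},1}\overline{g_{j\bar{l},1}}\), because \(1-\epsilon\geq\frac{1}{2}\) and \(F^{i\bar{j}}g_{i\bar{l},1}\overline{g_{j\bar{l},1}}\) is non-positive.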
If \(f(p)\geq 0\), then by (7.40), (7.41), and the fact that \(F^{i\bar{j}}\leq 0\) in Lemma 6.6,
\[0\leq g^{\bar{1}1}\frac{C_{7.10}}{\epsilon}F-C_{7.10}\sum_{i}F^{i\bar{i}}-\phi ^{\prime}F^{i\bar{j}}u_{,i\bar{j}}. \tag{7.42}\]
If \(f(p)<0\), then by (6.51), we have
\[-\sum_{l\geq 2}F^{i\bar{j}}g^{\bar{l}l}g_{i\bar{l},1}\overline{g_{j\bar{l},1}} \geq\left(\frac{m}{2}\gamma_{\min}(\frac{\kappa}{m},n_{p},\mathbf{d}_{p}, \mathbf{k}_{p})-|f(p)|\right)\frac{1}{\det g}\sum_{l\geq 2}\sum_{i=1}^{n}\frac{|g_{i \bar{l},1}|^{2}}{g_{l\bar{l}}g_{i\bar{i}}}. \tag{7.43}\]
We argue similarly as in Lemma 6.7:
\[\frac{1}{2}\sum_{l\geq 2}F^{i\bar{j}}g^{\bar{l}l}g_{i\bar{l},1} \overline{g_{j\bar{l},1}}-\frac{f(p)}{\det g}\left|\sum_{j}g^{\bar{j}j}g_{j \bar{j},1}\right|^{2}\] \[\leq-\frac{1}{\det g}\left(\left(\frac{m}{4}\gamma_{\min}(\frac{ \kappa}{m},n_{p},\mathbf{d}_{p},\mathbf{k}_{p})-\frac{1}{2}|f(p)|\right)\sum_{l \geq 2}\sum_{i=1}^{n}\frac{|g_{i\bar{l},1}|^{2}}{g_{l\bar{l}}g_{i\bar{i}}}-|f(p)|n \sum_{j=1}^{n}\frac{|g_{j\bar{j},1}|^{2}}{g_{j\bar{j}}^{2}}\right)\] \[\leq\frac{n|f(p)|}{\det g}\frac{|g_{1\bar{1},1}|^{2}}{g_{1\bar{1} }^{2}}. \tag{7.44}\]
The last line is due to the almost positive volume condition.
By (7.40), (7.41), (7.44), and (7.42), we have
\[0 \leq-\frac{\min\{0,f(p)\}n(g^{\bar{1}1})^{3}}{\det g}|g_{1\bar{1},1} |^{2}+g^{\bar{1}1}\frac{C_{7.10}}{\epsilon}F-C_{7.10}\sum_{i}F^{i\bar{i}}-\phi^ {\prime}F^{i\bar{j}}u_{,i\bar{j}}\] \[\leq C_{0}(m,\kappa,n)(g^{\bar{1}1})^{3}|g_{1\bar{1},1}|^{2}+g^{ \bar{1}1}\frac{C_{7.10}}{\epsilon}F-C_{7.10}\sum_{i}F^{i\bar{i}}-\phi^{\prime}F ^{i\bar{j}}u_{,i\bar{j}}, \tag{7.45}\]
where the last line is due to \(\det g^{-1}\leq C_{1}(m,\kappa,n_{p},\mathbf{d}_{p},\mathbf{k}_{p})\) by Lemma 6.4. Since \(u_{,i\bar{j}}=g_{i\bar{j}}-g_{i\bar{j}}^{\text{sub}}\), if \(|\partial\bar{\partial}u|>N\), from Proposition 7.11, we have
\[F^{i\bar{j}}\left(u_{,i\bar{j}}\right) =F^{i\bar{j}}\left(g_{i\bar{j}}-g_{i\bar{j}}^{\text{sub}}\right)\] \[\geq\mu(1-\sum_{i}F^{i\bar{i}}). \tag{7.46}\]
Thus, use (7.45) and \(g_{1\bar{1},1}=\phi^{\prime}u_{,1}g_{1\bar{1}}\) to get
\[0 \leq C_{1}g^{\bar{1}1}(\phi^{\prime})^{2}\left(1+|\nabla u|^{2} \right)+g^{\bar{1}1}\frac{C_{7.10}}{\epsilon}F(A)-C_{7.10}\sum_{i}F^{i\bar{i}} -\phi^{\prime}\mu(1-\sum_{i}F^{i\bar{i}})\] \[\leq C_{1}g^{\bar{1}1}(\phi^{\prime})^{2}\left(1+|\nabla u|^{2} \right)+g^{\bar{1}1}\frac{C_{7.10}}{\epsilon}F(A)-\phi^{\prime}\mu+\left(\phi^ {\prime}\mu-C_{7.10}\right)\sum_{i}F^{i\bar{i}}. \tag{7.47}\]
We choose \(\phi^{\prime}\mu>C_{7.10}\) and then by (7.47),
\[0\leq C_{1}g^{\bar{1}1}(\phi^{\prime})^{2}\left(1+|\nabla u|^{2}\right)+g^{ \bar{1}1}\frac{C_{7.10}}{\epsilon}F(A)-\phi^{\prime}\mu. \tag{7.48}\]
Therefore, by (7.48), we have \(g_{1\bar{1}}\leq C_{2}(1+|\nabla u|^{2})\), and hence at \(p\)
\[|\partial\bar{\partial}u|<C_{3}(1+|\nabla u|^{2}), \tag{7.49}\]
for some \(C_{3}=C_{3}(C_{7.10},L,\epsilon,\mu,m,\kappa,n)\).
Now, we fix choices of \(L,\tau,\epsilon\). First, we need \(\phi^{\prime}\mu>C_{7.10}\), which requires that \(L>\frac{C_{7.10}}{\mu}+1\). Next, we need \(\epsilon\leq-\frac{\phi^{\prime\prime}}{(\phi^{\prime})^{2}}\). Note
\[-\frac{\phi^{\prime\prime}}{(\phi^{\prime})^{2}}\geq\frac{L\tau}{4L^{2}}=\frac {\tau}{4L}. \tag{7.50}\]
Therefore, we may pick \(\epsilon=\min\{\frac{\tau}{4L},\frac{1}{2}\}=\frac{1}{4L(\sup u+1)}\). Note that \(L,\epsilon,\tau\) only depend on \(M\), \(\Lambda\), \(\omega_{\text{sub}}\), \(n\), \(m\), \(\kappa\), \(C_{H2}\), \(\sup|u|\), \(|\text{Rm}_{\rho}|\).
Once we have arranged \(L,\epsilon,\tau\) properly, from (7.49),
\[|\partial\bar{\partial}u|<C_{4}\sup(|\nabla u|^{2}+1), \tag{7.51}\]
for some \(C_{4}=C_{4}(M,\Lambda,\omega_{\text{sub}},n,m,\kappa,C_{H2},\sup|u|,|\text{Rm}_{\rho}|)\). Finally, we apply the blow-up technique in [8] (Proposition 5.1) to (7.51) to deduce that \(|\partial\bar{\partial}u|\leq C_{5}(M,\Lambda,\omega_{\text{sub}},n,m,\kappa,C_{H2},\sup|u|,|\text{Rm}_{\rho}|)\), as desired.
We now give the proof of Theorem 7.1.
Proof of Theorem 7.1.: Suppose that \(\Lambda\) satisfies **H2** with uniform constant \(m>0\). From the discussion above, we only need to show that **I** is closed. Since \(0\in\mathbf{I}\) by Yau's theorem and **I** is open, \([0,t_{0})\subset\mathbf{I}\) for some \(t_{0}>0\). Suppose that \(t_{1}=\sup\{t:[0,t)\subset\mathbf{I}\}>t_{0}>0\). Let \(t\in[t_{0},t_{1}]\), and denote
\[\Lambda_{t}=t\mathring{\Lambda}+f_{t}P^{[n]}.\]
Then, as explained before, \(\Lambda_{t}\) satisfies **H2'**. It is straightforward to see that if \(\omega_{\text{sub}}\) is a subsolution for (1.2), then it is a subsolution for (7.1) for all \(t\in[0,1]\). So we can apply the a priori \(C^{0}\) and \(C^{2}\) estimates derived in Propositions 7.5 and 7.7. The \(C^{1}\) estimate is obtained by the blow-up technique as in [8] (Proposition 5.1). Together with the complex version of the Evans-Krylov theory, we obtain \(C^{2,\alpha}\) bounds for solutions of (7.1) for all \(t\in\textbf{I}\). The standard Schauder theory then implies \(C^{k,\alpha}\) bounds for every \(k\). The closedness of **I** can then be obtained from the Arzela-Ascoli theorem. Thus, **I** is closed, which implies the existence of a smooth solution of (1.2).
### Part 2. Numerical Criterion
The second part of this paper aims to give a proof of Theorem 1.10. The main approach to prove Theorem 1.10 is an induction argument on dimension first introduced by G. Chen [4]. A key component is the so-called mass concentration technique, which is originally due to Demailly-Paun [14] and has been successfully employed by Chen [4] for the \(J\)-equation and the supercritical dHYM equation. Our work is also based on Song [32], in which Song improved Chen's argument to treat singular subvarieties.
The rest of this part is organized as follows. In Section 9, we set up the above-mentioned induction process and establish the base case. In Section 10, we prove a mass concentration result, Theorem 10.1. In Section 11, we complete the induction argument, and prove Theorem 1.10. In Section 12, we apply Theorem 1.10 to study special cases of dHYM equations.
## 8. Notations and technical preparation
In this section, we set up some further notations and definitions. Assume that \(M\), \([\omega_{0}]\), \(\Lambda\), \(\kappa\) are given as before.
We first define regularized maximum functions. Readers may consult [13] I.5.E. for general discussion.
**Definition 8.1**.: (Regularized Maximum). For any \(\eta\in(0,\infty)^{l}\), the regularized maximum function is defined as
\[\widetilde{\max}_{\eta}(t_{1},\cdots,t_{l}):=\int_{\mathbb{R}^{l}}\max(t_{1}+h_{1},\cdots,t_{l}+h_{l})\prod_{1\leq j\leq l}\theta\left(\frac{h_{j}}{\eta_{j}}\right)\frac{dh_{1}\cdots dh_{l}}{\eta_{1}\cdots\eta_{l}},\]
for \(\eta=(\eta_{1},\cdots,\eta_{l})\in(0,\infty)^{l}\). Here \(\theta\) is a smooth non-negative function supported on \((-1,1)\) s.t. \(\int_{\mathbb{R}}\theta(t)dt=1\) and \(\int_{\mathbb{R}}t\theta(t)dt=0\).
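As a simple illustration of the normalization conditions on \(\theta\): when \(l=1\), substituting \(s=h_{1}/\eta_{1}\) gives
\[\widetilde{\max}_{\eta}(t_{1})=\int_{\mathbb{R}}(t_{1}+h_{1})\theta\left(\frac{h_{1}}{\eta_{1}}\right)\frac{dh_{1}}{\eta_{1}}=t_{1}\int_{\mathbb{R}}\theta(s)\,ds+\eta_{1}\int_{\mathbb{R}}s\theta(s)\,ds=t_{1},\]
so the regularized maximum of a single variable is the variable itself; the smoothing only plays a role when two or more of the \(t_{j}\) interact.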
We will use the regularized maximum function and Richberg's technique [30] to glue local PSH functions. Some related known facts are collected in the following lemma. See [13] I.5.18 for proofs.
**Lemma 8.2**.: _For any \(\eta\in(0,\infty)^{l}\), \(\widetilde{\max}_{\eta}\) possesses the following properties:_
1. \(\widetilde{\max}_{\eta}(t_{1},\cdots,t_{l})\) _is non-decreasing in all variables, smooth and convex on_ \(\mathbb{R}^{l}\)_;_
2. \(\max\{t_{1},\cdots,t_{l}\}\leq\widetilde{\max}_{\eta}(t_{1},\cdots,t_{l})\leq \max\{t_{1}+\eta_{1},\cdots,t_{l}+\eta_{l}\}\)_;_
3. _if_ \(t_{j}+\eta_{j}\leq\max_{k\neq j}\{t_{k}-\eta_{k}\}\)_, then_ \[\widetilde{\max}_{\eta}(t_{1},\cdots,t_{l})=\widetilde{\max}_{(\eta_{1},\cdots,\hat{\eta}_{j},\cdots,\eta_{l})}(t_{1},\cdots,\hat{t}_{j},\cdots,t_{l});\]
4. \(\widetilde{\max}_{\eta}(t_{1}+a,\cdots,t_{l}+a)=\widetilde{\max}_{\eta}(t_{ 1},\cdots,t_{l})+a\)_._
We prove the following technical lemma.
**Lemma 8.3**.: _If \(g=g(A_{1},\cdots,A_{l})\) is a convex function on the spaces of Hermitian matrices and is monotone decreasing in each \(A_{i}\), then_
\[g(\sqrt{-1}\partial\bar{\partial}\widetilde{\max}_{\eta}(u_{1},\cdots,u_{l}) )\leq\sum_{i}\frac{\partial\widetilde{\max}_{\eta}}{\partial u_{i}}g(\sqrt{-1 }\partial\bar{\partial}u_{i}). \tag{8.1}\]
_In particular, if \(u_{1},\cdots,u_{l}\) are in \(\text{PSH}(M,\omega_{0})\) and satisfy \(\mathcal{P}_{\Lambda}(\omega_{0}+\sqrt{-1}\partial\bar{\partial}u_{i})<\kappa\), then_
\[\mathcal{P}_{\Lambda}(\omega_{0}+\sqrt{-1}\partial\bar{\partial}\widetilde{ \max}_{\eta}(u_{1},\cdots,u_{l}))\leq\mathcal{P}_{\Lambda}(\omega_{0}+\sqrt{- 1}\partial\bar{\partial}u_{i})<\kappa. \tag{8.2}\]
Proof.: Suppose that \(u_{1},\cdots,u_{l}\) are \(C^{2}\) functions on \(\mathbb{C}^{n}\). Then
\[\sqrt{-1}\partial\bar{\partial}\widetilde{\max}_{\eta}(u_{1},\cdots,u_{l})= \sum_{i}\frac{\partial\widetilde{\max}_{\eta}}{\partial u_{i}}\sqrt{-1} \partial\bar{\partial}u_{i}+\sum_{i,j}\frac{\partial^{2}\widetilde{\max}_{ \eta}}{\partial u_{i}\partial u_{j}}\sqrt{-1}\partial u_{j}\wedge\bar{\partial }u_{i}. \tag{8.3}\]
From Lemma 8.2 (1) and (4), we see that \(\sum_{i}\frac{\partial\widetilde{\max}_{\eta}}{\partial u_{i}}=1\) and that each \(\frac{\partial\widetilde{\max}_{\eta}}{\partial u_{i}}\) is non-negative. Thus, from (8.3),
\[\sqrt{-1}\partial\bar{\partial}\widetilde{\max}_{\eta}(u_{1},\cdots,u_{l})\geq \sum_{i}\frac{\partial\widetilde{\max}_{\eta}}{\partial u_{i}}\sqrt{-1} \partial\bar{\partial}u_{i}.\]
The right hand side is a convex combination of \(\sqrt{-1}\partial\bar{\partial}u_{i}\). Thus, (8.1) holds by the monotonicity and the convexity of \(g\). (8.2) holds by the corresponding properties of \(\mathcal{P}_{\Lambda}\) in Lemma 5.10.
We use the Richberg technique ([13], I. Corollary 5.19) to patch up PSH functions on manifolds.
**Corollary 8.4**.: _Let \(u_{\alpha}\in C^{\infty}(\overline{U}_{\alpha})\cap PSH(U_{\alpha},\omega)\) where \(U_{\alpha}\subset\subset M\) is a finite open covering of \(M\). Assume that \(u_{\beta}<\max\{u_{\alpha}(z)\}\) at every point \(z\in\partial U_{\beta}\) when \(\alpha\) runs
_over the indices s.t. \(z\in U_{\alpha}\). Choose a family \(\{\eta_{\alpha}\}\) of positive numbers so small that \(u_{\beta}(z)+\eta_{\beta}\leq\max_{U_{\alpha}\ni z}\{u_{\alpha}-\eta_{\alpha}\}\) for all \(\beta\) s.t. \(z\in\partial U_{\beta}\). Then the function_
\[\tilde{u}(z)=\widetilde{\max}_{(\eta_{\alpha})}(u_{\alpha}(z))\]
_is in \(C^{\infty}(M)\cap\text{PSH}(M,\omega)\)._
Next, we recall some well known definitions in complex geometry.
**Definition 8.5**.: Let \(T\) be a closed positive \((1,1)\)-current on \(M\). We call \(T\) a Kahler current if
\[T-\epsilon\gamma\geq 0,\]
for some \(\epsilon>0\), where \(\gamma\) is a Hermitian metric on \(M\).
We recall the definition of Lelong number:
**Definition 8.6**.: For \(p\in B_{R}\subset V\subset\mathbb{C}^{d}\), and \(\varphi\in\text{PSH}(V)\), define
\[\nu_{\varphi}(p,r)=\frac{\bar{\varphi}(p,R)-\bar{\varphi}(p,r)}{\log R-\log r}\]
where \(0<r<R\), and \(\bar{\varphi}(p,r)=\sup_{B_{r}(p)}\varphi(z)\). The Lelong number of \(\varphi\) at \(p\) is given by
\[\nu_{\varphi}(p)=\lim_{r\to 0^{+}}\nu_{\varphi}(p,r).\]
The Lelong number of a closed positive \((1,1)\)-current \(T\), denoted as \(\nu_{T}(p)\), is defined to be the Lelong number of the local potential function.
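A basic example: for \(\varphi(z)=c\log|z|\) with \(c>0\) on a ball centered at \(0\), one has \(\bar{\varphi}(0,r)=c\log r\), so
\[\nu_{\varphi}(0,r)=\frac{c\log R-c\log r}{\log R-\log r}=c\quad\text{for every }0<r<R,\]
hence \(\nu_{\varphi}(0)=c\). In particular, the Lelong number detects the logarithmic pole of the potential.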
The classic result of Y.-T. Siu shows that the Lelong number is upper semi-continuous with respect to the analytic Zariski topology.
**Theorem 8.7** (Siu's semi-continuity theorem [31]).: _If \(T\) is a closed positive \((1,1)\)-current on \(M\), then the upper level sets_
\[E_{c}(T)=\{p:\nu_{T}(p)\geq c\}\]
_are analytic subsets of \(M\) of dimension \(\leq n-1\)._
Finally we define the local regularization of a current.
**Definition 8.8**.: Let \(T\) be a closed positive \((1,1)\)-current on a smooth variety \(Y\) of dimension \(d\) and \(R>0\). We call \(T^{(r)}=\{T^{(r)}_{j}\}_{j\in\mathcal{J}}\) a local regularization of \(T\) with respect to a finite open covering \(\mathscr{P}=\{B_{j,3R}\}_{j\in\mathcal{J}}\) of \(Y\) if the following conditions (1) and (2) are satisfied.
1. Each \(B_{j,3R}\) is biholomorphic to a Euclidean ball \(B_{3R}(0)\) in \(\mathbb{C}^{d}\) equipped with standard Euclidean metric \(g_{j}\). Furthermore, \(B_{j,R}\simeq B_{R}(0)\subset\mathbb{C}^{d}\) is also a covering of \(Y\);
2. \(T_{j}^{(r)}(z)\) is the standard smoothing of \(T\) in \(B_{j,2R}\) defined by \[T_{j}^{(r)}(z)=\int_{B_{r}(0)}r^{-2d}\vartheta\left(\frac{|z^{\prime}|}{r}\right)T(z-z^{\prime})dV_{\mathbb{C}^{d}}(z^{\prime})\] for \(r\in(0,R)\). \(\vartheta(t)\) is a smooth non-negative function with support in \([0,1]\) satisfying \(\vartheta\equiv\mathrm{const}\) in \([0,1/2]\) and the normalization condition: \[\int_{B_{1}(0)}\vartheta(|z^{\prime}|)dV_{\mathbb{C}^{d}}(z^{\prime})=1.\]
If in each \(B_{j,3R}\) we may write \(T=\sqrt{-1}\partial\bar{\partial}\varphi_{j}\) for some local PSH function \(\varphi_{j}\), then it is easy to see that for \(r\in(0,R)\),
\[T_{j}^{(r)}=\sqrt{-1}\partial\bar{\partial}\varphi_{j}^{(r)},\]
where
\[\varphi_{j}^{(r)}(z)=\int_{B_{r}(0)}r^{-2d}\vartheta\left(\frac{z^{\prime}}{|r|}\right)\varphi_{j}(z-z^{\prime})dV_{\mathbb{C}^{d}}(z^{\prime}).\]
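We will freely use the standard properties of this smoothing: since \(\vartheta\) is radial, each \(\varphi_{j}^{(r)}\) is smooth and PSH where it is defined, and
\[\varphi_{j}^{(r_{1})}\geq\varphi_{j}^{(r_{2})}\geq\varphi_{j}\quad\text{for }0<r_{2}\leq r_{1}<R,\qquad\varphi_{j}^{(r)}\searrow\varphi_{j}\ \text{pointwise as }r\to 0^{+},\]
so that \(T_{j}^{(r)}\to T\) weakly in \(B_{j,2R}\) as \(r\to 0^{+}\).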
## 9. Initiation of induction argument
In this section, we state the main technical theorem of this part and initiate an induction proof. Several technical issues including the resolution of singularities are also discussed.
The following theorem is the main goal of the rest of this part:
**Theorem 9.1**.: _Let \(M\) be an \(n\)-dimensional connected compact Kahler manifold. Let \(\kappa>0\) be a constant. Let \(\Lambda\) be a closed real form satisfying **H1**. If \([\omega_{0}]\) is \(([\Lambda],\kappa)\)-positive, then for any analytic subvariety \(Y\) with dimension \(d\leq n\), there is a neighborhood \(U_{Y}\) of \(Y\) in \(M\) and a Kahler form \(\omega_{U_{Y}}\in[\omega_{0}]|_{U_{Y}}\) satisfying the cone condition (1.5) on \(U_{Y}\)._
Without loss of generality, by a rescaling of \(\Lambda\), we may assume that \(\kappa=1\) in the rest of this part. Then the corresponding cone condition is
\[((1-\Lambda)\wedge\exp\omega)^{[n-1]}>0\quad\text{or}\quad\mathcal{P}_{ \Lambda}(\omega)<1. \tag{9.1}\]
Note Theorem 1.10, presented in our introduction, follows immediately from Theorem 9.1 and Theorem 7.1.
_Remark 9.2_.: If \(Y\) is a smooth subvariety of \(M\), Theorem 9.1 implies that \(\omega_{Y}=\omega_{U}|_{Y}\) satisfies the cone condition on \(Y\):
\[(\exp\omega_{Y})^{[\dim Y-1]}>(\Lambda|_{Y}\wedge\exp\omega_{Y})^{[\dim Y-1]} \quad\text{or}\quad\mathcal{P}_{\Lambda|_{Y}}(\omega_{Y})<1. \tag{9.2}\]
On the other hand, (9.2) also implies the existence of a subsolution in a neighborhood \(U\) of a smooth subvariety \(Y\). This is stated in the following lemma.
**Lemma 9.3**.: _Notations as above. Suppose that \(Y\) is a smooth subvariety of \(M\) and \(\omega_{Y}\in[\omega_{0}|_{Y}]\) satisfies the cone condition (9.2) on \(Y\). Then there exists a neighborhood \(U\) of \(Y\) in \(M\) and a Kahler form \(\omega_{U}\in[\omega_{0}|_{U}]\) such that \(\omega_{U}|_{Y}=\omega_{Y}\) and \(\omega_{U}\) satisfies the cone condition (9.1) in \(U\)._
Proof.: Let \(U_{1}\) be a tubular neighborhood of \(Y\) in \(M\) such that the projection map \(\mathrm{pr}_{Y}:U_{1}\to Y\) is well defined. Write \(\omega_{Y}=\omega_{0}|_{Y}+\sqrt{-1}\partial\bar{\partial}\varphi\) for some \(\varphi\in C^{\infty}(Y)\). Let \(N>0\) and define
\[\omega_{U}=\omega_{0}+\sqrt{-1}\partial\bar{\partial}\left(\mathrm{pr}_{Y}^{*}\varphi+Nd_{\rho}^{2}(\cdot,Y)\right).\]
Clearly, there exists \(N>0\) and a neighborhood \(U_{2}\subset\subset U_{1}\) of \(Y\) such that \(\omega_{U}\) is Kahler in \(U_{2}\). At a point \(p\in Y\), we choose a local normal coordinate \(\{z^{i}\}\) with respect to \(\rho\) such that \(\frac{\partial}{\partial z^{1}},\cdots,\frac{\partial}{\partial z^{d}}\) are tangential to \(Y\) at \(p\), \(\frac{\partial}{\partial z^{d+1}},\cdots,\frac{\partial}{\partial z^{n}}\) are orthogonal to \(Y\), and \(\omega_{U}=A_{i\bar{j}}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{j}\) with
\[A=\left(\begin{array}{cc}H&C\\ C^{\dagger}&V\end{array}\right).\]
At \(p\), \(H_{i\bar{j}}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{j}=\mathrm{pr}_{Y}^{*}\omega_{Y}\), \(V_{i\bar{j}}\geq N\delta_{i\bar{j}}\), and \(C=0\). In a neighborhood of \(p\), we have \(C=O(d_{\rho})\) and \(V>\frac{N}{2}\mathrm{Id}_{(n-d)\times(n-d)}\). By the continuity of \(\mathcal{P}_{\Lambda}\) in Lemma 5.10, we only need to verify the cone condition (9.1) at \(p\). Notice that at \(p\), if \(N\to\infty\), \(A^{-1}\) converges to \(H^{-1}\) uniformly. By (9.2), there exists \(\epsilon_{Y}>0\) such that for any linear subspace \(\mathcal{H}^{\prime}\subset\mathcal{T}_{p}Y\), it holds
\[1-\epsilon_{Y}>\langle\Lambda,\exp\chi_{\mathcal{H}^{\prime}}\rangle,\]
where \(\chi_{\mathcal{H}^{\prime}}=(H|_{\mathcal{H}^{\prime}})^{\bar{j}i}\,2\sqrt{-1}\frac{\partial}{\partial\bar{z}^{j}}\wedge\frac{\partial}{\partial z^{i}}\). Therefore, there exists \(N\gg 1\) s.t. \(\mathcal{P}_{\Lambda}(A)<1-\frac{\epsilon_{Y}}{2}\). Hence, by the continuity of \(\mathcal{P}_{\Lambda}\) and the compactness of \(Y\), we may pick a uniform \(N\), a uniform \(\epsilon_{Y}\), and a neighborhood \(U\subset\subset U_{2}\) of \(Y\) s.t. \(\mathcal{P}_{\Lambda}(\omega_{U})<1-\frac{\epsilon_{Y}}{4}\) in \(U\), which verifies the cone condition (9.1).
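For the reader's convenience, we record the standard block-inversion identity behind the statement that \(A^{-1}\) converges to \(H^{-1}\) as \(N\to\infty\) (the same identity reappears in (10.35) below): writing \(S:=H-CV^{-1}C^{\dagger}\) for the Schur complement,
\[A^{-1}=\left(\begin{array}{cc}S^{-1}&-S^{-1}CV^{-1}\\ -V^{-1}C^{\dagger}S^{-1}&V^{-1}+V^{-1}C^{\dagger}S^{-1}CV^{-1}\end{array}\right).\]
At \(p\) we have \(C=0\), so \(A^{-1}=\mathrm{diag}(H^{-1},V^{-1})\) with \(\|V^{-1}\|\leq N^{-1}\); near \(p\), \(C=O(d_{\rho})\) and \(V>\frac{N}{2}\mathrm{Id}\) give \(CV^{-1}C^{\dagger}=O(N^{-1}d_{\rho}^{2})\), so the tangential block of \(A^{-1}\) tends to \(H^{-1}\) and the remaining blocks tend to \(0\) as \(N\to\infty\).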
We prove Theorem 9.1 by induction on the dimension of \(Y\). We start with the base case.
### Base case for induction
Since we have assumed the **H1** condition, \(\mathring{\Lambda}\) is \(k_{0}\)-UP. Clearly, for any smooth subvariety \(Y\) of dimension \(d>k_{0}\), \(\mathring{\Lambda}|_{Y}\) is also \(k_{0}\)-UP. To initiate the induction argument, we need to show that Theorem 9.1 is valid for any subvariety \(Y\) with dimension \(d\leq k_{0}\). The following lemma generalizes Song's Lemma 2.1 in [32].
**Lemma 9.4**.: _Notations as above. Suppose that \(\mathring{\Lambda}\) is \(k_{0}\)-UP. Let \(Y\) be an analytic subvariety of \(M\) with \(\dim Y\leq k_{0}\). If \([\omega_{0}]\) is \((\kappa,[\Lambda])\)-positive, then there exists a neighborhood \(U_{Y}\) of \(Y\) in \(M\) and a Kahler metric \(\omega_{U_{Y}}\in[\omega_{0}]|_{U_{Y}}\) such that the cone condition (1.5) holds for \(\omega_{U_{Y}}\) in \(U_{Y}\)._
Proof.: Case 1. \(\dim Y=0\), the result is obvious since any Kahler class in a neighborhood of a point is trivial.
We may assume from now on that there exists \(d\geq 1\) such that Lemma 9.4 holds for any \(d^{\prime}\)-dimensional subvariety \(Y^{\prime}\subset M\) with \(d^{\prime}\leq d-1\). Let \(Y\) be a subvariety of dimension \(d\leq k_{0}\).
Case 2. \(1\leq d\leq k_{0}.\) Let \(\Lambda^{\prime}=\Lambda^{[d]}\) if \(k_{0}=d\) or \(\Lambda^{\prime}=0\) if \(k_{0}>d\).
Note that \(([\Lambda],\kappa)\)-positivity implies the following
\[\int_{Y}(1-\Lambda^{\prime})\wedge\exp\omega>0.\]
Let \(S_{Y}\) be the singular set of \(Y\). Let \(\Phi:M^{\prime}\to M\) be the resolution of singularities of \(Y\) by successive blowups along smooth centers. Let \(\hat{Y}\) be the strict transform of \(Y\). Denote by \(E_{\Phi}\) the exceptional divisor. Let \(\sigma\) be a defining section of the line bundle \([E_{\Phi}]\) and \(h\) be a hermitian metric on \([E_{\Phi}]\). Let \(F_{h}\) be the curvature form of \(h\). Then there is a small \(\delta>0\) such that \(\varpi_{Y}:=\omega_{Y}-\delta F_{h}\) is a Kahler metric on \(\Phi^{-1}(W)\).
\[\int_{Y}\exp\omega-(1+2\epsilon)\Lambda^{\prime}>0. \tag{9.3}\]
By choosing a smaller \(\delta\) if necessary, we may assume that
\[\int_{\hat{Y}}\exp\varpi_{Y}-(1+\epsilon)\Phi^{*}\Lambda^{\prime}>0. \tag{9.4}\]
On \(\hat{Y}\), we may solve the following Monge-Ampere equation
\[(\varpi_{Y}+\sqrt{-1}\partial\bar{\partial}u)^{d}=(1+\epsilon)\Phi^{*}\Lambda ^{\prime}+c\varpi_{Y}^{d}, \tag{9.5}\]
for some constant \(c>0\). In \(M^{\prime}\backslash E_{\Phi}\), we may write \(-F_{h}=\sqrt{-1}\partial\bar{\partial}\log|\sigma|_{h}^{2}\). Thus, there exists \(\varphi_{Y}\in C^{\infty}(W\backslash S_{Y})\cap{\rm PSH}(W,\omega_{0})\) such that
\[(\omega_{0}+\sqrt{-1}\partial\bar{\partial}\varphi_{Y})^{d}-(1+\epsilon) \Lambda^{\prime}>0,\]
as a \((d,d)\)-form away from \(S_{Y}\). Furthermore, the Lelong number of \(\varphi_{Y}\) at every point of \(S_{Y}\) is larger than \(\delta\).
By the induction hypothesis, there exists a neighborhood \(U\) of \(S_{Y}\) in which there is a smooth Kahler metric \(\omega_{U}=\omega_{0}+\sqrt{-1}\partial\bar{\partial}\varphi_{U}\) satisfying the condition \(((1-\Lambda)\wedge\exp\omega_{U})^{[n-1]}>0\). We pick neighborhoods \(U_{0}\Subset U_{1}\Subset U_{2}\Subset U\) of \(S_{Y}\).
Without loss of generality, we may subtract a large number from \(\varphi_{U}\) such that \(\varphi_{U}<\varphi_{Y}-2\) in \(W\backslash U_{2}\). Since \(\varphi_{Y}\) diverges to \(-\infty\) along \(S_{Y}\), shrinking \(U_{1}\) if necessary, we may assume that \(\varphi_{Y}+2<\varphi_{U}\) in \(U_{1}\).
For a point \(z\in W\backslash U_{0}\), let \(d_{\rho}(z)\) be the distance function to \(Y\cap(W\backslash U_{0})\) with respect to \(\rho\). Let
\[\tilde{\varphi}_{Y}=\varphi_{Y}+Nd_{\rho}^{2}. \tag{9.6}\]
By the same argument as in Lemma 9.3, for sufficiently large \(N\), \(\tilde{\omega}=\omega_{0}+\sqrt{-1}\partial\bar{\partial}\tilde{\varphi}_{Y}\) is a Kahler form in a neighborhood \(\tilde{U}\) of \(Y\cap(W\backslash U_{1})\) and satisfies the cone condition (9.1) on \(\tilde{U}\). Shrinking \(\tilde{U}\) if necessary, we may assume that the following holds:
\[\tilde{\varphi}_{Y}<\varphi_{U}-1,\text{ in }\tilde{U}\cap U_{1};\ \tilde{ \varphi}_{Y}>\varphi_{U}+1,\text{ in }\tilde{U}\backslash U_{2}. \tag{9.7}\]
Let
\[\tilde{\varphi}=\widetilde{\max}_{(1/2,1/2)}(\tilde{\varphi}_{Y},\varphi_{U})\]
in \(U_{Y}=\tilde{U}\cup U_{1}\). By the above construction, we have \(\varphi_{U}-\frac{1}{2}>\tilde{\varphi}_{Y}+\frac{1}{2}\) on \(U_{1}\cap\tilde{U}\) and \(\tilde{\varphi}_{Y}-\frac{1}{2}>\varphi_{U}+\frac{1}{2}\) on \(\tilde{U}\backslash U_{2}\). Therefore, by Richberg's technique (Corollary 8.4), \(\tilde{\varphi}\) is smooth and in \(\operatorname{PSH}(U_{Y},\omega_{0})\). Moreover, by Corollary 8.3, \(\widetilde{\max}\) preserves the subsolution. Thus, \(\omega_{0}+\sqrt{-1}\partial\bar{\partial}\tilde{\varphi}\) satisfies the cone condition.
Now that we have established the base case for the induction argument, we may assume \(\dim Y>k_{0}\) in the later discussions. We state our induction hypothesis.
**Induction Hypothesis**: There exists \(d\in\mathbb{N}\) with \(d>k_{0}\) such that for any subvariety \(Y\) with \(\dim Y\leq d-1\leq n-1\), there exists a neighborhood \(U\) of \(Y\) in \(M\) and a Kahler metric \(\omega_{U}\in[\omega_{0}|_{U}]\) such that \(\omega_{U}\) satisfies the cone condition (9.1) in \(U\).
### Resolution of singularities
Let \(Y\) be a subvariety of \(M\) with \(\dim Y=d>k_{0}\). Assume that \(Y\) is irreducible for simplicity. Apply the resolution of singularities for \(Y\) to get
\[\Phi:M^{\prime}\to M, \tag{9.8}\]
Figure 9.1. Neighborhoods
where \(\Phi\) is achieved by a sequence of blow-ups along smooth centers. Let \(\hat{Y}\) be the strict transform of \(Y\) in \(M^{\prime}\) which is a smooth \(d\)-dimensional submanifold. Let \(S_{Y}\) be the singular set of \(Y\) and
\[S_{\hat{Y}}=\Phi^{-1}(S_{Y})\cap\hat{Y}. \tag{9.9}\]
Since \(Y\) is irreducible, \(Y\backslash S_{Y}=\Phi(\hat{Y}\backslash S_{\hat{Y}})\).
Let \(S\) be the exceptional locus of \(\Phi\), \(h_{S}\) be a hermitian metric on the line bundle \([S]\) associated to \(S\), and \(F_{h_{S}}\) be the curvature. Then for some small \(\delta_{S}>0\), \(\varpi:=\rho-\delta_{S}F_{h_{S}}\) is Kahler on \(M^{\prime}\). We may further assume that
\[\varpi=\rho-\delta_{S}F_{h_{S}}>\frac{\rho}{2}. \tag{9.10}\]
If \(\sigma_{S}\) is a defining section of the line bundle \([S]\) and \(\phi_{S}=\delta_{S}\log|\sigma_{S}|^{2}_{h_{S}}\), then on \(M^{\prime}\backslash S\) we have
\[\varpi=\rho+\sqrt{-1}\partial\bar{\partial}\phi_{S}.\]
Notice \(\phi_{S}\) has positive Lelong number along \(S\).
We need to perturb \([\omega_{0}]\) and \([\Lambda]\) to obtain strict \((\kappa,[\Lambda])\)-positivity on \(\hat{Y}\). For this purpose, we define the following:
\[\hat{\omega}_{0}=\hat{\omega}_{0}(t,\hat{\epsilon})=(1+Kt)\,\omega_{0}+\hat{ \epsilon}t\varpi,\ \hat{\Lambda}=\hat{\Lambda}(t,\hat{\epsilon})=\Lambda+\hat{ \epsilon}^{n}t^{n}\frac{\varpi^{k_{0}}}{k_{0}!}, \tag{9.11}\]
\[\hat{\rho}=\hat{\rho}(t,\hat{\epsilon})=\rho+\hat{\epsilon}^{n}t^{n}\varpi. \tag{9.12}\]
Here \(K>1\) is chosen large enough such that
\[\Lambda\leq K\left(\exp(\frac{\omega_{0}}{2n})-1\right). \tag{9.13}\]
The following result shows that \([\hat{\omega}_{0}]\) is \((\kappa,[\hat{\Lambda}])\)-positive.
**Lemma 9.5** (Song Lemma 4.1).: _There is a small \(\hat{\epsilon}_{Y}\) s.t. if \(\hat{\epsilon}\in(0,\hat{\epsilon}_{Y}),\ t\in(0,1]\) then_
\[\int_{\hat{Y}}e^{\hat{\omega}_{0}}\wedge\left(1-\hat{\Lambda}\right)>0, \tag{9.14}\]
_and for any subvariety \(V^{\prime}\) of \(\hat{Y}\) with dimension \(k<d\), it holds_
\[\int_{V^{\prime}}e^{\hat{\omega}_{0}}\wedge\left(1-\hat{\Lambda}\right)>\frac {1}{2}\int_{V^{\prime}}\exp(\hat{\epsilon}t\varpi). \tag{9.15}\]
Proof.: For \(\hat{Y}\), we have
\[\int_{\hat{Y}}e^{\hat{\omega}_{0}}\wedge\left(1-\hat{\Lambda}\right) =\int_{\hat{Y}}e^{(1+Kt)\omega_{0}+\hat{\epsilon}t\varpi}\wedge (1-\Lambda)-\hat{\epsilon}^{n}t^{n}\int_{\hat{Y}}e^{\hat{\omega}_{0}}\wedge \frac{\varpi^{k_{0}}}{k_{0}!}\] \[\geq(1+Kt)^{d}\int_{Y}e^{\omega_{0}}\wedge(1-\Lambda)+O(\hat{ \epsilon}(1+t)^{d}). \tag{9.16}\]
Since \(\int_{Y}e^{\omega_{0}}\wedge(1-\Lambda)>0\), there is an \(\hat{\epsilon}^{\prime}_{Y}>0\) so that if \(\hat{\epsilon}<\hat{\epsilon}^{\prime}_{Y}\) then (9.14) holds.
If \(k<d\), we may assume \(V^{\prime}\) is irreducible. Let \(W=\Phi(V^{\prime})\). By the induction hypothesis, there is a Kahler form \(\omega_{W}\) in a neighborhood \(U_{W}\) of \(W\) in \(M\) so that \(\omega_{W}\in[\omega_{0}]|_{W}\) and
\[(\exp\omega_{W}\wedge(1-\Lambda))^{[n-1]}>0.\]
Then by Lemma 5.8, for any \(l\leq n-1\),
\[(\exp\omega_{W}\wedge(1-\Lambda))^{[l]}\geq 0,\]
which implies
\[(\exp\omega_{W}\wedge(1-\Lambda))^{[l]}\wedge\varpi^{k-l}\geq 0, \tag{9.17}\]
in \(U^{\prime}_{W}=\Phi^{-1}(W)\) in \(M^{\prime}\) for \(l\leq k\). Let
\[\hat{\omega}_{1}:=(1+Kt)\omega_{W}+\hat{\epsilon}t\varpi. \tag{9.18}\]
Then by (9.17),
\[\int_{V^{\prime}}e^{\hat{\omega}_{1}}\wedge\left(1-\hat{\Lambda}\right) =\int_{V^{\prime}}e^{(1+Kt)\omega_{W}}\wedge e^{\hat{\epsilon}t \varpi}\wedge(1-\Lambda-\hat{\epsilon}^{n}t^{n}\frac{\varpi^{k_{0}}}{k_{0}!})\] \[=\int_{V^{\prime}}e^{(1+Kt)\omega_{W}}\wedge e^{\hat{\epsilon}t \varpi}\wedge(1-\Lambda)-\hat{\epsilon}^{n}t^{n}\int_{V^{\prime}}e^{(1+Kt) \omega_{W}}\wedge e^{\hat{\epsilon}t\varpi}\wedge\frac{\varpi^{k_{0}}}{k_{0}!}\] \[\geq\int_{V^{\prime}}\frac{(\hat{\epsilon}t)^{k}}{k!}\varpi^{k} -\sum_{a=0}^{k-k_{0}}R(a)\int_{V^{\prime}}\frac{\omega_{W}^{a}}{a!}\wedge\frac {\varpi^{k-a}}{(k-k_{0}-a)!k_{0}!}, \tag{9.19}\]
where \(R(a)=(1+Kt)^{a}(\hat{\epsilon}t)^{n+k-k_{0}-a}\). There is a uniform constant \(C_{1}\), s.t. \(C_{1}[\varpi]\geq[\Phi^{*}\omega_{0}]\). Thus,
\[\int_{V^{\prime}}e^{\hat{\omega}_{1}}\wedge\left(1-\hat{\Lambda}\right)\geq \left(\frac{(\hat{\epsilon}t)^{k}}{k!}-C_{2}\sum_{a=0}^{k-k_{0}}R(a)\right) \int_{V^{\prime}}\varpi^{k}. \tag{9.20}\]
where \(C_{2}=C_{2}(C_{1},k,k_{0})\). If \(k<k_{0}\), we have
\[\int_{V^{\prime}}e^{\hat{\omega}_{1}}\wedge\left(1-\hat{\Lambda}\right)>\frac {(\hat{\epsilon}t)^{k}}{k!}\int_{V^{\prime}}\varpi^{k}. \tag{9.21}\]
If \(k\geq k_{0}\), the power of \(\hat{\epsilon}\) in \(R(a)\) is greater than \(k\) and the power of \(t\) is at least \(n\). Hence, there is an \(\hat{\epsilon}^{\prime\prime}_{Y}=\hat{\epsilon}^{\prime\prime}_{Y}(k,n,C_{2},K)\) such that if \(\hat{\epsilon}<\hat{\epsilon}^{\prime\prime}_{Y}\),
\[R(a)<\frac{1}{2kC_{2}}\frac{(\hat{\epsilon}t)^{k}}{k!}. \tag{9.22}\]
Substituting (9.22) into (9.20) yields (9.15). Finally, choose \(\hat{\epsilon}_{Y}=\min(\hat{\epsilon}^{\prime}_{Y},\hat{\epsilon}^{\prime \prime}_{Y})\) and we have finished the proof.
We proceed to check the \(k_{0}\)-UP condition for \(\hat{\Lambda}\).
**Lemma 9.6**.: _If \(\hat{\Lambda}\) is \(k_{0}\)-UP with respect to \(\rho\) and a uniform constant \(m>0\), then \(\hat{\bar{\Lambda}}\) is \(k_{0}\)-UP with respect to \(\hat{\rho}\) and a constant_
\[m^{\prime}=\min\{m,k_{0}\left(\frac{1}{2}+(\hat{\epsilon}t)^{n}\right)^{k_{0}- 1}\}.\]
Proof.: By (9.10), we have assumed \(\varpi>\frac{\rho}{2}\). Thus
\[\frac{\hat{\rho}^{k_{0}}}{k_{0}!} =\frac{(\rho+\hat{\epsilon}^{n}t^{n}\varpi)^{k_{0}}}{k_{0}!}\] \[\leq\frac{\rho^{k_{0}}}{k_{0}!}+(\hat{\epsilon}t)^{n}\frac{\varpi ^{k_{0}}}{k_{0}!}\left(\sum_{a=0}^{k_{0}-1}\frac{(\hat{\epsilon}t)^{n(k_{0}-1- a)}k_{0}!}{2^{a}a!(k_{0}-a)!}\right)\] \[\leq\frac{\rho^{k_{0}}}{k_{0}!}+(\hat{\epsilon}t)^{n}k_{0}\left( \frac{1}{2}+(\hat{\epsilon}t)^{n}\right)^{k_{0}-1}\frac{\varpi^{k_{0}}}{k_{0 }!}. \tag{9.23}\]
Since \(\hat{\Lambda}\geq m\frac{\rho^{k_{0}}}{k_{0}!}\), by (9.23),
\[\hat{\Lambda}-\hat{\Lambda}^{[n]} =\hat{\bar{\Lambda}}+(\hat{\epsilon}t)^{n}\frac{\varpi^{k_{0}}}{ k_{0}!}\geq m\frac{\rho^{k_{0}}}{k_{0}!}+(\hat{\epsilon}t)^{n}\frac{\varpi^{k_{0}}}{k_{0 }!}\] \[\geq m^{\prime}\frac{\hat{\rho}^{k_{0}}}{k_{0}!}.\]
We have proved the Lemma.
### Equations on \(\hat{Y}\)
Next, we consider the following equation on \(\hat{Y}\) :
\[(\exp\hat{\omega}\wedge(1-\hat{\Lambda}))^{[d]}=c_{t,\hat{\epsilon}}(\exp\hat{ \rho})^{[d]}, \tag{9.24}\]
for \(t>0\), \(\hat{\epsilon}\in(0,\hat{\epsilon}_{Y})\), and \(\hat{\omega}\in[\hat{\omega}_{0}]\). Here \(c_{t,\hat{\epsilon}}\) is a normalization constant defined s.t.
\[\int_{\hat{Y}}(\exp\hat{\omega}\wedge(1-\hat{\Lambda}))^{[d]}=\int_{\hat{Y}}c _{t,\hat{\epsilon}}(\exp\hat{\rho})^{[d]}.\]
We may choose a smaller \(\hat{\epsilon}_{Y}\) if necessary, such that \(\hat{\epsilon}_{Y}<\left(\frac{1}{4n!}\right)^{\frac{1}{n-k_{0}}}\).
The following lemma implies that \(\hat{\omega}_{0}\) is a subsolution of (9.24) for \(t=1\).
**Lemma 9.7**.: _If \(K>1\) is chosen as in (9.13), we have_
\[e^{(1+K)\omega_{0}}\wedge(1-\Lambda)\geq\frac{1}{2}e^{(1+K)\omega_{0}} \tag{9.25}\]
_on \(M\). Moreover, on \(\hat{Y}\), for \(t=1\) and \(\hat{\epsilon}<\left(\frac{1}{4n!}\right)^{\frac{1}{n-k_{0}}}\), we have_
\[\left(e^{\hat{\omega}_{0}}\wedge(1-\hat{\Lambda})\right)^{[d-1]}>0. \tag{9.26}\]
Proof.: By our choice of \(K\) in (9.13),
\[e^{(1+K)\omega_{0}}(1-\Lambda) \geq e^{(1+K)\omega_{0}}\wedge((1+K)-Ke^{\frac{\omega_{0}}{2n}})\] \[=\left((1+K)e^{(1+K)\omega_{0}}-Ke^{(1+K+\frac{1}{2n})\omega_{0}}\right)\] \[=\sum_{k=0}^{n}\left((1+K)^{k+1}-K(1+K+\frac{1}{2n})^{k}\right) \frac{\omega_{0}^{k}}{k!}, \tag{9.27}\]
on \(M\). If \(k=0\), then \((1+K)-K=1\). If \(k\geq 1\) then
\[(1+K)^{k+1}-K(1+K)^{k}\left(1+\frac{1}{2n(1+K)}\right)^{k}\geq(1+K)^{k}K\left( 1+\frac{1}{2K}-e^{\frac{1}{2(1+K)}}+\frac{1}{2K}\right). \tag{9.28}\]
If \(K>1\), we have \(1+\frac{1}{2K}>e^{\frac{1}{2(1+K)}}\). By (9.27) and (9.28), we have
\[e^{(1+K)\omega_{0}}\wedge(1-\Lambda)\geq\sum_{k=0}^{n}\frac{(1+K)^{k}K}{2K} \frac{\omega_{0}^{k}}{k!}=\frac{1}{2}e^{(1+K)\omega_{0}}. \tag{9.29}\]
Thus we have proved (9.25).
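The elementary inequality \(1+\frac{1}{2K}>e^{\frac{1}{2(1+K)}}\) used above can be checked as follows: from \(e^{x}<\frac{1}{1-x}\) for \(0<x<1\), taking \(x=\frac{1}{2(1+K)}\) gives
\[e^{\frac{1}{2(1+K)}}<\frac{2(1+K)}{2K+1}=1+\frac{1}{2K+1}<1+\frac{1}{2K};\]
numerically, at \(K=1\) this reads \(e^{1/4}\approx 1.284<1.5\).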
At \(t=1\), by (9.29), we have
\[(\exp\hat{\omega}_{0}\wedge(1-\hat{\Lambda}))^{[d-1]} =\left(e^{(1+K)\omega_{0}}\wedge(1-\Lambda-\hat{\epsilon}^{n}\frac{\varpi^{k_{0}}}{k_{0}!})\wedge e^{\hat{\epsilon}\varpi}\right)^{[d-1]}\] \[\geq\left(e^{(1+K)\omega_{0}}\wedge\left(\frac{1}{2}e^{\hat{\epsilon}\varpi}-\hat{\epsilon}^{n}\frac{\varpi^{k_{0}}}{k_{0}!}\wedge e^{\hat{\epsilon}\varpi}\right)\right)^{[d-1]}\] \[=\left(e^{(1+K)\omega_{0}}\wedge\left(\sum_{k=0}^{d-2}\frac{1}{2}\cdot\frac{\hat{\epsilon}^{k}}{k!}\varpi^{k}-\sum_{k=k_{0}}^{d-2}\frac{\hat{\epsilon}^{n+k-k_{0}}}{(k-k_{0})!k_{0}!}\varpi^{k}\right)\right)^{[d-1]}\] \[\geq\frac{1}{4}\frac{\hat{\epsilon}^{d-1}}{(d-1)!}\varpi^{d-1},\]
since \(\hat{\epsilon}<\left(\frac{1}{4n!}\right)^{\frac{1}{n-k_{0}}}\).
We run a continuity argument for (9.24). Let
\[\mathbf{I}_{\hat{\epsilon}}:=\{t\in(0,1]:(9.24)\text{ has a smooth solution for }\hat{\epsilon}\in(0,\hat{\epsilon}_{Y})\}. \tag{9.30}\]
By Lemma 9.7, \(\hat{\omega}_{0}\) is a subsolution if \(t=1\) and \(\hat{\epsilon}<\hat{\epsilon}_{Y}\). We conclude that \(1\in\mathbf{I}_{\hat{\epsilon}}\). \(\mathbf{I}_{\hat{\epsilon}}\) is clearly open as the cone condition is an open condition. Let
\[t_{\hat{\epsilon}}:=\inf\mathbf{I}_{\hat{\epsilon}}.\]
_Remark 9.8_.: We may and will assume that \(t_{\hat{\epsilon}}=0\) without loss of generality. Indeed, if \(t_{\hat{\epsilon}}=t^{\prime}_{\hat{\epsilon}}>0\), we may replace \(\omega_{0}\) by \(\hat{\omega}_{0}(t^{\prime}_{\hat{\epsilon}},\hat{\epsilon})\), which is Kahler since \(t^{\prime}_{\hat{\epsilon}}>0\). We can go through the same induction argument to prove Theorem 9.1. Then (9.24) admits a solution in \([\hat{\omega}_{0}(t^{\prime}_{\hat{\epsilon}},\hat{\epsilon})]\). By the openness of the cone condition, we have \(t_{\hat{\epsilon}}<t^{\prime}_{\hat{\epsilon}}\), which is a contradiction. Therefore, from now on, we assume that \(t_{\hat{\epsilon}}=0\).
## 10. Mass concentration
In this section, we prove a mass concentration result for our PDE based on techniques from [14, 4, 32]. We use the notation of the previous sections.
By (9.30), for \(t\in\mathbf{I}_{\hat{\epsilon}}\), there exists \(\omega_{t}\in[\hat{\omega}_{0}(t,\hat{\epsilon})]\) solving
\[\left(\exp\omega_{t}\wedge\left(1-\hat{\Lambda}\right)-c_{t,\hat{\epsilon}} \exp\hat{\rho}\right)^{[d]}=0, \tag{10.1}\]
for some \(c_{t,\hat{\epsilon}}>0\). Here \(\hat{\omega}_{0}\), \(\hat{\Lambda}\), \(\hat{\rho}\) are defined in (9.11) and (9.12). As before, we denote \(\Omega_{t}=\exp\omega_{t}\).
The main result of this section is the following theorem.
**Theorem 10.1**.: _Under the same assumptions as in Theorem 9.1, there exist \(\delta>0\), a finite covering \(\mathscr{P}=\{B_{j,3R}\}_{j\in\mathcal{J}}\) of \(\hat{Y}\) by Euclidean balls, \(\varepsilon>0\), \(r_{0}>0\), and a Kahler current \(\Upsilon\in(1-\delta)[\omega_{0}]\) s.t. for all \(0<r<r_{0}\), \(j\in\mathcal{J}\),_
\[\mathcal{P}_{\Lambda}(\Upsilon^{(r)})<1-\varepsilon,\text{ in }B_{j,R},\]
_where \(\Upsilon^{(r)}\) is given in Definition 8.8. Furthermore, \(\Upsilon\) has positive Lelong numbers along \(S_{\hat{Y}}\)._
_Remark 10.2_.: We may simplify our argument by making the following assumptions without loss of generality:
1. \(d>k_{0}\), by Lemma 9.4;
2. \(\mathbf{I}_{\hat{\epsilon}}=(0,1]\), which may be achieved by arguments in Remark 9.8;
3. \(\hat{\Lambda}^{[d]}=0\): since the subsolution condition only involves the components of degree less than \(d\), if \(\hat{\Lambda}^{[d]}\neq 0\) we may take another \(c^{\prime}_{t,\hat{\epsilon}}\geq 0\) such that \[\int_{\hat{Y}}\exp\omega_{t}\wedge(1-\sum_{k=1}^{d-1}\hat{\Lambda}^{[k]})=c^{\prime}_{t,\hat{\epsilon}}\int_{\hat{Y}}\exp\hat{\rho};\] therefore, after replacing \(\hat{\Lambda}\) by \(\sum_{k=1}^{d-1}\hat{\Lambda}^{[k]}\) and \(c_{t,\hat{\epsilon}}\) by \(c^{\prime}_{t,\hat{\epsilon}}\), the cone condition does not change and the solvability is not affected;
4. \(\hat{\Lambda}\) is \(k_{0}\)-UP with respect to \(\hat{\rho}\) and a uniform constant \(m^{\prime}>0\). This reduction is possible due to Lemma 9.6.
### The lifted equation on the product manifold
Following [4], we consider a new equation on the product space \(\mathcal{Y}=\hat{Y}\times\hat{Y}\). We fix some notations: On \(\mathcal{Y}\), let \(\pi_{1}:\mathcal{Y}\rightarrow\hat{Y}\), \((x,y)\mapsto x\) and \(\pi_{2}:\mathcal{Y}\rightarrow\hat{Y}\), \((x,y)\mapsto y\) be canonical projections. Let \(\hat{Y}_{1}=\pi_{1}(\mathcal{Y})\), \(\hat{Y}_{2}=\pi_{2}(\mathcal{Y})\), and let \(\iota_{i}:\hat{Y}_{i}\hookrightarrow\mathcal{Y}\) be canonical embeddings. We denote
\[\Lambda_{x}:=\pi_{1}^{*}\hat{\Lambda},\ \Lambda_{y}:=\frac{1}{d}\pi_{2}^{*} \hat{\rho}. \tag{10.2}\]
We make the following observation: \(\hat{\rho}\) can be viewed as a solution to the equation of \(\omega\in[\hat{\rho}]:\)
\[(\frac{1}{d}\hat{\rho}\wedge\exp\omega)^{[d]}=\frac{\omega^{d}}{d!}\]
because of the simple fact
\[\left(\frac{1}{d}\hat{\rho}\wedge\exp\hat{\rho}\right)^{[d]}=\frac{\hat{\rho} ^{d}}{d!}. \tag{10.3}\]
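Indeed, since \(\hat{\rho}\) is a \((1,1)\)-form, only the \((d-1,d-1)\) part of \(\exp\hat{\rho}\) contributes to the left hand side of (10.3):
\[\left(\frac{1}{d}\hat{\rho}\wedge\exp\hat{\rho}\right)^{[d]}=\frac{1}{d}\,\hat{\rho}\wedge\frac{\hat{\rho}^{d-1}}{(d-1)!}=\frac{\hat{\rho}^{d}}{d!}.\]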
Let
\[\boldsymbol{\Lambda}:=\Lambda_{x}+\Lambda_{y},\ \boldsymbol{\rho}:=\pi_{1}^{*} \hat{\rho}+\pi_{2}^{*}\hat{\rho},\ \boldsymbol{\varpi}=\pi_{1}^{*}\varpi+\pi_{2}^{*}\varpi. \tag{10.4}\]
Let \(\{B_{j}\}\) be a finite open cover of \(\Delta=\{(x,x):x\in\hat{Y}\}\subset\hat{Y}\times\hat{Y}\) with balls \(B_{j}\). Let \(\{\theta_{j}^{2}\}\) be a partition of unity subordinate to \(\{B_{j}\}\). Let \(g_{j,k}\) be defining functions of \(\Delta\) in \(B_{j}\). Let
\[\psi=\frac{1}{2}\log(\sum_{j,k}\theta_{j}^{2}|g_{j,k}|^{2}),\ \psi_{s}=\frac{1}{2}\log(\sum_{j,k}\theta_{j}^{2}|g_{j,k}|^{2}+s^{2}). \tag{10.5}\]
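For orientation, in a single chart meeting \(\Delta\) where one may take \(\theta_{j}\equiv 1\), \(d=1\), and \(g_{j,1}=x-y\), these functions are simply
\[\psi=\log|x-y|,\qquad\psi_{s}=\frac{1}{2}\log\left(|x-y|^{2}+s^{2}\right),\]
so \(\psi_{s}\) is a smooth approximation which decreases, as \(s\to 0^{+}\), to the logarithm of the distance to the diagonal.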
Define
\[\boldsymbol{\rho}_{s}=\boldsymbol{\rho}+\delta_{\rho}\sqrt{-1}\partial\bar{ \partial}\left(\pi_{1}^{*}\phi_{S}+\pi_{2}^{*}\phi_{S}\right)+\delta_{\rho}^{ 2}\sqrt{-1}\partial\bar{\partial}\psi_{s}. \tag{10.6}\]
\(\delta_{\rho}\) is chosen small but fixed so that \(\boldsymbol{\rho}_{s}\) is still Kahler. We will determine \(\delta_{\rho}\) later in Proposition 10.4. Let
\[f_{t,s}=\left(\frac{\boldsymbol{\rho}_{s}^{2d}}{\boldsymbol{\rho}^{2d}}-(1+c_ {t,\hat{\epsilon},\delta_{\rho}})\right)+c_{t,\hat{\epsilon}}. \tag{10.7}\]
where \(c_{t,\hat{\epsilon},\delta_{\rho}}\) is chosen such that
\[\int_{\mathcal{Y}}\left(\boldsymbol{\rho}_{s}^{2d}-(1+c_{t,\hat{\epsilon}, \delta_{\rho}})\boldsymbol{\rho}^{2d}\right)=0.\]
Note, \(c_{t,\hat{\epsilon},\delta_{\rho}}\) is uniformly bounded for any \(t\in(0,1)\) and \(\hat{\epsilon}\in(0,\hat{\epsilon}_{Y})\).
We denote \(\boldsymbol{\tau}=(t,s)\) with \(t,s>0\). We consider the following lifted equation on \(\mathcal{Y}\):
\[2\frac{\boldsymbol{\omega}_{\tau}^{2d}}{(2d)!}=\sum_{k=1}^{d-1}\boldsymbol{ \Lambda}^{[k]}\wedge\frac{\boldsymbol{\omega}_{\tau}^{2d-k}}{(2d-k)!}+f_{ \boldsymbol{\tau}}\frac{\boldsymbol{\rho}^{2d}}{(2d)!}, \tag{10.8}\]
for \(\boldsymbol{\omega}_{\boldsymbol{\tau}}\in[\pi_{1}^{*}\omega_{t}+\pi_{2}^{*}\hat{ \rho}]\). If we denote
\[\boldsymbol{\Omega}_{\boldsymbol{\tau}}:=\exp\boldsymbol{\omega}_{\boldsymbol{ \tau}},\ \mathbf{P}:=\exp\boldsymbol{\rho}, \tag{10.9}\]
then we may re-write (10.8) as
\[2\boldsymbol{\Omega}_{\boldsymbol{\tau}}^{[2d]}=(\boldsymbol{\Lambda}\wedge \boldsymbol{\Omega}_{\boldsymbol{\tau}})^{[2d]}+f_{\boldsymbol{\tau}}\mathbf{P }^{[2d]}. \tag{10.10}\]
**Lemma 10.3**.: _Notations as above. The canonical splitting gives a labeled splitting \(\mathcal{O}\) with respect to \(\boldsymbol{\rho}\). Moreover, \(\boldsymbol{\Lambda}\) is \(\mathcal{O}\)-UP._
Proof.: Following arguments in Example 2.8, the product structure gives the labeled splitting
\[\mathcal{O}_{(x,y)}=\{2,(d,d),\{\mathcal{T}_{x}\hat{Y}_{1},\mathcal{T}_{y} \hat{Y}_{2}\},(k_{0},1)\}.\]
By Lemma 9.6, we have
\[\boldsymbol{\Lambda}\geq\min\{m^{\prime},\frac{1}{d}\}\left(\frac{\hat{\rho} ^{k_{0}}(x)}{k_{0}!}+\hat{\rho}(y)\right).\]
Hence, \(\boldsymbol{\Lambda}\) is \(\mathcal{O}\)-UP with uniform constant \(m^{\prime\prime}=\min\{m^{\prime},\frac{1}{d}\}\).
**Proposition 10.4**.: _Notations as above. For \(\delta_{\rho}\) small, there is a smooth solution to (10.8) if \(\Lambda\) satisfies condition **H1**._
Proof.: By Lemma 10.3, \(\boldsymbol{\Lambda}\) satisfies \(\mathcal{O}\)-UP condition with a uniform constant \(m^{\prime\prime}>0\). Let
\[f_{\min}=-\min\left\{\frac{m^{\prime\prime}}{8d+2}\gamma_{\min}(\frac{2\kappa }{m^{\prime\prime}},2,(d,d),(k_{0},1)),\frac{\kappa\int_{\mathcal{Y}}(\pi_{1} ^{*}\omega_{t}+\pi_{2}^{*}\hat{\rho})^{2d}}{2\int_{\mathcal{Y}}\boldsymbol{ \rho}^{2d}}\right\}.\]
Note \(f_{\min}<-c<0\) where \(c\) can be chosen independent of \(\hat{\epsilon}\) and \(t\). From Lemma 10.10, there exists a uniform small \(\delta_{\rho}\) independent of \(\hat{\epsilon}\) and \(t\) such that \(f_{\boldsymbol{\tau}}>f_{\min}\). Therefore, \(\boldsymbol{\Lambda}+f_{\boldsymbol{\tau}}\mathbf{P}^{[2d]}\) satisfies condition **H2**. It is easy to check that the integrals of both sides of (10.8) match. Thus, applying Theorem 1.7, one only needs to check the cone condition.
Let
\[\boldsymbol{\omega}_{0}=\pi_{1}^{*}\omega_{t}+\pi_{2}^{*}\rho,\ \boldsymbol{\Omega}_{0}=\exp\boldsymbol{\omega}_{0},\ \hat{P}=\exp\hat{\rho}. \tag{10.11}\]
We claim \(\boldsymbol{\omega}_{0}\in\mathcal{C}_{\boldsymbol{\Lambda}}^{2}\). Since
\[\boldsymbol{\Omega}_{0}^{[2d-1]}=\Omega_{t}(x)^{[d]}\wedge P(y)^{[d-1]}+\Omega _{t}(x)^{[d-1]}\wedge P(y)^{[d]}, \tag{10.12}\]
we have
\[(\boldsymbol{\Lambda}\wedge\boldsymbol{\Omega}_{0})^{[2d-1]} =(\Lambda_{x}\wedge\Omega_{t}(x))^{[d]}\wedge\hat{P}(y)^{[d-1]}+( \Lambda_{x}\wedge\Omega_{t}(x))^{[d-1]}\wedge\hat{P}(y)^{[d]}\] \[+\Omega_{t}(x)^{[d-1]}\wedge\left(\Lambda_{y}\wedge\hat{P}(y) \right)^{[d]}+\Omega_{t}(x)^{[d]}\wedge(\Lambda_{y}\wedge\hat{P}(y))^{[d-1]}. \tag{10.13}\]
By equation (10.1),
\[(\boldsymbol{\Lambda}\wedge\boldsymbol{\Omega}_{0})^{[2d-1]} =\left(\Omega_{t}-c_{t,\hat{\epsilon}}\hat{P}(x)\right)^{[d]}\wedge \hat{P}(y)^{[d-1]}+(\Lambda_{x}\wedge\Omega_{t})^{[d-1]}\wedge\hat{P}(y)^{[d]}\] \[+\Omega_{t}{}^{[d-1]}\wedge\hat{P}(y)^{[d]}+\Omega_{t}(x)^{[d]} \wedge(\Lambda_{y}\wedge\hat{P}(y))^{[d-1]}. \tag{10.14}\]
Since \(\omega_{t}\) satisfies the corresponding cone condition of equation (10.1), we have
\[(\Lambda_{x}\wedge\Omega_{t})^{[d-1]}<\Omega_{t}{}^{[d-1]}. \tag{10.15}\]
Similarly, \(\rho\) satisfies equation (10.3), which implies
\[(\Lambda_{y}\wedge\hat{P}(y))^{[d-1]}<\hat{P}(y)^{[d-1]}. \tag{10.16}\]
Combining (10.15), (10.16), and (10.14), we obtain
\[(\boldsymbol{\Lambda}\wedge\boldsymbol{\Omega}_{0})^{[2d-1]} <\Omega_{t}(x)^{[d]}\wedge\hat{P}(y)^{[d-1]}+\Omega_{t}(x)^{[d-1 ]}\wedge\hat{P}(y)^{[d]}\] \[+\Omega_{t}(x)^{[d-1]}\wedge\hat{P}(y)^{[d]}+\Omega_{t}(x)^{[d]} \wedge\hat{P}(y)^{[d-1]}\] \[=2\Omega_{t}(x)^{[d]}\wedge\hat{P}(y)^{[d-1]}+2\Omega_{t}(x)^{[d- 1]}\wedge\hat{P}(y)^{[d]}\] \[=2\boldsymbol{\Omega}_{0}^{[2d-1]}.\]
Therefore, \(\boldsymbol{\omega}_{0}\in\mathcal{C}_{\boldsymbol{\Lambda}}^{2}\) satisfies the cone condition. By Theorem 1.7, there exists a smooth solution to (10.8).
We illustrate the construction of the lifted equation with the following example.
**Example 10.5**.: Suppose that \(Y\) is smooth of dimension \(3\) and \(\Lambda=\rho^{2}\). Then equations (10.1) and (10.3) imply that
\[\frac{\omega_{t}^{3}}{3!}=\rho^{2}\wedge\omega_{t}+c_{t}\frac{\rho^{3}}{3!}, \ \frac{\rho^{3}}{3!}=\frac{\rho}{3}\wedge\frac{\rho^{2}}{2!}. \tag{10.17}\]
Let \(\boldsymbol{\omega}_{0}=\pi_{1}^{*}\omega_{t}+\pi_{2}^{*}\rho\) and \(\boldsymbol{\omega}\in[\boldsymbol{\omega}_{0}]\). The lifted equation on \(\mathcal{Y}\) is
\[2\frac{\boldsymbol{\omega}^{6}}{6!}=\pi_{1}^{*}\rho^{2}\wedge\frac{ \boldsymbol{\omega}^{4}}{4!}+\frac{1}{3}\pi_{2}^{*}\rho\wedge\frac{\boldsymbol {\omega}^{5}}{5!}+f_{t,s}\frac{(\pi_{1}^{*}\rho+\pi_{2}^{*}\rho)^{6}}{6!}. \tag{10.18}\]
We use (10.17) and corresponding cone conditions to obtain that
\[2\omega_{0}^{5}/5! =2\frac{\pi_{1}^{*}\omega_{t}^{3}}{3!}\wedge\frac{\pi_{2}^{*}\rho ^{2}}{2}+2\frac{\pi_{1}^{*}\omega_{t}^{2}}{2}\wedge\frac{\pi_{2}^{*}\rho^{3}} {3!}\] \[>\pi_{1}^{*}\left(\rho^{2}\wedge\omega_{t}\right)\wedge\frac{\pi_ {2}^{*}\rho^{2}}{2}+\pi_{1}^{*}\rho^{2}\wedge\frac{\pi_{2}^{*}\rho^{3}}{3!}\] \[+\frac{\pi_{1}^{*}\omega_{t}^{3}}{3!}\wedge\pi_{2}^{*}\rho\wedge \frac{\pi_{2}^{*}\rho}{3}+\frac{\pi_{1}^{*}\omega_{t}^{2}}{2}\wedge\frac{\pi_ {2}^{*}\rho^{2}}{2}\wedge\frac{\pi_{2}^{*}\rho}{3}\] \[=\pi_{1}^{*}\rho^{2}\wedge\frac{\boldsymbol{\omega}_{0}^{3}}{3!} +\pi_{2}^{*}\frac{\rho}{3}\wedge\frac{\boldsymbol{\omega}_{0}^{4}}{4!}.\]
Thus \(\boldsymbol{\omega}_{0}\) is a subsolution to (10.18).
Suppose that \(\mathbf{\omega}_{\mathbf{\tau}}\) is a solution to the equation (10.8). At \((x,y)\in\mathcal{Y}\), we pick coordinates \(\{x^{i}\}\), \(\{y^{i}\}\) near \(x\) and \(y\), respectively. Then \(\mathbf{\omega}_{\mathbf{\tau}}\) is represented by a Hermitian matrix
\[\mathbf{A}=\left(\begin{array}{cc}H&D\\ D^{\dagger}&V\end{array}\right).\]
We write
\[\mathbf{\omega}_{\mathbf{\tau}}=\mathbf{\omega}_{x}+\mathbf{\omega}_{y}+\mathbf{\omega}_{m}+\bar{ \mathbf{\omega}}_{m}, \tag{10.19}\]
where
\[\mathbf{\omega}_{x}=\pi_{1}^{*}\iota_{1}^{*}\mathbf{\omega}_{\mathbf{\tau}}=\frac{\sqrt{-1 }}{2}H_{i\bar{j}}dx^{i}\wedge d\bar{x}^{j}, \tag{10.20}\]
\[\mathbf{\omega}_{y}=\pi_{2}^{*}\iota_{2}^{*}\mathbf{\omega}_{\mathbf{\tau}}=\frac{\sqrt{-1 }}{2}V_{i\bar{j}}dy^{i}\wedge d\bar{y}^{j}, \tag{10.21}\]
\[\mathbf{\omega}_{m}=\frac{\sqrt{-1}}{2}D_{i\bar{j}}dx^{i}\wedge d\bar{y}^{j}. \tag{10.22}\]
Let
\[\hat{c}=\hat{c}(t,\hat{\epsilon}):=\left[\frac{\hat{\rho}^{d}}{d!}\right] \cdot\hat{Y}. \tag{10.23}\]
Define
\[\omega_{\mathbf{\tau}} :=\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}}\left(\Lambda_{y} \wedge\mathbf{\Omega}_{\mathbf{\tau}}\right)^{[d+1]}\] \[=\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}}\left((\Lambda_{y} \wedge\exp\mathbf{\omega}_{y})^{[d]}\wedge\mathbf{\omega}_{x}+(\Lambda_{y}\wedge\exp \mathbf{\omega}_{y})^{[d-1]}\wedge\mathbf{\omega}_{m}\wedge\bar{\mathbf{\omega}}_{m}\right). \tag{10.24}\]
**Lemma 10.6**.: _Notations as above. We have \(\omega_{\mathbf{\tau}}\in[\hat{\omega}_{0}]\)._
Proof.: Direct computation shows
\[[\omega_{\mathbf{\tau}}] =\frac{1}{\hat{c}}\left[\int_{\{x\}\times\hat{Y}}\left(\Lambda_{ y}\wedge\frac{\mathbf{\omega}_{\mathbf{\tau}}^{d}}{d!}\right)\right]\] \[=\frac{1}{\hat{c}}\left(\frac{1}{d}[\Lambda_{y}]\cdot\frac{[\mathbf{ \omega}_{y}]^{d-1}}{(d-1)!}\cdot\hat{Y}\right)[\omega_{t}]\] \[=\frac{1}{\hat{c}}\left(\frac{[\hat{\rho}]^{d}}{d!}\cdot\hat{Y} \right)[\hat{\omega}_{0}]\] \[=[\hat{\omega}_{0}]. \tag{10.25}\]
We list the following definitions.
\[\mathbf{F}(\boldsymbol{\omega},\boldsymbol{\Lambda}):=\frac{(\boldsymbol{\Lambda} \wedge\exp\boldsymbol{\omega})^{[2d]}}{(\exp\boldsymbol{\omega})^{[2d]}}. \tag{10.26}\]
For convenience, we abuse the notation and write
\[\mathbf{F}\left(\mathbf{A}\right)=\mathbf{F}(\boldsymbol{\omega},\boldsymbol{ \Lambda}), \tag{10.27}\]
and define
\[F_{1}(H):=\frac{(\Lambda_{x}\wedge\exp\boldsymbol{\omega}_{x})^{[d]}}{(\exp \boldsymbol{\omega}_{x})^{[d]}}, \tag{10.28}\]
\[F_{2}(V):=\frac{(\Lambda_{y}\wedge\exp\boldsymbol{\omega}_{y})^{[d]}}{(\exp \boldsymbol{\omega}_{y})^{[d]}}. \tag{10.29}\]
We also define
\[\mathcal{P}_{\boldsymbol{\Lambda}}(\mathbf{A})=\max_{\mathbf{B}\in\overline{\Gamma_{2d\times 2d}^{+}},\|\mathbf{B}\|=1}\lim_{t\to+\infty}\mathbf{F}(\mathbf{A}+t\mathbf{B}), \tag{10.30}\]
\[\mathcal{P}_{1}(H)=\mathcal{P}_{\hat{\Lambda}}(H)=\max_{B\in\overline{\Gamma_{d\times d}^{+}},\|B\|=1}\lim_{t\to+\infty}F_{1}(H+tB), \tag{10.31}\]
\[\mathcal{P}_{2}(V)=\max_{B\in\overline{\Gamma_{d\times d}^{+}},\|B\|=1}\lim_{t\to+\infty}F_{2}(V+tB). \tag{10.32}\]
It is easy to check that \(\mathbf{F},F_{1},F_{2},\mathcal{P}_{\boldsymbol{\Lambda}},\mathcal{P}_{1},\mathcal{P}_{2}\) are all functions satisfying the monotonicity and convexity conditions of Lemma 5.10.
**Lemma 10.7**.: _Notations as above. We have_
\[\mathbf{F}(\mathbf{A})\geq F_{1}(H-DV^{-1}D^{\dagger})+F_{2}(V). \tag{10.33}\]
_Moreover,_
\[\mathcal{P}_{\boldsymbol{\Lambda}}(\mathbf{A})\geq\mathcal{P}_{1}(H-DV^{-1}D^ {\dagger})+F_{2}(V). \tag{10.34}\]
Proof.: We denote
\[\chi=\mathbf{A}^{\bar{i}j}2\sqrt{-1}\frac{\partial}{\partial\bar{z}^{i}}\wedge\frac{\partial}{\partial z^{j}}.\]
Then by Lemma 3.4, \(\mathbf{F}(\mathbf{A})=\langle\boldsymbol{\Lambda},\exp\chi\rangle.\) Denote \(\hat{H}=H-DV^{-1}D^{\dagger}\). Then
\[\mathbf{A}^{-1}=\left(\begin{array}{cc}\hat{H}^{-1}&-\hat{H}^{-1}DV^{-1}\\ -V^{-1}D^{\dagger}\hat{H}^{-1}&V^{-1}+V^{-1}D^{\dagger}\hat{H}^{-1}DV^{-1}\\ \end{array}\right). \tag{10.35}\]
Let
\[\chi_{h} =\hat{H}^{\bar{i}j}2\sqrt{-1}\frac{\partial}{\partial\bar{x}^{i}}\wedge\frac{\partial}{\partial x^{j}}, \tag{10.36}\] \[\chi_{v} =\left(V^{\bar{i}j}+\left(V^{-1}D^{\dagger}\hat{H}^{-1}DV^{-1}\right)^{\bar{i}j}\right)2\sqrt{-1}\frac{\partial}{\partial\bar{y}^{i}}\wedge\frac{\partial}{\partial y^{j}}, \tag{10.37}\] \[\chi_{m} =\left(-\hat{H}^{-1}DV^{-1}\right)^{\bar{i}j}2\sqrt{-1}\frac{\partial}{\partial\bar{x}^{i}}\wedge\frac{\partial}{\partial y^{j}}. \tag{10.38}\]
Then
\[\mathbf{F}(\mathbf{A})=\sum_{k=1}^{d-1}\frac{1}{k!}\langle\mathbf{\Lambda}^{[ k]},\sum_{a+2b+c=k}\frac{k!}{a!c!b!b!}\chi_{h}^{a}\wedge\chi_{v}^{c}\wedge( \chi_{m}\wedge\overline{\chi_{m}})^{b}\rangle. \tag{10.39}\]
Note that \(\chi_{h}^{a}\wedge\chi_{v}^{c}\wedge(\chi_{m}\wedge\overline{\chi_{m}})^{b}\) is a wedge product of some type \((a+b,a+b)\) tensor in \(x\) and type \((b+c,b+c)\) tensor in \(y\). Since \(\mathbf{\Lambda}^{[k]}=\Lambda_{x}^{[k]}+\Lambda_{y}^{[k]}\), for non-vanishing terms in (10.39), these indices satisfy \(a+b=k\) or \(b+c=k\), which implies that \(a=k\) or \(c=k\). Therefore,
\[\mathbf{F}(\mathbf{A})=\sum_{k=1}^{d-1}\frac{1}{k!}\langle\Lambda_{x}^{[k]}, \chi_{h}^{k}\rangle+\langle\Lambda_{y},\chi_{v}\rangle. \tag{10.40}\]
The first term is \(F_{1}(H-DV^{-1}D^{\dagger})\). Since \(V^{-1}D^{\dagger}\hat{H}^{-1}DV^{-1}\) is non-negative, we have \(\langle\Lambda_{y},\chi_{v}\rangle\geq F_{2}(V)\). We have proved (10.33).
Let \(B\) be any \(d\times d\) non-negative Hermitian matrix. Let \(\mathbf{B}=\left(\begin{array}{cc}B&0\\ 0&0\end{array}\right)\). From (10.33),
\[\mathbf{F}(\mathbf{A}+t\mathbf{B})\geq F_{1}(\hat{H}+tB)+F_{2}(V). \tag{10.41}\]
We obtain (10.34) by taking \(t\to\infty\) and then taking the maximum over all \(B\) with \(\|B\|=1\).
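As a consistency check of (10.33), consider the case \(D=0\), using only the representation \(\mathbf{F}(\mathbf{A})=\langle\boldsymbol{\Lambda},\exp\chi\rangle\) from the proof: then \(\hat{H}=H\), \(\chi=\chi_{h}+\chi_{v}\) has no mixed part, and since \(\Lambda_{x}\) pairs only with powers of \(\chi_{h}\) while \(\Lambda_{y}\) pairs only with \(\chi_{v}\),
\[\mathbf{F}(\mathbf{A})=\sum_{k=1}^{d-1}\frac{1}{k!}\langle\Lambda_{x}^{[k]},\chi_{h}^{k}\rangle+\langle\Lambda_{y},\chi_{v}\rangle=F_{1}(H)+F_{2}(V),\]
so (10.33) holds with equality in this case.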
**Lemma 10.8**.: _Notations as above, we have_
\[\Lambda_{y}\wedge\frac{\boldsymbol{\omega}_{y}^{d-2}}{(d-2)!}\wedge\boldsymbol {\omega}_{m}\wedge\bar{\boldsymbol{\omega}}_{m}\geq-\Lambda_{y}\wedge\frac{ \boldsymbol{\omega}_{y}^{d-1}}{(d-1)!}V^{\bar{j}l}D_{i\bar{j}}\overline{D_{k \bar{l}}}\frac{\sqrt{-1}}{2}dx^{i}\wedge d\bar{x}^{k}.\]
Proof.: Notice that
\[\boldsymbol{\omega}_{m}\wedge\overline{\boldsymbol{\omega}}_{m} =\left(\frac{\sqrt{-1}}{2}\right)^{2}D_{i\bar{j}}dx^{i}\wedge d \bar{y}^{j}\wedge\overline{D_{k\bar{l}}}dy^{l}\wedge d\bar{x}^{k}\] \[=-\left(\frac{\sqrt{-1}}{2}\right)^{2}D_{i\bar{j}}\overline{D_{k \bar{l}}}dx^{i}\wedge d\bar{x}^{k}\wedge dy^{l}\wedge d\bar{y}^{j}. \tag{10.42}\]
To prove the lemma, it is sufficient to show that for any \(\zeta=(\zeta^{i})\),
\[\sum_{i,j,k,l}\Lambda_{y}\wedge\left(\frac{\mathbf{\omega}_{y}^{d-2}}{(d-2)!}\wedge \frac{\sqrt{-1}}{2}dy^{l}\wedge d\bar{y}^{j}-\frac{\mathbf{\omega}_{y}^{d-1}}{(d-1)! }V^{\bar{j}l}\right)D_{i\bar{j}}\overline{D_{k\bar{l}}}\zeta^{i}\bar{\zeta}^{k} \leq 0. \tag{10.43}\]
However, since \(F_{2}^{l\bar{j}}(V)\leq 0\), (10.43) is true.
The following lemma illustrates that \(\omega_{\mathbf{\tau}}\) satisfies the cone condition.
**Lemma 10.9**.: _Notations as above, we have_
\[\mathcal{P}_{\hat{\Lambda}}(\omega_{\mathbf{\tau}})<1. \tag{10.44}\]
Proof.: From Lemma 10.8, we see that
\[\omega_{\mathbf{\tau}} =\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}}\Lambda_{y}\wedge \left(\frac{\mathbf{\omega}_{y}^{d-1}}{(d-1)!}\wedge\mathbf{\omega}_{x}+\frac{\mathbf{ \omega}_{y}^{d-2}}{(d-2)!}\wedge\mathbf{\omega}_{m}\wedge\bar{\mathbf{\omega}}_{m}\right)\] \[\geq\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}}\Lambda_{y}\wedge \frac{\mathbf{\omega}_{y}^{d-1}}{(d-1)!}\wedge\left(H_{i\bar{k}}-V^{\bar{j}l}D_{i \bar{j}}\overline{D_{k\bar{l}}}\right)\frac{\sqrt{-1}}{2}dx^{i}\wedge d\bar{x }^{k}. \tag{10.45}\]
Now, we apply \(\mathcal{P}_{1}\) to \(H_{i\bar{k}}-V^{\bar{j}l}D_{i\bar{j}}\overline{D_{k\bar{l}}}\). By Lemma 10.7,
\[\mathcal{P}_{1}(H-V^{\bar{j}l}D_{i\bar{j}}\overline{D_{k\bar{l}}}) \leq\mathcal{P}_{\boldsymbol{\Lambda}}\left(\begin{array}{cc}H&D\\ D^{\dagger}&V\end{array}\right)-F_{2}(V)\] \[<2-F_{2}(V). \tag{10.46}\]
Using the convexity of \(\mathcal{P}\), we have
\[\mathcal{P}_{\hat{\Lambda}}(\omega_{\mathbf{\tau}}) <\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}}\left(\Lambda_{y} \wedge\frac{\mathbf{\omega}_{y}^{d-1}}{(d-1)!}\right)(2-F_{2}(V))\] \[=\frac{1}{\hat{c}}\left[2\hat{c}-\int_{\{x\}\times\hat{Y}}F_{2}^ {2}(V)\frac{\mathbf{\omega}_{y}^{d}}{d!}\right]\] \[\leq\frac{1}{\hat{c}}\left[2\hat{c}-\frac{1}{[\mathbf{\omega}_{y}^{d }/d!]}\left(\int_{\hat{Y}}F_{2}(V)\frac{\mathbf{\omega}_{y}^{d}}{d!}\right)^{2} \right]. \tag{10.47}\]
Notice that \(\int_{\hat{Y}}F_{2}(V)\frac{\boldsymbol{\omega}_{y}^{d}}{d!}=\int_{\hat{Y}}\frac{\hat{\rho}^{d}}{d!}=\hat{c}\). Thus, by (10.47),
\[\mathcal{P}_{\hat{\Lambda}}(\omega_{\mathbf{\tau}})<\frac{1}{\hat{c}}\left[2\hat{ c}-\hat{c}\right]=1. \tag{10.48}\]
We have finished the proof.
### Mass Concentration on \(\Delta\)
Next, we show that any weak limit of \(\mathbf{\omega}_{\mathbf{\tau}}^{d}\), where \(\mathbf{\omega}_{\mathbf{\tau}}\) solves (10.8), as \(s\) converges to \(0\) contains a positive multiple of \([\Delta]\).
**Lemma 10.10**.: _Let \(\mathbf{\rho}_{s}\) be defined as in (10.6). Then we have_
1. _For any_ \(\epsilon>0\)_, there is a_ \(\delta_{\rho}>0\) _s.t._ \(\boldsymbol{\rho}_{s}>(1-\epsilon)\boldsymbol{\rho}+\frac{\delta_{\rho}}{2} \boldsymbol{\varpi}\)_._
2. _Let_ \(V_{s}:=\{z:\psi(z)<\log s\}\) _where_ \(\psi\) _is given in (_10.5_). Let_ \(p\) _be a point of_ \(\Delta\)_. For any open neighborhood_ \(U\) _of_ \(p\)_, there is a_ \(\delta_{1}(U)>0\) _independent of_ \(s\) _s.t._ \[\int_{U\cap V_{s}}\boldsymbol{\rho}_{s}^{2d}\geq\delta_{1}(U)>0.\]
3. _For_ \(p\in\Delta\) _and any open neighborhood_ \(U\) _of_ \(p\)_, there is a_ \(\delta_{2}(U)>0\) _independent of_ \(s\) _such that_ (10.49) \[\int_{U\cap V_{s}}\boldsymbol{\rho}_{s}^{d}\wedge\boldsymbol{\varpi}^{d}\geq \delta_{2}(U)>0.\]
The proof is the same as in Lemma 2.1 in Demailly-Paun [14].
Let \(\mathbf{1}_{\Delta}\) be the characteristic function of the diagonal \(\Delta\subset\mathcal{Y}\). The following proposition asserts that when \(s\) tends to \(0\), a positive portion of mass of \(\boldsymbol{\omega}_{\boldsymbol{\tau}}^{d}\) concentrates on \(\Delta\). The proof is similar to the proof of Proposition 2.6 of Demailly-Paun. For readers' convenience, we write a detailed proof here.
**Proposition 10.11**.: _Let \(T\) be a weak limit of \(\boldsymbol{\omega}_{\boldsymbol{\tau}}^{d}\) when \(s\to 0\) for some \(t\in(0,1)\). Then \(\mathbf{1}_{\Delta}T\) is a positive closed current and there is a constant \(\epsilon_{T}\) s.t. \(\mathbf{1}_{\Delta}T=\epsilon_{T}[\Delta]\) and \(\epsilon_{T}>\epsilon_{\Delta}\), where \(\epsilon_{\Delta}(\hat{Y})>0\) is a constant independent of \(t,\hat{\epsilon}\)._
Proof.: We first prove two claims.
**Claim 1:** For any \(p\in\Delta\) and \(U\) a neighborhood of \(p\), there is a constant \(\delta(U)>0\) independent of \(s\) s.t. \(\int_{U\cap V_{s}}\boldsymbol{\omega}_{\boldsymbol{\tau}}^{d}\wedge \boldsymbol{\varpi}^{d}>\delta(U)\) for small \(s\).
Notice that
\[2\frac{\boldsymbol{\omega}_{\boldsymbol{\tau}}^{2d}}{(2d)!} \geq f_{\boldsymbol{\tau}}\frac{\boldsymbol{\rho}^{2d}}{(2d)!}\] \[=\frac{1}{(2d)!}\left(\boldsymbol{\rho}_{s}^{2d}+(c_{t}-1) \boldsymbol{\rho}^{2d}\right), \tag{10.50}\]
where \(c_{t}=c_{t,\hat{\epsilon}}-c_{t,\hat{\epsilon},\delta_{\rho}}\) has a uniform lower bound for all \(t,\hat{\epsilon}\). Let \(\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{2d}\) be the eigenvalues of \(\boldsymbol{\omega}_{\boldsymbol{\tau}}\) with respect to \(\boldsymbol{\rho}_{s}\). From (10.50), we have
\[2\lambda_{1}\cdots\lambda_{2d}\frac{\boldsymbol{\rho}_{s}^{2d}}{(2d)!}-\frac{ c_{t}-1}{(2d)!}\boldsymbol{\rho}^{2d}\geq\frac{\boldsymbol{\rho}_{s}^{2d}}{(2d)!}. \tag{10.51}\]
We have
\[\boldsymbol{\omega}_{\boldsymbol{\tau}}^{d}\geq\lambda_{1}\cdots\lambda_{d} \boldsymbol{\rho}_{s}^{d}. \tag{10.52}\]
\[\frac{\boldsymbol{\omega}_{\boldsymbol{\tau}}^{d}}{d!}\wedge\frac{\boldsymbol{\rho}_{s}^{d}}{d!}>\lambda_{d+1}\cdots\lambda_{2d}\frac{\boldsymbol{\rho}_{s}^{2d}}{(2d)!}. \tag{10.53}\]
Thus
\[\int_{\mathcal{Y}}\lambda_{d+1}\cdots\lambda_{2d}\frac{\mathbf{\rho}_{s}^{2d}}{(2d)!} \leq\int_{\mathcal{Y}}\frac{\mathbf{\omega}_{\tau}^{d}}{d!}\wedge\frac{\mathbf{\rho}_{s}^{d}}{d!}\] \[=\int_{\mathcal{Y}}\frac{\mathbf{\omega}_{0}^{d}}{d!}\wedge\frac{\mathbf{\rho}^{d}}{d!}\] \[\leq C(\hat{Y}).\]
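Both (10.52) and (10.53) are pointwise linear-algebra facts. At a point, choose coordinates in which \(\boldsymbol{\rho}_{s}=\sum_{i}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{i}\) and \(\boldsymbol{\omega}_{\boldsymbol{\tau}}=\sum_{i}\lambda_{i}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{i}\). Then
\[\frac{\boldsymbol{\omega}_{\boldsymbol{\tau}}^{d}}{d!}=\sum_{|I|=d}\Big(\prod_{i\in I}\lambda_{i}\Big)\bigwedge_{i\in I}\frac{\sqrt{-1}}{2}dz^{i}\wedge d\bar{z}^{i},\qquad\frac{\boldsymbol{\omega}_{\boldsymbol{\tau}}^{d}}{d!}\wedge\frac{\boldsymbol{\rho}_{s}^{d}}{d!}=\Big(\sum_{|I|=d}\prod_{i\in I}\lambda_{i}\Big)\frac{\boldsymbol{\rho}_{s}^{2d}}{(2d)!}.\]
Since \(0<\lambda_{1}\leq\cdots\leq\lambda_{2d}\), every product \(\prod_{i\in I}\lambda_{i}\) is at least \(\lambda_{1}\cdots\lambda_{d}\), which gives (10.52), while keeping only the single term \(I=\{d+1,\dots,2d\}\) in the second identity gives (10.53).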
Given \(U\), let \(\delta_{2}(U)\) be given in (10.49). For any \(\delta>0\) s.t. \(\left(\left(\frac{2}{\delta_{\rho}}\right)^{d}+1\right)\delta<(1-2^{-d})\delta_{2}\), let \(E_{\delta}\) be the set of points in \(\mathcal{Y}\) s.t. \(\lambda_{d+1}\cdots\lambda_{2d}>C(\hat{Y})/\delta\). Then, it is clear that
\[\int_{E_{\delta}}\mathbf{\rho}_{s}^{2d}\leq\delta. \tag{10.54}\]
Therefore, we have
\[\int_{U\cap V_{s}\setminus E_{\delta}}\mathbf{\omega}_{\tau}^{d}\wedge\mathbf{\varpi}^{d} \geq\int_{U\cap V_{s}\setminus E_{\delta}}\lambda_{1}\cdots\lambda_{d}\mathbf{\rho}_{s}^{d}\wedge\mathbf{\varpi}^{d}\] \[=\int_{U\cap V_{s}\setminus E_{\delta}}\frac{\lambda_{1}\cdots\lambda_{2d}}{\lambda_{d+1}\cdots\lambda_{2d}}\mathbf{\rho}_{s}^{d}\wedge\mathbf{\varpi}^{d}\] \[\geq\frac{\delta}{C(\hat{Y})}\left(\int_{U\cap V_{s}\setminus E_{\delta}}\lambda_{1}\cdots\lambda_{2d}\mathbf{\rho}_{s}^{d}\wedge\mathbf{\varpi}^{d}\right).\]
From (10.51), we have
\[\lambda_{1}\cdots\lambda_{2d}\geq\frac{1}{2}+\frac{c_{t}-1}{2}\frac{\mathbf{\rho} ^{2d}}{\mathbf{\rho}_{s}^{2d}}. \tag{10.55}\]
We assume that \(c_{t}<1\) since otherwise the right hand side of (10.55) is \(\geq\frac{1}{2}\) and the proof is easier. From Lemma 10.10, we choose \(\delta_{\rho}\) small so that \(\mathbf{\rho}_{s}>\frac{\delta_{\rho}}{2}\mathbf{\varpi}\). Then
\[\frac{c_{t}-1}{2}\cdot\frac{\mathbf{\rho}^{2d}}{\mathbf{\rho}_{s}^{2d}} \cdot\mathbf{\rho}_{s}^{d}\wedge\mathbf{\varpi}^{d} \geq\frac{c_{t}-1}{2}\cdot\frac{\mathbf{\rho}^{2d}}{\mathbf{\rho}_{s}^{2 d}}\cdot\mathbf{\rho}_{s}^{2d}2^{d}\delta_{\rho}^{-d}\] \[=\left(\frac{2}{\delta_{\rho}}\right)^{d}\frac{c_{t}-1}{2}\cdot \mathbf{\rho}^{2d}.\]
Therefore, we see that
\[\int_{U\cap V_{s}\setminus E_{\delta}}\mathbf{\omega}_{\tau}^{d}\wedge\mathbf{\varpi }^{d}\geq\frac{\delta}{C(\hat{Y})}\int_{U\cap V_{s}\setminus E_{\delta}} \left(\frac{1}{2}\mathbf{\rho}_{s}^{d}\wedge\mathbf{\varpi}^{d}+\left(\frac{2}{\delta _{\rho}}\right)^{d}\frac{c_{t}-1}{2}\cdot\mathbf{\rho}^{2d}\right). \tag{10.56}\]
Now, for sufficiently small \(s\) (independent of \(t,\hat{\epsilon}\)), we have
\[-\int_{U\cap V_{s}}\left(\left(\frac{2}{\delta_{\rho}}\right)^{d}\frac{c_{t}-1 }{2}\cdot\mathbf{\rho}^{2d}\right)<2^{-d}\delta_{2}. \tag{10.57}\]
Also, from (10.54),
\[\int_{U\cap V_{s}\setminus E_{\delta}}\left(\frac{1}{2}\mathbf{\rho}_{s}^ {d}\wedge\mathbf{\varpi}^{d}\right) =\frac{1}{2}\left(\int_{U\cap V_{s}}\mathbf{\rho}_{s}^{d}\wedge\mathbf{ \varpi}^{d}-\int_{E_{\delta}}\mathbf{\rho}_{s}^{d}\wedge\mathbf{\varpi}^{d}\right)\] \[\geq\frac{1}{2}\left(\delta_{2}-\left(\frac{2}{\delta_{\rho}} \right)^{d}\delta\right). \tag{10.58}\]
Thus, by (10.56),(10.57), and (10.58), we have
\[\int_{U\cap V_{s}\setminus E_{\delta}}\mathbf{\omega}_{\mathbf{\tau}}^{d} \wedge\mathbf{\varpi}^{d} \geq\frac{\delta}{C(\hat{Y})}\cdot\frac{1}{2}\left(\delta_{2}- \left(\frac{2}{\delta_{\rho}}\right)^{d}\delta-2^{-d}\delta_{2}\right)\] \[\geq\frac{\delta^{2}}{2C(\hat{Y})}. \tag{10.59}\]
We have proved Claim 1.
**Claim 2:** \(\mathbf{\omega}_{\mathbf{\tau}}^{d}\) has a uniform upper bound in mass.
In fact, it is easy to check that
\[\int_{\mathcal{Y}}\mathbf{\omega}_{\mathbf{\tau}}^{d}\wedge\mathbf{\varpi}^{d} =\int_{\mathcal{Y}}\hat{\mathbf{\omega}}_{0}^{d}\wedge\mathbf{\varpi}^{d}\] \[=[\pi_{1}^{*}\omega_{t}+\frac{1}{d}\pi_{2}^{*}\hat{\rho}]^{d}\cdot[\mathbf{\varpi}]^{d}\] \[\leq\text{Const.}\]
By Claims 1 and 2, if \(U\) is a neighborhood of a point \(p\in\Delta\), any weak limit \(T\) of \(\mathbf{\omega}_{\mathbf{\tau}}^{d}\) contains positive mass in \(U\cap\Delta\). By the Skoda-El Mir extension theorem (Theorem III.2.3 of [13]) and Corollary III.2.14 of [13],
\[\mathbf{1}_{\Delta}T=\epsilon_{T}[\Delta].\]
### The cone condition for the limit current
We continue our discussion. \(\omega_{\boldsymbol{\tau}}\) satisfies the cone condition by Lemma 10.9 and, by Proposition 10.11, it converges weakly to a positive current. However, \(\omega_{\boldsymbol{\tau}}\in[\hat{\omega}_{0}]\) instead of \((1-\delta)[\omega_{0}]\), and the cone condition may degenerate when passing to the limit. Thus, to get the positive current \(\Upsilon\) in Theorem 10.1, we need more precise estimates.
Let \(\eta>0\). We denote by \(\Delta_{\eta}\) the \(\eta\)-neighborhood of \(\Delta\) in \(\mathcal{Y}\) with respect to \(\boldsymbol{\varpi}\). Define the following forms
\[\omega_{\mathbf{\tau},\eta}^{\prime}=\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y} \cap\Delta_{\eta}}\left(\Lambda_{y}\wedge\mathbf{\Omega}_{\mathbf{\tau}}\right)^{[d+1]}, \tag{10.60}\]
\[\omega_{\mathbf{\tau},\eta}^{\prime\prime}=\frac{1}{\hat{c}}\int_{\{x\}\times\hat{ Y}\cap\Delta_{\eta}}F_{2}(V)(1+K)\hat{\omega}_{0}(x)\wedge\frac{\mathbf{\omega}_{y}^{d}}{d!}, \tag{10.61}\]
and
\[\omega_{\boldsymbol{\tau},\eta}=\omega_{\boldsymbol{\tau}}-\omega^{\prime}_{ \boldsymbol{\tau},\eta}+\omega^{\prime\prime}_{\boldsymbol{\tau},\eta}. \tag{10.62}\]
The next two lemmas show that \(\omega_{\boldsymbol{\tau},\eta}\) almost satisfies the cone condition.
**Lemma 10.12**.: _Notations as above, we have_
\[\mathcal{P}_{\hat{\Lambda}}(\omega_{\boldsymbol{\tau},\eta})\leq 1+\frac{2}{\hat{c}}\int_{\{x\}\times\hat{Y}\cap\Delta_{\eta}}F_{2}(V)\frac{\boldsymbol{\omega}_{y}^{d}}{d!}. \tag{10.63}\]
Proof.: From the proof of Lemma 10.9 and Jensen's inequality, we have
\[\mathcal{P}_{\hat{\Lambda}}\left(\omega_{\boldsymbol{\tau},\eta}\right) \leq\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}\setminus\Delta_{ \eta}}F_{2}(V)\left(2-F_{2}(V)\right)\frac{\boldsymbol{\omega}_{y}^{d}}{d!}\] \[+\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}\cap\Delta_{\eta}}F_{2 }(V)\mathcal{P}_{\hat{\Lambda}}((1+K)\hat{\omega}_{0}(x))\frac{\boldsymbol{ \omega}_{y}^{d}}{d!}. \tag{10.64}\]
Now it is straightforward to check
\[\mathcal{P}_{\hat{\Lambda}}((1+K)\hat{\omega}_{0})<1, \tag{10.65}\]
as in the proof of Lemma 9.7. Therefore, by (10.64) and (10.65),
\[\mathcal{P}_{\hat{\Lambda}}\left(\omega_{\boldsymbol{\tau},\eta}\right) \leq\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}}2F_{2}(V)\frac{ \boldsymbol{\omega}_{y}^{d}}{d!}-\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y} \setminus\Delta_{\eta}}F_{2}(V)^{2}\frac{\boldsymbol{\omega}_{y}^{d}}{d!}- \frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}\cap\Delta_{\eta}}F_{2}(V)\frac{ \boldsymbol{\omega}_{y}^{d}}{d!}\] \[\leq 2-\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}}F_{2}(V)^{2} \frac{\boldsymbol{\omega}_{y}^{d}}{d!}+\frac{1}{\hat{c}}\int_{\{x\}\times\hat {Y}\cap\Delta_{\eta}}F_{2}(V)^{2}\frac{\boldsymbol{\omega}_{y}^{d}}{d!}\] \[\leq 2-\frac{1}{\hat{c}^{2}}\left(\int_{\{x\}\times\hat{Y}}F_{2}( V)\frac{\boldsymbol{\omega}_{y}^{d}}{d!}\right)^{2}+\frac{1}{\hat{c}}\int_{\{x\} \times\hat{Y}\cap\Delta_{\eta}}F_{2}(V)^{2}\frac{\boldsymbol{\omega}_{y}^{d}} {d!}\] \[=1+\frac{1}{\hat{c}}\int_{\{x\}\times\hat{Y}\cap\Delta_{\eta}}F_{ 2}(V)^{2}\frac{\boldsymbol{\omega}_{y}^{d}}{d!}\] \[\leq 1+\frac{2}{\hat{c}}\int_{\{x\}\times\hat{Y}\cap\Delta_{\eta}} F_{2}(V)\frac{\boldsymbol{\omega}_{y}^{d}}{d!}.\]
The last line is because \(F_{2}(V)\leq 2\) from (10.34). We have proved the lemma.
**Lemma 10.13**.: _For any \(\varepsilon>0\), \(t\in(0,1)\), there exists a \(\eta_{0}=\eta_{0}(\varepsilon,t,\hat{\epsilon})>0\) s.t. for all \(s\in(0,s_{0})\) and \(0<\eta<\eta_{0}\), it holds_
\[\int_{\Delta_{\eta}}F_{2}(V)\frac{\boldsymbol{\omega}_{y}^{d}}{d!}\wedge\frac {\pi_{1}^{*}\varpi^{d}}{d!}<\varepsilon.\]
Proof.: We may rewrite
\[\int_{\Delta_{\eta}}F_{2}(V)\frac{\boldsymbol{\omega}_{y}^{d}}{d!}\wedge\frac{ \boldsymbol{\pi}_{1}^{*}\varpi^{d}}{d!}=\int_{\Delta_{\eta}}\Lambda_{y}\wedge \frac{\boldsymbol{\omega}_{\boldsymbol{\tau}}^{d-1}}{(d-1)!}\wedge\frac{ \boldsymbol{\pi}_{1}^{*}\varpi^{d}}{d!}. \tag{10.66}\]
To prove the claim, we argue by contradiction. If the lemma is false, then there is an \(\varepsilon>0\), \(t=t_{0}\), a sequence of \(s_{i}\in(0,s_{0})\) and a sequence \(\eta_{i}\to 0\) s.t.
\[\int_{\Delta_{\eta_{i}}}\Lambda_{y}\wedge\frac{\boldsymbol{\omega}_{ \boldsymbol{\tau}_{i}}^{d-1}}{(d-1)!}\wedge\frac{\boldsymbol{\pi}_{1}^{*} \varpi^{d}}{d!}>\varepsilon, \tag{10.67}\]
where \(\boldsymbol{\tau}_{i}=(t_{0},s_{i})\). By weak compactness, replacing by a subsequence, we may assume that
\[\boldsymbol{\omega}_{\boldsymbol{\tau}_{i}}^{d-1}\rightharpoonup T^{\prime},\]
where \(T^{\prime}\) is a closed positive \((d-1,d-1)\)-current. By the Skoda-El Mir extension theorem, \(\boldsymbol{1}_{\Delta}T^{\prime}\) is a positive closed current with support in \(\Delta\). As \(\Delta\) has dimension \(d\), by the first theorem of support ([13], III, Corollary 2.11), \(\boldsymbol{1}_{\Delta}T^{\prime}=0\). Thus,
\[\lim_{i\to\infty}\int_{\Delta_{\eta_{i}}}\Lambda_{y}\wedge\frac{\boldsymbol{ \omega}_{\boldsymbol{\tau}_{i}}^{d-1}}{(d-1)!}\wedge\frac{\boldsymbol{\pi}_{1} ^{*}\varpi^{d}}{d!}=0,\]
which contradicts (10.67). We have finished the proof.
Next, we investigate a weak limiting current of \(\omega_{\boldsymbol{\tau},\eta}\) and its regularization.
To perform a local regularization as in Definition 8.8, we need to choose a finite open ball covering. We pick a finite covering \(\mathscr{P}=\{B_{j,3R}\}_{j\in\mathcal{J}}\) of \(\hat{Y}\) so that each \(B_{j,3R}\) is biholomorphic to a Euclidean ball \(B_{3R}(0)\) in \(\mathbb{C}^{d}\) equipped with the standard Euclidean metric \(g_{j}\). Furthermore, \(B_{j,2R}\simeq B_{2R}(0)\subset\mathbb{C}^{d}\) is also a covering of \(\hat{Y}\). For a small \(\epsilon_{\Lambda}>0\), we can choose a sufficiently fine cover \(\mathscr{P}\) s.t. on each \(B_{j,2R}\), there are constant coefficient positive forms \(\tilde{\Lambda}_{j}\) s.t.
\[\tilde{\Lambda}_{j}^{[k]}\leq\hat{\Lambda}^{[k]}\leq\tilde{\Lambda}_{j}^{[k]}+\epsilon_{\Lambda}\frac{\varpi^{k}}{k!}, \tag{10.68}\]
for some small \(\epsilon_{\Lambda}\) to be chosen later. We may choose \(R\) even smaller such that on \(B_{j,2R}\)
\[\frac{1}{2}\varpi<g_{j}<2\varpi, \tag{10.69}\]
where \(g_{j}\) is the Euclidean metric on \(B_{j,3R}\).
**Lemma 10.14**.: _Notations as above. There exists \(r_{0}>0\) s.t. for any \(\varepsilon>0\), \(t\in(0,1)\), there is an \(\eta_{0}(\varepsilon,t,\hat{\epsilon})>0\) s.t. for \(\eta\in(0,\eta_{0})\), \(s\in(0,s_{0})\), \(r\in(0,r_{0})\), and \(j\in\mathcal{J}\),_
\[\mathcal{P}_{\tilde{\Lambda}_{j}}\left(\omega_{\boldsymbol{\tau},\eta}^{(r)} \right)\leq 1+\varepsilon. \tag{10.70}\]
Proof.: We pick \(r_{0}<R\). At a point \(x\in B_{j,2R}\), we have
\[\mathcal{P}_{\tilde{\Lambda}_{j}}(\omega_{\boldsymbol{\tau},\eta}^{ (r)})(x) =\mathcal{P}_{\tilde{\Lambda}_{j}}\left(\int_{z\in B_{r}(0)}r^{-2d} \vartheta\left(\frac{z}{|r|}\right)\omega_{\boldsymbol{\tau},\eta}(x+z)dV_{ \mathbb{C}^{d}}(z)\right)\] \[\leq\int_{z\in B_{r}(0)}r^{-2d}\vartheta\left(\frac{z}{|r|} \right)\mathcal{P}_{\tilde{\Lambda}_{j}}\left(\omega_{\boldsymbol{\tau},\eta}( x+z)\right)dV_{\mathbb{C}^{d}}(z). \tag{10.71}\]
Since \(\tilde{\Lambda}_{j}\leq\hat{\Lambda}\), we have \(\mathcal{P}_{\tilde{\Lambda}_{j}}\left(\omega_{\boldsymbol{\tau},\eta}(x+z) \right)\leq\mathcal{P}_{\tilde{\Lambda}}\left(\omega_{\boldsymbol{\tau},\eta} (x+z)\right)\). Hence by (10.71), Lemma 10.12, and (10.69),
\[\mathcal{P}_{\tilde{\Lambda}_{j}}(\omega_{\boldsymbol{\tau},\eta}^{(r)})(x) \leq\int_{z\in B_{r}(0)}r^{-2d}\vartheta\left(\frac{z}{|r|}\right)\left(1+\frac{2}{\hat{c}}\int_{\{x+z\}\times\hat{Y}\cap\Delta_{\eta}}F_{2}(V)\frac{\boldsymbol{\omega}_{y}^{d}}{d!}\right)dV_{\mathbb{C}^{d}}(z)\] \[\leq 1+\frac{2}{\hat{c}}2^{2d}\int_{\Delta_{\eta}}F_{2}(V)\frac{\boldsymbol{\omega}_{y}^{d}}{d!}\wedge\frac{\pi_{1}^{*}\varpi^{d}}{d!}\] \[<1+\varepsilon, \tag{10.72}\]
where the last inequality is due to Lemma 10.13 if \(\eta<\eta_{0}(\varepsilon,t,\hat{\epsilon})\).
The following lemma shows that \(\omega_{\boldsymbol{\tau},\eta}^{\prime}\) is almost a Kahler current when \(s\) is small.
**Lemma 10.15**.: _Notations as above, there exist \(r_{0}>0\) and \(\delta_{\Delta}=\frac{\epsilon_{\Delta}}{100\hat{c}}\) s.t. for any \(t\in(0,1)\), there is a \(s_{1}=s_{1}(r_{0},t,\eta)>0\) s.t. for \(s\in(0,s_{1})\), \(r\in(0,r_{0})\),_
\[\left(\omega_{\boldsymbol{\tau},\eta}^{\prime}+100\delta_{\Delta}\sqrt{-1}\partial\bar{\partial}\phi_{S}\right)^{(r)}>20\delta_{\Delta}\varpi. \tag{10.73}\]
Proof.: If (10.73) is false, then there is a sequence \(s_{i}\to 0\) and a point \(x\in B_{j,R}\) s.t.
\[(\omega_{\boldsymbol{\tau}_{i},\eta}^{\prime}+100\delta_{\Delta}\sqrt{-1}\partial\bar{\partial}\phi_{S})^{(r)}(x)<20\delta_{\Delta}\varpi. \tag{10.74}\]
Here \(\boldsymbol{\tau}_{i}=(t,s_{i})\). After passing to a subsequence, we may assume that
\[\frac{\boldsymbol{\omega}_{\boldsymbol{\tau}_{i}}^{d}}{d!}\rightharpoonup T \geq\epsilon_{\Delta}[\Delta]\]
in weak sense by Proposition 10.11.
Fix \(j\) s.t. \(x\in B_{j,R}\subset\hat{Y}\). Let \(v\in\mathcal{T}_{x}\hat{Y}\) be any vector s.t. \(\|v\|_{\varpi}=1\) and we extend it in \(B_{j,2R}\) so that \(v\) has constant coefficients in local coordinates and \(1/2\leq\|v\|_{\varpi}\leq 2\). Let \(\gamma_{v}\) be a \((d-1,d-1)\)-form in \(B_{j,2R}\) such that
\[(\iota_{\bar{v}}\iota_{v}\xi)\,dV_{\mathbb{C}^{d}}=\xi\wedge\gamma_{v}. \tag{10.75}\]
for any \((1,1)\)-form \(\xi\). Since \(v\) has constant coefficients, \(\gamma_{v}\) is a non-negative \((d-1,d-1)\)-form with constant coefficients. At \(x\),
\[\lim_{i\to\infty}(\omega^{\prime}_{\boldsymbol{\tau}_{i},\eta})^{(r)}(v,\bar{v}) =\lim_{i\to\infty}\int_{z\in B_{r}(0)}r^{-2d}\vartheta\left(\frac{z}{|r|}\right)\omega^{\prime}_{\boldsymbol{\tau}_{i},\eta}(z+x)\wedge\gamma_{v}(z+x)\] \[=\lim_{i\to\infty}\frac{1}{\hat{c}}\int_{B_{r}(x)\times\hat{Y}\cap\Delta_{\eta}}r^{-2d}\vartheta\left(\frac{z^{\prime}-x}{|r|}\right)\Lambda_{y}(y)\wedge\frac{\boldsymbol{\omega}^{d}_{\boldsymbol{\tau}_{i}}(z^{\prime},y)}{d!}\wedge\gamma_{v}(z^{\prime})\] \[=\frac{1}{\hat{c}}\int_{B_{r}(x)\times\hat{Y}\cap\Delta_{\eta}}r^{-2d}\vartheta\left(\frac{z^{\prime}-x}{|r|}\right)\Lambda_{y}(y)\wedge T(z^{\prime},y)\wedge\gamma_{v}(z^{\prime})\] \[\geq\frac{\epsilon_{\Delta}}{\hat{c}}\int_{B_{r}(x)}r^{-2d}\vartheta\left(\frac{z^{\prime}-x}{|r|}\right)\hat{\rho}(z^{\prime})\wedge\gamma_{v}(z^{\prime})\] \[\geq 100\delta_{\Delta}\int_{B_{r}(x)}r^{-2d}\vartheta\left(\frac{z^{\prime}-x}{|r|}\right)\hat{\rho}(z^{\prime})\wedge\gamma_{v}(z^{\prime}). \tag{10.76}\]
By (10.76),
\[\lim_{i\to\infty}(\omega^{\prime}_{\boldsymbol{\tau}_{i},\eta}+1 00\delta_{\Delta}\sqrt{-1}\partial\bar{\partial}\phi_{S})^{(r)}(v,\bar{v}) \geq 100\delta_{\Delta}\int_{B_{r}(x)}r^{-2d}\vartheta\left( \frac{z-x}{|r|}\right)\varpi(z)\wedge\gamma_{v}(z)\] \[=100\delta_{\Delta}\int_{B_{r}(x)}r^{-2d}\vartheta\left(\frac{z-x }{|r|}\right)\|v\|_{\varpi}^{2}dV_{\mathbb{C}^{d}}\] \[\geq 25\delta_{\Delta}, \tag{10.77}\]
for \(r<r_{0}<R\). Since \(v\) is chosen arbitrarily at \(x\), (10.77) contradicts (10.74). So we have finished the proof.
**Lemma 10.16**.: _Notations as above, for any \(\varepsilon>0\), \(t\in(0,1)\), there is an \(\eta_{0}(\varepsilon,t,\hat{\epsilon})>0\) s.t. for all \(\eta\in(0,\eta_{0})\), \(s\in(0,s_{0})\), and \(r\in(0,r_{0})\),_
\[(\omega^{\prime\prime}_{\boldsymbol{\tau},\eta})^{(r)}<\varepsilon\varpi. \tag{10.78}\]
Proof.: We argue by contradiction. If the claim is false, then there is a point \(x\), a vector \(v\in T_{x}^{(1,0)}\hat{Y}\) with \(\|v\|_{\varpi}=1\), an \(\varepsilon>0\), \(t=t_{0}\), a sequence of \(s_{i}\in(0,s_{0})\), and a sequence \(\eta_{i}\to 0\) s.t.
\[\iota_{\bar{v}}\iota_{v}(\omega^{\prime\prime}_{\boldsymbol{\tau}_{i},\eta_{i} })^{(r)}(x)>\varepsilon, \tag{10.79}\]
where \(\boldsymbol{\tau}_{i}=(t_{0},s_{i})\). By weak compactness, after passing to a subsequence, we may assume that for \(k=1,\cdots,d-1\)
\[\boldsymbol{\omega}^{k}_{\boldsymbol{\tau}_{i}}\rightharpoonup T_{k}, \tag{10.80}\]
where each \(T_{k}\) is a closed positive \((k,k)\)-current. By the Skoda-El Mir extension theorem, \(\mathbf{1}_{\Delta}T_{k}\) is a positive closed current with support in \(\Delta\), which has dimension \(d\). By the first theorem of support ([13] III, Corollary 2.11), \(\mathbf{1}_{\Delta}T_{k}=0\). Therefore, for any fixed smooth \((d,d)\)-form \(\gamma\) on \(\hat{Y}_{1}\) it holds that
\[\lim_{i\to\infty}\int_{\Delta_{\eta_{i}}}\pi_{1}^{*}(\gamma)\wedge\left(\Lambda _{y}\wedge\frac{\boldsymbol{\omega}_{\boldsymbol{\tau}_{i}}^{d-1}}{(d-1)!} \right)=0. \tag{10.81}\]
Suppose \(x\in B_{j,R}\). Let \(v\in\mathcal{T}_{x}\hat{Y}\) be any vector s.t. \(\|v\|_{\varpi}=1\) and we extend \(v\) in \(B_{j,2R}\) so that \(v\) has constant coefficients and \(1/2\leq\|v\|_{\varpi}\leq 2\). Let \(\gamma_{v}\) be defined in (10.75). At \(x\),
\[(\omega_{\boldsymbol{\tau}_{i},\eta_{i}}^{\prime\prime})^{(r)}(v,\bar{v})\] \[=\frac{1+K}{\hat{c}}\int_{B_{r}(x)}r^{-2d}\vartheta\left(\frac{z -x}{|r|}\right)\left(\int_{\{z\}\times\hat{Y}\cap\Delta_{\eta_{i}}}\Lambda_{y} \wedge\frac{\boldsymbol{\omega}_{\boldsymbol{\tau}_{i}}^{d-1}(z,y)}{(d-1)!} \right)\hat{\omega}_{0}(z)\wedge\gamma_{v}(z)\] \[=\frac{1+K}{\hat{c}}\int_{B_{r}(x)\times\hat{Y}\cap\Delta_{\eta_ {i}}}r^{-2d}\vartheta\left(\frac{z-x}{|r|}\right)\Lambda_{y}\wedge\frac{ \boldsymbol{\omega}_{\boldsymbol{\tau}_{i}}^{d-1}(z,y)}{(d-1)!}\wedge\hat{ \omega}_{0}(z)\wedge\gamma_{v}(z). \tag{10.82}\]
Therefore, by (10.81) and (10.82),
\[\lim_{i\to\infty}(\omega_{\boldsymbol{\tau}_{i},\eta_{i}}^{\prime\prime})^{(r )}(v,\bar{v})=0, \tag{10.83}\]
which contradicts (10.79). Thus, we have finished the proof.
We choose \(K_{1}>1\) so that on \(\hat{Y}\),
\[K_{1}\varpi>\hat{\omega}_{0}. \tag{10.84}\]
Note that \(K_{1}\) may be chosen independent of \(\hat{\epsilon}\) and \(t\). We have the following proposition.
**Proposition 10.17**.: _Notations as above. For any \(\varepsilon>0\), there is a small \(\epsilon_{\Lambda}\) such that for any \(t\in(0,1]\), \(\hat{\epsilon}\in(0,\hat{\epsilon}_{Y})\), \(r\in(0,r_{0})\),_
\[\mathcal{P}_{\hat{\Lambda}}\left(\left(\omega_{\boldsymbol{\tau}}-\frac{\delta _{\Delta}}{K_{1}}\hat{\omega}_{0}+100\delta_{\Delta}\sqrt{-1}\partial\bar{ \partial}\phi_{S}\right)^{(r)}\right)<1+2\varepsilon,\]
_for \(s\in(0,s_{2})\) where \(s_{2}=s_{2}(\varepsilon,\hat{\epsilon},t,r)\)._
Proof.: By Lemmas 10.15 and 10.16, if \(s_{2}\) and \(\eta<\eta_{0}\) are small enough, we have
\[\left(\omega_{\boldsymbol{\tau}}-\frac{1}{K_{1}}\delta_{\Delta} \hat{\omega}_{0}+100\delta_{\Delta}\sqrt{-1}\partial\bar{\partial}\phi_{S} \right)^{(r)} =\left(\omega_{\boldsymbol{\tau},\eta}+\omega_{\boldsymbol{\tau},\eta}^{\prime}-\omega_{\boldsymbol{\tau},\eta}^{\prime\prime}-\frac{\delta_{ \Delta}\hat{\omega}_{0}}{K_{1}}+100\delta_{\Delta}\sqrt{-1}\partial\bar{ \partial}\phi_{S}\right)^{(r)}\] \[\geq(\omega_{\boldsymbol{\tau},\eta})^{(r)}+20\delta_{\Delta} \varpi-\delta_{\Delta}\varpi-\delta_{\Delta}\varpi\] \[\geq(\omega_{\boldsymbol{\tau},\eta})^{(r)}+\delta_{\Delta}\varpi. \tag{10.85}\]
For two positive forms \(\Lambda\) and \(\Lambda^{\prime}\), by (5.10), we have
\[\mathcal{P}_{\Lambda+\Lambda^{\prime}}(\gamma)\leq\mathcal{P}_{\Lambda}(\gamma)+ \mathcal{P}_{\Lambda^{\prime}}(\gamma)\]
for any positive \((1,1)\)-form \(\gamma\). Let \(\Lambda^{\prime}=\exp\varpi\). Fix a point \(x\in B_{j}\). Since \(\hat{\Lambda}<\tilde{\Lambda}_{j}+\epsilon_{\Lambda}\Lambda^{\prime}\), we have
\[\mathcal{P}_{\tilde{\Lambda}}\left(\left(\omega_{\boldsymbol{ \tau},\eta}\right)^{(r)}+\delta_{\Delta}\varpi\right) \leq\mathcal{P}_{\tilde{\Lambda}_{j}}\left(\left(\omega_{ \boldsymbol{\tau},\eta}\right)^{(r)}+\delta_{\Delta}\varpi\right)+\mathcal{P}_ {\epsilon_{\Lambda}\Lambda^{\prime}}\left(\left(\omega_{\boldsymbol{\tau}, \eta}\right)^{(r)}+\delta_{\Delta}\varpi\right)\] \[\leq\mathcal{P}_{\tilde{\Lambda}_{j}}(\omega_{\boldsymbol{\tau}, \eta}^{(r)})+\mathcal{P}_{\epsilon_{\Lambda}\Lambda^{\prime}}\left(\delta_{ \Delta}\varpi\right)\] \[<1+\varepsilon+\mathcal{P}_{\epsilon_{\Lambda}\Lambda^{\prime}} \left(\delta_{\Delta}\varpi\right). \tag{10.86}\]
We have used Lemma 10.14 in the last line. Assume \(\delta_{\Delta}<1\). We have
\[\epsilon_{\Lambda}\frac{\left(\Lambda^{\prime}\wedge\exp\left(\delta_{\Delta} \varpi\right)\right)^{[d]}}{\exp\left(\delta_{\Delta}\varpi\right)^{[d]}}= \epsilon_{\Lambda}\left(1+\frac{1}{\delta_{\Delta}}\right)^{d}, \tag{10.87}\]
which implies \(\epsilon_{\Lambda}\mathcal{P}_{\Lambda^{\prime}}(\delta_{\Delta}\varpi)< \epsilon_{\Lambda}\left(1+\frac{1}{\delta_{\Delta}}\right)^{d}.\) We choose \(\epsilon_{\Lambda}\) small enough so that
\[\epsilon_{\Lambda}\left(1+\frac{1}{\delta_{\Delta}}\right)^{d}<\varepsilon.\]
Then by (10.86), the claim follows.
With all the preparations, we prove Theorem 10.1.
Proof of Theorem 10.1.: We fix a small \(\varepsilon>0\). By Proposition 10.17, there is \(r_{0}>0\) such that for a fixed \(t\in(0,t_{0})\) and \(\hat{\epsilon}\in(0,\hat{\epsilon}_{Y})\) there is a sequence of \(s_{i}\to 0\) s.t. for all \(r\in(0,r_{0})\)
\[\mathcal{P}_{\hat{\Lambda}}((\omega_{\boldsymbol{\tau}_{i}}-\frac{\delta_{ \Delta}}{K_{1}}\hat{\omega}_{0}+100\delta_{\Delta}\sqrt{-1}\partial\bar{\partial }\phi_{S})^{(r)})\leq 1+2\varepsilon, \tag{10.88}\]
where \(\boldsymbol{\tau}_{i}=(t,s_{i})\). Let \(\tilde{\omega}_{t}\) be a weak limit of a subsequence of \(\omega_{\boldsymbol{\tau}_{i}}-\frac{\delta_{\Delta}}{K_{1}}\hat{\omega}_{0}+100\delta_{\Delta}\sqrt{-1}\partial\bar{\partial}\phi_{S}\) as \(i\to\infty\). Then \(\tilde{\omega}_{t}\in(1-\frac{\delta_{\Delta}}{K_{1}})[\tilde{\omega}_{0}]\) and \(\tilde{\omega}_{t}\geq\delta_{\Delta}\varpi\) by (10.85). In each \(B_{j,R}\), we may assume that
\[\omega_{\boldsymbol{\tau}_{i}}-\frac{\delta_{\Delta}}{K_{1}}\hat{\omega}_{0}+1 00\delta_{\Delta}\sqrt{-1}\partial\bar{\partial}\phi_{S}=\sqrt{-1}\partial\bar {\partial}\phi_{j,\boldsymbol{\tau}_{i}}, \tag{10.89}\]
for some local PSH function \(\phi_{j,\boldsymbol{\tau}_{i}}\). After passing to a subsequence, in each \(B_{j,R}\), define \(\tilde{\phi}_{j,t}=\lim_{i\to\infty}\phi_{j,\boldsymbol{\tau}_{i}}\) in \(L^{1}\) and
\[\tilde{\omega}_{t}=\sqrt{-1}\partial\bar{\partial}\tilde{\phi}_{j,t}. \tag{10.90}\]
Then for any \(r\in(0,r_{0})\), \(\phi^{(r)}_{j,\tau_{i}}\) converges to \(\tilde{\phi}^{(r)}_{j,t}\) uniformly on any compact subset of \(B_{j,R}\). Therefore, by (10.88),
\[\mathcal{P}_{\hat{\Lambda}}(\sqrt{-1}\partial\bar{\partial}\tilde{\phi}^{(r)}_{j,t}) =\lim_{i\to\infty}\mathcal{P}_{\hat{\Lambda}}(\sqrt{-1}\partial\bar{\partial}\phi^{(r)}_{j,\tau_{i}})\] \[\leq 1+2\varepsilon. \tag{10.91}\]
Now we take a sequence \(t_{k}\to 0\) such that \(\tilde{\omega}_{t_{k}}\) converges to a closed positive current \(\tilde{\omega}_{0}\in(1-\frac{\delta_{\Delta}}{K_{1}})[\tilde{\omega}_{0}]\). By the same argument, we have
\[\mathcal{P}_{\Lambda}(\tilde{\omega}_{0}^{(r)})\leq 1+2\varepsilon. \tag{10.92}\]
Let
\[\Upsilon=\left(1+\frac{\delta_{\Delta}}{K_{1}}\right)\tilde{\omega}_{0}\in \left(1-\left(\frac{\delta_{\Delta}}{K_{1}}\right)^{2}\right)[\omega_{0}]. \tag{10.93}\]
Assume \(\delta_{\Delta}/K_{1}<1\). If
\[\varepsilon<\frac{\delta_{\Delta}}{4K_{1}}\leq\frac{\frac{\delta_{\Delta}}{K_{1}}}{3+\frac{\delta_{\Delta}}{K_{1}}}, \tag{10.94}\]
then by (10.92),
\[\mathcal{P}_{\Lambda}(\Upsilon^{(r)})\leq\frac{1+2\varepsilon}{1+\delta_{ \Delta}/K_{1}}<1-\varepsilon.\]
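Here we have used the elementary arithmetic fact (recorded for the reader's convenience) that, writing \(a=\frac{\delta_{\Delta}}{K_{1}}\in(0,1)\),
\[\frac{1+2\varepsilon}{1+a}<1-\varepsilon\iff\varepsilon(3+a)<a\iff\varepsilon<\frac{a}{3+a},\]
and \(\frac{a}{4}\leq\frac{a}{3+a}\) since \(a\leq 1\), so (10.94) indeed guarantees the displayed bound.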
Since \(\phi_{S}\) has positive Lelong number along \(S_{\hat{Y}}\), \(\Upsilon\) has positive Lelong number along \(S_{\hat{Y}}\).
_Remark 10.18_.: We remark on various constants introduced before and their dependence. First, \(\delta_{\Delta}\) depends on \(\epsilon_{\Delta}\) and \(\hat{c}\). Although Lemma 10.15 is stated for a covering \(\mathscr{P}\), such \(\delta_{\Delta}\) is uniform as long as the covering is fine enough. Second, \(K_{1}\) is independent of the choice of covering. Thus, we may choose \(\varepsilon<\frac{\delta_{\Delta}}{4K_{1}}\). Once we have fixed \(\varepsilon\), we pick the covering \(\mathscr{P}\) fine enough such that \(\epsilon_{\Lambda}\left(1+\frac{1}{\delta_{\Delta}}\right)^{d}<\varepsilon\), where \(\epsilon_{\Lambda}\) is used to control the variation of \(\hat{\Lambda}\) in each \(B_{j,2R}\). It is important to note that all above constants may be chosen independent of \(t,\hat{\epsilon}\).
## 11. Completing the induction
In this section, we complete the induction argument started in Section 9 and finish the proof of Theorem 9.1. We adopt the notations of the last section.
From Theorem 10.1, we obtain a Kahler current which satisfies the cone condition after local regularization. Together with the induction step, we obtain a global Kahler current smooth away from singular points of \(Y\) by a gluing process. Near the singular points, this Kahler current has positive Lelong number. Our discussion follows J. Song's modification [32] of G. Chen's argument [4] based on the trick of Blocki-Kolodziej [3].
**Theorem 11.1**.: _Under the same assumption in Theorem 9.1, let \(Y\) be a \(d\)-dimensional analytic subvariety of \(M\) and \(S_{Y}\) be the singular points of \(Y\). Then there is a \(\varphi_{Y}\in C^{\infty}(Y\backslash S_{Y})\) such that \(\omega_{Y}=\omega_{0}+\sqrt{-1}\partial\bar{\partial}\varphi_{Y}\) is a Kahler metric on \(Y\backslash S_{Y}\) satisfying the cone condition (9.1) on \(Y\backslash S_{Y}\). Moreover, \(\varphi_{Y}\) has positive Lelong number along \(S_{Y}\)._
We may assume that \(Y\) is irreducible for simplicity. Otherwise, we just apply the same argument to each component of \(Y\). Since the main argument is the same as in J. Song's work [32], we will only state some key lemmas and sketch the proof of Theorem 11.1.
We pick a new covering \(\mathscr{P}^{\prime}=\{B_{j,R^{\prime}}\}_{j\in\mathcal{J}}\) of \(\hat{Y}\) such that \(R^{\prime}<\frac{1}{4}r_{0}<\frac{R}{4}\), and \(\{B_{j,4R^{\prime}}\}\) as a cover is finer than \(\mathscr{P}\). Let \(\{z^{i}\}\) be the local coordinate in \(B_{j,R^{\prime}}\). We require that in each \(B_{j,R^{\prime}}\),
\[\varpi=\sqrt{-1}\partial\bar{\partial}\phi_{\varpi,j},\ |\phi_{\varpi,j}- \frac{1}{4}|z|^{2}|<(R^{\prime})^{2}, \tag{11.1}\]
\[\omega_{0}=\sqrt{-1}\partial\bar{\partial}\phi_{\omega_{0},j},\ |\nabla\phi_{\omega_{0},j}|<K_{3}R^{\prime},\ \phi_{\omega_{0},j}(0)=0, \tag{11.2}\]
\[\rho=\sqrt{-1}\partial\bar{\partial}\phi_{\rho,j},\ |\nabla\phi_{\rho,j}|<K_{3}R^{ \prime},\ \phi_{\rho,j}(0)=0. \tag{11.3}\]
Let \(\Upsilon\) be the current given in Theorem 10.1. Denote
\[\Upsilon=(1-\delta)\omega_{0}+\sqrt{-1}\partial\bar{\partial}\phi_{\Upsilon}, \tag{11.4}\]
where \(\phi_{\Upsilon}\in\)PSH\((\hat{Y},(1-\delta)\omega_{0})\). Let
\[\phi_{\Upsilon,j}=(1-\delta)\phi_{\omega_{0},j}+\phi_{\Upsilon} \tag{11.5}\]
be the local potential of \(\Upsilon\) in \(B_{j,4R^{\prime}}\). Let
\[\varphi_{j,r}=\phi_{\Upsilon,j}^{(r)}-(1-\delta)\phi_{\omega_{0},j}-\delta \left(\delta^{\prime 2}\phi_{\varpi,j}-\delta^{\prime}\phi_{S}\right). \tag{11.6}\]
We choose \(\delta^{\prime}\) small so that \(\delta^{\prime 2}\varpi\leq\omega_{0}+\delta^{\prime}\sqrt{-1}\partial\bar{ \partial}\phi_{S}\). Then in each \(B_{j,4R^{\prime}}\), by (11.6)
\[\omega_{0}+\sqrt{-1}\partial\bar{\partial}\varphi_{j,r}\geq\Upsilon^{(r)}. \tag{11.7}\]
Therefore from Theorem 10.1, there exists \(\varepsilon_{0}>0\) so that in each \(B_{j,4R^{\prime}}\),
\[\mathcal{P}_{\Lambda}(\omega_{0}+\sqrt{-1}\partial\bar{\partial}\varphi_{j,r})< 1-\varepsilon_{0}, \tag{11.8}\]
for \(r\in(0,4R^{\prime})\).
Denote \(S_{\tilde{\epsilon}}:=\{p\in\hat{Y}:\nu_{\Upsilon}(p)\geq\tilde{\epsilon}\}\) for some \(\tilde{\epsilon}>0\) to be chosen later. Siu's decomposition theorem [31] shows that \(S_{\tilde{\epsilon}}\) is an analytic subvariety of \(\hat{Y}\) of dimension \(<d\) which contains \(S_{\hat{Y}}\).
By the induction hypothesis, there is an open neighborhood \(U\) of \(\Phi(S_{\tilde{\epsilon}})\) in \(M\) and a smooth Kahler metric \(\omega_{U}\) such that in \(U\)
\[(\exp\omega_{U}\wedge(1-\Lambda))^{[n-1]}>0. \tag{11.9}\]
We may assume that \((\exp\omega_{U}\wedge(1-\varepsilon_{1}-\Lambda))^{[n-1]}>0\) for some \(\varepsilon_{1}<\varepsilon_{0}\). Let \(\hat{U}=\Phi^{-1}(U)\) and \(\omega_{\hat{U}}=\Phi^{*}\omega_{U}=\omega_{0}+\sqrt{-1}\partial\bar{\partial} \varphi_{\hat{U}}\) for some smooth \(\varphi_{\hat{U}}\). We have
\[(\exp\omega_{\hat{U}}\wedge(1-\varepsilon_{1}-\Lambda))^{[n-1]}\geq 0 \tag{11.10}\]
on \(\hat{U}\) and it is strictly positive in \(\hat{U}\backslash S_{\hat{Y}}\).
The following Lemma is almost identical to Lemma 6.3 of [32], which is a modification of Proposition 4.1 of [4]. We skip its proof.
**Lemma 11.2**.: _Notations as above. If \(\tilde{\epsilon}=\tilde{\epsilon}(\delta,R^{\prime},d,n,\inf_{p\in S_{\hat{Y} }}\nu_{\Upsilon}(p))\) is chosen small, then there exists a neighborhood \(\hat{V}\Subset\hat{U}\) of \(S_{\tilde{\epsilon}}\) and \(0<r_{1}<R^{\prime}/2\) depending on \(\phi_{\varpi,j},\phi_{\omega_{0},j},\varphi_{\hat{U}}\),\(\phi_{S}|_{\hat{Y}\backslash\hat{V}}\),\(\mathscr{P}^{\prime},K_{3},\tilde{\epsilon}\) such that for any \(r<r_{1}\) the following holds._
1. _If_ \(p\in\hat{Y}\backslash\hat{V}\)_, then_ \[\max_{p\in B_{j,3R^{\prime}}}\nu_{\phi_{\Upsilon,j}}(p)\leq 2\tilde{ \epsilon},\ \max_{p\in B_{j,3R^{\prime}}}\varphi_{j,r}(p)>\sup_{\hat{U}}\varphi_{\hat{U} }+3\tilde{\epsilon}\log r+1.\]
2. _If_ \(\max_{p\in B_{j,3R^{\prime}}}\nu_{\phi_{\Upsilon,j}}(p,r)\geq 4\tilde{\epsilon}\)_, then_ \[\max_{p\in B_{j,3R^{\prime}}}\varphi_{j,r}(p)\leq\inf_{\hat{U}}\varphi_{\hat{U} }+3\tilde{\epsilon}\log r-1.\]
3. _If_ \(\max_{p\in B_{j,3R^{\prime}}}\nu_{\phi_{\Upsilon,j}}(p,r)\leq 4\tilde{\epsilon}\)_, then_ \[\max_{p\in B_{i,3R^{\prime}}\backslash B_{i,2R^{\prime}}}\varphi_{i,r}(p)<\max _{p\in B_{j,R^{\prime}}}\varphi_{j,r}(p)-2\tilde{\epsilon}.\]
For some sufficiently small \(\epsilon<\tilde{\epsilon}\), \(0<r<r_{1}\), we define
\[\tilde{\varphi}_{\epsilon}(p) :=\widetilde{\max}_{(\epsilon,\cdots,\epsilon)}\{\varphi_{\hat{U}}+3\tilde{\epsilon}\log r,\varphi_{j,r}:j\in\mathcal{J}\}\] \[=\int_{\mathbb{R}^{|\mathcal{J}|+1}}\max_{j\in\mathcal{J}}\{\varphi_{\hat{U}}+3\tilde{\epsilon}\log r+h_{0},\ \varphi_{j,r}+h_{j}\}\prod_{0\leq j\leq|\mathcal{J}|}\theta\left(\frac{h_{j}}{\epsilon}\right)\frac{dh_{0}\cdots dh_{|\mathcal{J}|}}{\epsilon^{|\mathcal{J}|+1}}. \tag{11.11}\]
Then by Lemma 11.2 and Corollary 8.4, \(\tilde{\varphi}_{\epsilon}\in\mathrm{PSH}(\hat{Y},\omega_{0})\cap C^{\infty}( \hat{Y})\). Denote \(\Upsilon_{1}=\omega_{0}+\sqrt{-1}\partial\bar{\partial}\tilde{\varphi}_{\epsilon}\). By (11.8) and (11.10),
\[\mathcal{P}_{\Lambda}(\Upsilon_{1})<1-\varepsilon_{2} \tag{11.12}\]
on \(\hat{Y}\backslash S_{\hat{Y}}\) for some \(0<\varepsilon_{2}<\varepsilon_{1}\). Let
\[\Upsilon_{2}=(1-\varepsilon_{2})\Upsilon_{1}+\varepsilon_{2}(\omega_{0}+ \delta^{\prime}\sqrt{-1}\partial\bar{\partial}\phi_{S}). \tag{11.13}\]
And let
\[\sqrt{-1}\partial\bar{\partial}\varphi_{Y}=\Upsilon_{2}-\omega_{0}.\]
If \(\varepsilon_{2}\) is sufficiently small, by (11.12),
\[\mathcal{P}_{\Lambda}(\Upsilon_{2})<1-\varepsilon_{2}/2 \tag{11.14}\]
on \(\hat{Y}\backslash S_{\hat{Y}}\).
We are finally ready to finish proofs of several theorems stated earlier.
Proof of Theorem 11.1.: Since \(\Phi(\hat{Y}\backslash S_{\hat{Y}})=Y\backslash S_{Y}\), we see that \(\omega_{Y}=\omega_{0}+\sqrt{-1}\partial\bar{\partial}\varphi_{Y}\) is smooth on \(Y\backslash S_{Y}\) and satisfies the cone condition on \(Y\backslash S_{Y}\) by (11.14). Moreover, since \(\phi_{S}\) has positive Lelong number along \(S_{Y}\), by (11.13) \(\varphi_{Y}\) also has positive Lelong number on \(S_{Y}\).
Finally, we complete the induction step and prove Theorem 9.1.
Proof of Theorem 9.1.: Let \(Y\) be a \(d\)-dimensional subvariety of \(M\) and \(S_{Y}\) be the singular set of \(Y\). By our induction assumption, there is an open neighborhood \(U\) of \(S_{Y}\) in \(M\) such that there exists \(\varphi_{U}\) in \(C^{\infty}(U)\cap\operatorname{PSH}(U,\omega_{0})\) and \(\omega_{U}=\omega_{0}+\sqrt{-1}\partial\bar{\partial}\varphi_{U}\) satisfying
\[(\exp\omega_{U}\wedge(1-\Lambda))^{[n-1]}>0.\]
Let \(\omega_{Y}\) be given in Theorem 11.1. We will take neighborhoods of \(S_{Y}\), \(S_{Y}\subset U_{0}\Subset U_{1}\Subset U_{2}\Subset U\). Subtracting a large number from \(\varphi_{U}\), we may assume that
\[\varphi_{Y}>\varphi_{U}+2,\text{ in }Y\backslash U_{2}. \tag{11.15}\]
Since \(\varphi_{Y}\to-\infty\) near \(S_{Y}\), we may assume
\[\varphi_{Y}<\varphi_{U}-2,\text{ in }Y\cap U_{1}. \tag{11.16}\]
Let \(W\) be a neighborhood of \(Y\) and \(\operatorname{pr}_{Y}\) be the projection from \(W\) to \(Y\). Using the same notation as in Lemma 9.3, we may choose \(N\gg 1\) and
\[\tilde{\varphi}_{Y}=\operatorname{pr}_{Y}^{*}\!\varphi_{Y}+Nd_{\rho}^{2} \tag{11.17}\]
where \(d_{\rho}(p)\) is the distance function to \(Y\backslash U_{0}\) with respect to the metric \(\rho\). Then arguments similar to those in Lemma 9.3 show that \(\omega_{0}+\sqrt{-1}\partial\bar{\partial}\tilde{\varphi}_{Y}\) satisfies the cone condition (9.1) in a neighborhood \(\tilde{U}\) of \(Y\backslash U_{1}\) in \(M\). By (11.16), shrinking \(\tilde{U}\) if necessary, we may assume that in \(\tilde{U}\cap U_{1}\),
\[\tilde{\varphi}_{Y}<\varphi_{U}-1, \tag{11.18}\]
and in \(\tilde{U}\backslash U_{2}\), by (11.15),
\[\tilde{\varphi}_{Y}>\varphi_{U}+1. \tag{11.19}\]
Let
\[\tilde{\varphi}=\widetilde{\max}_{(\frac{1}{2},\frac{1}{2})}\left\{\tilde{ \varphi}_{Y},\varphi_{U}\right\}. \tag{11.20}\]
Let \(U_{Y}=\tilde{U}\cup U_{1}\). Then by Corollary 8.4, \(\tilde{\varphi}\) is in \(C^{\infty}(U_{Y})\cap\operatorname{PSH}(U_{Y},\omega_{0})\) and equals \(\varphi_{U}\) in \(U_{0}\). By Corollary 8.3, \(\omega_{0}+\sqrt{-1}\partial\bar{\partial}\tilde{\varphi}\) satisfies the cone condition in \(U_{Y}\). This concludes the proof of Theorem 9.1.
Proof of Theorem 1.10.: From Theorem 9.1, \(([\Lambda],\kappa)\)-positivity implies the existence of a smooth subsolution on \(M\). Finally, we apply Theorem 7.1 to show the existence of a smooth solution to (1.2).
## 12. An application to supercritical dHYM equations
In this section, we apply Theorem 1.10 to the deformed Hermitian Yang-Mills equation and prove Theorem 1.11.
Let \(\rho\) be a Kahler metric on a compact connected Kahler manifold \(M\) of dimension \(n\). Let \([\omega_{0}]\) be a real \((1,1)\) cohomology class. The dHYM equation searches for a closed \((1,1)\)-form \(\omega\in[\omega_{0}]\) such that
\[\operatorname{Re}(\omega+\sqrt{-1}\rho)^{n}=\cot\theta\text{Im}(\omega+\sqrt{ -1}\rho)^{n}, \tag{12.1}\]
where \(\theta\) is the global phase defined as the argument of the complex number \(\int_{M}(\omega+\sqrt{-1}\rho)^{n}\), which depends only on \([\rho]\) and \([\omega]\). Let \(\lambda_{1},\cdots,\lambda_{n}\) be the eigenvalues of \(\omega\) with respect to \(\rho\); then locally the dHYM equation can be written as
\[\sum_{i=1}^{n}\operatorname{arccot}\lambda_{i}=\theta. \tag{12.2}\]
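For the reader's convenience, we recall the standard pointwise computation behind (12.2) (a sketch; we suppress the choice of branch of the argument): at a point where \(\omega\) and \(\rho\) are simultaneously diagonalized with eigenvalues \(\lambda_{1},\cdots,\lambda_{n}\),
\[\frac{(\omega+\sqrt{-1}\rho)^{n}}{n!}=\prod_{i=1}^{n}(\lambda_{i}+\sqrt{-1})\,\frac{\rho^{n}}{n!},\]
so (12.1) states that the argument of \(\prod_{i=1}^{n}(\lambda_{i}+\sqrt{-1})\), namely \(\sum_{i=1}^{n}\operatorname{arccot}\lambda_{i}\), agrees with \(\theta\) modulo \(\pi\).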
We now write the dHYM equation using notations of this paper. Let \(\Lambda_{\theta}:=\sin\theta\cos\rho-\cos\theta\sin\rho\), where
\[\cos\rho=\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^{k}\frac{\rho^{2k}}{(2k)!},\ \sin\rho=\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^{k}\frac{\rho^{2k+1}}{(2k+1)!}. \tag{12.3}\]
Then (12.1) can be written as
\[\Lambda_{\theta}\wedge\exp\omega=0. \tag{12.4}\]
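Unwinding the definitions (a short verification, included for clarity): expanding \((\omega+\sqrt{-1}\rho)^{n}\) binomially and collecting real and imaginary parts gives
\[\left(\exp\omega\wedge\cos\rho\right)^{[n]}=\operatorname{Re}\frac{(\omega+\sqrt{-1}\rho)^{n}}{n!},\qquad\left(\exp\omega\wedge\sin\rho\right)^{[n]}=\operatorname{Im}\frac{(\omega+\sqrt{-1}\rho)^{n}}{n!},\]
so the top-degree part of \(\Lambda_{\theta}\wedge\exp\omega\) equals \(\sin\theta\operatorname{Re}\frac{(\omega+\sqrt{-1}\rho)^{n}}{n!}-\cos\theta\operatorname{Im}\frac{(\omega+\sqrt{-1}\rho)^{n}}{n!}\), and (12.4) is equivalent to (12.1) since \(\sin\theta>0\) for \(\theta\in(0,\pi)\).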
If \(\theta\in(0,\pi)\) (resp. \((0,\frac{\pi}{2})\)), (12.1) is called _supercritical_ (resp. _hypercritical_). Collins-Jacob-Yau [8] proved that if there exists a solution to the supercritical dHYM equation, then the following numerical condition holds:
\[\int_{V}\left(\operatorname{Re}(\omega_{0}+\sqrt{-1}\rho)^{d}-\cot\theta \text{Im}(\omega_{0}+\sqrt{-1}\rho)^{d}\right)>0 \tag{12.5}\]
for any \(d\)-dimensional subvariety \(V\). Collins-Jacob-Yau then conjectured that (12.5) is also sufficient for the existence of a solution to the supercritical dHYM equation.
G. Chen [4] proved the Collins-Jacob-Yau conjecture under a stronger assumption, which can be stated as follows: there exists \(\epsilon>0\) such that
\[\int_{V}\Big{(}\mathrm{Re}(\omega_{0,t}+\sqrt{-1}\rho)^{d}-\cot\theta\mathrm{Im} (\omega_{0,t}+\sqrt{-1}\rho)^{d}\Big{)}>(n-d)\epsilon\int_{V}\rho^{d}, \tag{12.6}\]
for any test ray \(\omega_{0,t}\). Here a test ray is \(\omega_{0,t}=\omega_{0}+t\rho^{\prime}\) for \(t\in[0,\infty)\) and some Kahler form \(\rho^{\prime}\). Following J. Song's [32] modification of G. Chen's argument, Chu-Lee-Takahashi [7] removed the constant \(\epsilon\), but their condition requires (12.5) to hold along any test ray. In particular, when \(M\) is projective, Chu-Lee-Takahashi confirmed Collins-Jacob-Yau's conjecture. See also Ballal [1] for an alternative proof. In dimension 3, Collins-Jacob-Yau's conjecture was also proved in Datar-Pingali [12] for projective 3-manifolds with hypercritical phase. In general, Collins-Jacob-Yau's conjecture has counterexamples, as pointed out by J. Zhang [39]. The counterexample is explicitly constructed and involves a blow-up of a complex torus in dimension 3 with global phase \(\theta>\frac{\pi}{2}\).
We confirm Collins-Jacob-Yau's conjecture for Kahler manifolds with global phase \(\theta\in(0,\frac{\pi}{n-1}]\). Notice that in dimension 3, such \(\theta\) falls in the hypercritical range. Note that if \(\theta\in(0,\pi)\) and (12.1) has a solution \(\omega\), then \(\omega-\rho\cot\theta\) is a Kahler form. Thus, \([\omega_{0}-\rho\cot\theta]\) is a Kahler class. As one direction has been confirmed, we rewrite Theorem 1.11 in the following form.
**Theorem 12.1**.: _If \(\theta\in(0,\frac{\pi}{n-1}]\) and \([\omega_{0}-\rho\cot\theta]\) is a Kahler class, then the equation (12.1) has a smooth solution if condition (12.5) holds for any subvariety \(V\)._
Proof.: In order to apply Theorem 1.10, we rewrite the equation and work on the Kahler class \([\omega_{0}-\rho\cot\theta]\). Let
\[\hat{\omega}=\omega-\rho\cot\theta\in[\omega_{0}-\rho\cot\theta]. \tag{12.7}\]
We may assume that \(\hat{\omega}\) is Kahler. We have
\[\omega+\sqrt{-1}\rho=\hat{\omega}+\frac{1}{\sin\theta}e^{\sqrt{-1}\theta}\rho. \tag{12.8}\]
By (12.8),
\[\frac{1}{n!}(\omega+\rho\sqrt{-1})^{n}=\sum_{k=0}^{n}\frac{1}{k!}\hat{\omega} ^{k}\frac{e^{\sqrt{-1}(n-k)\theta}}{(n-k)!}\left(\frac{\rho}{\sin\theta} \right)^{n-k}. \tag{12.9}\]
Similarly,
\[\mathrm{Re}\frac{1}{n!}(\omega+\rho\sqrt{-1})^{n}=\sum_{k=0}^{n}\frac{1}{k!} \hat{\omega}^{k}\wedge\frac{\cos\left((n-k)\theta\right)}{(n-k)!}\left(\frac{ \rho}{\sin\theta}\right)^{n-k}. \tag{12.10}\]
\[\mathrm{Im}\frac{1}{n!}(\omega+\rho\sqrt{-1})^{n}=\sum_{k=0}^{n}\frac{1}{k!}\hat{\omega}^{k}\wedge\frac{\sin\left((n-k)\theta\right)}{(n-k)!}\left(\frac{\rho}{\sin\theta}\right)^{n-k}. \tag{12.11}\]
By (12.10) and (12.11), we have
\[\sin\theta\text{Re}\frac{1}{n!}(\omega+\rho\sqrt{-1})^{n}-\cos\theta \text{Im}\frac{1}{n!}(\omega+\rho\sqrt{-1})^{n}\] \[=\left(\exp\hat{\omega}\wedge\left(\sum_{k=0}^{n}\left(\sin\theta \cos\left(k\theta\right)-\cos\theta\sin\left(k\theta\right)\right)\frac{\left( \frac{\rho}{\sin\theta}\right)^{k}}{k!}\right)\right)^{[n]}\] \[=\left(\exp\hat{\omega}\wedge\left(-\sum_{k=0}^{n}\sin\left((k-1) \theta\right)\frac{\left(\frac{\rho}{\sin\theta}\right)^{k}}{k!}\right) \right)^{[n]}. \tag{12.12}\]
Then (12.1) is equivalent to
\[\left(\exp\hat{\omega}\wedge\left(1-\sum_{k=2}^{n}\frac{\sin\left((k-1)\theta \right)}{\sin\theta}\cdot\frac{\left(\frac{\rho}{\sin\theta}\right)^{k}}{k!} \right)\right)^{[n]}=0. \tag{12.13}\]
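For clarity, we record the elementary trigonometric step used above: \(\sin\theta\cos(k\theta)-\cos\theta\sin(k\theta)=\sin((1-k)\theta)=-\sin((k-1)\theta)\). In (12.12) the \(k=0\) term therefore contributes \(\sin\theta\) and the \(k=1\) term vanishes, so
\[\sin\theta\,\mathrm{Re}\frac{(\omega+\rho\sqrt{-1})^{n}}{n!}-\cos\theta\,\mathrm{Im}\frac{(\omega+\rho\sqrt{-1})^{n}}{n!}=\sin\theta\left(\exp\hat{\omega}\wedge\left(1-\sum_{k=2}^{n}\frac{\sin\left((k-1)\theta\right)}{\sin\theta}\cdot\frac{\left(\frac{\rho}{\sin\theta}\right)^{k}}{k!}\right)\right)^{[n]},\]
and dividing by \(\sin\theta>0\) shows that (12.1) is equivalent to (12.13).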
Let
\[\Lambda=\sum_{k=2}^{n}\frac{\sin\left((k-1)\theta\right)}{\sin\theta}\frac{ \left(\frac{\rho}{\sin\theta}\right)^{k}}{k!}. \tag{12.14}\]
Then if \(\theta\in(0,\frac{\pi}{n-1}]\), we have \((k-1)\theta\in(0,\pi]\) and hence \(\sin\left((k-1)\theta\right)\geq 0\) for every \(k=2,\cdots,n\), so \(\Lambda\) satisfies **H1** with \(k_{0}=2\).
Note that (12.5) is equivalent to \([\hat{\omega}]\) being \(([\Lambda],1)\)-positive. Therefore, we apply Theorem 1.10 to equation (12.13) to establish the existence of a solution to (12.1). We have finished the proof.
## Appendix A Functional and uniqueness
In this appendix, we introduce a global functional that is closely related to our PDE (1.1). We show that any solution to (1.2) is the unique minimizer of this functional.
Let
(A.1) \[\omega_{\varphi}=\omega_{0}+\sqrt{-1}\partial\bar{\partial}\varphi.\]
Let \(\phi(t)\) be a smooth path of \(C^{\infty}\) functions on \(M\) such that \(\omega_{\phi(t)}\) is Kahler for each \(t\), \(\phi(0)=0\), and \(\phi(1)=\varphi\). Define the following functional
(A.2) \[\mathcal{F}_{1}(\varphi):=\int_{0}^{1}\int_{M}\dot{\phi}\Lambda\wedge\exp \omega_{\phi}dt,\]
where \(\dot{\phi}=\frac{d}{dt}\phi(t)\). We have the following:
**Proposition A.1**.: _Notations as above. The path integral ( A.2) depends only on \(\phi(0)\) and \(\phi(1)\). Therefore, \(\mathcal{F}_{1}\) is well defined._
Proof.: Since the space \(\text{PSH}(\omega_{0},M)\cap C^{\infty}(M)\) is contractible, it is sufficient to show that the one-form \(\dot{\phi}\mapsto\int_{M}\dot{\phi}\Lambda\wedge\exp\omega_{\phi}\) is closed. Let \((t,s)\) be two parameters and \(\phi(t,s)\) be a family of smooth functions. Then it remains to show that
(A.3) \[\frac{\partial}{\partial s}\int_{M}\frac{\partial\phi}{\partial t}\Lambda\wedge \exp\omega_{\phi}-\frac{\partial}{\partial t}\int_{M}\frac{\partial\phi}{ \partial s}\Lambda\wedge\exp\omega_{\phi}=0.\]
We have the following computation
(A.4) \[\frac{\partial}{\partial s}\int_{M}\frac{\partial\phi}{\partial t }\Lambda\wedge\exp\omega_{\phi} =\int_{M}\frac{\partial^{2}\phi}{\partial s\partial t}\Lambda \wedge\exp\omega_{\phi}+\int_{M}\frac{\partial\phi}{\partial t}\Lambda\wedge \exp\omega_{\phi}\wedge\sqrt{-1}\partial\bar{\partial}\frac{\partial\phi}{ \partial s}\] \[=\int_{M}\frac{\partial^{2}\phi}{\partial s\partial t}\Lambda \wedge\exp\omega_{\phi}+\int_{M}\frac{\partial\phi}{\partial s}\sqrt{-1} \partial\bar{\partial}\frac{\partial\phi}{\partial t}\wedge\Lambda\wedge\exp \omega_{\phi}\] \[=\frac{\partial}{\partial t}\int_{M}\frac{\partial\phi}{\partial s }\Lambda\wedge\exp\omega_{\phi}.\]
The proof is now complete.
According to Proposition A.1, in order to evaluate \(\mathcal{F}_{1}\), we may choose a simple path \(\phi(t)=t\varphi\) for \(t\in[0,1]\). Therefore,
(A.5) \[\mathcal{F}_{1}(\varphi) =\int_{0}^{1}\int_{M}\varphi\Lambda\wedge\exp\omega_{t\varphi}dt\] \[=\int_{0}^{1}\int_{M}\varphi\Lambda\wedge\exp\omega_{0}\wedge\exp\left(t\sqrt{-1}\partial\bar{\partial}\varphi\right)dt\] \[=\int_{M}\varphi\Lambda\wedge\exp\omega_{0}\wedge\left(\sum_{k=0}^{n}\frac{(\sqrt{-1}\partial\bar{\partial}\varphi)^{k}}{(k+1)!}\right)\] \[=\int_{M}\varphi\Lambda\wedge\exp\omega_{0}\wedge\left(\frac{\exp(\sqrt{-1}\partial\bar{\partial}\varphi)-1}{\sqrt{-1}\partial\bar{\partial}\varphi}\right).\]
Here for any \((1,1)\)-form \(\alpha\), we use the following notation
(A.6) \[\frac{\exp\alpha-1}{\alpha}:=\sum_{k=0}^{n}\frac{\alpha^{k}}{(k+1)!}.\]
We are now ready to give the following
**Definition A.2**.: We define the following functional
(A.7) \[\mathcal{F}(\varphi)=\int_{M}\varphi\left(\Lambda-\kappa\right)\wedge\exp \omega_{0}\wedge\frac{\exp(\sqrt{-1}\partial\bar{\partial}\varphi)-1}{\sqrt{- 1}\partial\bar{\partial}\varphi}.\]
It is straightforward to see that a critical point of \(\mathcal{F}\) satisfies
(A.8) \[0=\delta\mathcal{F}=\int_{M}\delta\varphi(\Lambda-\kappa)\wedge\exp\omega_{ \varphi},\]
which is equivalent to \(\omega_{\varphi}\) being a solution of equation (1.2), i.e.
(A.9) \[\left(\kappa\exp\omega_{\varphi}-\Lambda\wedge\exp\omega_{\varphi}\right)^{[n] }=0.\]
The global functional is a generalization of many well-known functionals. For example, (A.7) coincides with the energy functional for the complex Monge-Ampere equation if \(\Lambda\) is a smooth volume form, and with Aubin's functional if \(\Lambda\) is a Kahler form.
We proceed to discuss some basic properties of \(\mathcal{F}\).
**Theorem A.3**.: \(\mathcal{F}\) _is convex in the set of subsolutions of (1.1). Furthermore, if \(\Lambda\) satisfies condition **H2**, then a solution to (1.2) is the unique minimizer of \(\mathcal{F}\)._
Proof.: Consider the second variation of \(\mathcal{F}\). Let \(\omega_{1}\) and \(\omega_{1}+\sqrt{-1}\partial\bar{\partial}\varphi\) be two subsolutions of (1.1), with \(\varphi\in\mathrm{PSH}(M,\omega_{1})\). Let
\[\omega_{t\varphi}=\omega_{1}+t\sqrt{-1}\partial\bar{\partial}\varphi.\]
By Lemma 5.10 (1), we know that the set of subsolutions forms a convex set. Thus, \(\omega_{t\varphi}\) is a subsolution for \(t\in[0,1]\). Notice that
(A.10) \[\frac{d^{2}}{dt^{2}}\mathcal{F}(t\varphi) =\frac{d}{dt}\int_{M}\varphi(\Lambda-\kappa)\wedge\exp\omega_{t\varphi}\] \[=\int_{M}\varphi\left(\Lambda-\kappa\right)\wedge\exp\omega_{t \varphi}\wedge\sqrt{-1}\partial\bar{\partial}\varphi\] \[=\int_{M}\left(\kappa\exp\omega_{t\varphi}-\Lambda\wedge\exp \omega_{t\varphi}\right)^{[n-1]}\wedge\sqrt{-1}\partial\varphi\wedge\bar{ \partial}\varphi.\]
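The last equality in (A.10) is integration by parts (recorded here for completeness): for the \((n-1,n-1)\)-form \(T=\left((\Lambda-\kappa)\wedge\exp\omega_{t\varphi}\right)^{[n-1]}\), which is closed since \(\Lambda\) and \(\omega_{t\varphi}\) are closed, Stokes' theorem gives
\[\int_{M}\varphi\,\sqrt{-1}\partial\bar{\partial}\varphi\wedge T=-\int_{M}\sqrt{-1}\partial\varphi\wedge\bar{\partial}\varphi\wedge T,\]
which accounts for the sign change from \(\Lambda-\kappa\) to \(\kappa-\Lambda\).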
Since \(\omega_{t\varphi}\) is a subsolution, \(((\kappa-\Lambda)\wedge\exp\omega_{t\varphi})^{[n-1]}>0\) as an \((n-1,n-1)\)-form. Thus,
(A.11) \[\int_{M}\left(\kappa\exp\omega_{t\varphi}-\Lambda\wedge\exp\omega_{t\varphi}\right)^{[n-1]}\wedge\sqrt{-1}\partial\varphi\wedge\bar{\partial}\varphi\geq 0,\]
and the equality holds only if \(\varphi\) is constant. Thus, \(\mathcal{F}\) is convex in the set of subsolutions. Furthermore, any critical point of \(\mathcal{F}\) is a local minimum.
If \(\Lambda\) satisfies **H2**, by Lemma 6.1, a solution to (1.2) is a subsolution.
If \(\omega_{1}\) and \(\omega_{2}=\omega_{1}+\sqrt{-1}\partial\bar{\partial}\varphi\) are two solutions of (1.2), then \(\omega_{s}=\omega_{1}+s\sqrt{-1}\partial\bar{\partial}\varphi\) is a family of subsolutions for \(s\in[0,1]\). By the previous argument, \(\mathcal{F}\) is convex along \(\omega_{s}\). However, since both \(\omega_{1}\) and \(\omega_{2}\) are local minima of \(\mathcal{F}\), \(\mathcal{F}\) is constant along \(\omega_{s}\), which implies that \(\varphi\) is constant. Therefore, \(\omega_{1}=\omega_{2}\). Hence, \(\omega_{1}\) is the unique minimizer of \(\mathcal{F}\).
## Appendix B Proof of Proposition 7.11
In this appendix, we prove Proposition 7.11, following Guan [17]. Our criterion (2) in Proposition 5.2 for subsolutions follows Szekelyhidi's definition in [35] closely. In general, this notion of subsolutions differs from that of Guan in [17]. The two definitions of subsolutions have different analytic and geometric flavors. However, for positive definite matrices, Proposition 7.11 implies that the two are equivalent. This observation was first made by Fang-Lai-Ma [19] for inverse \(\sigma_{k}\)-equations.
Let \(F\) be as in (3.1). To emphasize the dependence of \(p\in M\), we denote
(B.1) \[F_{p}(A)=\left.\frac{(\Lambda\wedge\Omega)^{[n]}}{\Omega^{[n]}}\right|_{p}.\]
We pick local coordinates in a ball neighborhood \(W\) of a point \(p\). We may assume \(W\subset\mathbb{C}^{n}\) and trivialize the bundle of hermitian tensors over \(W\) as \(W\times\Gamma_{n\times n}\). We define a distance on \(W\times\Gamma_{n\times n}\) by
(B.2) \[d((q_{1},A_{1}),(q_{2},A_{2}))^{2}=|q_{1}-q_{2}|^{2}+|A_{1}-A_{2}|^{2}.\]
**Definition B.1**.: For \(q\in U\subset W\), denote
(B.3) \[\Gamma_{\Lambda}^{\kappa}(q):=\{A\in\Gamma_{n\times n}^{+}:F_{q}(A)<\kappa\},\]
and \(\Gamma_{\Lambda}^{\kappa}(U)=\{\{q\}\times\Gamma_{\Lambda}^{\kappa}(q):q\in U\}\). The level set \(\partial\Gamma_{\Lambda}^{\kappa}(q)\) consists of all \(A\) s.t. \(F_{q}(A)=\kappa\). Denote
(B.4) \[\mathcal{C}_{\Lambda}^{\kappa}(U):=\{\{q\}\times\mathcal{C}_{\Lambda}^{\kappa }(q):q\in U\}.\]
Since \(\mathcal{P}_{\Lambda}(A)\) is continuous in \(\Lambda\) and \(A\) (Lemma 5.10 (1) and (3)), \(\mathcal{C}_{\Lambda}^{\kappa}(U)\) is open in \(U\times\Gamma_{n\times n}\). By Lemma 6.1, \(\Gamma_{\Lambda}^{\kappa}(q)\subset\mathcal{C}_{\Lambda}^{\kappa}(q)\), and \(\Gamma_{\Lambda}^{\kappa}(U)\subset\mathcal{C}_{\Lambda}^{\kappa}(U)\).
_Remark B.2_.: Assuming **H2'** in section 7, by Lemma 6.6 and Lemma 6.7, \(F_{p}(A)\) is strictly decreasing and strictly convex in \(\mathcal{C}_{\Lambda}^{\kappa}(p)\).
**Lemma B.3**.: _Let \(U\Subset U^{\prime}\Subset W\) be open sets. Let \(E_{\Lambda}\subset\overline{\Gamma_{\Lambda}^{\kappa}(U^{\prime})}\cap \mathcal{C}_{\Lambda}^{\kappa}(U)\) be a compact set. There exist constants \(N>0\) and \(\mu>0\) depending on \(E_{\Lambda}\) s.t. for any \((q,\underline{A})\in E_{\Lambda}\), and \(A\) satisfying \(|A|>N\), \(F_{q}(A)=\kappa\), we have_
\[F_{q}^{i\bar{j}}(A)\left(A_{i\bar{j}}-\underline{A}_{i\bar{j}}\right)\geq\mu.\]
Proof.: Let \(\mathcal{B}_{R}(0)\) be the set of all Hermitian matrices with norm smaller than \(R\). For \(R>|\underline{A}|\), we define
(B.5) \[\varrho_{R}(q,\underline{A})=\sup_{A\in\partial\Gamma_{\Lambda}^{\kappa}(q) \cap\partial\mathcal{B}_{R}(0)}\min_{t\in[0,1]}F_{q}(t\underline{A}+(1-t)A)-\kappa.\]
\(\varrho_{R}\leq 0\) since \(F_{q}\) is convex in the set \(\mathcal{C}_{\Lambda}^{\kappa}(q)\). Since \(\partial\Gamma_{\Lambda}^{\kappa}(q)\cap\partial\mathcal{B}_{R}(0)\) is a compact set and \(F_{q}\) is continuous, \(\varrho_{R}\) is a continuous function defined on \(\mathcal{C}_{\Lambda}^{\kappa}(W)\).
We claim that \(\varrho_{R}\) is non-increasing with respect to \(R\) for \(R>|\underline{A}|\). In fact, suppose \(R^{\prime}>R\). Let \(\kappa^{\prime}=\varrho_{R^{\prime}}(q,\underline{A})+\kappa\). We may choose \(A^{\prime}\in\partial\Gamma_{\Lambda}^{\kappa}(q)\cap\partial\mathcal{B}_{R^{ \prime}}(0)\) and \(B\) in the segment \(\{t\underline{A}+(1-t)A^{\prime}:t\in[0,1]\}\) s.t.
\[\kappa^{\prime}=\varrho_{R^{\prime}}(q,\underline{A})+\kappa=F_{q}(B).\]
Let \(\mathbf{g}(x,y)=F_{q}(x\underline{A}+yA^{\prime})\) for \((x,y)\in\mathbb{R}^{+}\times\mathbb{R}^{+}\). Then \(\mathbf{g}\) is also a strictly decreasing and convex function. By convexity and monotonicity, \(\mathbf{g}(x,y)<\kappa\) in \(\{(x,y):x+y>1,\ x,y>0\}\). Then, \(\gamma=\mathbf{g}^{-1}(\kappa)\) is a continuous convex curve in \(\mathbb{R}^{+}\times\mathbb{R}^{+}\) and satisfies
(B.6) \[\gamma\subset\{(x,y):x+y\leq 1;x,y>0\}.\]
Pick \(x_{0},y_{0}>0\) such that \(\mathbf{g}(x_{0},y_{0})=\kappa\) and \(|x_{0}\underline{A}+y_{0}A^{\prime}|=R\). Denote \(A=x_{0}\underline{A}+y_{0}A^{\prime}\). Since \(\gamma\) is continuous, \(|\underline{A}|<R\), and \(|A^{\prime}|=R^{\prime}>R\), by the mean value theorem, such \((x_{0},y_{0})\) exists. By (B.6), \(x_{0}+y_{0}\leq 1\). By the convexity of \(\mathbf{g}\), \(\{x+y=1\}\) separates \(\mathbf{g}^{-1}((-\infty,\kappa^{\prime}))\) and \((x_{0},y_{0})\). Thus,
\[\kappa^{\prime} \leq\min_{t\in[0,1]}\mathbf{g}((1-t)x_{0}+t,(1-t)y_{0})\] \[=\min_{t\in[0,1]}F_{q}(t\underline{A}+(1-t)A^{\prime})\] \[\leq\varrho_{R}(q,\underline{A})+\kappa\]
Hence \(\varrho_{R}\) is non-increasing in \(R\).
For \(\underline{A}\in\overline{\Gamma_{\Lambda}^{\kappa}}(q)\), by the strict convexity of \(F_{q}\), we see that for large \(R\), \(\varrho_{R}(q,\underline{A})<0\). Since \(\varrho_{R}\) is continuous and \(E_{\Lambda}\) is compact, for large \(R>R(E_{\Lambda})\), \(\varrho_{R}(q,\underline{A})<-\mu<0\) in \(E_{\Lambda}\) for some \(\mu>0\). Then, for any \(A\) with \(|A|>R\) and \(F_{q}(A)=\kappa\), there exists \(A^{\prime}=t^{\prime}\underline{A}+(1-t^{\prime})A,t^{\prime}\in(0,1]\), s.t.
\[-\mu >\varrho_{R}(q,\underline{A})>\varrho_{|A|}(q,\underline{A})\] \[=F_{q}(A^{\prime})-F_{q}(A)\] \[\geq F_{q}^{i\bar{j}}(A)t^{\prime}\left(\underline{A}_{i\bar{j}} -A_{i\bar{j}}\right).\]
Thus, \(F_{q}^{i\bar{j}}(A)(A_{i\bar{j}}-\underline{A}_{i\bar{j}})\geq\mu>0\).
**Lemma B.4**.: _Let \(U\Subset U^{\prime}\Subset W\) be open sets. Let \(E_{\Lambda}\subset\mathcal{C}_{\Lambda}^{\kappa}(U)\backslash\overline{\Gamma _{\Lambda}^{\kappa}(U^{\prime})}\) be a compact subset. There exists a constant \(N(E_{\Lambda})>0\) depending on \(E_{\Lambda}\) s.t. for any \(\{q\}\times\underline{A}\in E_{\Lambda}\), and \(A\) satisfying \(|A|>N\), \(F_{q}(A)=\kappa\), we have_
\[F_{q}^{i\bar{j}}(A)(A_{i\bar{j}}-\underline{A}_{i\bar{j}})\geq 0.\]
Proof.: We argue by contradiction. If the claim is false, there exist
1. a sequence \((q_{i},\underline{A}_{i})\in E_{\Lambda}\),
2. a sequence of real numbers \(t_{i}\in(0,\infty)\) and \(t_{i}\to\infty\),
3. a sequence of positive Hermitian matrices \(A_{i}=\underline{A}_{i}+t_{i}B_{i}\) with \(|B_{i}|=1\) and \(F_{q_{i}}(A_{i})=\kappa\),
such that
(B.7) \[F_{q_{i}}^{k\bar{j}}(A_{i})((A_{i})_{k\bar{j}}-\left(\underline{A}_{i}\right)_ {k\bar{j}})<0.\]
Geometrically, (B.7) implies that the tangent plane of the level set \(\partial\Gamma_{\Lambda}^{\kappa}(q_{i})\) at \(A_{i}\) separates the level set \(\partial\Gamma_{\Lambda}^{\kappa}(q_{i})\) and \(\underline{A}_{i}\). Therefore, for any \(t\in(0,t_{i})\),
(B.8) \[F_{q_{i}}(\underline{A}_{i}+tB_{i})>\kappa.\]
Since the unit sphere in the space of Hermitian matrices is compact, we may find subsequences such that
\[(q_{i},\underline{A}_{i})\rightarrow(q_{\infty},\underline{A}_{\infty})\in E _{\Lambda},\ B_{i}\to B_{\infty}\in\partial\mathcal{B}_{1}(0).\]
Since \(F_{q}\) is continuous in \(q\) and upper semicontinuous in \(A\) (by convexity), \(F_{q_{\infty}}(\underline{A}_{\infty}+tB_{\infty})\geq\kappa\) for all \(t\in(0,\infty)\). From the strict convexity of \(F_{q}\), \(F_{q_{\infty}}(\underline{A}_{\infty}+tB_{\infty})>\kappa\) for all \(t\in(0,\infty)\). Also, as \(\overline{\Gamma_{n\times n}^{+}}\) is closed, \(\underline{A}_{\infty}+tB_{\infty}\in\overline{\Gamma_{n\times n}^{+}}\) for all \(t\in[0,\infty)\). We consider the following two cases.
Case 1. \(B_{\infty}\not\in\overline{\Gamma_{n\times n}^{+}}\). Then, \(B_{\infty}\) has a negative eigenvalue. For large enough \(t\), \(\underline{A}_{\infty}+tB_{\infty}\not\in\overline{\Gamma_{n\times n}^{+}}\). Here we reach a contradiction.
Case 2. \(B_{\infty}\in\overline{\Gamma_{n\times n}^{+}}\). Then, by the definition of \(\underline{A}_{\infty}\in\mathcal{C}_{\Lambda}^{\kappa}(q_{\infty})\),
\[\lim_{t\rightarrow\infty}F_{q_{\infty}}(\underline{A}_{\infty}+tB_{\infty})<\kappa.\]
It is clearly a contradiction to (B.8).
Therefore, we have proved the lemma.
**Lemma B.5**.: _Let \(U\Subset U^{\prime}\Subset W\) be open sets. Let \(E_{\Lambda}\subset\mathcal{C}_{\Lambda}^{\kappa}(U)\) be a compact set. There exist constants \(N>0\) and \(\mu>0\) depending on \(E_{\Lambda}\) s.t. for any \((q,\underline{A})\in E_{\Lambda}\), and \(A\) satisfying \(|A|>N\), \(F_{q}(A)=\kappa\), it holds_
(B.9) \[F_{q}^{i\bar{j}}(A)\left(A_{i\bar{j}}-\underline{A}_{i\bar{j}}\right)\geq\mu \left(1-\sum_{i}F_{q}^{i\bar{i}}(A)\right).\]
Proof.: Since \(E_{\Lambda}\) is compact, we pick \(\delta<\operatorname{dist}(E_{\Lambda},(\mathcal{C}_{\Lambda}^{\kappa}(U))^{ c})/(4n)\). Let
(B.10) \[E_{\Lambda}^{\delta}:=\{(q,B-\delta\mathrm{Id}):(q,B)\in E_{\Lambda}\}\subset \mathcal{C}_{\Lambda}^{\kappa}(U).\]
For \((q,\underline{A})\in E_{\Lambda}\), let \(\underline{A}^{\delta}=\underline{A}-\delta\mathrm{Id}\). We discuss the following two cases:
Case 1. \((q,\underline{A}^{\delta})\in E_{\Lambda}^{\delta}\cap\overline{\Gamma_{ \Lambda}^{\kappa}(U^{\prime})}\). If \(F_{q}(A)=\kappa\), the line \(l=\{t\underline{A}^{\delta}+(1-t)A:t\in[0,1]\}\) lies entirely in \(\overline{\Gamma_{\Lambda}^{\kappa}(q)}\). From Lemma B.3, if \(R>R(E_{\Lambda}^{\delta})\), \(|A|>R\), \(F_{q}(A)=\kappa\), then for some \(\mu_{0}>0\) depending on \(E_{\Lambda}^{\delta}\)
(B.11) \[F_{q}^{i\bar{j}}(A)(A_{i\bar{j}}-\underline{A}_{i\bar{j}}^{\delta})\geq\mu_{0}>0.\]
(B.11) then implies
(B.12) \[F_{q}^{i\bar{j}}(A)\left(A_{i\bar{j}}-\underline{A}_{i\bar{j}}\right)\geq\mu_{ 0}-\delta\sum_{i=1}^{n}F^{i\bar{i}}(A).\]
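The passage from (B.11) to (B.12) is just the identity \(\underline{A}=\underline{A}^{\delta}+\delta\mathrm{Id}\) (recorded here for clarity):
\[F_{q}^{i\bar{j}}(A)\left(A_{i\bar{j}}-\underline{A}_{i\bar{j}}\right)=F_{q}^{i\bar{j}}(A)\left(A_{i\bar{j}}-\underline{A}_{i\bar{j}}^{\delta}\right)-\delta F_{q}^{i\bar{j}}(A)\,\mathrm{Id}_{i\bar{j}}\geq\mu_{0}-\delta\sum_{i=1}^{n}F_{q}^{i\bar{i}}(A).\]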
Then we obtain (B.9) by taking \(\mu=\min\{\mu_{0},\delta\}\) in (B.12).
Case 2. \((q,\underline{A}^{\delta})\in E^{\delta}_{\Lambda}\backslash\overline{\Gamma^{\kappa}_{\Lambda}(U^{\prime})}\). We denote \(G^{\delta}_{\Lambda}:=\{(q,B-\delta\mathrm{Id}):(q,B)\in\overline{E^{\delta}_{\Lambda}\backslash\overline{\Gamma^{\kappa}_{\Lambda}(U^{\prime})}}\}\). Then \(G^{\delta}_{\Lambda}\) is a compact subset in \(\mathcal{C}^{\kappa}_{\Lambda}(U)\backslash\overline{\Gamma^{\kappa}_{\Lambda}(U^{\prime})}\) and \((q,\underline{A}^{2\delta})\in G^{\delta}_{\Lambda}\). From Lemma B.4, for large enough \(R>R(G^{\delta}_{\Lambda})\), \(|A|>R\), \(F_{q}(A)=\kappa\), the line \(l=\{t\underline{A}^{2\delta}+(1-t)A:t\in[0,1]\}\) intersects \(\partial\Gamma^{\kappa}_{\Lambda}(q)\) at
(B.13) \[A^{\prime}=t_{0}\underline{A}^{2\delta}+(1-t_{0})A\]
for some \(t_{0}\in(0,1)\). Since \(F_{q}(\underline{A}^{2\delta})>\kappa\), the tangent plane of \(\partial\Gamma^{\kappa}_{\Lambda}(q)\) at \(A^{\prime}\) must separate \(\underline{A}^{2\delta}\) and \(\Gamma^{\kappa}_{\Lambda}(q)\) which indicates
(B.14) \[F^{i\bar{j}}_{q}(A^{\prime})(A^{\prime}_{i\bar{j}}-\underline{A}^{2\delta}_{ i\bar{j}})<0.\]
From Lemma B.4, \(|A^{\prime}|\leq N(G^{\delta}_{\Lambda})\). Therefore, all such pairs \((q,A^{\prime})\) lie in a compact subset \(\tilde{E}_{\Lambda}\) of \(\partial\Gamma^{\kappa}_{\Lambda}(U^{\prime})\cap\mathcal{C}^{\kappa}_{ \Lambda}(U)\), which depends on \(G^{\delta}_{\Lambda}\). We apply Lemma B.3 to \(\tilde{E}_{\Lambda}\) to get
(B.15) \[F^{i\bar{j}}_{q}(A)(A_{i\bar{j}}-A^{\prime}_{i\bar{j}})>\mu(\tilde{E}_{\Lambda }),\]
if \(|A|>R(\tilde{E}_{\Lambda})\). From (B.15),
(B.16) \[F^{i\bar{j}}_{q}\left(A\right)\left(A_{i\bar{j}}-\underline{A}^{2 \delta}_{i\bar{j}}\right) =t_{0}^{-1}F^{i\bar{j}}_{q}(A)(A_{i\bar{j}}-A^{\prime}_{i\bar{j}})\] \[>\mu(\tilde{E}_{\Lambda}).\]
Thus,
(B.17) \[F^{i\bar{j}}_{q}(A)\left(A_{i\bar{j}}-\underline{A}_{i\bar{j}}\right)\geq\mu( \tilde{E}_{\Lambda})-2\delta\sum_{i=1}^{n}F^{i\bar{i}}_{q}(A).\]
Again, we may choose \(\mu=\min\{\mu(\tilde{E}_{\Lambda}),2\delta\}\). We have finished the proof.
Finally, we prove Proposition 7.11.
Proof of Proposition 7.11.: We pick a finite coordinate ball covering \(\mathscr{P}=\{B_{j,4R}(q_{j})\}_{j=1}^{l}\) such that \(\{B_{j,R}(q_{j})\}_{j=1}^{l}\) also covers \(M\). Denote
\[\omega_{\mathrm{sub}}(p)=\frac{\sqrt{-1}}{2}\left(\underline{A}_{j}(p)\right)_ {i\bar{k}}dz^{i}\wedge d\bar{z}^{k},\]
in \(B_{j,4R}(q_{j})\). Then, \(\{(p,\underline{A}_{j}(p)):p\in\overline{B_{j,R}(q_{j})}\}\) forms a compact subset in \(\mathcal{C}^{\kappa}_{\Lambda}(B_{j,2R}(q_{j}))\). Proposition 7.11 then follows by applying Lemma B.5 to each \(\{(p,\underline{A}_{j}(p)):p\in\overline{B_{j,R}(q_{j})}\}\).
## Appendix C List of notations
* \(\Omega=\exp\omega\), \(P=\exp\rho\).
* \(\kappa:\) positive constant such that \(\int_{M}(\kappa-\Lambda)\wedge\Omega=0\).
* \(m\): positive constant in the definition of uniform positivity.
* \(k_{0}\) : degree in **H1**.
* \(\mathring{\Lambda}\): \(\Lambda\) minus \((n,n)\)-component.
* \(\mathcal{C}^{\kappa}_{\Lambda}\): set of Kahler forms satisfying the cone condition (1.5) on \(M\).
* \(\Gamma_{n\times n},\Gamma^{+}_{n\times n},\overline{\Gamma^{+}_{n\times n}}\) : the set of Hermitian matrices, positive definite Hermitian matrices, and non-negative Hermitian matrices, respectively.
* \(\mathcal{O}\): a labeled orthogonal splitting structure on \(M\). Definition 2.1.
* \(\mathcal{T}M\): holomorphic tangent space.
* \(\sigma_{k}(A)\): \(k\)-th symmetric function of eigenvalues of a Hermitian matrix \(A\).
* \(T_{k-1}(A)\): linearized operator of \(\sigma_{k}(A)\).
* \(F(A),F_{k}(A)\): local functionals defined in (3.1) and (3.6).
* \(F_{\Lambda}(A:B)\): defined in (5.9).
* \(\mathcal{P}_{\Lambda}(A)\): defined in (5.10).
* \(\chi\): defined in Definition 3.3.
* \(\mathcal{F}(\varphi)\) : the global functional defined in (A.7).
* \(\widetilde{\max}_{\eta}(\cdot,\cdots,\cdot)\) : regularized maximum with parameter \(\eta\).
* \(\mathcal{M}=M_{1}\times M_{2}\): the product manifold of \(M_{1}=M_{2}=M\).
* \(\Phi:M^{\prime}\to M,\) desingularization map.
* \(Y\subset M\), \(\hat{Y}\subset M^{\prime}\) are subvarieties and \(\hat{Y}\) is the strict transform of \(Y\).
* \(\mathcal{Y}=\hat{Y}_{1}\times\hat{Y}_{2}\) where \(\hat{Y}_{1}\simeq\hat{Y}_{2}\simeq\hat{Y}\).
* \(\Delta=\{(y,y):y\in\hat{Y}\}\): the diagonal in \(\mathcal{Y}\).
* \(\hat{\epsilon}\): perturbation parameter.
* \(K,K_{1}:\) constants for comparing \(\varpi\), \(\omega_{0}\), \(\hat{\omega}_{0}\) in (9.13) and (10.84).
* \(c_{t,\hat{\epsilon}}\), (10.1); \(c_{t,\hat{\epsilon},\delta_{\rho}}\), (10.7); \(\hat{c}\), (10.23).
* \(\epsilon_{\Lambda}\): the parameter that controls the covering.
* \(S\): the exceptional locus of \(\Phi\).
* \(\varpi\): a metric on \(M^{\prime}\).
* \(\hat{\rho}\), \(\hat{P}\): \(\hat{\rho}\) is the perturbed reference metric on \(\hat{Y}\), (9.12), \(\hat{P}=\exp\hat{\rho}\).
* \(\Lambda_{x},\Lambda_{y}\): lifted \(\Lambda\) and \(\frac{1}{d}\hat{\rho}\) on \(\hat{Y}_{1}\) and \(\hat{Y}_{2}\), (10.2).
* \(\boldsymbol{\Lambda},\boldsymbol{\rho},\boldsymbol{\varpi}\) : lifted forms on \(\mathcal{Y}\), (10.4).
* \(\boldsymbol{\omega}_{x},\boldsymbol{\omega}_{y},\boldsymbol{\omega}_{m}\): (10.20), (10.21), (10.22).
* \(\mathbf{F},F_{1},F_{2},\mathcal{P}_{\boldsymbol{\Lambda}},\mathcal{P}_{1}, \mathcal{P}_{2}\) : (10.27)-(10.32).
* \(\mathbf{I}_{\hat{\epsilon}}\): the solution interval for equation (9.24).
* \(\Upsilon^{(r)}\): local regularization of a \((1,1)\)-current \(\Upsilon\) with scale \(r\).
* \(\nu_{\phi}(p),\nu_{\Upsilon}(p)\): Lelong number of \(\phi,\Upsilon\) at \(p\).
* \(\delta_{\Delta}\): constant that controls the mass concentration.
* \(\epsilon_{\Delta}\): mass concentration on \(\Delta\). |
2309.12458 | A Theory of Multimodal Learning | Human perception of the empirical world involves recognizing the diverse
appearances, or 'modalities', of underlying objects. Despite the longstanding
consideration of this perspective in philosophy and cognitive science, the
study of multimodality remains relatively under-explored within the field of
machine learning. Nevertheless, current studies of multimodal machine learning
are limited to empirical practices, lacking theoretical foundations beyond
heuristic arguments. An intriguing finding from the practice of multimodal
learning is that a model trained on multiple modalities can outperform a
finely-tuned unimodal model, even on unimodal tasks. This paper provides a
theoretical framework that explains this phenomenon, by studying generalization
properties of multimodal learning algorithms. We demonstrate that multimodal
learning allows for a superior generalization bound compared to unimodal
learning, up to a factor of $O(\sqrt{n})$, where $n$ represents the sample
size. Such advantage occurs when both connection and heterogeneity exist
between the modalities. | Zhou Lu | 2023-09-21T20:05:49Z | http://arxiv.org/abs/2309.12458v2 | # A Theory of Multimodal Learning
###### Abstract
Human perception of the empirical world involves recognizing the diverse appearances, or'modalities', of underlying objects. Despite the longstanding consideration of this perspective in philosophy and cognitive science, the study of multimodality remains relatively under-explored within the field of machine learning. Nevertheless, current studies of multimodal machine learning are limited to empirical practices, lacking theoretical foundations beyond heuristic arguments. An intriguing finding from the practice of multimodal learning is that a model trained on multiple modalities can outperform a finely-tuned unimodal model, even on unimodal tasks. This paper provides a theoretical framework that explains this phenomenon, by studying generalization properties of multimodal learning algorithms. We demonstrate that multimodal learning allows for a superior generalization bound compared to unimodal learning, up to a factor of \(O(\sqrt{n})\), where \(n\) represents the sample size. Such advantage occurs when both connection and heterogeneity exist between the modalities.
## 1 Introduction
Even before the common era, the concept of viewing an object as the collection of its appearances had already sprouted in early philosophy. The Buddha, in the 'Diamond Sutra', separated the essence of the universe from various modalities such as sight, sound, smell, taste and touch. Two centuries ago, Immanuel Kant made a further step, positing that humans perceive only the representations of 'noumena' from the empirical world. He wrote:
_"And we indeed, rightly considering objects of sense as mere appearances, confess thereby that they are based upon a thing in itself, though we know not this thing in its internal constitution, but only know its appearances, viz., the way in which our senses are affected by this unknown something. - Prolegomena"_
From this perspective, human cognition of the world may therefore be considered effectively equivalent to the multiple modalities of the underlying objects. The importance of multimodality extends beyond metaphysics to everyday life: children learning languages often rely on illustrations, and even mathematicians benefit from visual aids.
However, machine learning, which could be seen as the cognition of computer systems, has not fully harnessed the power of multimodality. Multimodal machine learning, which processes and learns from data with multiple modalities, remained relatively under-explored until recently. Despite the impressive success of multimodal learning in empirical applications, such as Gato [32] and GPT-4 [27], the corresponding theoretical understanding is largely absent, often limited to heuristics.
A fascinating observation from empirical multimodal learning is that a model trained with multiple modalities can outperform a finely-tuned unimodal model, even on population data of the same unimodal task. It's not immediately clear why multimodality offers such an advantage, considering that the trained model's focus is spread across different modalities.
While it seems challenging to outperform unimodal learning asymptotically when sufficient data is available, multimodal learning can still provide an edge under a fixed data budget. Different modalities might focus on
different aspects of an object, and for a specific classification problem, one modality may require a smaller sample complexity. This phenomenon often occurs with large models handling many tasks and a vast amount of training data, suggesting that:
Training across tasks learns a common connection between modalities efficiently, allowing the model to adapt to the modality with the smallest sample complexity.
An intuitive example of how multiple modalities help is learning parametric sine functions. The samples come in the form of
\[x\in(0,1],y=\theta x,z=\sin(1/y),\]
where \(x,y\) are the two modalities and \(z\) is the label. Given data from both modalities the learning problem is trivial, even with a single training data point, while learning solely on \(x\) is hard albeit there is a bijective mapping between \(x,y\). From a perspective of VC-dimension, there is a gap between the class of linear functions \(\{\theta x\}\) and the class of parametric sine functions \(\{\sin(1/\theta x)\}\), in that the former one has VC-dimension 1 while the latter one has infinite VC-dimension. More details will be provided later.
The theory problem we study in this paper, is thus how to formalize the above heuristic with provable guarantees. To this end, we examine generalization bounds of a simple multimodal ERM algorithm, which involves two parallel stages: learning a predictor \(\hat{f}\in\mathcal{F}\) based on multimodal training data, and learning a connection \(\hat{g}\in\mathcal{G}\) that maps one modality to another with potentially unlabeled data. During inference, the composition \(\hat{f}\circ\hat{g}\) is used to perform prediction on unimodal population data.
In this setting, we prove that the learnt unimodal predictor \(\hat{f}\circ\hat{g}\) can achieve vanishing generalization error against the best multimodal predictor \(f^{*}\) as if given multiple modalities, whenever \(\mathcal{G}\) is expressive enough to realize the training data. In addition, such generalization bound depends on the complexities of both hypothesis classes \(\mathcal{F},\mathcal{G}\) separately, better than unimodal approaches which typically involve the complexity of \(\mathcal{F}\circ\mathcal{G}\) or a worst-case complexity of \(\mathcal{F}\), up to an \(O(\sqrt{n})\) factor where \(n\) denotes the size of training data. On the other hand, we show a separation between multimodal and unimodal learning, by constructing a hard instance learnable by multimodal learning, in which no matter what hypothesis class is chosen for the unimodal learning problem, it's either under-expressive or over-expressive and thus incurs constant error. Putting the two pieces together, our theory suggests that with both connection and heterogeneity, multimodal learning is provably better than unimodal learning.
The paper is organized as follows. In section 2 we formalize the setting of multimodal learning and provide a motivating example. Section 3 proves a generalization upper bound of the two-stage multimodal ERM algorithm on semi-supervised multitask learning problems. The lower bound on the separation between multimodal and unimodal learning is given in section 4, then we discuss the limitations of this paper and future directions in section 5.
### Related Works
**Theoretical Multimodal Learning**: while empirical multimodal learning has shown significant progress, theoretical studies are relatively sparse, lacking a firm foundation. Prior theoretical investigations often focus on specific settings or incorporate additional assumptions. Some of these studies adopt an information-theoretic perspective, proposing algorithms based on total correlation or utilizing partial information decomposition to quantify relationships between modalities [36, 17]. Other studies approach the problem from a multi-view setting, typically assuming that each view alone suffices for prediction [41, 1, 11, 35].
An important work in theoretical multimodal learning is [16], which also considered the advantage of multimodal learning in generalization and is the first and probably the only general theoretical result in this field so far. In particular, they considered the population risk of a representation learning based approach where learning under different subsets of modalities is performed on the same ERM objective with shared hypothesis classes. They proved that the gap between the population risks of different subsets of modalities is lower bounded by the difference between what they called the latent representation quality, which is the best achievable population risk with the learnt representation on the chosen subset of modalities.
There are two limitations in this result: first, there is no quantitative characterization of how large the gap between latent representation qualities can be; second, the comparison is not only instance-dependent, but also carried out over the same hypothesis classes and doesn't exclude the possibility that the smaller subset of modalities could
potentially use a different class to bypass the gap, making the lower bound somewhat restricted. We strengthen this result by showing the gap can be as large as \(\Omega(1)\) (Theorem 7), even if we allow the smaller subset of modalities to use any hypothesis class. They also showed an upper bound on the excess population risk via a standard representation learning analysis, which involves the complexity of a composition of hypothesis classes \(\mathcal{F}\circ\mathcal{G}\), while our analysis decouples the complexities of hypothesis classes, leading to an improved upper bound up to a factor of \(O(\sqrt{n})\).
A recent work of [33] made a more fine-grained study on multimodal learning, analyzing the benefit of contrastive loss in training dynamics. They considered the aspect of optimization instead of generalization for a particular problem, focusing on the setting of a linear data-generating model. They proved that the use of contrastive loss is both sufficient and necessary for the training algorithm to learn aligned and balanced representations.
**Empirical Multimodal Learning**: the inception of multimodal learning applications dates back to the last century, initially devised to enhance speech recognition using both vision and audio [40, 26]. Multimedia is another area in which multimodal learning inspired new methods for indexing and searching [10, 19]. With the development of deep learning and its success in computer vision [14] and natural language processing [39], researchers started studying deep multimodal learning in related tasks such as generating one modality from the other [8, 15, 31]. For a more comprehensive introduction to multimodal learning, we refer the readers to the excellent survey paper [4], see also [30, 13, 18].
Recently, the power of multimodal learning has been demonstrated by large-scale generalist models. In [32], the training data includes a wide variety of modalities such as image, text, robotics and so on. The resulting model is reported to be able to beat fine-tuned unimodal models in some tasks. In a more recent ground-breaking result, the super-large language model GPT-4 [27] makes use of not only text data available on the internet, but also data from other modalities such as audio, demonstrating excellent capabilities in integrating knowledge from multiple domains.
**Representation Learning**: this field, closely related to our work, focuses on learning a common underlying representation across multiple tasks. The typical framework of representation learning involves solving an ERM problem with a composition of hypotheses \(f_{t}\circ g\), where \(f_{t}\) is task-specific while \(g\) is the common representation. Generalization bounds of representation learning usually involve the complexity of \(\mathcal{F}\circ\mathcal{G}\) or a worst-case complexity of \(\mathcal{F}\).
Starting from Baxter's study [5] which gave theoretical error bounds via covering numbers on the inductive bias learning approach [37], a long line of work has followed, each improving upon and generalizing the previous results [6, 2, 21, 7, 20, 29, 28].
For more recent works we detail several representative ones here. The work of [24] studied both representation learning and transfer learning in the setting of multitask learning, achieving dimension independent generalization bounds with a chain rule on Gaussian averages [23]. For the problem of transfer learning, [38] improved the leading term of [24] by a \(O(\sqrt{n})\) factor under a task diversity assumption, while [9] obtained a similar bound under low-dimension and linear function assumptions. [3] analyzed a generalized setting called contrastive learning inspired by the success of empirical language models.
## 2 Setting
In this paper, we consider a straightforward yet non-trivial case of two modalities to ensure clarity in our presentation. Formally, we denote the set of possible observations \(\mathcal{S}\) to be \((\mathcal{X},\mathcal{Y},\mathbb{R})\). Here, each element \(s\in\mathcal{S}\) constitutes a pairing of inputs from both modalities \(x\in\mathcal{X}\subset\mathbb{R}^{q},y\in\mathcal{Y}\subset\mathbb{R}^{k}\) and their associated label \(z\in\mathbb{R}\), thus forming a tuple \((x,y,z)\). We assume without loss of generality that both \(\mathcal{X}\) and \(\mathcal{Y}\) are contained within their respective Euclidean unit balls. Given a probability measure \(\mu\) on \(\mathcal{S}\) and a loss function \(\ell\), the performance of a learning algorithm \(\mathcal{A}\) is measured by the population loss if we interpret \(\mathcal{A}\) as a function, namely
\[\mathbb{E}_{(x,y,z)\sim\mu}\ell(\mathcal{A}(x,y),z).\]
To leverage the hidden correlation between different modalities, we aim to learn both a connection function \(g\) bridging the modalities and a prediction function \(f\) that accepts inputs from both modalities. Although we focus on learning a connection from \(\mathcal{X}\) to \(\mathcal{Y}\) for simplicity, a symmetrical approach can handle the reverse direction.
In particular, we will consider learning algorithms as a composition of functions \(\mathcal{A}(x,y)=f(x,g(x))\), where \(f\in\mathcal{F}\) and \(g\in\mathcal{G}\) represent the hypothesis classes for both functions. This form is the most general one; common practical forms such as fusion \(\mathcal{A}(g(x),h(y))\) can be subsumed by it.
The goal is to identify \(\hat{f},\hat{g}\) using multi-modal training data, to minimize the excess population risk
\[\mathbb{E}_{(x,y,z)\sim\mu}\ell(\hat{f}(x,\hat{g}(x)),z)-\min_{f\in\mathcal{F} }\mathbb{E}_{(x,y,z)\sim\mu}\ell(f(x,y),z). \tag{1}\]
In this context, we compare with the optimal predictor \(f^{*}\) as if given complete observations of both modalities, because our objective is to achieve a performance comparable to the best predictor given both modalities. The reason is that such a predictor could have a significant advantage over any unimodal predictor that does not learn these connections (either explicitly or implicitly).
We seek statistical guarantees for \(\hat{f}\) and \(\hat{g}\). To provide generalization bounds on the excess population risk, we require a complexity measure of hypothesis classes, defined as follows.
**Definition 1** (Gaussian average).: Let \(Y\) be a non-empty subset of \(\mathbb{R}^{n}\), the Gaussian average of \(Y\) is defined as
\[G(Y)=\mathbb{E}_{\sigma}\left[\sup_{y\in Y}\sum_{i=1}^{n}\sigma_{i}y_{i}\right]\]
where \(\sigma_{i}\) are iid standard normal variables. Similarly, we can define the function version of Gaussian average. Let \(\mathcal{G}\) be a function class from the domain \(\mathcal{X}\) to \(\mathbb{R}^{k}\), and \(X=\{x_{1},...,x_{n}\}\) be the set of input sample points. We define the Gaussian average of the class \(\mathcal{G}\) on sample \(X\) as:
\[G(\mathcal{G}(X))=\mathbb{E}_{\sigma}\left[\sup_{g\in\mathcal{G}}\sum_{i=1}^{k }\sum_{j=1}^{n}\sigma_{i,j}g_{i}(x_{j})\right],\]
where \(\sigma_{i,j}\) are iid standard normal variables.
We make the following Lipschitz assumption on the class \(\mathcal{F}\) and the loss function \(\ell\), which is standard in literature.
**Assumption 1**.: _We assume that any function \(f:\mathbb{R}^{q+k}\rightarrow\mathbb{R}\) in the class \(\mathcal{F}\) is \(L\)-Lipschitz, for some constant \(L>0\). The loss function \(\ell\) takes value in \([0,1]\), and is 1-Lipschitz in the first argument for every value of \(z\)._
### A Motivating Example
To introduce our theoretical findings, we present a straightforward but insightful example, illustrating the circumstances and mechanisms through which multimodal learning can outperform unimodal learning. Despite its simplicity and informal nature, this example captures the core concept of why multimodal learning requires both connection and heterogeneity, and we believe it is as vital as the more formal statements that follow.
Consider the problem where \(\mathcal{X}=\mathcal{Y}=(0,1]\). Any potential data point \((x,y,z)\) from \(\mathcal{S}\) is governed by a parameter \(\theta^{*}\in(0,1]\), such that
\[y=\theta^{*}x,z=\sin(1/y).\]
The loss function choice is flexible in this case, and any frequently used loss function like the \(\ell_{1}\) loss will suffice.
Suppose we have prior knowledge about the structure of the problem, and we select \(\mathcal{G}=\{g(x)=\theta x\mid\theta\in(0,1]\}\) and \(\mathcal{F}=\{\sin(1/x)\}\) as our hypothesis classes. If we have sampled data from both modalities, we can easily learn the correct hypothesis via Empirical Risk Minimization (ERM): simply take any \((x,y,z)\) sample and compute \(\theta=y/x\).
However, if we only have sampled data with the \(\mathcal{Y}\) modality concealed, there could be multiple \(\theta\) values that minimize the empirical loss, making the learning process with \(\mathcal{F}\circ\mathcal{G}\) significantly more challenging. To formalize this, we can calculate the Gaussian averages for both scenarios. In the multimodal case, the Gaussian average of \(\mathcal{F}\) is zero since it's a singleton. \(G(\mathcal{G}(X))\) can be upper bounded by
\[G(\mathcal{G}(X))=\mathbb{E}_{\sigma}\left[\sup_{\theta\in(0,1]}\theta\sum_{i =1}^{n}\sigma_{i}x_{i}\right]\leq\mathbb{E}_{\sigma}\left[\left|\sum_{i=1}^{ n}\sigma_{i}x_{i}\right|\right]=O(\sqrt{n}).\]
In contrast, the Gaussian average of \(\mathcal{F}\circ\mathcal{G}\) is larger by a factor of \(\sqrt{n}\)
\[G(\mathcal{F}\circ\mathcal{G}(X))=\mathbb{E}_{\sigma}\left[\sup_{\theta\in(0,1]} \sum_{i=1}^{n}\sigma_{i}\sin(\frac{1}{\theta x_{i}})\right]\geq\mathbb{E}_{ \sigma}\left[\sum_{i=1}^{n}\frac{1}{2}|\sigma_{i}|\right]=\Omega(n), \tag{2}\]
for some sample \(X\) (we leave the proof in the appendix), see also [25].
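As an informal sanity check of this gap (our own illustration, not part of the original argument), the two Gaussian averages can be estimated by Monte Carlo: draw standard normal vectors, take the supremum over a grid of \(\theta\in(0,1]\), and average. The sketch below uses a generic random sample \(X\); the \(\Omega(n)\) lower bound holds for a worst-case sample, so the second estimate should be read as a trend rather than an exact constant.

```python
import numpy as np

rng = np.random.default_rng(0)
thetas = np.linspace(1e-3, 1.0, 500)          # grid approximation of theta in (0,1]

def gaussian_avg_G(x, trials=200):
    """Monte Carlo estimate of G(G(X)) for G = {g(x) = theta * x : theta in (0,1]}."""
    total = 0.0
    for _ in range(trials):
        sigma = rng.standard_normal(len(x))
        total += max(0.0, float(sigma @ x))    # sup_theta theta * <sigma, x> = max(0, <sigma, x>)
    return total / trials

def gaussian_avg_FG(x, trials=200):
    """Monte Carlo estimate of G(F o G(X)) for f(y) = sin(1/y) composed with g(x) = theta * x."""
    total = 0.0
    for _ in range(trials):
        sigma = rng.standard_normal(len(x))
        total += max(float(sigma @ np.sin(1.0 / (t * x))) for t in thetas)
    return total / trials

for n in (50, 200, 800):
    x = rng.uniform(0.1, 1.0, size=n)          # a generic sample; a worst-case X makes the gap larger
    print(n, round(gaussian_avg_G(x), 1), round(gaussian_avg_FG(x), 1))
```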
This separation in Gaussian averages implies that unimodal learning can be statistically harder than multimodal learning, even if there exists a simple bijective mapping between \(x,y\). We summarize the intrinsic properties of multimodal learning leading to such separation as follows:
**Heterogeneity:**: multimodal data is easier to learn than unimodal data.
**Connection:**: a mapping between multiple modalities is learnable.
Thus, the superiority of multi-modality can be naturally decomposed into two parts: a model trained with multi-modal data performs comparably on uni-modal population data as if multi-modal data were provided (connection), and a model trained and tested with multi-modal data outperforms any model given only uni-modal data (heterogeneity).
We note that both connection and heterogeneity are crucial to achieving such advantage: connection allows efficiently learning of \(\mathcal{Y}\) from \(\mathcal{X}\), while heterogeneity guarantees that learning with \(\mathcal{X}\) is harder than learning with both \(\mathcal{X},\mathcal{Y}\). Lacking either one can lead to ill-conditioned cases: when \(x\equiv y\) the connection is perfect while there is no heterogeneity and thus no need to learn anything about \(\mathcal{Y}\) at all. When \(x\) is a random noise the heterogeneity is large while there can't be any connection between \(\mathcal{X},\mathcal{Y}\), and it's impossible to come up with a non-trivial learner solely on \(\mathcal{X}\).
**Remark 2**.: The example above can be converted to be hard for each single modality. Any potential data point \((x,y,z)\) is now generated by three parameters \(c\in(0,1),\theta_{1}\in(1,2),\theta_{2}\in(-2,-1)\), under the constraint that \(\theta_{1}+\theta_{2}\neq 0\), and \((x,y,z)\) is of form \((c\theta_{1},c\theta_{2},c(\theta_{1}+\theta_{2}))\). The hypothesis classes are now \(\mathcal{G}=\{g(x)=\theta x,\theta\in(-1,0)\cup(0,1)\}\) and \(\mathcal{F}=\sin(1/x)\). For any uni-modal data \(x=c\theta_{1}\), the range of ratio \((x+y)/x\) is \((1-2/\theta_{1},0)\cup(0,1-1/\theta_{1})\). This range is a subset of \((-1,0)\cup(0,1)\) and we have that \(\max(|1-2/\theta_{1}|,|1-1/\theta_{1}|)\geq 1/4\). As a result, \(G(\mathcal{F}\circ\mathcal{G}(X))\) in this case is at least \(1/4\) of that in the simpler example, thus the term remains \(\Omega(n)\). On the other hand, we have that \(\max(|1-2/\theta_{1}|,|1-1/\theta_{1}|)\leq 1\), so \(G(\mathcal{G}(X))=O(\sqrt{n})\) holds still. The same argument holds for \(\mathcal{Y}\) similarly.
## 3 The Case of Semi-supervised Multitask Learning
The efficacy of practical multimodal learning often hinges on large models and extensive datasets, with the majority potentially being unlabeled. This is especially true when the training data encompasses a broad spectrum of tasks. Given these conditions, we are interested in the power of multimodality within the realm of semi-supervised multitask learning, where the model leverages substantial amounts of unlabeled data from various tasks.
Consider the following setup for semi-supervised multitask multimodal learning. The training data is taken from a number of tasks and comes in the form of a multi-sample, partitioned into two parts \(S,S^{\prime}\). The labeled sample \(S\) takes the form \(S=(S_{1},...,S_{T})\) where \(S_{t}=(s_{t1},...,s_{tn})\sim\mu_{t}^{n}\), in which \(\mu_{t}\) represents the probability measures of the \(T\) different tasks from which we draw the independent data points \(s_{ti}=(x_{ti},y_{ti},z_{ti})\). The unlabeled sample \(S^{\prime}\) takes a similar form, \(S^{\prime}=(S^{\prime}_{1},...,S^{\prime}_{T})\) where \(S^{\prime}_{t}=((x_{t1},y_{t1}),...,(x_{tm},y_{tm}))\sim\mu_{t,(x,y)}^{m}\), drawn independently of \(S\); here by \(\mu_{t,(x,y)}\) we denote the marginal distribution of \(\mu_{t}\). We assume \(S^{\prime}\) has a larger size \(m\gg n\) than the labeled sample \(S\).
Using \(S^{\prime}\), we aim to learn a connection function, and with \(S\), we learn a predictor that leverages both modalities. A common approach to this learning problem is empirical risk minimization (ERM). In particular, we solve the following optimization problem, where the predictors \(\hat{f}_{t}\) on both modalities are learnt via an ERM on the labeled sample \(S\),
\[\hat{f}_{1},..,\hat{f}_{T}=\mathbf{argmin}_{f_{1},...,f_{T}\in\mathcal{F}}\frac {1}{nT}\sum_{t=1}^{T}\sum_{i=1}^{n}\ell(f_{t}(x_{ti},y_{ti}),z_{ti}).\]
Meanwhile, the connection \(\hat{g}\) is learned by minimizing the distance to the true input, using the unlabeled sample \(S^{\prime}\) instead:
\[\hat{g}=\mathbf{argmin}_{g\in\mathcal{G}}\frac{1}{mT}\sum_{t=1}^{T}\sum_{i=1}^{m }\|g(x^{\prime}_{ti})-y^{\prime}_{ti}\|.\]
Our focus is not on solving the above optimization problems (as modern deep learning techniques readily address ERM) but rather on the statistical guarantees of the solutions to these ERM problems.
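For concreteness, the two ERM problems above can be instantiated on a toy synthetic setup. The sketch below is our own illustration (linear hypothesis classes, a linear ground-truth connection, and least squares in place of a generic ERM solver), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
q, k, T, n, m = 5, 3, 4, 50, 2000             # dims of x and y, number of tasks, labeled/unlabeled sizes

W_true = rng.standard_normal((q, k))           # hidden connection: y = W_true^T x (toy assumption)
V_true = rng.standard_normal((T, q + k))       # per-task labeling: z = <v_t, (x, y)>

def sample(t, size):
    x = rng.standard_normal((size, q))
    y = x @ W_true
    z = np.concatenate([x, y], axis=1) @ V_true[t]
    return x, y, z

# Stage 1: learn the connection g-hat from the large unlabeled sample S' (only (x, y) pairs).
unlabeled = [sample(t, m)[:2] for t in range(T)]
Xu = np.vstack([x for x, _ in unlabeled])
Yu = np.vstack([y for _, y in unlabeled])
W_hat, *_ = np.linalg.lstsq(Xu, Yu, rcond=None)

# Stage 2: learn the per-task predictors f-hat_t from the small labeled sample S (both modalities seen).
V_hat = np.zeros_like(V_true)
for t in range(T):
    x, y, z = sample(t, n)
    feats = np.concatenate([x, y], axis=1)
    V_hat[t], *_ = np.linalg.lstsq(feats, z, rcond=None)

# Test time: only modality x is observed; y is imputed through the learnt connection.
x_test, _, z_test = sample(0, 1000)
z_pred = np.concatenate([x_test, x_test @ W_hat], axis=1) @ V_hat[0]
print("mean absolute excess error on unimodal test data:", np.abs(z_pred - z_test).mean())
```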
To measure the performance of the solution \(\hat{g},\hat{f}_{1},..,\hat{f}_{T}\) on the modality \(\mathcal{X}\), we define the task-averaged excess risk as follows:
\[L(\hat{g},\hat{f}_{1},...,\hat{f}_{T})=\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{( x,y,z)\sim\mu_{t}}\ell(\hat{f}_{t}(x,\hat{g}(x)),z)-\min_{f\in\mathcal{F}} \frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{(x,y,z)\sim\mu_{t}}\ell(f_{t}(x,y),z).\]
In order to bound the excess risk, it's crucial to require the class \(\mathcal{G}\) to contain, at least, an approximation of a "ground truth" connection function, which maps \(x\) to \(y\) for any empirical observation. Later we will show that such a requirement is inevitable, which can be seen as a fundamental limit of our theoretical model.
**Definition 3** (Approximate realizability).: We define the approximate realizability of a function class \(\mathcal{G}\) on a set of input data \(S=\{(x_{1},y_{1}),...,(x_{n},y_{n})\}\) as
\[\mathcal{R}(\mathcal{G},S)=\min_{g\in\mathcal{G}}\frac{1}{n}\sum_{i=1}^{n}\|g(x_{i})-y_{i}\|.\]
When the set \(S\) is labeled, we abuse the notation \(\mathcal{R}(\mathcal{G},S)\) to denote \(\mathcal{R}(\mathcal{G},(X,Y))\) for simplicity.
We have the following theorem that bounds the generalization error of our ERM algorithm in terms of Gaussian averages and the approximate realizability.
**Theorem 4**.: _For any \(\delta>0\), with probability at least \(1-\delta\) in the drawing of the samples \(S,S^{\prime}\), we have that_
\[L(\hat{g},\hat{f}_{1},...,\hat{f}_{T})\leq\frac{\sqrt{2\pi}}{nT}\sum_{t=1}^{T }G(\mathcal{F}(\hat{X}_{t},\hat{Y}_{t}))+\frac{2\sqrt{2\pi}L}{mT}G(\mathcal{ G}(X^{\prime}))+L\mathcal{R}(\mathcal{G},S^{\prime})+(8L+4)\sqrt{\frac{\log(8/ \delta)}{2nT}},\]
_where \((\hat{X}_{t},\hat{Y}_{t})\) denotes the set of \(\{x_{ti},\hat{g}(x_{ti})|i=1,...,n\}\)._
**Remark 5**.: It's important to note that the Gaussian average is typically on the order of \(O(\sqrt{Nd})\) when \(N\) is the sample size and \(d\) is the intrinsic complexity of the hypothesis class, such as the VC dimension. If we treat \(d\) as a constant, for most hypothesis classes in machine learning applications, the term \(G(\mathcal{G}(X^{\prime}))\) typically scales as \(O(\sqrt{mT})\) and each term \(G(\mathcal{F}(\hat{X}_{t},\hat{Y}_{t}))\) scales as \(O(\sqrt{n})\). In practice, learning the connection \(g\) is often more challenging than learning a predictor \(f_{t}\), so it's encouraging to see the leading term \(G(\mathcal{G}(X^{\prime}))/mT\) decay in both \(m,T\).
Theorem 4 asserts that the ERM model trained with multi-modal data, achieves low excess risk on unimodal test data to the optimal model as if multi-modal test data is provided, when connection is learnable. This result can be naturally extended to accommodate multiple modalities in a similar way. In this case, the ERM algorithm would learn a mapping from a subset of modalities to all modalities, which involves only one hierarchy as in the two-modality case, thus our analysis naturally carries over to this new setting.
### Necessity of A Good Connection
Recall that in the upper bound of Theorem 4, all the terms vanish as \(n,T\) tend to infinity, except for \(L\mathcal{R}(\mathcal{G},S^{\prime})\). It's therefore important to determine whether the term is a defect of our analysis or a fundamental limit of our theoretical model. Here we present a simple example showing that the dependence on approximate realizability is indeed inevitable.
Let \(\mathcal{X}=\mathcal{Y}=\{0,1\}\) and \(n,T\geq 2\). Each probability measure \(\mu_{t}\) is determined by a Boolean function \(b_{t}:\{0,1\}\rightarrow\{0,1\}\), and for each observation \(s_{t}\), the label \(z_{t}=b_{t}(y_{t})\) is purely determined by \(y_{t}\). In particular, the four possible observations
\[(0,0,b_{t}(0)),(1,0,b_{t}(0)),(0,1,b_{t}(1)),(1,1,b_{t}(1))\]
happen with the same probability for any \(t\).
For the hypothesis classes, \(\mathcal{G}\) includes all Boolean functions \(g:\{0,1\}\to\{0,1\}\), while \(\mathcal{F}\) includes all 1-Lipschitz functions \(\mathbb{R}^{2}\to\mathbb{R}\). It's straightforward to verify that
\[L\mathcal{R}(\mathcal{G},S)=\frac{c_{0}}{nT}\left(\frac{1}{2}-\frac{|\sum_{i=1 }^{c_{0}}\sigma_{i}|}{2c_{0}}\right)+\frac{c_{1}}{nT}\left(\frac{1}{2}-\frac{ |\sum_{i=1}^{c_{1}}\sigma_{i}|}{2c_{1}}\right),\]
where \(\sigma_{i}\) are iid Rademacher random variables, and \(c_{0},c_{1}\) denotes the number of observations with \(x=0\) and \(x=1\) respectively. The loss function \(\ell\) is set as \(|f(x,y)-z|\).
The simplest version of Bernstein's inequality states that for any \(\epsilon>0\) and \(m\in\mathbb{N}^{+}\)
\[\mathbb{P}\left(\frac{1}{m}\left|\sum_{i=1}^{m}\sigma_{i}\right|>\epsilon\right)\leq 2e^{-\frac{m\epsilon^{2}}{2(1+\frac{\epsilon}{3})}},\]
therefore with probability at least \(\frac{3}{4}\), we have that \(|c_{0}-c_{1}|\leq 3\sqrt{nT}\) since \(|c_{0}-c_{1}|\) itself can be written in the form of \(|\sum_{i=1}^{nT}\sigma_{i}|\).
Conditioned on \(|c_{0}-c_{1}|\leq 3\sqrt{nT}\), we have that \(c_{0},c_{1}\leq\frac{nT}{2}+2\sqrt{nT}\leq\frac{3nT}{4}\). Using Bernstein's inequality again, with probability at least \(\frac{7}{8}\), it holds that \(|\sum_{i=1}^{c_{0}}\sigma_{i}|\leq 8\sqrt{c_{0}}\) and similarly for \(c_{1}\). Putting it together, with probability at least \(\frac{1}{2}\), the term \(L\mathcal{R}(\mathcal{G},S)\) can be lower bounded as
\[L\mathcal{R}(\mathcal{G},S)\geq\frac{1}{2}-\frac{4\sqrt{3}}{\sqrt{nT}}.\]
On the other hand, the population loss of any \(f(x,g(x))\) composition is clearly at least \(\frac{1}{2}\) because the label \(z\) is independent of \(x\), while the population loss of \(\{f_{t}(x,y)\}\) with the choice of \(f_{t}(x,y)=b_{t}(y)\) is zero. As a result the excess risk on population is at least \(\frac{1}{2}\) which doesn't scale with \(n,T\). When \(n,T\) are large enough, the term \(L\mathcal{R}(\mathcal{G},S)\) matches the optimal achievable excess risk.
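This calculation can also be checked empirically. The short sketch below (our own addition) enumerates the four Boolean connection functions and evaluates the empirical realizability term on random draws from the construction; the minimum mismatch indeed concentrates around \(1/2\) as \(nT\) grows.

```python
import numpy as np

rng = np.random.default_rng(2)
# All four Boolean functions g : {0,1} -> {0,1}
boolean_gs = [lambda x: 0 * x, lambda x: 0 * x + 1, lambda x: x, lambda x: 1 - x]

def realizability(nT):
    # In the construction, x and y are independent and uniform on {0,1}.
    x = rng.integers(0, 2, size=nT)
    y = rng.integers(0, 2, size=nT)
    return min(np.mean(np.abs(g(x) - y)) for g in boolean_gs)

for nT in (10, 100, 10000):
    print(nT, realizability(nT))      # approaches 1/2 for large nT
```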
## 4 The Role of Heterogeneity
So far we have demonstrated that as long as a good connection indeed exists, learning with multimodal training data using a simple ERM algorithm yields a unimodal model which is guaranteed to perform as well as the best model \(f_{t}^{*}(x,y)\) with both modalities. The story is not yet complete. In order to explain the empirical phenomenon we're investigating, we still need to determine in what circumstance learning with multimodal data is strictly easier than unimodal learning. A good connection itself isn't sufficient: in the case of \(y\equiv x\) which admits a perfect connection function, bringing \(\mathcal{Y}\) into consideration apparently gives no advantage.
To address this question, we turn our attention to heterogeneity, another fundamental property of multimodal learning that describes how modalities diverge and complement each other. Intuitively, heterogeneity can potentially lead to a separation in learnability in the following way: learning from a single modality is much harder than learning from both modalities, in the sense that it requires a much more complicated hypothesis class. As a result, either the sample complexity is huge due to a complicated hypothesis class, or the hypothesis class is so simple that even the best hypothesis performs poorly on the population.
Consequently, we compare not only the best achievable population risks, but also the generalization errors. For unimodal learning, denoting the hypothesis class by \(\mathcal{G}\), we consider the ERM solution
\[\tilde{g}=\operatorname*{arg\,min}_{g\in\mathcal{G}}\frac{1}{n}\sum_{i=1}^{n}\ell(g(x_{i}),z_{i}).\]
The generalization error of \(\tilde{g}\) can be bounded via Gaussian average of the hypothesis class \(\mathcal{G}\) in the following way if we denote \(g^{*}=\operatorname*{argmin}_{g\in\mathcal{G}}\mathbb{E}_{(x,y,z)\sim\mu}\ell (g(x),z)\):
\[\mathbb{E}_{(x,y,z)\sim\mu}\ell(\tilde{g}(x),z)-\mathbb{E}_{(x,y,z)\sim\mu} \ell(g^{*}(x),z)\leq\tilde{O}\left(\frac{G(\mathcal{G}(X))}{n}\right),\]
which is tight in general without additional assumptions. For multimodal learning, \(\mathcal{F}\) and \(f^{*}\) can be defined in the same way.
We are going to show that, either the intrinsic gap of risk between unimodality and multimodality
\[\mathbb{E}_{(x,y,z)\sim\mu}\ell(g^{*}(x),z)-\mathbb{E}_{(x,y,z)\sim\mu}\ell(f^{*} (x,y),z) \tag{3}\]
is substantial, or the Gaussian average gap is large. This implies a separation between multimodal learning and unimodal learning when heterogeneity is present. Consequently, we define the heterogeneity gap as follows.
**Definition 6** (Heterogeneity gap).: Given a fixed hypothesis class \(\mathcal{F}\) and number of samples \(n\geq 2\), the heterogeneity gap w.r.t. some distribution \(\mu\) and hypothesis class \(\mathcal{G}\) is defined as
\[H(\mu,\mathcal{G})=\left[\mathbb{E}\frac{G(\mathcal{G}(X))}{n}+\mathbb{E}_{(x,y,z)\sim\mu}\ell(g^{*}(x),z)\right]-\left[\mathbb{E}\frac{G(\mathcal{F}(X,Y)) }{n}+\mathbb{E}_{(x,y,z)\sim\mu}\ell(f^{*}(x,y),z)\right].\]
The above definition measures the gap in population risk between learning with a single modality \(\mathcal{X}\) and learning with both modalities \(\mathcal{X},\mathcal{Y}\). When \(H(\mu,\mathcal{G})\) is large, unimodal learning is harder since ERM is arguably the optimal algorithm in general. As long as Theorem 4 admits a low risk, the heterogeneity gap itself directly implies the superiority of multi-modality by definition. Therefore, the only question left is whether such a desired instance (large heterogeneity gap + perfect connection) actually exists.
To this end, the following theorem provides the existence of such an instance, proving that our theory is indeed effective. Let us slightly abuse the notation \(L(\tilde{g})\) to denote the excess population risk of \(\tilde{g}\), which is the output of the unimodal ERM. We have the following lower bound w.r.t. the heterogeneity gap.
**Theorem 7**.: _There exist \(\mathcal{X},\mathcal{Y},\mathcal{F}\) and a class \(U\) of distributions on \((\mathcal{X},\mathcal{Y},\mathbb{R})\), such that_
\[\forall\mathcal{G},\forall n\in\mathbb{N}^{+},\exists\mu\in U,s.t.\;H(\mu, \mathcal{G})=\Omega(1),\text{and }\mathbb{E}_{X}[L(\tilde{g})]=\Omega(1).\]
_Meanwhile, let \(\hat{f},\hat{g}\) denote the outputs of the multimodal ERM algorithm in section 3, we have that_
\[\exists\mathcal{G},s.t.\;\forall\mu\in U,\forall n\in\mathbb{N}^{+},L(\hat{g},\hat{f})=0.\]
**Remark 8**.: We compare with the work of [16]. They showed a similar separation in terms of the intrinsic gap (the difference between optimal hypotheses), under a representation learning framework. We make a step further by taking the Gaussian average (how hard to learn the optimal hypothesis) into consideration, which is crucial to the \(\Omega(1)\) gap in the more general setting.
Theorem 7 shows the existence of hard instances, where not only the heterogeneity gap is large, but the difference between actual risks is also substantial. It implies that under certain circumstances, multimodal learning is statistically easy, while unimodal learning incurs constant error no matter what hypothesis classes are used.
To sum up, our theory demonstrates that the superiority of multi-modality can be explained as the combined impact of connection and heterogeneity: when connection (Theorem 4) and heterogeneity (Theorem 7) both exist, multimodal learning has an edge over unimodal learning even if tested on unimodal data, providing an explanation to the empirical findings. Nevertheless, our theory also suggests a simple principle potentially useful for guiding empirical multimodal learning:
1. Collect numerous unlabeled multimodal data. Learn a connection via generative models.
2. Learn a predictor based on a modest amount of labeled multimodal data.
Such a framework can be easily carried out with modern deep learning algorithms; for example, the connection can be learned with generative models [12, 34].
### Comparison with Representation Learning
It's possible to learn \(\hat{f},\hat{g}\) and minimize (1), based on observations of a single modality \(\mathcal{X}\), via representation learning. In particular, representation learning solves the following unimodal ERM problem by treating \(y\) as an unknown representation of \(x\), on a labeled sample \(S\) where \(y\) is hidden
\[\hat{g},\hat{f}_{1},..,\hat{f}_{T}=\mathbf{argmin}_{g\in\mathcal{G},f_{1},..., f_{T}\in\mathcal{F}}\frac{1}{nT}\sum_{t=1}^{T}\sum_{i=1}^{n}\ell(f_{t}(x_{ti},g(x_{ti}) ),z_{ti}).\]
Unfortunately, although this representation learning method uses the same set of hypotheses, it leads to a worse sample complexity bound. Such method fails to exploit two essential parts of semi-supervised multimodal learning, namely the rich observation of unlabeled data \(S^{\prime}\), and separate learning of \(f,g\). Failing to utilize \(S^{\prime}\) will lead to a worse factor \(G(\mathcal{G}(X))/nT\) which scales as \(1/\sqrt{nT}\), while in Theorem 4 the factor \(G(\mathcal{G}(X^{\prime}))/mT\) scales as \(1/\sqrt{mT}\) which is much smaller than \(1/\sqrt{nT}\).
Failing to exploit the "explicit representations" \(y\) from the training data requires learning a composition \(f\circ g\) from scratch, which typically leads to a worst case Gaussian average term, for example in [38] they have \(\max_{g\in\mathcal{G}}G(\mathcal{F}(S(g)))\) instead where \(S(g)=\{(x_{ti},g(x_{ti}))\}\) is a random set induced by \(g\). As a comparison, our approach decouples the learning of \(f,g\), and the Gaussian average term \(G(\mathcal{F}(\hat{X}_{t},\hat{Y}_{t}))\) is only measured over the "instance" \(\hat{S}\) which can be smaller. In fact, \(G(\mathcal{F}(\hat{X}_{t},\hat{Y}_{t}))\) can be smaller than \(\max_{g\in\mathcal{G}}G(\mathcal{F}(S(g)))\) up to a factor of \(O(\sqrt{n})\), see the example in appendix.
## 5 Limitations and Future Directions
In this paper we propose a theoretical framework on explaining the empirical success of multimodal learning, serving as a stepping stone towards more in-depth understanding and development of the theory. Nevertheless, as a preliminary study on the relatively unexplored field of theoretical multimodal learning, our result comes with limitations, and potentially opportunities for future research as well. We elaborate on these points below.
**More natural assumptions**: the current set of assumptions, while generally consistent with practical scenarios, does not fully align with them. Although most assumptions are satisfied in practice, the assumption on \(\mathcal{F}\) containing only Lipschitz functions, is restrictive: the predictor class \(\mathcal{F}\) typically comprises deep neural networks. Future work could seek to overcome the Lipschitz assumption, which is not only fundamental to our results but also a cornerstone in representation learning theory.
**Hypothesis-independent definitions**: our study characterizes multimodal learning through two central properties: connection and heterogeneity. However, their current definitions, \(\mathcal{R}(\mathcal{G},S)\) and \(H(\mu,\mathcal{G})\), depend on both the data and the hypothesis class \(\mathcal{G}\). It's interesting to see if we can develop theories with hypothesis-independent definitions, such as mutual information or correlation. A potential difficulty is that such statistical notions are not totally aligned with the perspective of machine learning, for example it could happen that the two modalities are independent with zero mutual information, while there exists a trivial connection mapping.
**Fine-grained analysis**: our theory, which focuses on the statistical guarantees of ERM solutions, is fairly abstract and does not make specific assumptions about the learning problem. To study other more concrete multimodal learning algorithms, for example the subspace learning method, we need more fine-grained analysis which takes the particular algorithm into account.
**More realistic examples**: while the example in section 2.1 provides theoretical justification, it remains somewhat artificial and diverges from typical multimodal learning problem structures. Ideally, we would like to see examples that closely mirror real-world scenarios. For instance, can we progress beyond the current example to provide one that captures the inherent structures of NLP and CV data?
**Benefit in optimization**: our work demonstrates the advantages of multimodal learning primarily in terms of generalization properties. However, optimization, another crucial aspect of machine learning, could also benefit from multimodality. One potential direction to explore is the possibility that multimodal data is more separable and thus easier to optimize. We provide a simple example in the appendix that demonstrates the existence of linearly separable multimodal data, where the decision boundary for any single modality is arbitrary.
## 6 Conclusion
In this paper we study multimodal learning from the perspective of generalization. By applying classic tools from statistical learning theory to this new problem, we prove upper and lower bounds on the population risk of a simple multimodal ERM algorithm. Compared with previous works, our framework improves the upper bound by up to an \(O(\sqrt{n})\) factor by decoupling the learning of hypotheses, and gives a quantitative example of the separation between multimodal and unimodal learning. Our results relate the heuristic concepts of connection and heterogeneity to a provable statistical guarantee, providing an explanation for an important phenomenon
in empirical multimodal learning, that a multimodal model can beat a fine-tuned unimodal one. We hope our result, though being a preliminary step into a deeper understanding of multimodal learning, can shed some light on the directions of future theoretical studies.
|
2309.05124 | WIP: Development of a Student-Centered Personalized Learning Framework
to Advance Undergraduate Robotics Education | This paper presents a work-in-progress on a learning system that will
provide robotics students with a personalized learning environment. This
addresses both the scarcity of skilled robotics instructors, particularly in
community colleges and the expensive demand for training equipment. The study
of robotics at the college level represents a wide range of interests,
experiences, and aims. This project works to provide students the flexibility
to adapt their learning to their own goals and prior experience. We are
developing a system to enable robotics instruction through a web-based
interface that is compatible with less expensive hardware. Therefore, the free
distribution of teaching materials will empower educators. This project has the
potential to increase the number of robotics courses offered at both two- and
four-year schools and universities. The course materials are being designed
with small units and a hierarchical dependency tree in mind; students will be
able to customize their course of study based on the robotics skills they have
already mastered. We present an evaluation of a five module mini-course in
robotics. Students indicated that they had a positive experience with the
online content. They also scored the experience highly on relatedness, mastery,
and autonomy perspectives, demonstrating strong motivation potential for this
approach. | Ponkoj Chandra Shill, Rui Wu, Hossein Jamali, Bryan Hutchins, Sergiu Dascalu, Frederick C. Harris, David Feil-Seifer | 2023-09-10T20:00:25Z | http://arxiv.org/abs/2309.05124v1 | WIP: Development of a Student-Centered Personalized Learning Framework to Advance Undergraduate Robotics Education
###### Abstract
This paper presents a work-in-progress on a learning system that will provide robotics students with a personalized learning environment. This addresses both the scarcity of skilled robotics instructors, particularly in community colleges and the expensive demand for training equipment. The study of robotics at the college level represents a wide range of interests, experiences, and aims. This project works to provide students the flexibility to adapt their learning to their own goals and prior experience. We are developing a system to enable robotics instruction through a web-based interface that is compatible with less expensive hardware. Therefore, the free distribution of teaching materials will empower educators. This project has the potential to increase the number of robotics courses offered at both two- and four-year schools and universities. The course materials are being designed with small units and a hierarchical dependency tree in mind; students will be able to customize their course of study based on the robotics skills they have already mastered. We present an evaluation of a five module mini-course in robotics. Students indicated that they had a positive experience with the online content. They also scored the experience highly on relatedness, mastery, and autonomy perspectives, demonstrating strong motivation potential for this approach.
Robotics, Undergraduate Course Development
## I Introduction
Robotics education can prepare students for career success. However, it can be very difficult to give students a robotics education at the community college or primarily undergraduate institution level if those institutions do not have any robotics-trained faculty. We are developing self-paced, online course materials, which could be deployed at a community college or a university. A personalized learning server could remotely offer robotics course content for campuses without local robotics experts. Each student can study their choice of critical robotics concepts, in the same classroom, assisted by a local instructor, and utilizing an online coding/lab environment.
In this proposed teaching method, the main jobs of an instructor are to make sure students reach educational milestones in every class, collect students' questions, and distribute back answers from the course module designer and course content advisory committee. We are inspired by self-determination theory [1], which shows increasing students' autonomy can enhance their motivation and engagement. The overarching goal of this project is to make headway in resolving problems that threaten the expansion and accessibility of robotics education. We are studying solutions for accessibility issues such as the difficulty institutions have locating qualified professors to teach these cutting-edge robotics courses.
In this work-in-progress paper, we describe the initial personalized learning environment development, course module design and a 5-module mini-course with an evaluation of the content with University students in a classroom setting.
## II Background
The emergence of advanced robotics technologies such as autonomous vehicles, drones, and medical robots has created many job opportunities. Robotics technology can create new employment opportunities [2]. The development of robotics technology will lead to the creation of new jobs in industries such as manufacturing, software development, and even healthcare. Investments in robotics are likely to lead to net gains in employment, wages, and economic growth [3]. The use of industrial robots led to the creation of three to five million jobs globally in 2015, which increased the demand and created new jobs representing a 10-15% increase in the number of jobs in industries that use robots [4].
To effectively instruct on advanced robotics, community colleges and universities require significant resources, including specialized hardware and proficient educators. Specialized hardware and proficient educators are necessary for teaching robotics because mobile robots present unique challenges that require a deep understanding of electronics, software development, and experimental methods [5]. The environment, robot hardware, and software all play equally important roles in the behavior of a mobile robot, making it necessary to have specialized hardware and educators who can effectively teach students how to navigate these challenges [6]. Additionally, the cost of robotics equipment can be prohibitive, and course content may quickly become obsolete due to the rapidly evolving nature of robotics [7].
The gap between training ability and training need in undergraduate robotics education is a significant challenge, particularly in community schools and technologically underserved communities. Factors contributing to this issue include the insufficient availability of qualified robotics instructors, inadequate funding for equipment, and significant variations in the backgrounds and experiences of undergraduate students [6]. Training programs offered by major robotics companies may not be as beneficial as general robotics programs available at universities or community colleges. Financial barriers may exist for some students who cannot afford the cost of purchasing or renting robots [8]. Establishing a robotics program may require substantial financial investments due to the need for specialized hardware and proficient educators [7].
Proficient educators are necessary for teaching robotics because robots are physically manifested computing devices that inherently show students how computing programs that they write can impact the real world [9]. However, the interdisciplinary nature of robotics can add a significant teaching challenge for instructors new to the field. Robots provide an opportunity for students to see how their programming skills can be applied in practical settings, which can be difficult to achieve with purely theoretical coursework. Additionally, specialized hardware is required because robots have unique physical characteristics and capabilities that must be taken into account when designing and programming them [8].
This work in progress reduces the skills required to teach a robotics course so that an instructor need not have multidisciplinary engineering expertise [9]. By adapting the materials to be more accessible and providing support and resources for faculty members who may not have extensive experience in robotics education or research, costs can be reduced and accessibility increased, allowing more institutions and students to participate in robotics education and training. This can lead to more diverse and skilled professionals entering the field. Reduced expenses lower the obstacles to participation in robotics classes by making it more affordable for students and institutions to offer and take such courses [9]. Furthermore, reducing financial barriers enables students from different backgrounds to pursue their interests in robotics without worrying about the high costs associated with learning materials or equipment.
To address these challenges, the authors propose the development of a customized learning framework that prioritizes individual students' needs in undergraduate robotics education. We aim to reduce the expenses required for developing a robotics curriculum and enable educators who lack expertise in robotics to instruct on the advanced subject matter.
## III Approach
Our proposed framework has three objectives:
1. Implement a student-centered personalized learning framework for hands-on robotics education;
2. Develop a mini-course in robotics utilizing this framework; and
3. Conduct a study to evaluate the effectiveness of the proposed framework.
The proposed teaching method will enhance undergraduate robotics education by offering students the freedom to choose their robotics learning path while not requiring instructors to have robotics expertise. We want to develop an education framework (see Figure 1) that does not require a robotics expert instructor in the classroom.
### _Student Learning Framework_
The proposed teaching method for the mini-course involves utilizing the ISPeL platform (see Figure 1) developed by East Carolina University (ECU). The course content, including videos and sample code wrapped in Jupyter notebooks [10], can be accessed by students through the platform. Instructors are responsible for ensuring that students reach milestones in each class and collecting their questions. If the instructor is unable to answer a question, they can refer to a "frequently asked questions and guidelines" document created by the course module designer and course content advisory committee. If the question remains unresolved, the instructor can consult with the course module designer and course content advisory committee via email. The main focus of the instructor is to facilitate student progress, collect and distribute answers, and manage devices correctly. The teaching method emphasizes the importance of in-person classroom attendance to ensure progress monitoring, correct device usage, and collaborative knowledge sharing among students.
Fig. 1: Proposed student-centered personalized learning framework: students are at the center and an instructor is required to assist instead of leading the students. Instructors need to ensure students make progress in every class and collect questions from students. Students can work on different topics in the same classroom with different required hardware. This framework does not require students to have powerful devices.
To implement or start the mini-course, instructors can utilize the ISPeL platform hosted on an ECU server. They have the option to upload course content to the platform or customize and reuse topic components from another course. The instructor can organize the course content using a dependency graph, which is automatically generated when they order the topic components in a book chapter/sub-chapter style through simple mouse movements. This allows for visualizing the relationships and dependencies between different components. The dependency graph is based on the hierarchy defined by the instructor, ensuring a logical progression of learning.
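The platform's internals are not described beyond this, but the idea of deriving a study order from such a dependency graph can be illustrated with a small sketch; the topics and prerequisite edges below are hypothetical examples of ours, not the actual ISPeL data.

```python
# Hypothetical topic outline: each topic lists its prerequisites (illustrative edges only).
course_outline = {
    "Sensors": [],
    "Odometry": ["Sensors"],
    "Dead Reckoning": ["Odometry"],
    "Navigation": ["Sensors"],
    "Potential Fields": ["Navigation"],
}

def study_order(outline):
    """Depth-first topological sort: every topic appears after all of its prerequisites."""
    visited, order = set(), []
    def visit(topic):
        if topic in visited:
            return
        visited.add(topic)
        for prerequisite in outline[topic]:
            visit(prerequisite)
        order.append(topic)
    for topic in outline:
        visit(topic)
    return order

print(study_order(course_outline))
```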
### _Topic Selection_
We developed the mini-course to establish this logical progression and interdependence among the selected topics. This implies that the concepts taught in earlier topics should serve as a basis for later ones. By establishing minimal dependencies, students can incrementally build upon their knowledge, resulting in a deeper understanding of the subject matter. If the course begins with an introduction to programming concepts, for instance, subsequent topics could concentrate on programming in the context of robotics, such as controlling robot movements or integrating sensors. Additionally, given the short duration of the mini-course and the desire to enable students to choose topics based on their interests, it is essential to minimize topic dependencies. This ensures that students can enroll in individual courses without feeling overwhelmed or disadvantaged if they have not completed prerequisite courses. By reducing dependencies, students are able to select topics that correspond to their specific interests.
To appeal to a wider spectrum of student interests, it is essential to choose diverse and varied topics. This can include various facets of robotics, such as mobile robotics, robotics navigation, and the physics underlying robotics. By providing a variety of topics, students can investigate several aspects of robotics and obtain a deeper understanding of the field. Consider the mini-course's logical progression and intended learning outcomes when organizing the selected topics (see Figure 2). Start with topics that provide a solid comprehension of fundamental concepts and progress gradually to more advanced and specialized subjects. This progression enables students to build a solid foundation of knowledge and skills applicable to real-world situations. The selection and arrangement of topics for the mini-course can ensure a balanced curriculum that caters to the interests of students, encourages effective learning, and provides a solid foundation.
We have selected five core robotics subjects that are essential for a basic understanding of sensing and navigation problems. These include:
* **Sensors:** acquaints students with a wide array of sensors employed in the field of robotics, including but not limited to proximity sensors, cameras, LIDAR, and IMUs. This promotes the development of perception systems, which in turn facilitate effective interaction between robots and their surroundings;
* **Navigation:** is instrumental in enabling robots to independently traverse and orient themselves within their environment, a critical capability for a wide range of applications. Effective navigation leverages a robot's sensors to safely navigate in its environment;
* **Dead Reckoning:** allows a robot to estimate its position and movement without being dependent on external localization systems. This skill is particularly useful in situations where these types of systems are either not available or not dependable;
* **Potential Fields:** is one of many methodologies for path planning and obstacle avoidance, fundamental competencies for robots functioning in complex and dynamic surroundings; and
* **Odometry:** uses readings from wheel encoders and other sensors to calculate accurate location information.
The above topics provide a foundational exposure to robotics suitable for students who are new to the field.
### _Course Module Design_
The mini course's materials have been thoughtfully created to be beginner-friendly, allowing students to understand the topics with ease. Each module consists of two to three different sections including the reading section, the practical section, and the assessment or evaluation section.
The reading section incorporates the instructional objectives of the subject matter, a comprehensive overview of the topic with detailed explanations, and visual aids such as diagrams, charts, and infographics to enhance understanding and simplify complex ideas. We establish student learning objectives to delineate the intended knowledge and skills to be acquired, as well as the ultimate aim of the topic upon completion. We want to clearly present the linkage between the students' theoretical comprehension and practical applications of robotics. This section leverages reference sources for follow-up and includes supplementary materials.
Fig. 2: Dependency Graph Creation: an instructor can choose topics (i.e., select area, see left top corner), select topic components, and design how to connect topic components (see left bottom corner) with a book chapter and sub-chapter style to define the dependencies.
Some topics involve mathematical formulas that are necessary to understand the subject from its underlying theory. Students are also given additional information from outside sources during the course to help them better understand the mathematical equations. Calculations are required for the concepts of potential fields, odometry, and dead reckoning since they require a thorough understanding of the fundamental physics ideas. Students are given worked mathematical problems and are then given equivalent activities to complete independently. For instance, after obtaining information on a robot's initial position, students can be asked to estimate the distance the robot has traveled in a given amount of time. An example would be to consider a two-wheeled robot that advances for five seconds. The left wheel rotates at a speed of ten revolutions per second, while the right wheel rotates at eight. The wheels have a 5-centimeter radius. What is the position and orientation of the robot?
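For reference, the exercise above can be answered with standard differential-drive kinematics. The sketch below is our own worked illustration; note that the prompt does not specify the distance between the wheels, so a wheel separation of 0.20 m is assumed purely for the sake of the example.

```python
import math

# Given: left wheel 10 rev/s, right wheel 8 rev/s, wheel radius 5 cm, duration 5 s.
# Assumed (not given in the prompt): wheel separation of 0.20 m.
r, wheel_sep, t = 0.05, 0.20, 5.0
v_left = r * (10 * 2 * math.pi)                 # rim speed of the left wheel (m/s)
v_right = r * (8 * 2 * math.pi)                 # rim speed of the right wheel (m/s)

v = (v_left + v_right) / 2.0                    # forward speed of the robot centre
omega = (v_right - v_left) / wheel_sep          # turn rate (rad/s); negative means clockwise

theta = omega * t                               # final heading, starting from theta = 0 at the origin
x = (v / omega) * math.sin(theta)               # closed-form pose for constant v and omega
y = (v / omega) * (1.0 - math.cos(theta))       # (valid because omega != 0 here; use x = v*t otherwise)

print(f"x = {x:.2f} m, y = {y:.2f} m, heading = {math.degrees(theta):.0f} degrees")
```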
The curriculum incorporates programming exercises to consolidate the fundamental concepts of the course into practical application of that knowledge. Programming examples and problems have been incorporated into the topics. When presented with the positions of an object, obstacle, and goal, students are required to determine the optimal path to reach the goal while avoiding the obstacle.
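A compact version of such an exercise can be built on the classic attractive/repulsive potential field formulation; the gains, positions, and influence radius in the sketch below are illustrative values of ours rather than the course's actual assignment.

```python
import numpy as np

goal = np.array([5.0, 5.0])
obstacle = np.array([2.5, 2.0])
k_att, k_rep, influence = 1.0, 0.5, 1.5            # illustrative gains and obstacle influence radius

def descent_direction(p):
    """Negative gradient of the combined attractive + repulsive potential at point p."""
    force = -k_att * (p - goal)                     # quadratic attractive potential pulls towards the goal
    d = np.linalg.norm(p - obstacle)
    if d < influence:                               # repulsion acts only inside the influence radius
        force += k_rep * (1.0 / d - 1.0 / influence) * (p - obstacle) / d**3
    return force

p = np.array([0.0, 0.0])
steps = 0
while np.linalg.norm(p - goal) > 0.05 and steps < 1000:
    direction = descent_direction(p)
    p = p + 0.02 * direction / (np.linalg.norm(direction) + 1e-9)   # small normalized step
    steps += 1

print("final position:", p.round(2), "after", steps, "steps")
```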
### _Student Evaluation_
Overall, the curriculum of the abridged course strives to achieve a satisfactory balance between theoretical comprehension, mathematical principles, practical application, challenges, and evaluations. At the conclusion of each course, quizzes are administered as a means of assessing students' comprehension and progress. Through these assessments, educators are able to evaluate the level of understanding of their students and pinpoint any areas that may necessitate additional clarification or reinforcement. The all-encompassing methodology guarantees that learners not only gain a strong theoretical basis but also practical proficiency, critical thinking skills, and the ability to apply their knowledge in real-life situations.
## IV Evaluation
We recruited 16 participants from an Atlantic university campus to participate in the mini-course and to take a survey on their experience. Of these participants, 11 identified as male, 3 identified as female, and 2 preferred not to say. All students were fourth year students or higher; 4 were first-generation university students. When asked about racial backgrounds, 10 students identified as White (66%), 1 as Hispanic (7%), 2 as Black/African American (13%), and 2 preferred not to say (13%). All students expected to get an 'A' or 'B' in the course.
**Course Satisfaction:** Half of the students took the course out of interest in robotics. Students generally evaluated the course positively, with 88-94% agreeing or strongly agreeing with positive general characteristics of the course and 81-88% agreeing or strongly agreeing with positive items related to the course materials. Students were also asked about the course's impact on their plans related to robotics and their feelings about being a roboticist. Table III shows that 44% of students were somewhat or extremely likely to go into robotics before taking the course, with 50% reporting the same likelihood after the course. Additionally, 69% of students agreed or strongly agreed that the course made them feel like a real roboticist. Students were also asked to provide open-ended feedback on the course, with many giving positive responses but noting glitches and revisions needed to the personalized learning system.
**Student Motivation:** The survey also included 12 items based on Self Determination Theory (SDT) to assess the extent to which the course supported students' autonomy, personal competence, and sense of relatedness to the class. In terms of autonomy, 63-100% of students agreed or strongly agreed with items indicating that the course allowed them to make decisions about their learning. Between 75-93% of students agreed or strongly agreed with items related to their ability to master course content, indicating a sense of competence. Between 80-93% of students felt connected to the instructor, other students, and the class as a whole, indicating a sense of relatedness.
## V Conclusions and Future Work
We present an online learning system for self-selected learning for eventual deployment in community colleges, primarily undergraduate institutions, or other higher-education institutions where there is no robotics faculty member. The course model will hopefully facilitate student motivation and knowledge gain.
These preliminary results presented in this paper indicate that the course content presentation fosters both a sense of mastery of robotics content as well as engaging key motivational components of autonomy, competence, and relatedness. This is encouraging as one outcome of online courses can be a decrease in motivation to participate in course activities [11]. These results also show that the students were interested in the course content and enjoyed their participation in the mini-course.
Future work will resolve the technical issues identified above before the next round of student evaluations. Future evaluation work will also add a comparison of knowledge gained in robotics between online and in-person versions of the course to study whether this course model is effective for students in real classroom environments. While the size of the mini-course is likely too small to assess the effect of self-selection of topics for course content, future work will examine this question. |
2309.13746 | Deep Learning-Based Connector Detection for Robotized Assembly of
Automotive Wire Harnesses | The shift towards electrification and autonomous driving in the automotive
industry results in more and more automotive wire harnesses being installed in
modern automobiles, which stresses the great significance of guaranteeing the
quality of automotive wire harness assembly. The mating of connectors is
essential in the final assembly of automotive wire harnesses due to the
importance of connectors on wire harness connection and signal transmission.
However, the current manual operation of mating connectors leads to severe
problems regarding assembly quality and ergonomics, where the robotized
assembly has been considered, and different vision-based solutions have been
proposed to facilitate a better perception of the robot control system on
connectors. Nonetheless, there has been a lack of deep learning-based solutions
for detecting automotive wire harness connectors in previous literature. This
paper presents a deep learning-based connector detection for robotized
automotive wire harness assembly. A dataset of twenty automotive wire harness
connectors was created to train and evaluate a two-stage and a one-stage object
detection model, respectively. The experiment results indicate the
effectiveness of deep learning-based connector detection for automotive wire
harness assembly but are limited by the design of the exteriors of connectors. | Hao Wang, Björn Johansson | 2023-09-24T20:28:35Z | http://arxiv.org/abs/2309.13746v1 | # Deep Learning-Based Connector Detection for Robotized Assembly of Automotive Wire Harnesses*
###### Abstract
The shift towards electrification and autonomous driving in the automotive industry makes automotive wire harnesses increasingly more critical for various functions of automobiles, such as maneuvering, driving assistance, and safety system. It leads to more and more wire harnesses installed in modern automobiles, which stresses the great significance of guaranteeing the quality of automotive wire harnesses assembly. The mating of connectors is essential in the final assembly of automotive wire harnesses due to the importance of connectors on wire harnesses connection and signal transmission. However, the current manual operation of mating connectors leads to severe problems regarding assembly quality and ergonomics, where the robotized assembly has been considered, and different vision-based solutions have been proposed to facilitate the robot control system's better recognition of connectors. Nonetheless, there has been a lack of deep learning-based solutions for detecting wire harnesses connectors in previous studies. This paper presents a deep learning-based connector detection for robotized automotive wire harnesses assembly. A dataset of twenty types of automotive wire harnesses connectors was created to train and evaluate a two-stage object detection model and a one-stage object detection model, respectively. The experiment results indicate the effectiveness of deep learning-based connector detection for automotive wire harness assembly but are limited by the design of the exteriors of connectors.
## I Introduction
Electrification and autonomous driving have driven a paradigm shift in the current automotive industry, making the electronic system increasingly critical in modern automobiles. Numerous automotive wire harnesses are installed in current vehicles as essential infrastructure supporting signal transmission within the electronic system. Meanwhile, even more wire harnesses are expected to be installed, considering the steady increase of automotive wiring over the past decades and the ongoing paradigm shift in the industry. Thus, it is crucial to guarantee the quality of the assembly of automotive wire harnesses.
However, the current final assembly of automotive wire harnesses into vehicles remains mostly manual and skill-demanding, which makes it challenging to control and improve the quality and productivity of the assembly. Some manual operations also involve heavy lifting (for example, approximately 40 kg for some low-voltage automotive wire harnesses) and high-pressure manual manipulations on different components of automotive wire harnesses, which poses severe ergonomic problems to human operators. In particular, the mating of connectors is one of the sub-processes related to ergonomic issues due to the repetitive high-pressure manual pressing in the assembly line. Fig. 1 shows an example of an automotive wire harness, where red rectangles highlight its connectors.
Connectors are essential components of automotive wire harnesses, among others such as clamps and cables. Automotive wire harnesses are connected to the target unit or to other wire harnesses via connectors so that signals can be transmitted continuously within the electronic systems responsible for various, often safety-critical, functions of automobiles. Thus, ensuring the quality of mating connectors in the final assembly of wire harnesses into vehicles is critical. However, the current manual process of mating connectors constrains the productivity and quality of assembly and generates ergonomic problems for human operators. To relieve the problems regarding productivity, assembly quality, and ergonomics, robotized wire harness assembly is of great interest to the automotive industry, considering its better replicability, transparency, and comprehensibility, and has been discussed in several previous studies [1, 2, 3, 4]. Nevertheless, the robotized mating of connectors is non-trivial, as the robotic operation needs to address not only high manipulation accuracy but also the intricate structures and non-rigid materials of connectors [5].
Fig. 1: An example of an automotive wire harness, with connectors highlighted by red rectangles.
It is also fundamental to retrieve the geometrical information of connectors beforehand so that a robot arm can flexibly reach, grasp, and manipulate the perceived connector.
Computer vision has demonstrated a significant potential for robotized assembly in the manufacturing industry, solving ergonomic issues while increasing quality and productivity [6]. Previously, there have also been studies on computer vision techniques for the robotized manipulation of wire harness connectors [5, 7, 8, 9, 10, 11, 12]. However, only a few studies discussed the task of connector detection [9, 11, 12], and those mainly explored methods based on basic image processing techniques [9, 11]. Considering the various designs of connectors on automotive wire harnesses, such as colors, shapes, and sizes, manual feature engineering on connectors is difficult to manage for flexible robotized manipulation. The recent advancement of convolutional neural networks (CNNs) and deep learning in computer vision research has demonstrated the extraordinary effectiveness of learning-based solutions for object detection compared to traditional image processing-based solutions [13]. Zhou et al. [12] previously explored deep learning-based connector detection for robotized wire harness connection, but their proposal mainly focused on detecting a single connector. Learning-based detection of multiple connectors remained unsolved but is required for the robotized assembly of automotive wire harnesses in actual production.
This paper presents a study on deep learning-based connector detection for the robotized mating of connectors on automotive wire harnesses and discusses the feasibility and potential problems of implementing deep learning-based object detection for the task of mating connectors in robotized automotive wire harness assembly under laboratory conditions. As there is no publicly available dataset of automotive wire harness connectors, a dataset comprising twenty different types of connectors was collected first. Then, two different detection models, a two-stage object detection model, Faster R-CNN [14], and a one-stage object detection model, YOLOv5 [15], were adopted for training and inference. The experiment results demonstrate the effectiveness of deep learning-based connector detection, as both detection methods achieved remarkable detection outcomes with various combinations of connectors present in the scene. Yet, detection performance can be improved further, and a more extensive dataset comprising more connectors and more images per connector is needed. Some errors on the classes and positions of connectors in the inference results further reflect the effect of the exterior design of connectors, which motivates future connector detection based on multi-view images and on new exterior designs of connectors, so that more visually distinguishable features can be extracted.
This paper is organized in the following structure: Section II introduces the related research in connector detection and deep learning-based object detection. Section III introduces the data collection and annotation strategy and the statistics of the collected dataset of connectors. Section IV introduces the experiment setups of two-stage and one-stage connector detection, whose results are presented and further discussed in Section V. The study is concluded in Section VI with an outlook on the future work of this study.
## II Related Work
### _Connector Detection for Robotized Mating of Connectors_
Connector detection is needed to acquire the positions and categories of connectors so that the robot can flexibly reach, grasp, and manipulate them. Although some vision-based solutions have been proposed for facilitating different sub-tasks in the robotized mating of connectors [5, 7, 8, 9, 10, 11, 12], connector detection has so far received little attention in previous studies [9, 11, 12], where basic image processing-based methods are dominant [9, 11].
Tamada et al. [9] proposed to recognize the types and poses of connectors using a high-speed vision system. An image processing method was adopted in Tamada et al. [9] to detect the positions of connectors via detecting the corners of connectors, which was further processed to calculate the orientations of connectors. Yumbla et al. [11] later proposed a basic image processing-based method to detect multiple connectors, including converting color space and applying color thresholding. However, the task in Yumbla et al. [11] was a one-class detection, where all connectors were considered the same class. Deep learning-based connector detection has also been discussed in a recent study [12], which proposed to roughly locate the position of a connector and then zoom in to the detected connector to acquire the finer pose of the connector. Nevertheless, the proposal in Zhou et al. [12] mainly focused on manipulating one pair of connectors instead of multi-connector manipulation, which is more common in actual production.
### _Deep Learning-Based Object Detection_
The rebirth of convolutional neural networks (CNNs) in 2012 [16] initiated the research on introducing deep learning [13] to object detection [17], which further promoted the remarkable development of two major groups of detectors for object detection based on deep learning in previous years: two-stage detection and one-stage detection [17].
Similar to the attentional mechanism of the human brain, a two-stage detection model first scans the whole scene coarsely and then focuses on regions of interest (ROIs) to distinguish the objects [18]. The region-based convolutional neural network (R-CNN) proposed by Girshick et al. [19, 20] marked the inauguration of two-stage object detection. In R-CNN [19, 20], a set of object proposals is extracted and fed into a CNN model to extract features for classification. However, the redundant feature computations due to many overlapping proposals made detection extremely slow, which was later improved by Spatial Pyramid Pooling Networks (SPPNet) [21]. A Spatial Pyramid Pooling (SPP) layer was introduced in SPPNet [21] to enable a CNN to generate a fixed-length representation and avoid re-scaling. Nevertheless, SPPNet [21] still relied on multi-stage training and only fine-tuned the fully-connected
layers. To improve R-CNN [19] and SPPNet [21], Fast R-CNN [22] was proposed later, where the detector and the bounding box regressor could be trained under the same network configurations simultaneously. Furthermore, Faster R-CNN [14] was proposed to accelerate the detection further by introducing a Region Proposal Network (RPN), but the problem of computation redundancy remained at the subsequent detection stage. Besides the R-CNN family, Lin et al. [23] proposed Feature Pyramid Networks (FPNs), which can be integrated into other detectors to enable high-level semantics building at all scales besides the feature maps of the networks' top layer.
Though able to attain high-precision detection, two-stage detection methods are constrained by their ponderous detection speed and computation, stimulating the research on one-stage detection. You Only Look Once (YOLO) [24] was the first deep learning-based one-stage detection that simultaneously predicted bounding boxes and probabilities for each sub-region of an image. Although the detection speed was improved significantly, the localization accuracy dropped remarkably compared to two-stage detection, especially for some small objects, which was enhanced in YOLO's subsequent versions [25, 26, 27, 28]. There were also other one-stage detection methods besides the YOLO family proposed to improve the detection accuracy while maintaining the advantage of high detection speed, including Single-Shot Multibox Detector (SSD) [29], RetinaNet [30], and CornerNet [31].
Recent years have also witnessed the profound influence of Transformer models [32] in deep learning and computer vision [33], which has spawned DEtection TRansformer (DETR) [34] and Deformable DETR [35] and promoted deep learning-based object detection to higher performance.
## III The Dataset of Connectors
The dataset is essential for learning-based object detection [36, 37, 38] and scalable deep learning-based solutions in industry [39]. However, to the best of the authors' knowledge, there is no publicly available benchmark dataset dedicated to the detection of automotive wire harness connectors. Thus, to facilitate the study of deep learning-based connector detection for the robotized assembly of automotive wire harnesses, a dataset was collected and annotated first, consisting of 20 types of connectors commonly occurring on automotive wire harnesses installed in passenger vehicles. Fig. 2 demonstrates one example image for each of the 20 connectors. The following subsections will introduce the strategy for image collection and annotation and summarize the statistics of the dataset used in the experiments.
### _Image Collection Procedure_
Connectors were placed on a white workbench for image acquisition using the main camera of an iPhone 11. The original image format is RGB, and each image has a size of \(4032\times 3024\) pixels. The distance between the camera and the connectors was not fixed, considering the various locations of connectors in three-dimensional (3D) space in actual assembly situations.
There are 360 images captured in total. Initially, 60 images of various combinations of connectors with random poses were collected to simulate the random distribution of connectors in the actual assembly scenario. Fig. 3 demonstrates some examples of these 60 images. For clarification, the distribution of connectors in each of these 60 images does not represent the actual distribution of connectors on practical automotive wire harnesses or in the final assembly of automotive wire harnesses.
In addition, images of each of the 20 connectors were also collected to train the detector with more features of respective classes. For each connector, 15 images were captured from different views, including six images captured from the front, back, top, down, left, and right of the connector, as an example of class A0 shown in Fig. 4, and nine images captured from random perspectives, as an example of class A0 shown in Fig. 5.
Fig. 3: Examples of images with different combinations of connectors.
Fig. 2: The twenty types of connectors collected for dataset creation. The class of each connector is simplified and labeled below images.
### _Image Annotation Procedure_
The image annotation procedure of the dataset of connectors followed the methodology implemented in the PASCAL visual object classes (VOC) challenge 2007 [36].
The image annotation includes the **class** and the **bounding box** for every connector in the target set of classes. As shown in Fig. 2, this study simplified the 20 classes of connectors into A0, A1, B0, B1, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, and R, which can be easily mapped to the actual types of connectors in practical applications. An axis-aligned rectangular bounding box surrounding the connector was drawn for each connector visible in each image in the dataset. Though relatively quick to annotate, choosing an axis-aligned rectangular bounding box for the annotation is a compromise. Some connectors in images fit well because of their rectangular or approximately rectangular profiles, for example, class A0 shown in Fig. 4. However, for other connectors present in images, an axis-aligned bounding box can be a poor fit, either because the connector is not axis-aligned, for example, when captured from random perspectives (Fig. 5) or placed randomly (Fig. 3), or because the connector is not box-shaped, for example, class I shown in Fig. 2.
The actual image annotation was conducted using an annotation platform, Labelme [40]. It was trivial to annotate images with a single connector due to the structured storage of images. For images with multiple connectors, a list of visible connectors in each image was documented first during the image collection procedure. Then, each connector visible in the images was compared with the original physical counterpart and annotated exhaustively. The annotation results were compared to the documented list to guarantee the consistency and accuracy of the image annotation.
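For illustration, reading the classes and boxes back from the resulting annotation files can be sketched as follows (a minimal sketch assuming the standard Labelme JSON layout with rectangle shapes; the file name in the example is a placeholder):

```python
import json

def load_boxes(labelme_json_path):
    """Read connector classes and axis-aligned boxes from a Labelme file."""
    with open(labelme_json_path) as f:
        ann = json.load(f)
    boxes = []
    for shape in ann["shapes"]:            # one entry per annotated connector
        (x1, y1), (x2, y2) = shape["points"]   # rectangle = two corner points
        boxes.append({
            "label": shape["label"],       # e.g. "A0", "B1", ...
            "bbox": [min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2)],
        })
    return boxes

# Example: boxes = load_boxes("scene_001.json")
```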
### _Dataset Statistics_
The total number of annotated images is 360. The data are divided into three subsets: training data (Train), validation data (Validation), and test data (Test), with a ratio of \(90\%/5\%/5\%\). The images in the validation and test sets were selected randomly. For each subset of the connector dataset and each class of connectors, the number of object instances is shown in TABLE I. In the collected dataset, the most frequent class is "L", with 46 object instances, and the least frequent class is "M", with 31 object instances. Fig. 6 illustrates a histogram of the number of object instances present in the different subsets of the collected connector dataset for each class of connectors.
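The random \(90\%/5\%/5\%\) split can be reproduced along the following lines (a minimal sketch; the list of image files and the seed are illustrative assumptions, not taken from the study):

```python
import random

def split_dataset(image_files, ratios=(0.90, 0.05, 0.05), seed=0):
    """Randomly split the annotated images into train/validation/test subsets."""
    files = list(image_files)
    random.Random(seed).shuffle(files)
    n_train = int(ratios[0] * len(files))
    n_val = int(ratios[1] * len(files))
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])
```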
## IV Experiment Settings
This study investigated a two-stage detector and a one-stage detector for automotive wire harness connector detection. The experiment on two-stage detection was conducted based on Faster R-CNN [14], and the one-stage detection
Fig. 4: The six images captured from the front, back, top, down, left, and right of A0. These images are cropped from the raw data for demonstration.
Fig. 5: The other nine images of A0 were captured from random perspectives. These images are cropped from the raw data for demonstration.
Fig. 6: Histogram of the numbers of object instances shown in the collected connector dataset. The classes and the corresponding counts are shown on the x-axis and the y-axis, respectively.
was achieved based on YOLO [24]. Both models were trained using the union of the train and validation set of the collected connector dataset and evaluated on the test set using an NVIDIA GeForce RTX 4090. The following subsections introduce the detailed implementation of the two-stage detection and the one-stage detection, respectively.
### _Two-Stage Detection_
This study investigated two-stage detection based on Faster R-CNN [14], implemented with ResNet [41] plus a Feature Pyramid Network (FPN) [23] as the backbone. The overall baselines and hyper-parameters followed the Faster R-CNN [14] configuration provided in the publicly available code of Detectron2 [42]. Specifically, the model was trained with a learning rate of 0.00025 using Stochastic Gradient Descent (SGD) as the optimizer. The batch size was 8. The weights of the model were initialized with the pre-trained checkpoint, _faster_rcnn_R_101_FPN_3x_, provided by Detectron2 [42].
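A minimal Detectron2 training sketch consistent with these settings is shown below; the dataset names are placeholders and assume the connector images have been registered beforehand (e.g., with register_coco_instances), and only the hyper-parameters stated above are taken from this study:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
# ResNet-101 + FPN Faster R-CNN baseline from the Detectron2 model zoo
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("connectors_trainval",)   # placeholder dataset names,
cfg.DATASETS.TEST = ("connectors_test",)        # registered beforehand
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 20            # twenty connector classes
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.IMS_PER_BATCH = 8

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```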
### _One-Stage Detection_
YOLO [24] was selected as the backbone of the one-stage detection in the experiment. The overall baselines and hyper-parameters of the one-stage detection in this study followed the publicly available code of YOLOv5 [15]. Specifically, the model was trained with an initial learning rate of 0.01 using SGD as the optimizer. The weight decay was 0.0005, and the momentum was 0.937. The batch size was 16. The weights of the model were initiated with the pre-trained checkpoint, _yolov5x_, provided by YOLOv5 [15]. An early-stop module was adopted to control the end of the training process, which terminated the training if there was no improvement after 300 consecutive epochs.
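A corresponding one-stage training sketch is given below, assuming a local clone of the YOLOv5 repository whose train.py exposes a run() helper, and a hypothetical connectors.yaml dataset file listing the twenty classes; the learning rate, momentum, and weight decay stated above coincide with the defaults of YOLOv5's hyper-parameter file and are therefore not set explicitly:

```python
import train  # train.py from the YOLOv5 repository

train.run(
    data="connectors.yaml",   # hypothetical dataset description file
    weights="yolov5x.pt",     # pre-trained checkpoint used for initialization
    batch_size=16,
    patience=300,             # stop after 300 epochs without improvement
)
```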
## V Results and Discussion
The initialization, training, and evaluation of the two-stage detection model based on Faster R-CNN [14] and the one-stage detection model based on YOLOv5 [15] were conducted following the experiment protocol explained in section IV. Fig. 7 demonstrates some inference results of Faster R-CNN [14] with two threshold values and YOLOv5 [15] as well as the corresponding ground-truth images with original bounding boxes and labels.
experiment settings in this study. However, several precision rates in TABLE II are lower than 50%, including those of both detection models on classes D and E, that of the Faster R-CNN-based model on class C, and that of the YOLOv5-based model on class G.
By observing the exteriors of the connectors in the collected dataset, we find that the similar designs of some connectors may affect detection performance. For example, the widths of classes A1, B1, C, D, and E are different, but their left and right profiles are highly similar, as shown in Fig. 8, and classes G and J have identical exteriors but different seal rings inside the connectors, which are occluded when the images are captured from specific perspectives, as shown in Fig. 9. These observations indicate that if some connectors share similar exterior designs and are placed in specific poses, their distinguishable features can be occluded, making it hard to recognize them. Nonetheless, these similar exteriors motivate two feasible strategies to relieve the detection problem: 1) conducting further connector detection based on multi-view images or videos; 2) re-designing the exteriors of connectors with more distinguishable features. Specifically, for the former solution, if the inference of the class of a connector is uncertain, multi-view images of the connector or a video capturing different views of it can be acquired for further classification. For the latter solution, changing the design of the exteriors of connectors, for example, changing the color of the whole connector or of part of it, may substantially facilitate the detection, which calls for collaboration with the manufacturers of connectors.
## VI Conclusions and Future Work
This study collects a dataset with twenty types of connectors commonly used on automotive wire harnesses and trains a two-stage Faster R-CNN-based detection model and a one-stage YOLOv5-based detection model to validate the feasibility of deep learning-based connector detection for robotized automotive wire harness assembly. The experiment results indicate the effectiveness of both types of object detection methods and demonstrate the better performance achieved by the one-stage YOLOv5-based model in detecting automotive wire harness connectors, but they also reveal problematic detection outcomes that require further study with other detection algorithms and more data, which will be investigated in future research. In addition, observations of the collected connectors suggest that the problematic detections are potentially caused by the similar designs of some connectors, especially their exteriors, which leads to future studies on multi-view image-based and video-based connector detection as well as on new exterior designs of connectors.
Fig. 8: Class A1, B1, C, D, and E with highly similar profiles.
Fig. 9: Inference result by YOLOv5 (left) on class G and J, whose exteriors are highly similar but the colors of seal rings inside are different (highlighted by red rectangles). |
2309.03555 | Compositional properties of planet-crossing asteroids from astronomical
surveys | Context. The study of planet-crossing asteroids is of both practical and
fundamental importance. As they are closer than asteroids in the Main Belt, we
have access to a smaller size range, and this population frequently impacts
planetary surfaces and can pose a threat to life. Aims. We aim to characterize
the compositions of a large corpus of planet-crossing asteroids and to study
how these compositions are related to orbital and physical parameters. Methods.
We gathered publicly available visible colors of near-Earth objects (NEOs) from
the Sloan Digital Sky Survey (SDSS) and SkyMapper surveys. We also computed
SDSS-compatible colors from reflectance spectra of the Gaia mission and a
compilation of ground-based observations. We determined the taxonomy of each
NEO from its colors and studied the distribution of the taxonomic classes and
spectral slope against the orbital parameters and diameter. Results. We provide
updated photometry for 470 NEOs from the SDSS, and taxonomic classification of
7,401 NEOs. We classify 42 NEOs that are mission-accessible, including six of
the seven flyby candidates of the ESA Hera mission. We confirm the perihelion
dependance of spectral slope among S-type NEOs, likely related to a
rejuvenation mechanism linked with thermal fatigue. We also confirm the
clustering of A-type NEOs around 1.5-2 AU, and predict the taxonomic
distribution of small asteroids in the NEO source regions in the Main Belt. | A. V. Sergeyev, B. Carry, M. Marsset, P. Pravec, D. Perna, F. E. DeMeo, V. Petropoulou, M. Lazzarin, F. La Forgia, I. Di Petro, the NEOROCKS team | 2023-09-07T08:36:42Z | http://arxiv.org/abs/2309.03555v1 | # Compositional properties of planet-crossing asteroids from astronomical surveys
###### Abstract
Context:The study of planet-crossing asteroids is of both practical and fundamental importance. As they are closer than asteroids in the Main Belt, we have access to a smaller size range, and this population frequently impacts planetary surfaces and can pose a threat to life.
Aims:We aim to characterize the compositions of a large corpus of planet-crossing asteroids and to study how these compositions are related to orbital and physical parameters.
Methods:We gathered publicly available visible colors of near-Earth objects (NEOs) from the Sloan Digital Sky Survey (SDSS) and SkyMapper surveys. We also computed SDSS-compatible colors from reflectance spectra of the Gaia mission and a compilation of ground-based observations. We determined the taxonomy of each NEO from its colors and studied the distribution of the taxonomic classes and spectral slope against the orbital parameters and diameter.
Results:We provide updated photometry for 470 NEOs from the SDSS, and taxonomic classification of 7,401 NEOs. We classify 42 NEOs that are mission-accessible, including six of the seven flyby candidates of the ESA Hera mission. We confirm the perihelion dependence of spectral slope among S-type NEOs, likely related to a rejuvenation mechanism linked with thermal fatigue. We also confirm the clustering of A-type NEOs around 1.5-2 AU, and predict the taxonomic distribution of small asteroids in the NEO source regions in the Main Belt.
## 1 Introduction
Asteroids are the remnants of the building blocks that accreted to form the terrestrial planets and the core of the giant planets in the early Solar System 4.6 Gy ago. Asteroids are also the origin of the meteorites that fell on the planets, including the Earth. These meteorites represent the only possibility to study in detail the composition of asteroids in the laboratory (e.g., Consolmagno et al., 2008; Cloutis et al., 2015), with the exception of the tiny samples of rock provided by sample-return missions: JAXA Hayabusa (Yurimoto et al., 2011) and Hayabusa-2 (Tachibana et al., 2022), as well as the soon-due NASA OSIRIS-REx (Lauretta et al., 2017).
In contrast to targeted sample collection, we cannot choose the origin of meteorites striking the Earth. Identifying their source regions is therefore crucial to determining the physical conditions and abundances in elements that reigned in the protoplanetary nebula around the young Sun (McSween et al., 2006). From the analysis of a bolide trajectory, it is possible to reconstruct a meteorite's heliocentric orbit (Gounelle et al., 2006), although such determinations have been limited to only a few meteorites (Granvik & Brown, 2018).
Among the different dynamical classes of asteroids, the near-Earth and Mars-crosser asteroids (NEAs and MCs), whose orbits cross that of the telluric planets, form a transient population. Their typical lifetime is of only a few million years before they are ejected from the Solar System, fall into the Sun, or impact a planet (Gladman et al., 1997). We refer here to near-Earth objects (NEOs) in a liberal sense, encompassing both asteroid-like and comet-like objects whose orbits cross that of a terrestrial planet (hence including NEAs, MCs, and some Hungarias).
These populations are of both scientific and pragmatic interest. As they are closer to the Earth than the asteroid belt, we have access to smaller objects from ground-based telescopes. Their orbital proximity implies a much smaller impulsion to reach them with a spacecraft and make them favorable targets for space exploration (Abell et al., 2012). On the other hand, these objects could potentially pose a threat, and studying their properties is a key aspect in planning risk mitigation (Drube et al., 2015), of which the National Aeronautics and Space Administration (NASA) Demonstration for Autonomous Rendezvous Technology (DART) and European Space Agency (ESA) Hera missions are lively demonstrators (Rivkin et al., 2021; Michel et al., 2022).
We focus here on the compositional properties of a large corpus of NEOs as part of the NEOROCKS project (Dotto et al., 2021), whose goal is the characterization of the NEO population. The article is organized as follows. In Section 2 we present the data we have collected and the way in which we build a large catalog of NEOs with visible colors (including a refinement of the photometry of the NEOs present in the SDSS catalog, Appendix B). We then present in Section 3 the way in which we determine the taxonomic class of each NEO. We focus on the taxonomy of the potential targets for space missions in Section 4, and finally, we discuss the distribution of taxonomic classes, the effect of space weathering and planetary encounters, and NEO source regions in Section 5.
## 2 Data sources
In this section, we describe the data sets we collect, how they compare in terms of precision, and the way in which we merge them into a single catalog of colors. The entire process is summarized in Figure 1.
### Collecting data sets
We gathered the colors of NEOs from four recently published sources: the Sloan Digital Sky Survey (SDSS, Sergeyev and Carry, 2021), the SkyMapper Southern Survey (SMSS, Sergeyev et al., 2022), the Gaia DR3 visible spectra (Gaia, Galluccio et al., 2022), and a compilation of ground-based spectra (Classy, Mahlke et al., 2022). For the last two sources, we converted the reflectance to colors in order to obtain the largest possible homogeneous data set (Appendix A).
Each SDSS observation sequence contains quasi-simultaneous measurements in five filters (\(u\), \(g\), \(r\), \(i\), \(z\)), providing colors of all combinations. There is a constant
Figure 1: Schematic view of the extraction, convertion, and merging of NEOs from SDSS, SMSS, Gaia, and Classy catalogs.
time difference between two exposures in consecutive filters, equal to 57 s. The largest time difference between two exposures occurs for the \(g\) and \(r\) filters, and is approximately 230 s. The initial SDSS catalog contains 11,142 individual multi-filter observations of 5,425 unique NEOs. For each NEO, we computed the weighted mean of each color from multiple measurements. Owing to potential biases on the SDSS photometry for fast-moving NEOs (Solano et al., 2014; Carry et al., 2016), we remeasured 470 NEO colors on SDSS frames (see Appendix B).
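The per-object averaging of repeated color measurements can be sketched as follows (a minimal sketch; inverse-variance weighting is an assumption, as the exact weighting scheme is not detailed here):

```python
import numpy as np

def weighted_mean_color(colors, errors):
    """Combine repeated measurements of one color index for one asteroid."""
    colors = np.asarray(colors, dtype=float)
    weights = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = np.sum(weights * colors) / np.sum(weights)
    error = np.sqrt(1.0 / np.sum(weights))
    return mean, error

# Example: weighted_mean_color([0.62, 0.58, 0.65], [0.03, 0.05, 0.04])
```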
The SkyMapper survey includes several observing strategies: a shallow six-filter sequence with exposure times between 5 s and 40 s, a deep ten-image filter sequence with 100 s exposures, and pairs of deep exposures in (\(g\),\(r\)) and (\(i\),\(z\)). This observing strategy, in conjunction with the enhanced sensitivity in \(g\) and \(r\), implies a predominance of \(g-r\) colors in the results, but almost always leads to the measurement of at least one photometric color obtained within \(\lesssim 2\) min (see Sergeyev et al., 2022, for more details). The initial SkyMapper catalog contains 12,001 individual observations of 3,149 unique NEOs. We computed the asteroid color indexes by limiting the time between exposures in two filters to 20 minutes, and computed the weighted mean color of multiple asteroid measurements whenever possible. Through this method, we retrieved 9,212 colors of 2,081 individual NEOs. The SkyMapper filters are slightly different from those of SDSS. We thus converted the SkyMapper colors into SDSS colors using color-transformation coefficients that were computed from a wide range of stellar classes (Sergeyev et al., 2022).
Gaia DR3 (Gaia Collaboration et al., 2016; Vallenari et al., 2022) contains 60,518 low-resolution reflectance spectra of asteroids (Galluccio et al., 2022). Among these, 838 are NEOs, of which 199 have not been recorded in other catalogs. These optical spectra range from 374 to 1034 nm, meaning that they almost fully overlap with the SDSS \(g\) to \(z\) filters (see Figure 1). We thus converted the Gaia reflectance spectra to SDSS colors to homogenize the data set. We detail the procedure in Appendix A.
The Gaia DR3 represents the largest catalog of asteroid reflectance spectra. However, the spectra of NEOs have regularly been acquired with ground-based facilities for decades, often over a larger wavelength range and with a higher spectral resolution (e.g., NEOSHIELD2, MITHNEOS, and MANOS surveys, see Perna et al., 2018; Binzel et al., 2019; Devogele et al., 2019). Therefore, we used the preprocessed and resampled ground-based spectra from Mahlke et al. (2022), which comprises 4,548 spectra of 3,157 unique asteroids. We extracted 1,072 spectra of 846 unique NEOs and converted them to SDSS colors with the same procedure as for the Gaia data (Appendix A).
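The conversion of a reflectance spectrum into SDSS-compatible colors (detailed in Appendix A) amounts to convolving the spectrum with the filter transmission curves; the sketch below illustrates the principle only, ignoring the solar-color zero-point that must be accounted for in practice, and the filter-curve arrays are assumed inputs:

```python
import numpy as np

def synthetic_color(wave, refl, band_blue, band_red):
    """Synthetic color (mag) from a reflectance spectrum and two filter curves.

    wave, refl     : wavelength grid (nm) and reflectance of the asteroid
    band_blue/red  : (wavelength, transmission) arrays, e.g. for SDSS g and z
    """
    def band_average(band):
        w, t = band
        r = np.interp(w, wave, refl)          # reflectance on the filter grid
        return np.trapz(r * t, w) / np.trapz(t, w)
    return -2.5 * np.log10(band_average(band_blue) / band_average(band_red))
```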
### Comparing data sets
Before merging the four catalogs of colors, we checked for systematic differences in colors and uncertainties among the four data sets. To do this, we did not restrict the comparison to NEOs, but used all of the available asteroid colors from the four data sets: 400,894 for SDSS, 139,220 for SMSS, 60,518 for Gaia, and 3,157 for ground-based (Classy).
We cross-matched the asteroid colors from the other sources to the SDSS, which contains the largest number of asteroids and is used as a reference here. We found 67,921, 28,948, and 1,951 asteroids in common for the SMSS, Gaia, and Classy catalogs, respectively. We then computed the color difference between the SDSS and the other catalogs. The distributions of these differences were normal for all pairs of filters and catalogs, with mean values close to zero (Figure 2). The spread (standard deviation) of these differences reflects a combination of several effects: the measurement uncertainties of each catalog (either magnitudes or spectra), the potential effect of asteroid rotation (due to the non-simultaneous acquisition of asteroid images in different filters; see, e.g., Carry, 2018), and observations at different phase angles (Sanchez et al., 2012; Galluccio et al., 2022; Cellino et al., 2020).
The detailed results of this comparison are presented in Table 1. There are small systematic offsets between catalogs on average, much smaller than their standard deviation but larger than the standard error (\(\sigma/\sqrt{n}\), where \(n\) is the number of observations). For instance, SMSS matches SDSS with an average \(g\)-\(r\) color difference of 0.033 mag and a standard deviation of 0.106 mag. This was determined using 44,005 shared \(g\)-\(r\) color measurements that had an error of less than 0.1 mag. Although the systematic offset is three times smaller than the standard deviation, the standard error is approximately 0.0005. Therefore, these systematic biases were corrected by adding the precomputed offsets for each color before merging the data sets.
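The offsets quoted in Table 1 correspond to the centers of Gaussian fits such as those shown in Figure 2; a minimal sketch of this correction step is:

```python
import numpy as np
from scipy.stats import norm

def offset_correction(reference, other):
    """Fit a Gaussian to the color differences and return the corrected colors."""
    mu, sigma = norm.fit(np.asarray(reference) - np.asarray(other))
    return np.asarray(other) + mu, mu, sigma

# Example with toy numbers standing in for matched g-r colors:
# corrected, mu, sigma = offset_correction([0.61, 0.55, 0.70], [0.58, 0.51, 0.68])
```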
As visible in Figure 2, the width of the color difference distributions is largest between SDSS and SMSS, because both catalogs have the largest color uncertainties. Once the color difference between the catalogs is corrected, the standard deviation can be independently computed as
\[\sigma_{\texttt{SDSS-SMSS}}=\sqrt{\sigma_{\texttt{SDSS}}^{2}+\sigma_{\texttt{SMSS}}^{2}}. \tag{1}\]
We present a detailed comparison of the color differences and uncertainties in Appendix C. Based on this analysis, we note that some uncertainties are either over- or under-estimated (e.g., Gaia and SDSS, respectively), and we applied multiplicative correction factors when selecting the best color value among repeated measurements of the same asteroid in different catalogs (see Table 1).
Figure 2: Distribution of color differences between the SDSS, Gaia, and Classy with respect to the SDSS data set, using asteroids commonly found in these data sets. The distribution was fitted with a Gaussian curve, represented by the black line. The central gray vertical line denotes the zero offset.
### Merging data sets
We merged the four data sets based on asteroid designation (we used the rocks interface to the name resolver of SsODNet1, see Berthier et al. 2022). Each catalog contains NEOs that have not been measured in the others. The most prolific source is the SDSS, which contains 4,398 unique NEOs, followed by SkyMapper, with 964 unique NEOs. The Classy and Gaia catalogs contain 507 and 199 unique NEOs, respectively.
Footnote 1: [https://rocks.readthedocs.io](https://rocks.readthedocs.io)
For NEOs present in more than one catalog, the color with the smallest uncertainty is selected. This results in a catalog of 7,401 NEOs (i.e., NEAs and MCs) with at least one color measurement, which we call NEOROCKS. We collected the ancillary parameters of each asteroid in our sample with SsODNet, including orbital elements and albedo, for instance. The description of the catalog is presented in Appendix E.
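The merging step, which keeps for each designation the measurement with the smallest uncertainty, can be sketched with pandas (column names and input tables are placeholders):

```python
import pandas as pd

def merge_catalogs(catalogs, error_column="g_r_err"):
    """Keep, for each designation, the row with the smallest color uncertainty."""
    merged = pd.concat(catalogs, ignore_index=True)
    return merged.sort_values(error_column).drop_duplicates("designation", keep="first")

# Example: best_gr = merge_catalogs([sdss_df, smss_df, gaia_df, classy_df])
```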
We present in Figure 3 the orbital distribution of the NEOROCKS sample and detail in Table 2 the dynamical classes, including 2277 NEAs (Aten, Amor, Apollo, and Atira) and 5124 MCs. We also included in the MC sample the Hungarias that, owing to their eccentricity, have a perihelion within the orbit of Mars.
The absolute magnitudes in the NEOROCKS catalog, extracted from the virtual observatory Solar System open database network (SsODNet) (Berthier et al., 2023), show a bimodal distribution (Figure 4), resulting from the typically larger distance of MCs compared with NEAs. The average absolute magnitude of the NEAs is \(19.2\pm 2.0\), while it is \(17.8\pm 1.3\) for MCs. Assuming an albedo of 0.24 for all NEOs results in an average diameter of \(0.40^{+0.61}_{-0.24}\) km for NEAs and \(0.76^{+1.34}_{-0.35}\) km for MCs, covering a wide range from \(\approx\) 10 km down to 50 m. We chose this albedo as it is the mean albedo of S-type asteroids (Mahlke et al., 2022), the most represented taxonomic class among NEOs (Section 5 and, e.g., Binzel et al. (2019)).
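These diameters follow from the standard relation between absolute magnitude and diameter for a given geometric albedo; the short sketch below reproduces the quoted mean values:

```python
import numpy as np

def diameter_km(H, albedo=0.24):
    """Standard H-to-diameter relation: D = 1329 / sqrt(p_V) * 10**(-H/5), in km."""
    return 1329.0 / np.sqrt(albedo) * 10 ** (-H / 5.0)

print(diameter_km(19.2))  # ~0.40 km, mean NEA
print(diameter_km(17.8))  # ~0.76 km, mean MC
```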
## 3 Taxonomy
Taxonomy is a convenient way to summarize observations into a simpler set of labels that describe categories of objects that share the same properties. Asteroid taxonomy is based on the spectral signatures of the light reflected by the surface (e.g., Belskaya et al., 2015; Reddy et al., 2015). Widely used asteroid taxonomy schemes include those of Tholen (1984), using visible colors and albedo, and DeMeo
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Sample} & \multicolumn{2}{c|}{\(g\)-\(r\)} & \multicolumn{2}{c|}{\(g\)-\(i\)} & \multicolumn{2}{c|}{\(r\)-\(i\)} & \multicolumn{2}{c}{\(i\)-\(z\)} \\ \cline{2-9} & Difference & \(n\) & Difference & \(n\) & Difference & \(n\) & Difference & \(n\) \\ \hline SDSS-SMSS & \(0.033\pm 0.106\) & 54283 & \(0.056\pm 0.112\) & 52546 & \(0.013\pm 0.074\) & 59242 & \(0.016\pm 0.091\) & 35252 \\ SDSS-Gaia & \(0.007\pm 0.076\) & 27158 & \(-0.002\pm 0.079\) & 27043 & \(-0.010\pm 0.052\) & 28455 & \(-0.051\pm 0.064\) & 24768 \\ SDSS-Classy & \(0.005\pm 0.066\) & 1807 & \(0.029\pm 0.078\) & 1796 & \(0.024\pm 0.033\) & 1843 & \(0.008\pm 0.054\) & 1734 \\ \hline \end{tabular}
\end{table}
Table 1: The mean and standard deviation of the color difference between SDSS and the other samples. The number of asteroids in each of the samples is also reported. We limit the SDSS sample to asteroids with uncertainties below 0.1 mag.
Figure 4: Distribution of absolute magnitude of MCs (blue) and NEAs (orange). The diameter scale is a guideline, computed with an average albedo of 0.24.
Figure 3: Distribution of the orbital elements of the NEOs, color-coded by dynamic class.
et al. (2009), using visible and near-infrared spectrum (itself an extension of Bus & Binzel, 2002, based on the visible spectrum). These have recently been unified into a taxonomy using both visible and near-infrared spectra and albedos (Mahlke et al., 2022).
### Classification of multi-color NEOs
We used the same approach as earlier works on photometry, which derived classifications consistent with spectroscopy (e.g., DeMeo & Carry, 2013; Popescu et al., 2018; Sergeyev & Carry, 2021). We converted reference spectra into colors (Appendix A) and used them to define the taxonomic classes in the photometry space. To determine the taxonomic class of each asteroid, we employed the probabilistic approach of Sergeyev et al. (2022), which involves computing the intersection between the volume occupied by the color (with its uncertainty) of an object and the regions of each taxonomic class. We updated the regions to match the recent taxonomy by Mahlke et al. (2022) instead of using the Bus-DeMeo templates (DeMeo et al., 2009), and computed the probability of each asteroid belonging to each of the ten broad taxonomic complexes: A, B, C, D, K, L, Q, S, V, and X.
The final taxonomy for each asteroid was selected based on the most probable taxonomic complex. We also provided the second-highest probability taxonomic complex.
\begin{table}
\begin{tabular}{l r} \hline \hline Dynamical class & Number \\ \hline Mars-Crossers & 4,380 \\ Amor & 1074 \\ Apollo & 1078 \\ Hungarias & 744 \\ Aten & 124 \\ Atira & 1 \\ \hline Total & 7,401 \\ \hline \end{tabular}
\end{table}
Table 2: Distribution of NEOs among dynamic classes.
Figure 5: Color-color distribution of NEOs with a taxonomy probability above 0.2, color-coded by taxonomic class.
Figure 6: Pseudo-reflectance spectra of asteroids based on their \(g\)-\(r\), \(g\)-\(i\), and \(i\)-\(z\) colors. The distribution of values for each band is represented by whiskers (95% extrema, and the 25, 50, and 75% quartiles). For each class, we also represent the associated template spectra of the Mahlke et al. (2022) taxonomy.
Asteroids with a likelihood of less than 10% of fitting into any taxonomic complex were labeled as U (unclassified; Appendix E).
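A Monte-Carlo sketch of this probabilistic assignment is given below: it draws samples from the Gaussian color uncertainty of an object and measures the fraction falling inside each class region (the polygonal region passed in the example is a toy placeholder, not the actual boundaries derived from the Mahlke et al. (2022) templates):

```python
import numpy as np
from matplotlib.path import Path

def class_probabilities(color, sigma, class_regions, n_samples=10000, seed=0):
    """Fraction of the (g-r, i-z) error cloud falling inside each class region."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(loc=color, scale=sigma, size=(n_samples, 2))
    return {name: Path(vertices).contains_points(samples).mean()
            for name, vertices in class_regions.items()}

# Example with a toy rectangular region standing in for the S complex:
# probs = class_probabilities(color=(0.65, 0.05), sigma=(0.04, 0.05),
#                             class_regions={"S": [(0.55, -0.05), (0.85, -0.05),
#                                                  (0.85, 0.15), (0.55, 0.15)]})
```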
We present in Figure 5 the color-color distribution of 2341 NEOs for which taxonomy is predicted with a probability higher than 20%. This constraint was selected to avoid the visual overloading of the figure. The distribution follows the reported color distribution of asteroids in the SDSS filter system (Nesvorny et al., 2005; Parker et al., 2008; Carry et al., 2016). We also present a comparison of pseudo-reflectance spectra based on the photometry of our sample with the template spectra of the taxonomic class from Mahlke et al. (2022) in Figure 6. The correspondence of the SDSS median spectra with the template spectra confirms the chosen taxonomy boundaries. The method provides a reliable way to determine the taxonomic classification of NEOs using photometry data. With the increasing number of NEOs discovered every year, it is becoming increasingly important to be able to classify these objects accurately and efficiently. Spectroscopy is the most accurate method for determining asteroid taxonomy, but it is time-consuming and requires a significant amount of telescope time. On the other hand, photometry data can be obtained much more efficiently, making it a more practical choice for large-scale surveys.
### Classification based on a single color
Many observations in the present data set have a significantly better signal-to-noise ratio in the \(g\) and \(r\) filters. Furthermore, some of the asteroids from the SMSS sample only have a \(g\)-\(r\) color. Thus, we also classified asteroids from this single color. We utilized the \(g\)-\(r\) color of one million asteroid observations from Sergeyev & Carry (2021) to build a reference distribution. We fitted this distribution with two normal distributions, corresponding to two wide complexes (carbonaceous, \(C_{1}\), and silicates, \(S_{1}\)). We used these two distributions to compute the probability that a NEO belongs to each wide complex, based on its \(g\)-\(r\) color. Whenever the difference between the probabilities was smaller than 20 percent, we marked these asteroids as unclassified. We present the \(g\)-\(r\) color distribution of NEOs in Figure 7. It is of course a coarser classification than the one based on three colors. However, it allows for discrimination between "red" (S, A, V, L, and D) and "blue" objects (C and B) in a manner similar to Erasmus et al. (2020). A significant number of the unclassified asteroids belong to the X complex, while the remainder are of the D and K asteroid types (Figure 8). Although a taxonomy based on a single color may appear limited, we present in Figure 8 the confusion matrix between the one- and three-color classes. The \(C_{1}\) and \(S_{1}\) classes accurately separate asteroids belonging to the C complex from those displaying an absorption band around 1 micron (which are redder: K types, L types, and the S complex).
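The two-component model can be approximated with a Gaussian mixture fit, as sketched below (the input color arrays are placeholders; this is an illustration of the procedure, not the exact implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def single_color_classes(reference_gr, neo_gr, margin=0.2):
    """Assign C1 / S1 / U labels to NEOs from their g-r color alone."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(np.asarray(reference_gr).reshape(-1, 1))
    blue, red = np.argsort(gmm.means_.ravel())   # bluer ~ carbonaceous, redder ~ silicate
    resp = gmm.predict_proba(np.asarray(neo_gr).reshape(-1, 1))
    return np.where(np.abs(resp[:, red] - resp[:, blue]) < margin, "U",
                    np.where(resp[:, red] > resp[:, blue], "S1", "C1"))
```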
As a final step, we merged the taxonomy obtained with three colors (\(g\)-\(r\), \(g\)-\(i\), and \(i\)-\(z\)) and that with a single color only (\(g\)-\(r\)). The former is preferred over the latter (Appendix E). If neither approach could classify an asteroid, we set the classification method to "none."
Figure 8: Confusion matrix illustrating the correlation between predicted single-color (\(g\)-\(r\)) taxonomy outcomes and the results of a three-color taxonomy (\(g\)-\(r\), \(g\)-\(i\), \(i\)-\(z\)). This matrix displays the fractions of true positives, false positives, true negatives, and false negatives.
Figure 7: Distribution of g-\(r\) colors in SDSS asteroids and taxonomic categorization of NEOs. Top: Color distribution of one million asteroids obtained from the SDSS (Sergeyev & Carry, 2021) data set modeled by fitting a mixture of two Gaussians (represented by the black line). The two main taxonomic classes, silicate (depicted in orange) and carbonaceous (depicted in blue), were represented by the model.
Bottom: Distribution of \(g\)-\(r\) colors and the taxonomy of NEOs analyzed using the two-component mixture model of the two primary classes in the SDSS data set (shown by lines). The carbonaceous and silicate taxonomy complexes are represented by blue and orange, respectively. Unclassified asteroids, where the probability of belonging to each complex is comparable, are represented in gray.
Figure 9: NEO taxonomy distribution computed by (\(g\)-\(r\), \(g\)-\(i\), \(i\)-\(z\)) color indexes.
### Distribution of taxonomy and albedos
The prevalence of S types is striking (Figure 9). It is notable that the distribution presented here is affected by the selection function of the observations: the surveys used here are magnitude limited, which biases the sample differently for taxonomic classes with different albedos (DeMeo and Carry, 2013; Marsset et al., 2022). The albedo is an important characteristic related to the composition of asteroids (Tholen, 1989; Mahlke et al., 2022). For instance, asteroids in the B, C, and D classes have low albedos (below 10%), while mafic-rich asteroids (e.g., A, Q, and S types) have albedos around 0.24. The main advantage of taking the albedo into account is the possibility of splitting the degenerate X complex into high-albedo E-type asteroids (albedo above 0.30), moderate-albedo M (metallic) asteroids, and "dark" P asteroids (below 0.10).
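The albedo-based split of the X complex described above amounts to a simple thresholding, with the thresholds taken from the text:

```python
def split_x_complex(albedo):
    """E / M / P sub-classification of an X-complex asteroid from its albedo."""
    if albedo > 0.30:
        return "E"   # high albedo
    if albedo < 0.10:
        return "P"   # dark
    return "M"       # moderate albedo, metal-rich
```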
We used SsODNet (Berthier et al., 2022) to retrieve the albedo of the NEOs in our data set for a consistency check. In Figure 10 we compare the \(i\)-\(z\) and \(g\)-\(r\) colors of 898 NEOs that have estimated albedo values. There is an overall agreement between the range of albedos for the different taxonomic complexes, although outliers are visible. These outliers are a consequence of either misclassifications or biased albedos (Masiero et al., 2021), or both. Mismatches occur mainly in classes with highly different albedos but similar colors, such as D- and L-type asteroids (here, some D types have albedos around 0.2, more consistent with L types).
The albedo distribution of X types reveals that approximately 45% of them are actually P types. The fraction of M types is approximately 45% and the remaining 10% are high-albedo E-type asteroids (Usui et al., 2013). However, P-type asteroids are very similar to C-type asteroids in both color and albedo, and can therefore be misclassified.
### Comparison with previous surveys
We compared the distribution of taxonomic classes of the present NEOROCKS sample with the three previous main spectral surveys of NEOs: MITHNEOS (Binzel et al., 2019), NEOSHIELD (Perna et al., 2018), and MANOS (DeVogele et al., 2019) (see Figure 11). The NEOROCKS sample overlaps almost completely with the NEOSHIELD-2 and MANOS catalogs because the Classy data include all available ground-based spectral observations. The overlap with MITHNEOS is limited to approximately half of this catalog, for which a majority of spectra only cover the near-infrared range. While differences are visible (and partly expected owing to the size dependence of taxonomic distribution; e.g., Devogele et al., 2019), we note an overall agreement with the different data sets.
The confusion matrix presented in Figure 12 indicates that there is a high level of agreement in the taxonomic classification of S-, V-, and X-type asteroids. However, some confusion is observed among the less common classes in the NEO population, particularly K versus L and (A, L, Q) versus S. Additionally, a significant number of C-type asteroids were classified as part of the wide X complex, which also includes P-type asteroids that share similar photometry and albedo properties with C-type asteroids. This highlights both the strengths and limitations of using broadband colors as the basis for taxonomic classification.
## 4 Targets accessible to space missions
As opposed to other domains in astrophysics, the Solar System can almost be considered as a close neighborhood. Distances are small enough that we have sent space probes (some of which returned), providing ground-truths for Earth-based studies and leading to great discoveries, such as satellites of asteroids (Chapman et al., 1995; Belton et al., 1995), the asteroid-meteorite link (Fujiwara et al., 2006; Yurimoto et al., 2011), and cryo-volcanism (on Ceres, Kuppers et al., 2014; Ruesch et al., 2016), for instance.
Figure 10: Colors and albedo of NEOs. Taxonomy is marked by colored letters (same color-code as in Fig. 5). Vertical ranges between the panels indicate the one-sigma range of albedo for each taxonomic class (Mahlke et al., 2022).
Since the 1990s, opportunities to encounter an asteroid during an interplanetary mission have been considered, and dynamical studies have been conducted to find candidates for potential flyby missions (e.g., Di Martino et al., 1990; Agostini et al., 2022). These candidates are often at the origin of characterization efforts to select the actual target of the flyby and prepare the spacecraft operations during the short encounter (e.g., Doressoundiram et al., 1999; Carry et al., 2010). As a result, there have been almost as many encounters (seven) during opportunity flybys4 as targeted encounters with asteroids5 (ten).
Footnote 4: [https://echo.jpl.nasa.gov/lance/delta_v](https://echo.jpl.nasa.gov/lance/delta_v).
Footnote 5: (1) Ceres (Dawn), (4) Vesta (Dawn), (433) Eros (NEAR Shoemaker), (4179) Toutatis (Chang'e 2), (25143) Itokawa (Hayabusa), (65803) Didymos (DART), (134340) Pluto (New Horizons), (162173) Ryugu (Hayabusa2), (486958) Arrokoth (New Horizons), (101955) Bennu (OSIRIS-REx).
We searched in the present NEOROCKS data set for any candidate of upcoming space missions (e.g., NASA JANUS, JAXA Hayabusa-2 extension, Scheeres et al., 2020; Yano et al., 2022) and found many objects (Table 3) listed as flyby candidates for the ESA Hera mission (approximately one hundred, see Fitzsimmons et al., 2020).
A critical parameter in selecting a space mission target is the amount of energy required to reach it. This quantity is often expressed as the total change of velocity, \(\Delta v\). We used the \(\Delta v\) values computed and provided by L. Benner6 and selected the NEOs in our NEOROCKS catalog with \(\Delta v<6.5\) km/s, the typical \(\Delta v\) required for a mission to Mars. We present in Table 4 the taxonomy of these 42 mission-accessible NEOs.
Footnote 6: [https://echo.jpl.nasa.gov/lance/delta_v](https://echo.jpl.nasa.gov/lance/delta_v).
\begin{table}
\begin{tabular}{l c c c c} \hline Designation & Number & Dyn.class & Taxo & Prob \\ \hline
1995 OR & 42532 & MB\textgreater{}Inner & D & – \\
2000 HJ89 & 54212 & MB\textgreater{}Inner & V & 0.72 \\
2001 TJ72 & 88992 & MB\textgreater{}Inner & S & 0.17 \\
Francismuir & 95802 & MB\textgreater{}Inner & K & 0.84 \\ \hline
Etiennemarney & 3456 & MB\textgreater{}Inner & M & 0.95 \\
Gorlitsa & 3818 & MB\textgreater{}Inner & C & 0.98 \\
1981 EW30 & 10278 & MB\textgreater{}Inner & S & 0.62 \\
2000 CC33 & 14710 & MB\textgreater{}Inner & S & 0.24 \\
1998 WS9 & 49352 & MB\textgreater{}Inner & M & 0.88 \\
1996 HL21 & 79317 & MB\textgreater{}Middle & V & 0.03 \\
2000 EP110 & 86616 & MB\textgreater{}Inner & S & 0.84 \\
2000 NF22 & 118687 & MB\textgreater{}Inner & X & 0.93 \\
2001 UH40 & 125107 & MB\textgreater{}Inner & K & 0.28 \\
2004 FQ111 & 128338 & MB\textgreater{}Inner & S & 0.73 \\
2003 AQ28 & 151682 & MB\textgreater{}Inner & S & 0.54 \\
2003 CB7 & 151738 & MB\textgreater{}Inner & S & 0.90 \\
2001 QU65 & 189092 & MB\textgreater{}Inner & V & 0.35 \\
2006 DR115 & 245739 & MB\textgreater{}Inner & S & 0.57 \\
2008 EZ75 & 263476 & MB\textgreater{}Inner & B & 0.21 \\
2008 FK125 & 274163 & MB\textgreater{}Inner & X & 0.27 \\
2008 UE268 & 309745 & MB\textgreater{}Inner & S & 0.51 \\
2007 UT127 & 355419 & MB\textgreater{}Middle & S & 0.16 \\
2005 YX13 & 388155 & MB\textgreater{}Inner & S & 0.08 \\
2012 AE1 & 392704 & NEA\textgreater{}Apollo & V & 0.17 \\
2008 FL108 & 431739 & MB\textgreater{}Inner & C & 0.22 \\
2013 YQ49 & 479408 & MB\textgreater{}Inner & V & 0.18 \\
2015 PT9 & 515878 & MB\textgreater{}Inner & S & 0.28 \\
2011 HF9 & – & MB\textgreater{}Inner & C & 0.36 \\
2013 LG2 & – & MB\textgreater{}Inner & V & 0.06 \\
2014 JE85 & – & MB\textgreater{}Inner & S & 0.09 \\ \hline \end{tabular}
\end{table}
Table 3: Flyby candidates of the ESA Hera mission. The top part of the table lists the candidates from the short list of targets; the bottom part lists those from the long list.
Figure 11: Comparison of the distribution of taxonomic classes of the NEOROCKS sample computed from (\(g\)-\(r\), \(g\)-\(i\), \(i\)-\(z\)) color indexes with the MITHNEOS, NEOSHIELD, and MANOS spectral surveys.
Figure 12: Comparison of the NEOROCKS NEO taxonomy with the MITHNEOS, MANOS, and NEOSHIELD-2 catalogs.
We also provide an analysis of the spectrum for the flyby candidate (10278) Virkki in Appendix D.
## 5 Discussion
We used the derived colors and taxonomic classes to address several topics. In Section 5.1, we discuss space weathering for the NEOs in the S complex. We then present the distribution of A types in Section 5.2 and the dependence of asteroid colors on phase angle in Section 5.3. We finally discuss the taxonomic distribution of small asteroids in the source regions of NEOs in Section 5.4.
### Space weathering
The surfaces of atmosphereless bodies in the Solar System age owing to micro-meteorite impacts and solar wind ions, a process commonly referred to as space weathering (Chapman, 2004). Space weathering changes the properties of the top-most surface layer (nanometer thick, Noguchi et al., 2011) as a function of exposure (age and heliocentric distance) and composition. Thanks to laboratory experiments (e.g., Sasaki et al., 2001; Strazzulla et al., 2005; Brunetto et al., 2006), the effect of space weathering on mixtures of olivines and pyroxenes (such as A, S, and V types) is well understood (Brunetto et al., 2015): it reddens and darkens surfaces. Its effects on the reflectance of more primitive material linked with carbonaceous chondrites (such as B and C types) are less straightforward, with both blueing and reddening as possible outcomes (Lantz et al., 2017; Lantz et al., 2018).
In the case of S types, the effect is expected to be very fast, changing ordinary chondrite-like material (the Q types) into S types in less than a million years (Vernazza et al., 2009). The presence of Q types among asteroids implies that their surfaces are young. Considering the short timescale for space weathering (longer than the timescale to be injected from the Main Belt, Gladman et al., 1997), some rejuvenating mechanisms must be present (Marchi et al., 2012).
Q-type asteroids were originally found among NEOs only, so planetary encounters were proposed as a rejuvenation mechanism (Nesvorny et al., 2005; Nesvorny et al., 2010; Binzel et al., 2010). However, this early observation was due to an observing bias: the fraction of Q increases
\begin{table}
\begin{tabular}{l c c c l} \hline Designation & \(\Delta V\) & Complex & Prob & Dyn. Class \\ & km/sec & & & \\ \hline
2004 EU22 & 4.4 & D & 0.55 & Apollo \\
1998 SF36 & 4.6 & S & 0.96 & Apollo \\
2015 DP155 & 4.7 & V & 0.81 & Amor \\
2008 DG5 & 4.8 & S & 0.81 & Apollo \\
1996 GT & 5.2 & S & 1.00 & Amor \\
1994 CN2 & 5.2 & S & 0.69 & Apollo \\
2001 SW169 & 5.3 & S & 0.64 & Amor \\
1997 WT22 & 5.3 & S & 0.96 & Amor \\
2002 LJ3 & 5.3 & S & 0.96 & Amor \\
1973 EC & 5.4 & L & 0.96 & Amor \\
2006 UP & 5.4 & S & 0.67 & Amor \\
1982 HR & 5.5 & V & 1.00 & Apollo \\
1999 VG22 & 5.5 & S & 0.75 & Amor \\
1980 PA & 5.7 & V & 1.00 & Amor \\
2010 MV8 & 5.7 & S & 0.62 & Amor \\
2003 RB & 5.7 & S & 0.99 & Amor \\
2002 XP40 & 5.7 & S & 1.00 & Amor \\
2001 FC7 & 5.8 & X & 0.58 & Amor \\
1993 QA & 5.9 & S & 0.69 & Amor \\
2008 KZ5 & 6.0 & S & 0.98 & Amor \\
1977 VA & 6.0 & X & 1.00 & Amor \\
2005 Y36 & 6.1 & X & 0.56 & Amor \\
2001 WL15 & 6.1 & S & 0.76 & Amor \\
2001 UA5 & 6.1 & S & 0.54 & Apollo \\
A898 PA & 6.1 & S & 1.00 & Amor \\
2010 LJ14 & 6.2 & Q & 0.55 & Amor \\
2007 VY7 & 6.2 & V & 0.68 & Apollo \\
1998 KU2 & 6.3 & B & 0.94 & Amor \\
1982 DV & 6.3 & S & 0.75 & Amor \\
2002 KL6 & 6.3 & V & 0.97 & Amor \\
2000 JS66 & 6.3 & S & 0.52 & Apollo \\
1929 SH & 6.3 & S & 1.00 & Amor \\
2005 DO33 & 6.3 & S & 0.52 & Amor \\
2002 PG80 & 6.3 & S & 0.62 & Amor \\
2001 FD90 & 6.3 & V & 0.59 & Amor \\
1993 VW & 6.3 & V & 0.80 & Apollo \\
1981 CW & 6.3 & S & 0.64 & Amor \\
2004 YB & 6.3 & S & 0.97 & Apollo \\
2006 SV19 & 6.4 & Q & 0.93 & Amor \\
2018 NB & 6.4 & S & 0.81 & Amor \\
2015 DV215 & 6.4 & V & 0.58 & Apollo \\
2007 SJ & 6.4 & S & 0.57 & Apollo \\ \hline \end{tabular}
\end{table}
Table 4: Mission-accessible NEOs (\(\Delta v<6.5\) km/s) with a taxonomy probability above 0.5.
Figure 13: Gaia reflectance spectra of asteroids (95802) Francismuir and (42532) 1995 OR, flyby candidates of the ESA Hera mission. The orange line shows pre-computed reflectance templates and their uncertainties (Mahlke et al., 2022) for P-type asteroids (top) and D-type asteroids (bottom).
toward smaller diameters, which are harder to observe at larger distances (Thomas et al., 2012; Carry et al., 2016). Space weathering is a continuous process, even though asteroids are ultimately classified into only two groups, S and Q. The observed trend of shallower slopes among S/Q asteroids with smaller diameters explains this bias, and the trend itself can be explained by resurfacing due to landslides or failure linked with Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) spin-up (Graves et al., 2018).
Recently, Graves et al. (2019) tested another mechanism for rejuvenation among NEOs: a cracking mechanism due to thermal fatigue (Viles et al., 2010; Delbo et al., 2014). Based on almost 300 NEOs (from Lazzarin et al., 2004; Lazzarin et al., 2005; Binzel et al., 2004), this model explains the overall behavior of spectral slope against perihelion, which was apparently misinterpreted as being linked to planetary encounters.
In the present section, we use the large NEOROCKS catalog to address the question of space weathering. Our sample contains 1,175 S-type and 196 Q-type asteroids whose taxonomy is based on three colors with a probability higher than 0.2. We chose to use both the taxonomic types (i.e., the Q/S ratio) and the spectral slope as indicators of space weathering. The former highlights the fraction of very fresh surfaces in the sample, while the latter is more nuanced, with the weathering creating a continuous trend from blue to red surfaces.
We first studied the size dependence of space weathering of S-type asteroids. We present in Figure 14 their spectral slope (computed over the \(g\) and \(i\) filters, expressed in \(\%/\mu m\) consistently with reflectance spectroscopy) against their diameter. A similar plot for the Q/S ratio is provided in Figure 16. The diameter of the asteroids (\(D\)) was estimated using their known absolute magnitude (\(H\)) via the equation \(D=1329\cdot p_{V}^{-0.5}\cdot 10^{-0.2H}\)(Harris and Lagerros, 2002) and assuming an albedo of S-type asteroids \(p_{V}=0.24\). The slope of S-type asteroids is constant for asteroids smaller than approximately 1-5 km, and increases for larger asteroids. This is consistent with the previous report by Binzel et al. (2004). Such behavior was indeed already reported (e.g., Thomas et al., 2012; DeMeo et al., 2023) and explained by resurfacing through YORP spin-up and failure (Graves et al., 2018). The decrease in the Q/S ratio for the smallest NEOs may be attributed to the increasing number of monoliths, for which resurfacing may be difficult.
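For illustration, the diameter estimate used above can be reproduced with a few lines of Python; the albedo is the assumed class-average value quoted in the text, not a measured quantity for each object.

```python
def diameter_km(h_mag, albedo_v=0.24):
    """Diameter in km from the absolute magnitude H and geometric albedo p_V,
    following D = 1329 * p_V**-0.5 * 10**(-0.2*H) (Harris & Lagerros 2002)."""
    return 1329.0 * albedo_v ** -0.5 * 10.0 ** (-0.2 * h_mag)

# Example: an S-type NEO with H = 18 mag and the assumed albedo of 0.24
print(f"{diameter_km(18.0):.2f} km")  # ~0.68 km
```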
In Figure 15, we present the relationship between the spectral slope of S-type asteroids and their perihelion. Our analysis shows a probable trend of increasing spectral slope with a more distant perihelion, which is consistent with the findings of previous studies (Graves et al., 2019). The spectral slope remains constant until approximately 1.3-1.4 AU, beyond which it again increases. As noted by Graves et al. (2019), this last behavior is likely an observing bias: the farther away the asteroids, the fewer small diameters we observe, and the fraction of fresh surfaces is not constant with diameter (Carry et al., 2016; Graves et al., 2018). The spectral slope varies by 0.86\(\pm\)0.07 (%/\(\mu\)m)/AU from 0.2 to 0.8 AU and by 0.64\(\pm\)0.07 (%/\(\mu\)m)/AU beyond 1.4 AU. Our analysis thus shows that, within the orbit of Venus, the variation of the spectral slope with perihelion is steeper than previously estimated by Graves et al. (2019), who reported a value of 0.52\(\pm\)0.21 (%/\(\mu\)m)/AU.
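As a sketch of how the slope values used throughout this section can be obtained from a single color index, the snippet below converts a \(g\)-\(i\) color into a visible slope in %/\(\mu\)m; the solar color and the filter effective wavelengths are nominal values adopted here for illustration and should be replaced by the ones actually used in the analysis.

```python
import numpy as np

# Nominal values (assumptions of this sketch): SDSS g and i effective
# wavelengths in micron and the solar g-i color.
LAMBDA_G, LAMBDA_I = 0.477, 0.763
SUN_G_I = 0.55

def slope_percent_per_micron(g_i, g_i_err=None):
    """Visible spectral slope (%/micron) from a g-i color index.

    The color is converted into a reflectance ratio relative to the Sun, and a
    linear slope is computed between the g and i effective wavelengths,
    normalising the reflectance to 1 at g.
    """
    ratio = 10.0 ** (0.4 * (np.asarray(g_i) - SUN_G_I))  # R_i / R_g
    slope = 100.0 * (ratio - 1.0) / (LAMBDA_I - LAMBDA_G)
    if g_i_err is None:
        return slope
    # First-order propagation of the color uncertainty
    dslope = 100.0 * 0.4 * np.log(10.0) * ratio / (LAMBDA_I - LAMBDA_G) * np.asarray(g_i_err)
    return slope, dslope

print(slope_percent_per_micron(0.65, 0.03))  # roughly 34 +/- 11 %/micron
```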
This behavior is also visible in the fraction of Q and S types (Figure 16). There is a strong correlation between the Q/S ratio and the perihelion distance, with the fraction of Q types increasing across a wide range of distances from 0.2 to 1.6 AU. A similar trend was observed by Devogele et al. (2019), who compared the perihelion distribution of 138 S-type NEOs to that of 178 NEOs, including 91 Sq and 87 Q subtypes, for perihelia ranging from 0.7 to 1.0 AU. Outside this range, however, their data showed a flat behavior. The recent study by DeMeo et al. (2023) presents an almost linear trend of increasing Q-type asteroid fraction with decreasing perihelion in an interval from 0.5 to 1.3 AU, very similar to our result presented here.
We then tested the level of space weathering against planetary encounters, using the minimum orbit intersection distance (MOID) as an indicator of the proximity to the planets (following, e.g., Binzel et al., 2010). The Q/S ratio is shown as a function of MOID for the Earth, Venus, and Mars in Figure 17. While there is a trend of increasing fractions of Q-type asteroids toward smaller MOIDs, it occurs at distances too large to be due to planetary encounters and apparently is the result of the correlation with
Figure 14: Spectral slope of S types as a function of asteroid diameter (gray points), with the weighted average in logarithmic size bins shown by red points. Weights were estimated from the color uncertainties.
Figure 15: Spectral slope against perihelion for S types. Red dots and the shaded area are the running average and deviation, and blue lines are linear regressions on the running average. Although the entire sample presents a large spread, the running average shows two kinks.
the perihelion distance (Carry et al., 2016; Graves et al., 2019). For the Earth, the ratio counterintuitively even drops for MOIDs below the lunar distance (a similar situation occurs for Mars). We note that here we use the current MOID of each NEO, while Binzel et al. (2010) argued in favor of probing the dynamical history of individual objects (which is beyond the scope of the present analysis).
We finally tested the ratio of Q to S types against the orbital inclination. The ratio is overall flat, with a shallow peak around 15\({}^{\circ}\) and an increase above 30\({}^{\circ}\). The slightly decreasing fraction of Q asteroids in the inclination range of 15-35\({}^{\circ}\) corresponds to the inclination range of the Hungarias and Phocaeas. The maximum Q/S ratio at 5\({}^{\circ}\) reported by DeMeo et al. (2023) on 477 S types is three times larger than that of our sample. This disparity may be attributed to differences in the asteroid samples and variations in the techniques employed to distinguish between Q- and S-type asteroids.
The present sample contains 1,371 S- and Q-type NEOs, a factor of 2-3 larger than the previous studies. We confirm the trend of decreasing spectral slope (increasing fraction of Q-types) toward smaller diameters. We did not detect a clear signature against orbital inclination. There is a clear increase in the fraction of Q types with smaller perihelion (also visible in the decrease in spectral slope), pointing to a strong effect of thermal fatigue in refreshing asteroid surfaces.
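The Q/S statistics discussed in this section can be reproduced with a short routine such as the one below; the data frame layout and column names are hypothetical placeholders.

```python
import numpy as np
import pandas as pd

def q_over_s_ratio(df, by="perihelion", bins=10):
    """Binned number ratio of Q- to S-type asteroids with Poisson uncertainties.

    `df` is assumed to contain a `taxo` column ('Q' or 'S') and the quantity
    to bin over (perihelion, inclination, diameter, ...).
    """
    sub = df[df["taxo"].isin(["Q", "S"])].copy()
    sub["bin"] = pd.cut(sub[by], bins=bins)
    rows = []
    for interval, grp in sub.groupby("bin", observed=True):
        n_q = int((grp["taxo"] == "Q").sum())
        n_s = int((grp["taxo"] == "S").sum())
        if n_q == 0 or n_s == 0:
            continue
        ratio = n_q / n_s
        err = ratio * np.sqrt(1.0 / n_q + 1.0 / n_s)  # Poisson counts propagated
        rows.append({"center": interval.mid, "ratio": ratio, "err": err})
    return pd.DataFrame(rows)
```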
### Distribution of A types
A types are a rare class of asteroids in the Main Belt. Their spectra exhibit a broad and deep absorption band around 1 \(\mu m\), indicating an olivine-rich composition (e.g., Rivkin et al., 2007). They have been thought to originate from the mantle of differentiated planetesimals (Cruikshank & Hartmann, 1984), leading to the "missing mantle issue" (Burbine et al., 1996). The origin of A types is still debated (Sanchez et al., 2014; DeMeo et al., 2019), although the study of
Figure 16: Running mean of the ratio between the number of Q and S asteroids as a function of perihelion, inclination, and diameter. Shaded areas correspond to the uncertainties considering Poisson statistics for the Q/S ratio.
Figure 17: Running mean of the ratio between the number of Q and S asteroids as a function of their MOID with the Earth, Venus, and Mars.
Figure 18: Relative distribution of A-types along the semi-major axis.
Mars Trojans indicates that certain A-type asteroids could be fragments that were ejected from Mars (Polishook et al., 2017; Christou et al., 2021).
The fraction of A-type asteroids by number in the entire Main Belt is estimated at about 0.16%, and they are believed to be homogeneously distributed (DeMeo et al., 2019). We report here a fraction of \(2.5\pm 0.2\)% A types among NEOs (focusing on classifications with a probability higher than 0.5). This much higher fraction of A types has already been reported by both the MANOS and NEOSHIELD-2 surveys (Fig. 11, from 1.7 to 5.5%, Popescu et al., 2018; Devogele et al., 2019). While a classification based on visible wavelengths only may overestimate the fraction of A types (misclassified from red S types owing to space weathering or observations at high phase angles, Sanchez et al., 2012), the fraction of A types among NEOs appears to be an order of magnitude higher than in the Main Belt. Finally, Devogele et al. (2019) reported a concentration of A types with a semi major axis close to that of Mars (1.5 AU).
We present in Figure 18 the fraction of A- and S-type NEOs as a function of semi major axis. While S types are evenly distributed, A types are concentrated between the orbit of Mars and the 4:1 resonance with Jupiter, similar to the report by Devogele et al. (2019). Most A-type NEOs seem to be related to the Hungarias. In this region, the fraction of A-type asteroids increases to up to 4%. Thus, while the majority of the Hungarias are C and E types (DeMeo and Carry, 2014; Lucas et al., 2019), approximately 3% of asteroids in this region are A types.
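The class fractions quoted above (e.g., \(2.5\pm 0.2\)% of A types) follow from simple counting statistics; a minimal sketch with hypothetical numbers is given below.

```python
import numpy as np

def class_fraction(labels, target="A"):
    """Fraction of one taxonomic class in a sample, with a Poisson uncertainty
    (sqrt(N_target) / N_total) appropriate for counting statistics."""
    labels = np.asarray(labels)
    n_target = int(np.count_nonzero(labels == target))
    n_total = labels.size
    return n_target / n_total, np.sqrt(n_target) / n_total

# Hypothetical example: 75 A types out of 3,000 classified NEOs
frac, err = class_fraction(["A"] * 75 + ["S"] * 2925)
print(f"{100 * frac:.1f} +/- {100 * err:.1f} %")  # 2.5 +/- 0.3 %
```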
### The dependence of asteroid colors on phase angle
The color of an asteroid is determined by the light it reflects, which is influenced by the composition of its surface material. However, the observed color of an asteroid can also change with the phase angle, which is the angle between the observer (usually Earth), the asteroid, and the Sun (Belskaya and Shevchenko, 2000; Waszczak et al., 2015). This change in color with phase angle is likely due to the way light scatters off the asteroid's surface. At higher phase angles, the light we see is more likely to have been scattered multiple times within the asteroid's surface before being reflected back to us. This multiple scattering can cause a redder object to appear bluer and vice versa, although this effect is only noticeable for phase angles of less than 7.5 degrees (Alvarez-Candal et al., 2022). Considering the change in asteroid color with phase angle can be important for accurate taxonomy classification using color analysis techniques (Colazo et al., 2022).
However, the exact mechanisms behind this color change with phase angle are still not fully understood and are an active area of research. The shape of the asteroid, its rotational state, and the macroscopic roughness of its surface can also influence the observed color and its change with phase angle (Carvano and Davalos, 2015).
To investigate the impact of the phase effect on asteroid colors, we compared both the SDSS and SkyMapper data sets with the absolute magnitude colors from the study of Alvarez-Candal et al. (2022). The histograms of the \(g\)-\(i\) difference between the two data sets are shown in Figure 19.
We also selected asteroids that were observed at phase angles greater than 20 degrees and with a phase difference of more than 5 degrees between observations. Subsequently, we determined the slope of each asteroid's \(g\)-\(i\) color as a function of phase angle. We found that the color slope varies randomly and that its amplitude is comparable to the color uncertainties.
To investigate the trends among "red" and "blue" asteroids, we subdivided the asteroid data set into two groups based on their \(g\)-\(i\) colors: a red group indicative of silicate asteroids, and a blue group representative of carbonaceous asteroids. With this analysis, we did not detect any trend toward reddening or blueing within these subsets. The random behavior of the asteroid color slope indicates that more significant factors, such as the shape of the asteroid and uncertainties in the photometry, may have a greater influence on the observed color and consequently on asteroid taxonomy.
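The per-object color-phase slope mentioned above can be estimated with a weighted linear fit; the snippet below is a sketch assuming at least a handful of observations per asteroid.

```python
import numpy as np

def color_phase_slope(phase_deg, g_i, g_i_err):
    """Weighted linear fit of the g-i color against phase angle for one object.

    Returns the slope (mag/deg) and its formal uncertainty; the observations
    are assumed to span several degrees in phase, as in the selection above.
    """
    phase = np.asarray(phase_deg, dtype=float)
    color = np.asarray(g_i, dtype=float)
    weights = 1.0 / np.asarray(g_i_err, dtype=float)  # np.polyfit expects 1/sigma
    coeffs, cov = np.polyfit(phase, color, 1, w=weights, cov=True)
    return coeffs[0], float(np.sqrt(cov[0, 0]))
```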
Given that the phase effect could significantly alter the colors of asteroids only at large phase angles, and considering that our sample does not include NEOs observed at phase angles exceeding 40 degrees, we conclude that we cannot precisely predict and then correct the phase effect. Therefore, we did not take the phase effect into account in the color analysis of the NEOs data set.
### Source regions
Investigating the orbital and size characteristics, as well as the origin of NEOs, is a crucial area of research in planetary sciences (Binzel et al., 2015; Abell et al., 2015). The dynamical pathway from the source regions to the planet-crossing space is a crucial foundation for studying both in
Figure 19: Distribution of the difference between the SDSS \(g\)-\(i\) asteroid colors and the absolute magnitude (H) colors from Alvarez-Candal et al. (2022) as a function of phase angle.
dividual NEAs and broader population-level questions. Understanding these distributions gives a holistic understanding of the dynamics, origins, and potential risks associated with NEAs.
To deduce the probable origins of NEAs, we relied on what is known of their orbital properties in conjunction with the previously simulated probabilities of escape from seven source regions computed by Granvik et al. (2018). We assigned each asteroid to its most probable region of origin by employing a three-dimensional grid of orbital elements with the absolute magnitude as a fourth parameter. The grid includes the semi major axis, a, eccentricity, e, and inclination, i, following the calculations previously detailed by Granvik et al. (2018). The orbital elements of these celestial bodies were obtained from the Minor Planet Center (MPC) database.
Footnote *: \(\nu_{6}\) secular resonance; 2:1, 3:1, and 5:2 mean-motion resonances (MMR) with Jupiter; high-inclination Phocaeas and Hungarias; and Jupiter family comets (JFC).
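A sketch of the assignment step described above is given below; the file name, column layout, and region labels of the pre-computed Granvik et al. (2018) probabilities are placeholders for whatever format the model grid is stored in.

```python
import numpy as np
import pandas as pd

# Hypothetical layout: one row per (a, e, i, H) grid cell with the escape
# probabilities of the seven source regions stored as columns.
REGIONS = ["nu6", "3:1", "5:2", "2:1", "Hungaria", "Phocaea", "JFC"]
grid = pd.read_csv("granvik2018_source_probabilities.csv")  # placeholder file

def most_probable_region(a, e, inc, h_mag):
    """Assign an NEO to the source region of the nearest model grid cell."""
    # Normalise each dimension by its grid step so that the nearest-cell
    # search is not dominated by a single orbital element.
    steps = np.array([np.median(np.diff(np.unique(grid[c]))) for c in ("a", "e", "i", "H")])
    cells = grid[["a", "e", "i", "H"]].to_numpy()
    target = np.array([a, e, inc, h_mag])
    idx = int(np.argmin(np.sum(((cells - target) / steps) ** 2, axis=1)))
    probs = grid.iloc[idx][REGIONS]
    return probs.idxmax(), float(probs.max())
```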
The most abundant source of NEOs is the \(\nu_{6}\), which limits the inner border of the Main Belt. We predict it to be dominated by mafic-silicate-rich asteroids (S, Q, V; see Figure 20). The distribution of taxonomic classes is almost similar for the other source regions in the inner belt: the 3:1 MMR limiting the inner and middle belt, and the Phocaea and Hungaria regions. The fraction of mafic-silicate-rich asteroids decreases for source regions located farther from the Sun (5:2 and 2:1 MMR, JFC). These are dominated by opaque-rich asteroids (B, C, D; see Figure 20). Despite the observation biases (mainly related to albedo) and the relatively low number of NEOs predicted to originate from the outer regions, our results are in close agreement with Marsset et al. (2022), in line with the current understanding of the taxonomic distribution (DeMeo and Carry, 2014), but in a smaller size range.
## 6 Conclusions
We assembled a large sample of colors of planet-crossing asteroids, combining broadband photometry from the SDSS and SkyMapper surveys with reflectance spectroscopy from the ESA Gaia mission and ground-based observations. We determined the taxonomy of 7,401 NEOs, with diameters from approximately 10 km to 50 m. The sample is dominated by S-type asteroids (approximately 45%), as in other NEO surveys. However, it is notable that the proportion of S types is overestimated owing to observational bias. We also report a much higher (up to 4%) fraction of A types among NEOs as compared to the Main Belt. These A types are concentrated at semi major axes between 1.5 and 2 AU. We confirm a strong dependence of the spectral slope of S types on perihelion, based on a sample of over one thousand objects. The distribution of slopes is consistent with the recently proposed rejuvenation model through thermal fatigue.
###### Acknowledgements.
**This research has been conducted within the NEOROCKS project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 870403. The NEOROCKS team is composed by E. Dotto, M. Banaszkiewicz, S. Banchi, M.A. Barucci, F. Bernardi, M. Birlan, A. Cellino, J. De Leon, M. Lazzarin, E. Mazzotta Epifani, A. Medievilla, D. Perna, E. Perozzi, P. Prave, C. Siodras, C. Teodorescu, S. Anghel, A. Bertolucci, F. Calderini, F. Colas, A. Del Vigna, A. Dell'Oro, A. Di Cecco, L. Dimare, I. Di Pietro, P. Fatka, S. Fornasier, E. Fratting, P. Frosini, M. Fulchulignoni, R. Gabryszewski, M. Giardino, A. Giunta, T. Homakina, J. Huntingford, S. Ieva, J.P. Kotlarz, F. La Forgia, J. Licandro, H. Medeiros, F. Merlin, J. Nomen Torres, V. Petropoulou, F. Pina, G. Polenta, M. Popescu, A. Rozek, P. Scheirich, A. Sonka, G.B. Valsecchi, P. Wajer, A. Zizni. This research has made use of the SVO Filter Profile Service supported from the Spanish MINECO through grant AYA2017-84089 (Rodrigo et al.****,** 2012; Rodrigo and Solano****,** 2020). We did an extensive use of the Virtual Observatory (VO) TOPCAT software (Taylor, 2005), and IMCCE's VO tools SkyBot (Berthier et al.****,** 2006) and SBDNet (Berthier et al.****,** 2022). This work made use of Astropy:13 a
Footnote 13: [http://www.astropy.org](http://www.astropy.org)
Figure 20: Taxonomic distribution of NEAs as per the seven-region model previously calculated by Granvik et al. (2018).
community-developed core Python package and an ecosystem of tools and resources for astronomy (Astropy Collaboration et al., 2013, 2018, 2022). This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources (Bradley et al., 2020). Thanks to all the developers and maintainers.
This work has made use of data from the European Space Agency (ESA) mission _Gaia_11, processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, 12). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is [http://www.sdss3.org](http://www.sdss3.org). SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
Footnote 13: [https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)
Footnote 14: [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)
Footnote 15: [http://www.sdss3.org](http://www.sdss3.org)
## Appendix A Conversion of reflectance to color
A common strategy used to improve the detection of compositionally similar dependences in data analysis is to reduce the data's dimensionality. This process involves simplifying the data without losing critical information. When working with reflectance spectra that cover the same wavelength range, they can be transformed into colors, under the condition that the data encompass similarly broad wavelength ranges. This conversion facilitates better visualization and comparison of our data.
The transformation procedure involves several steps. Initially, each reflectance spectrum was multiplied by a solar spectrum, taken from Bohlin et al. (2014), and by the filter transmission curves. The solar spectrum was used because it is the light from the Sun that is being reflected off the surfaces of the asteroids, and the filter transmission curves mimic the response of the instrument observing this reflected light. Once these multiplications were completed, we integrated the product over the wavelength range to produce a single value that characterizes the source photometry in each filter. This process is detailed in Chap. 7 of IMCCE (2021). The color index is then derived from the logarithm of the ratio of the photometry obtained in two filters, which provides a measure of the object's color.
We have two types of reflectance spectra under consideration. The Gaia reflectance spectra consist of 16 data points, each a measurement of how much light an asteroid reflects at specific wavelengths. These values span from 374 nm to 1034 nm, increasing by 44 nm increments. The Classy reflectance spectra are more extensive, comprising 53 tabulated values ranging from 0.45 \(\mu m\) to 2.45 \(\mu m\). The intervals between these values are 0.025 \(\mu m\) up to 1.025 \(\mu m\), and increase to 0.05 \(\mu m\) beyond this point.
We note that not all data from the Gaia spectra are reliable. Specifically, the first two values (which represent blue light) and the last two values (representing red light) can sometimes be inaccurate or spurious. Although these suspect values are often flagged in the Gaia DR3 catalog (Galluccio et al., 2022), this is not always the case. To address this problem, we discarded these unreliable values and replaced them with extrapolated values. This extrapolation was based on the trend observed in the three nearest (and more reliable) reflectance values, as illustrated in Figure A.1.
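A minimal sketch of this extrapolation step (assuming the wavelengths and reflectances are stored as plain arrays) is:

```python
import numpy as np

def replace_edge_values(wave, refl, n_bad=2, n_fit=3):
    """Replace the first and last `n_bad` reflectance points by a linear
    extrapolation of the `n_fit` nearest reliable points."""
    wave = np.asarray(wave, dtype=float)
    refl = np.array(refl, dtype=float)
    # Blue edge
    coeff = np.polyfit(wave[n_bad:n_bad + n_fit], refl[n_bad:n_bad + n_fit], 1)
    refl[:n_bad] = np.polyval(coeff, wave[:n_bad])
    # Red edge
    coeff = np.polyfit(wave[-(n_bad + n_fit):-n_bad], refl[-(n_bad + n_fit):-n_bad], 1)
    refl[-n_bad:] = np.polyval(coeff, wave[-n_bad:])
    return refl
```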
Having carried out these preliminary steps, we proceeded to convert the refined reflectance spectra into standard color indexes used in astronomy, namely _g-r_, _g-i_, _r-i_, and _i-z_ colors. This transformation is undertaken within the photometric system of the SDSS, a major astronomical survey that has provided extensive data on the night sky. To ensure accuracy in this conversion, we retrieved the transmission curves of the SDSS filters from the SVO filter profile service5. This service contains a variety of transmission curves from a multitude of observatories and astronomical instruments (Rodrigo et al., 2012; Rodrigo and Solano, 2020).
Footnote 5: [http://svo2.cab.inta-csic.es/theory/fps/index.php?mode=voservice](http://svo2.cab.inta-csic.es/theory/fps/index.php?mode=voservice)
Finally, we calculated the uncertainties associated with these color values. These uncertainties provide a measure of the potential error or variability in our color measurements. They are calculated as half the difference between the colors computed from the reflectance plus its uncertainty and from the reflectance minus its uncertainty. This gives us a measure of the range within which the true color value is likely to lie, thereby providing a more comprehensive understanding of the asteroids' color data.
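The whole conversion, including the uncertainty scheme just described, can be sketched as follows; all spectra are assumed to be resampled onto a common wavelength grid and stored as NumPy arrays, and the photon-weighting of the integrand is omitted for brevity. The zero-point term of the photometric system cancels because the same integrals are computed for the Sun (reflectance of 1) and the known solar color in that system is added back.

```python
import numpy as np

def _band_integral(wave, flux, trans):
    """Trapezoidal integral of flux * filter transmission over wavelength."""
    y = flux * trans
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wave)))

def color_index(wave, refl, sun_flux, trans1, trans2, solar_color):
    """Color m1 - m2 of an asteroid from its reflectance spectrum."""
    ratio_ast = (_band_integral(wave, refl * sun_flux, trans1)
                 / _band_integral(wave, refl * sun_flux, trans2))
    ratio_sun = (_band_integral(wave, sun_flux, trans1)
                 / _band_integral(wave, sun_flux, trans2))
    return solar_color - 2.5 * np.log10(ratio_ast / ratio_sun)

def color_uncertainty(wave, refl, refl_err, sun_flux, trans1, trans2, solar_color):
    """Half the difference between the colors from reflectance +/- uncertainty."""
    hi = color_index(wave, refl + refl_err, sun_flux, trans1, trans2, solar_color)
    lo = color_index(wave, refl - refl_err, sun_flux, trans1, trans2, solar_color)
    return 0.5 * abs(hi - lo)
```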
## Appendix B Photometry of fast-moving targets
The apparent motion of Main Belt asteroids is typically about 40\(\,\arcsec\)/h. The length of the streak during the exposure (54 s) is thus comparable with the typical seeing of SDSS images. However, NEOs have significantly faster apparent motion, up to hundreds of arcseconds per hour, leading to trailed signatures (Figure B.1; Solano et al., 2014).
Through visual inspection of random NEOs images from the SDSS database, and checking their photometry from the SDSS pipeline (as reported by Sergeyev and Carry, 2021), we found that fast-moving NEOs sometimes have incorrect photometry. This is likely because they were not recognized as a single object in the different filters by the SDSS pipeline. Furthermore, the SDSS PSF photometry of elongated NEO tracks is biased. We finally identified a few cases of erroneous estimate of the zero-point in individual SDSS Flexible Image Transport System (FITS) frames. We note that SDSS magnitudes are expressed as inverse hyperbolic sines ("asinh" magnitudes, Lupton et al., 1999). They are virtually identical to the usual Pogson astronomical magnitude in the high signal-to-noise ratio (S/N) regime, but can diverge for faint objects such as NEOs.
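For reference, the relation between asinh and classical Pogson magnitudes can be written in a few lines; the softening parameter \(b\) is filter dependent and is left as an input here rather than hard-coded.

```python
import numpy as np

def asinh_mag(flux_ratio, b):
    """SDSS asinh magnitude (Lupton et al. 1999) for a flux expressed in units
    of the zero-point flux; `b` is the filter-dependent softening parameter."""
    return -2.5 / np.log(10.0) * (np.arcsinh(flux_ratio / (2.0 * b)) + np.log(b))

def pogson_mag(flux_ratio):
    return -2.5 * np.log10(flux_ratio)

# The two scales agree at high S/N and diverge for faint sources.
for f in (1e-7, 1e-9, 1e-10):
    print(f, round(asinh_mag(f, b=1e-10), 3), round(pogson_mag(f), 3))
```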
We overcome these issues by remeasuring the photometry of NEOs moving faster than 80\(\,\arcsec\)/h on SDSS images. We selected 470 NEOs with either an expected S/N above 10 in the \(z\) filter or multiple measurements. We used these criteria
Figure A.1: Examples of Gaia reflectance spectra of the asteroid (3768117) 2001 AT43 (top) and Classy reflectance of the asteroid (4688) 1980 WF (bottom). Red points indicate outliers (see Galluccio et al., 2022), while gray points indicate extrapolated values (see text). We overplot the transmission curves of the SDSS \(g\), \(r\), \(i\), and \(z\) filters to show the wavelength range covered by the reflectance spectra.
to ensure meaningful colors for taxonomy: typical color differences between classes are on the order of 0.1 mag (DeMeo and Carry, 2013), and the \(z\) filter is crucial for probing the presence of an absorption band around 1 \(\mu\)m (Carry et al., 2016), which has been one of the major discriminants in all taxonomies for the past half a century (Chapman et al., 1975).
For this task, we developed a Python software package using the astropy (Astropy Collaboration et al., 2013, 2018, 2022), photutils (Bradley et al., 2020), astroquery (Ginsburg et al., 2019), and sep (the core algorithms of SExtractor, Bertin and Arnouts, 1996; Barbary, 2016) packages. The procedure to measure the photometry encompassed the following steps.
First, we estimated the zero-point value of each SDSS frame. We identified non-saturated bright stars and measured their instrumental magnitude with aperture photometry. We then derived the slope and zero-point of individual frames by comparing these values with the photometry from the SDSS PhotoPrimary catalog (York et al., 2000), which contains only stationary sources.
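A minimal sketch of this step is given below; the iterative sigma clipping is an assumption of the sketch, not necessarily the exact rejection scheme used in our pipeline.

```python
import numpy as np

def frame_zero_point(inst_mag, cat_mag, clip_sigma=3.0, n_iter=3):
    """Slope and zero-point of one frame from bright, non-saturated stars.

    `inst_mag` are instrumental aperture magnitudes and `cat_mag` the matched
    PhotoPrimary magnitudes; outliers (blends, variables, mismatches) are
    rejected iteratively.
    """
    inst = np.asarray(inst_mag, dtype=float)
    cat = np.asarray(cat_mag, dtype=float)
    keep = np.ones(inst.size, dtype=bool)
    for _ in range(n_iter):
        slope, zero_point = np.polyfit(inst[keep], cat[keep], 1)
        resid = cat - (slope * inst + zero_point)
        keep = np.abs(resid - np.median(resid[keep])) < clip_sigma * np.std(resid[keep])
    return slope, zero_point
```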
Using the sep package, we identified all sources in cut-out images centered on the predicted location of the asteroid. The SDSS images in different filters were obtained sequentially, with a delay of 17.7 s between each of the 54 s exposures. The position of the asteroid in the cut-out image hence changes in each filter, with the largest shift occurring between filters \(g\) and \(r\). Therefore, we identified the NEOs in these two filters using SkyBoT (Berthier et al., 2006, 2016), since they provide the best S/N and bracket the other observations. We then predicted the NEO positions in the other filters based on these determinations. We next checked the images visually to select only those NEOs not blended with stars. Whenever a NEO was observed at multiple epochs, we co-added the asteroid-centered cut-out images to increase the asteroid S/N prior to measuring its photometry.
We finally measured the magnitude of each NEO in each filter using an elliptical aperture to account for the PSF elongation due to the fast motion (Figure 2). We illustrate the improvement in the photometry in Figure 3. These updated magnitudes are the ones used in the creation of the NEOROCKS data set.
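A sketch of the aperture measurement, assuming a recent photutils version and an elongated aperture derived from the predicted motion, is shown below; all numerical inputs are placeholders.

```python
import numpy as np
from photutils.aperture import (ApertureStats, CircularAnnulus,
                                EllipticalAperture, aperture_photometry)

def neo_elliptical_photometry(image, x, y, a, b, theta, r_in, r_out, zero_point):
    """Elliptical aperture photometry of a trailed NEO on one frame.

    `a`, `b`, and `theta` describe the elongated aperture (trail length and
    orientation), while the circular annulus provides the local sky level.
    """
    aperture = EllipticalAperture((x, y), a=a, b=b, theta=theta)
    annulus = CircularAnnulus((x, y), r_in=r_in, r_out=r_out)
    sky_per_pixel = ApertureStats(image, annulus).median
    total = aperture_photometry(image, aperture)["aperture_sum"][0]
    flux = total - sky_per_pixel * np.pi * a * b  # subtract sky over the aperture area
    return zero_point - 2.5 * np.log10(flux)
```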
## Appendix C Estimation of color uncertainties
In order to select the optimal color value among multiple catalogs, we had to take the color uncertainties into account. Nevertheless, there may be situations where the reported photometric errors, calculated via diverse methodologies, do not align. For instance, such discrepancies can arise when uncertainties are quantified as either standard deviations or standard errors, particularly when these uncertainties do not follow a normal distribution.
The availability of color estimates for the same asteroids in the different catalogs allowed us to compare the difference in color distribution with photometric uncertainties. Color indexes, such as the \(g\)-\(r\) index, represent the difference in magnitude (brightness) between two different wavelength bands for a given object. Uncertainties in these indices can be calculated from the uncertainties in the photometric measurements for each band.
For example, in the g-r color index, the uncertainty can be calculated from the errors in the \(g\) and \(r\) magnitudes. For two different catalogs, we could represent these calculations as follows:
\(gr1_{err}=\sqrt{g1_{err}^{2}+r1_{err}^{2}}\)
\(gr2_{err}=\sqrt{g2_{err}^{2}+r2_{err}^{2}}\).
Here, \(g1_{err}\) and \(r1_{err}\) are the uncertainties of the \(g\) and \(r\) photometry from the first catalog, and \(g2_{err}\) and \(r2_{err}\) are the uncertainties from the second catalog. If we assume that the color of an asteroid does not change over time, we can calculate the expected scatter of the difference in the color indices measured in two different catalogs using the previously computed uncertainties:
\(\Delta(gr1-gr2)=\sqrt{gr1_{err}^{2}+gr2_{err}^{2}}\),
where \(\Delta(gr1-gr2)\) is the difference in the g-r color index between the two catalogs and \(gr1_{err}\) and \(gr2_{err}\) are the uncertainties of this color index in the first and second catalogs, respectively.
Estimating the uncertainty of stellar objects is a complex task. While internal errors could provide a reasonable uncertainty estimate, systematic errors may distort these results. It is important to keep in mind that published uncertainties may potentially contain distortions that have not been accounted for. If we consider that the published uncertainties might not be accurate, and the true uncer
Figure 1: Examples of fast-moving NEOs in SDSS images. The color images are a combination of FITS images in \(g\) (green), \(r\) (red), and \(i\) (blue) filters.
Figure 2: Photometry of 2006 UA on SDSS images, illustrating the elliptical aperture. The inner ellipse shows the region in which photons are counted. The two outer circles show the annulus used to estimate the sky background.
tainties are \(gr1_{err}*k1\) and \(gr2_{err}*k2\), where \(k1\) and \(k2\) are unknown factors, in this case, the difference in the color indices can be calculated as
\(\Delta(gr1-gr2)=\sqrt{gr1_{err}^{2}*k1^{2}+gr2_{err}^{2}*k2^{2}}\).
In instances where there are more than two catalogs at our disposal, we can calculate the color difference between each pair of catalogs. For example, if we have three catalogs, we can formulate the following:
\(\Delta(gr1-gr2)=\sqrt{gr1_{err}^{2}*k1^{2}+gr2_{err}^{2}*k2^{2}}\)
\(\Delta(gr1-gr3)=\sqrt{gr1_{err}^{2}*k1^{2}+gr3_{err}^{2}*k3^{2}}\)
\(\Delta(gr2-gr3)=\sqrt{gr2_{err}^{2}*k2^{2}+gr3_{err}^{2}*k3^{2}}\).
This formulation provides us with a system of three equations featuring three unknown variables (\(k1\), \(k2\), and \(k3\)). These equations can be resolved in order to estimate the authentic uncertainties inherent to each catalog.
In situations involving four catalogs (for instance, SDSS, SMSS, Gaia, and Classy, in our example), we can compute the color differences between every pair, resulting in a system of six equations with four unknowns. This system is generally resolved using a least squares method. The solutions derived from this system would produce the estimated authentic uncertainties associated with each catalog.
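A sketch of this least-squares step is given below; the statistic used to summarize each cross-match sample (the variance of the color differences and the mean squared declared uncertainties) is an assumption of the sketch.

```python
import numpy as np

def uncertainty_scaling_factors(pair_stats, n_catalogs=4):
    """Least-squares estimate of the per-catalog uncertainty scaling factors k.

    `pair_stats` is a list of tuples (i, j, var_diff, mean_var_i, mean_var_j):
    for each catalog pair, the measured variance of the color differences of
    common asteroids and the mean squared declared color uncertainty of each
    catalog in that sample. We solve var_diff = k_i^2 * mean_var_i +
    k_j^2 * mean_var_j for the unknowns k_i^2.
    """
    design = np.zeros((len(pair_stats), n_catalogs))
    target = np.zeros(len(pair_stats))
    for row, (i, j, var_diff, mean_var_i, mean_var_j) in enumerate(pair_stats):
        design[row, i] = mean_var_i
        design[row, j] = mean_var_j
        target[row] = var_diff
    k_squared, *_ = np.linalg.lstsq(design, target, rcond=None)
    return np.sqrt(np.clip(k_squared, 0.0, None))
```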
We extracted the common asteroids from each of our four catalogs and obtained three cross-match samples for each of them. For example, for the SDSS catalog, we obtained SMSS, Gaia, and Classy cross-match samples that contain 54,283, 27,158, and 1,807 common asteroids, respectively. Cumulative distributions of color errors for the four colors are presented in Figure 11, where we can see the typical photometric error distribution of the SDSS and SkyMapper data, which are magnitude limited. While the Gaia errors have a uniform distribution because they have no dependence on the asteroid magnitude, the Classy data have no information about their errors, and we therefore generated random uniform errors in the range from 0 to 0.1 magnitudes.
The variation between the three distributions of the same catalog errors shows a different composition of the common samples. The correction coefficients of color uncertainties for each catalog, calculated using the least squares method, are presented in Table 11.
We subsequently calculated the cumulative distribution of color differences between asteroids found in varying catalogs. In Figure 12, we depict the declared cumulative error distribution of each catalog. It is observable that the distribution of the declared SMSS color uncertainties is overestimated compared with the computed distribution, especially within the SMSS\(\cap\)SDSS sample. Conversely, the Gaia uncertainties seem to be underestimated, possibly owing to the manner in which we computed the uncertainties during the derivation of the color.
## Appendix D Virkki spectrum
We present a near-infrared spectrum of (10278) Virkki in Figure 13. This spectrum was collected with the 3-m IRTF located on Maunakea, Hawaii, on October 14, 2020, through
Figure 11: Cumulative distributions of the difference of color for asteroids in SDSS and the rest of the catalogs (blue), as well as their color uncertainties obtained from photometry (orange).
\begin{table}
\begin{tabular}{l c c c c} Color & SDSS & SkyMapper & Gaia & Classy \\ \hline \(g-r\) & 0.779 & 2.594 & 0.539 & 0.629 \\ \(g-i\) & 0.716 & 2.695 & 0.542 & 0.906 \\ \(i-z\) & 0.321 & 2.240 & 0.513 & 0.367 \\ \(r-i\) & 0.822 & 1.633 & 0.361 & 0.285 \\ \hline \end{tabular}
\end{table}
Table 11: Correction factors for catalog color uncertainty estimates.
Figure 12: Colors of (277958) 2006 SP134 from individual SDSS catalog values (in blue) and from our elliptical photometry (orange). The color boxes represent the limits of taxonomic classes.
the MITHNEOS program (Binzel et al., 2019, PI: DeMeo). We used the SpeX NIR spectrograph (Rayner et al., 2003) combined with a 0.8x15\({}^{\prime\prime}\) slit in the low-resolution prism mode to measure the spectra over the 0.7-2.5 \(\mu\)m wavelength range. Asteroid observations were bracketed with measurements of the following calibration stars, which are known to be very close spectral analogs to the Sun: Hyades 64 and Landolt (1983) stars 93-101 and 113-276. In-depth analysis of these calibration stars and additional stars used in MITHNEOS is provided in Marsset et al. (2020). Data reduction and spectral extraction followed the procedure outlined in Binzel et al. (2019), with the Autospex software tool (Rivkin et al., 2005).
These steps included trimming the images, creating a bad pixel map, flat-fielding the images, sky subtraction, tracing the spectra in both the wavelength and spatial dimensions, co-adding the spectral images, extracting the spectra, performing wavelength calibration, and correcting for air-mass differences between the asteroids and the corresponding solar analogs. The resulting asteroid spectra were divided by the mean stellar spectra to remove the solar gradient.
Finally, we present the Gaia spectra for two candidates from the short list of seven candidates still considered for a flyby by Hera in Figure 13. We also present the SDSS/SkyMapper colors of four other candidates.
## Appendix E Catalog description
We describe here the catalog of the NEAs we have released. The catalog contains four colors (\(g\)-\(r\), \(g\)-\(i\), \(r\)-\(i\), and \(i\)-\(z\)), osculating elements, the most probable taxonomy, and the source region for each asteroid. The catalog presented here is available at the CDS via anonymous ftp to [http://cdsarc.u-strasbg.fr/](http://cdsarc.u-strasbg.fr/) or via [http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/xxx/xxx](http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/xxx/xxx)
Footnote 12: [http://cdsarc.u-strasbg.fr/](http://cdsarc.u-strasbg.fr/)
Footnote 13: [http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/xxx/xxx](http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/xxx/xxx)
|
2309.07733 | Explaining Speech Classification Models via Word-Level Audio Segments
and Paralinguistic Features | Recent advances in eXplainable AI (XAI) have provided new insights into how
models for vision, language, and tabular data operate. However, few approaches
exist for understanding speech models. Existing work focuses on a few spoken
language understanding (SLU) tasks, and explanations are difficult to interpret
for most users. We introduce a new approach to explain speech classification
models. We generate easy-to-interpret explanations via input perturbation on
two information levels. 1) Word-level explanations reveal how each word-related
audio segment impacts the outcome. 2) Paralinguistic features (e.g., prosody
and background noise) answer the counterfactual: ``What would the model
prediction be if we edited the audio signal in this way?'' We validate our
approach by explaining two state-of-the-art SLU models on two speech
classification tasks in English and Italian. Our findings demonstrate that the
explanations are faithful to the model's inner workings and plausible to
humans. Our method and findings pave the way for future research on
interpreting speech models. | Eliana Pastor, Alkis Koudounas, Giuseppe Attanasio, Dirk Hovy, Elena Baralis | 2023-09-14T14:12:34Z | http://arxiv.org/abs/2309.07733v1 | # Explaining Speech Classification Models via Word-Level
###### Abstract
Recent advances in eXplainable AI (XAI) have provided new insights into how models for vision, language, and tabular data operate. However, few approaches exist for understanding speech models. Existing work focuses on a few spoken language understanding (SLU) tasks, and explanations are difficult to interpret for most users. We introduce a new approach to explain speech classification models. We generate easy-to-interpret explanations via input perturbation on two information levels. 1) Word-level explanations reveal how each word-related audio segment impacts the outcome. 2) Paralinguistic features (e.g., prosody and background noise) answer the counterfactual: "What would the model prediction be if we edited the audio signal in this way?" We validate our approach by explaining two state-of-the-art SLU models on two speech classification tasks in English and Italian. Our findings demonstrate that the explanations are faithful to the model's inner workings and plausible to humans. Our method and findings pave the way for future research on interpreting speech models.
_Note: This preprint documents our approach and preliminary results. We are working on expanding the evaluations and discussions._
## 1 Introduction
Recently, several eXplainable AI (XAI) techniques have been proposed to gain insights into how models get to their outputs. Seminal work in computer vision used gradients (Simonyan et al., 2013; Sundararajan et al., 2017; Selvaraju et al., 2022, _inter alia_) or input perturbation (Zeiler and Fergus, 2013) to build input saliency maps, i.e., visual artifacts to highlight the most relevant parts for the prediction. Similar solutions have also been proposed to explain language (Ribeiro et al., 2016; Sanyal and Ren, 2021; Jacovi et al., 2021, _inter alia_) and tabular (Lundberg and Lee, 2017) models.
And while there is significant progress in explaining model predictions for image, text, and structured data models, explanations for Spoken Language Understanding (SLU) models remain largely unexplored. Speech data consists not only of explicit content conveyed by discrete words but also of acoustic features, linguistic variations, and paralinguistic cues, making it more complex to decipher each element's contribution to the model predictions. Existing approaches use frequency features, e.g., spectrogram segments (Becker et al., 2018; Frommholz et al., 2023). However, spectrograms are difficult to interpret for most humans. Wu et al. (2023) have instead proposed identifying time segments, e.g., those corresponding to relevant phonemes. However, meaningful, phoneme-level explanations are fine-grained and only serve a limited number of tasks like Automatic Speech Recognition (ASR) or Phoneme Recognition. They fail to capture more interpretable word-level attribution needed
Figure 1: Explanation with word-level and paralinguistic attributes for a sample in Fluent Speech Commands (Lugosch et al., 2019). Word-level audio-transcript alignment represented through color. Word-level attributions to explain the _Increase_ (green, left boxes) and _Bedroom_ (orange, right) target classes.
for semantically-intensive tasks such as Speech Classification. Moreover, these methods _entirely overlook_ any paralinguistic aspects, e.g., prosody or channel noise, which carry information.
We propose a new approach to explaining speech models, producing easy-to-interpret explanations including paralinguistic features. We base our approach on input perturbation, an established XAI method. Our explanations provide insights on two different but complementary levels: The uttered content and paralinguistic features.
To quantify the contribution of each part of the utterance, we compute word-level attribution scores as follows. First, we align the audio signal to its transcript and get word-level timestamps. Then, we use these timestamps to iteratively mask audio segments. Finally, we estimate word-level contributions as the difference in the model's output between the original signal and the masked one. We follow a similar perturbation-based approach to measure the contribution of paralinguistic aspects. Given an input utterance, we transform the raw audio signal and measure the effect on the model's prediction. We perturb pitch to account for prosody, and audio stretching, background noise, and reverb levels for channel-related aspects. Figure 1 shows a sample explanation.
We test our approach by explaining wav2vec-2.0 (Baevski et al., 2020) and XLS-R (Babu et al., 2022), two state-of-the-art SLU models, on two datasets for Intent Classification and one for Emotion Recognition in English and Italian. We assess the quality of our explanations under the faithfulness and plausibility paradigms (Jacovi and Goldberg, 2020). Our experimental results demonstrate that the explanations are faithful to the model's inner workings and plausible to humans.
Contributions.We introduce a new method for explaining speech classification models. Using word-level audio segments and paralinguistic features, it generates easy-to-interpret visualizations that are faithful and plausible across two models, languages, and tasks. We release the code at [https://github.com/elianap/SpeechXAI](https://github.com/elianap/SpeechXAI) to encourage future research at the intersection of SLU and interpretability.
## 2 Methodology
We generate explanations by assigning a single numerical attribution score to each uttered word (SS2.1) and paralinguistic feature (SS2.2). Each score is generated via input perturbation and quantifies the contribution the entity (either a word or a paralinguistic feature) had in predicting a given target class.
### Word-level Audio Segment Attribution
We compute word-level contribution in two steps. First, we perform a word-level audio-transcript alignment. In practice, we extract beginning and ending timestamps for each uttered word. If no transcript or timestamp is available, we use WhisperX (Bain et al., 2023) to generate it along with the word-level timestamps. The resulting timestamps define a set of audio segments corresponding to words in the time domain. See Figure 1 (top) for an example.
Second, we compute each segment's contribution by masking it and measuring how the model's prediction changes. More formally, let \(x\) be an utterance and let \(\{x_{1},..,x_{n}\}\) be the set of \(n\) word-level audio segments within it. Consider a speech classification model \(f\) applied to tasks such as intent classification or emotion recognition. Let \(f(y=k|x)\) be the output probability of the model \(f\) for class \(k\) given the input utterance \(x\). We define the relevance \(r(x_{i})\in\mathbb{R}\) of each segment \(x_{i}\) to the model's prediction for a target class \(k\) as:
\[r(x_{i})=f(y=k|x)-f(y=k|x\setminus x_{i}) \tag{1}\]
where \(x\setminus x_{i}\) refers to the utterance when the segment \(x_{i}\) is masked. Following Wu et al. (2023), we mask out segments by zeroing the corresponding samples in the time domain.
Higher values for \(r(x_{i})\) indicate greater relevance of the segments to the prediction. A positive score indicates that the segment contributes positively to the probability of belonging to a specific class, while a negative score suggests that the segment may "push" the prediction toward another class. See Figure 1 (middle) for an example.
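A minimal sketch of Eq. (1), assuming the waveform is a NumPy array, the word-level timestamps are already available, and `predict_proba` is any callable returning the class probabilities (this callable is an assumption of the sketch, not a specific library API):

```python
import numpy as np

def word_level_attributions(waveform, sample_rate, word_spans, predict_proba, target_class):
    """Relevance r(x_i) of each word-aligned audio segment (Eq. 1).

    `word_spans` is a list of (word, start_s, end_s) tuples; each segment is
    masked by zeroing its samples in the time domain.
    """
    base = predict_proba(waveform)[target_class]
    scores = []
    for word, start, end in word_spans:
        masked = waveform.copy()
        masked[int(start * sample_rate):int(end * sample_rate)] = 0.0
        scores.append((word, base - predict_proba(masked)[target_class]))
    return scores
```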
### Paralinguistic Attributions
Speech includes not only the semantic information conveyed by words but also additional paralinguistic information communicated through the speaker's voice or from external conditions, such as pitch, speaking rate, and background noise levels. We investigate the relevance of paralinguistic features by introducing ad hoc perturbations of the utterances and studying the resulting changes in class prediction probabilities.
Let \(p(x)\) be a paralinguistic feature of interest of utterance \(x\). For example, it can correspond to the pitch of the utterance. We transform \(x\) into \(\widetilde{x}\) such that the value of feature \(p(\widetilde{x})\) varies from \(p(x)\). Rather than a random perturbation, we control the induced transformation so that it is interpretable, and we can trace back the impact to feature \(p\). For instance, we may increase the pitch.
We consider a series of transformations \(\widetilde{X}_{p}=\{\widetilde{x}_{1},..,\widetilde{x}_{t}\}\) to study the impact of changing the paralinguistic feature \(p\) on the model's predictions. We compute the relevance of \(p(x)\) as follows.
\[r(p(x))=f(y=k|x)-\frac{1}{|\widetilde{X}_{p}|}\sum_{\widetilde{x}\in\widetilde{X}_{p}}f(y=k|\widetilde{x}) \tag{2}\]
The term \(\frac{1}{|\widetilde{X}_{p}|}\sum_{\widetilde{x}\in\widetilde{X}_{p}}f(y=k|\widetilde{x})\) is the average prediction probability over the perturbations of \(p(x)\), so \(r(p(x))\) measures the average change in the prediction probability when \(p(x)\) is perturbed. In addition, we visualize the terms \(f(y=k|x)-f(y=k|\widetilde{x})\) in a heatmap representation to show the impact of each individual perturbation. Heatmaps provide an intuitive way to observe the changes in prediction probabilities as we vary the paralinguistic features.
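A sketch of the paralinguistic counterpart (Eq. 2) is given below. It relies on librosa for pitch shifting and time stretching, uses illustrative perturbation grids rather than the exact values of our experiments, omits reverberation (which requires an impulse response), and again assumes a generic `predict_proba` callable.

```python
import numpy as np
import librosa

def paralinguistic_relevance(waveform, sample_rate, predict_proba, target_class):
    """Relevance of paralinguistic features via controlled perturbations (Eq. 2)."""
    base = predict_proba(waveform)[target_class]

    def relevance(perturbed):
        return base - float(np.mean([predict_proba(x)[target_class] for x in perturbed]))

    pitch_up = [librosa.effects.pitch_shift(waveform, sr=sample_rate, n_steps=s) for s in (1, 2, 3)]
    pitch_down = [librosa.effects.pitch_shift(waveform, sr=sample_rate, n_steps=-s) for s in (1, 2, 3)]
    # Duration factors < 1 shrink the utterance ("stretch down"); librosa's
    # `rate` argument is the inverse of the duration factor.
    stretch_down = [librosa.effects.time_stretch(waveform, rate=1.0 / d) for d in (0.55, 0.7, 0.85)]
    stretch_up = [librosa.effects.time_stretch(waveform, rate=1.0 / d) for d in (1.15, 1.3, 1.45)]
    rng = np.random.default_rng(0)
    noisy = [waveform + amp * rng.standard_normal(waveform.size).astype(waveform.dtype)
             for amp in (0.005, 0.01, 0.02)]

    return {"pitch up": relevance(pitch_up), "pitch down": relevance(pitch_down),
            "stretch up": relevance(stretch_up), "stretch down": relevance(stretch_down),
            "noise": relevance(noisy)}
```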
## 3 Experiments
### Experimental Setting
Paralinguistic Features.In the experiments, we consider transformations of the pitch, time stretching, and the addition of background white noise and reverberation. We describe the libraries adopted for the transformations in our repository.
Datasets.We evaluate our explanations on three publicly available datasets and two tasks: the Fluent Speech Commands (FSC; Lugosch et al., 2019) and Italian Intent Classification (ITALIC; Koudounas et al., 2023) datasets for the Intent Classification (IC) task, and IEMOCAP (Busso et al., 2008) for Emotion Recognition (ER). FSC is a widely utilized benchmark dataset for the IC task. Its test set comprises 3793 audio samples, each characterized by three slots -- action, object, and location -- whose combination defines the intent. ITALIC is an intent classification dataset for the Italian language. The dataset includes 60 intents, and the test set consists of 1441 samples. We use the "Speaker" setup, wherein the utterances of each speaker belong to a single set among the train, validation, and test sets. IEMOCAP is a dataset for the ER task annotated with emotion labels (i.e., happiness, anger, sadness, frustration, and neutral state). It consists of recorded dyadic interactions between actors engaged in scripted scenarios, involving ten actors in total. Among its five sessions, we consider Session '1', consisting of 942 utterances.
Models.We consider the monolingual wav2vec 2.0 base (Baevski et al., 2020) for FSC and IEMOCAP. We use the public fine-tuned checkpoints (Yang et al., 2021). We use the multilingual XLS-R (Babu et al., 2022) for ITALIC and its fine-tuned checkpoints (Koudounas et al., 2023).
### Qualitative evaluation
In this section, we show how our explanation method reveals the reasons behind a model prediction from the perspective of an _individual_ prediction and _globally_ across the entire dataset.
Individual level.Consider the FSC dataset and wav2vec 2.0 base fine-tuned-model. For a specific utterance with transcription 'Turn up the bedroom heat', the model correctly predicts _increase_ as the action, _heat_ as the object, and _bedroom_ as the location, fully identifying the intent. We may wonder: Is it correct for the right reasons? Which are the paralinguistic features whose change would impact the predictions? Our approach answers these questions.
Table 1 shows the word-level audio segment explanation for this utterance computed with respect to the predicted class for each intent slot. For each segment, we report its importance for the prediction. We visualize only the word-level transcrip
\begin{table}
\begin{tabular}{l|c c c c c} \hline & **Turn** & **up** & **the** & **bedroom** & **heat.** \\ \hline act=increase & 0.250 & 0.545 & 0.260 & 0.139 & 0.021 \\ obj=heat & 0 & 0 & 0 & 0.014 & 0.550 \\ loc=bedroom & 0.002 & 0.006 & 0.087 & 0.097 & 0.323 \\ \hline \end{tabular}
\end{table}
Table 1: Example of word-level audio segment explanation; FSC dataset. The higher the value, the more the audio segment is relevant for the prediction.
| | pitch down | pitch up | stretch down | stretch up | reverb | noise |
|---|---|---|---|---|---|---|
| act=increase | 0 | 0.01 | 0.19 | 0.04 | 0.74 | 0.54 |
| obj=heat | 0 | 0 | 0 | 0 | 0 | 0.86 |
| loc=bedroom | 0.02 | 0 | 0.03 | 0.01 | 0.20 | 0.97 |

Table 2: Example of paralinguistic explanation, FSC dataset, instance in Table 1. The higher the value, the more the perturbations on the paralinguistic feature impact the prediction.
However, recall that our approach works end-to-end at the audio level, and importance scores relate to audio segments. The explanation reveals that the segment associated with the word '_up_' is the most relevant term for the action _increase_. Spoken words '_heat_' and '_bedroom_' are associated with the target object _heat_ and the target location _bedroom_. Hence, we can say that the explanation is _plausible_ and _trust_ the model for this prediction.
Table 2 shows the paralinguistic explanation. The prediction for this instance is greatly affected by the introduction of noise. The reverberation impacts the prediction for the slot action and slightly for the location; on the other hand, the object prediction is not affected. The pitch transformation we introduce does not impact the predictions, both when increasing ('_up_') and lowering ('_down_') the pitch. Finally, we reveal that shrinking the utterance duration (_time_ '_stretch down_') and hence increasing the utterance speed impacts only the action _increase_.
We can further inspect the impact of paralinguistic transformations on predictions by visualizing the prediction difference for each individual transformation via heatmaps. Figure 2 shows the prediction difference when stretching the audio and introducing reverberation. Note that '1' and '0' are the reference values for time stretching and reverberation, respectively, and hence correspond to the original utterance. We observe no impact when extending the utterance duration (values \(\geq\)1.05). At the same time, we note that the prediction probability of the action _increase_ highly changes when increasing the utterance speed (which corresponds to values 0.55-0.7).
Our approach reveals the relevant factors for _individual_ predictions, and it is, hence, a tool for model understanding. We include further examples of explanations in our repository.
Global level.We can also analyze model behavior across the entire dataset. We aggregate the importance scores of word audio segments or paralinguistic levels to investigate the _global_ influence of each component.
Figure 3 shows a summary plot for the word-level audio segment explanations of wav2vec 2.0 predictions on FSC test set for the label 'action'. We first compute the explanations for the predicted classes. Then, we aggregate audio segments corresponding to the same transcripted word after basic processing (i.e., lowercase and punctuation removal). We report the top 15 segments with the highest average importance. Each term represents the average importance scores separately for each class.
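The aggregation behind the summary plot is a simple group-by operation. The sketch below assumes the word-level attributions have been collected in a pandas DataFrame with hypothetical columns `word`, `predicted_class`, and `importance`.

```python
# Sketch of the global aggregation: average importance of each spoken word,
# separately for each predicted class, keeping the top-k words overall.
import pandas as pd

def summarize_word_importance(df: pd.DataFrame, top_k: int = 15) -> pd.DataFrame:
    df = df.copy()
    df["word"] = df["word"].str.lower().str.strip(".,!?")        # basic processing
    per_class = (df.groupby(["word", "predicted_class"])["importance"]
                   .mean()
                   .unstack(fill_value=0.0))
    top_words = df.groupby("word")["importance"].mean().nlargest(top_k).index
    return per_class.loc[top_words]
```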
The summary plot reveals which spoken words are associated with a predicted class. From Figure 3, we infer that the importance scores for some spoken words such as '_language_', '_newspaper_', and '_cooler_' across the entire test set are associated with a single class value. Each class corresponds to a plausible value ('_change language_', '_bring_', and '_decrease_'), making the explanations plausible.
Figure 3: Summary plot of average importance of word-level audio segments, separately for each predicted class. Top-15 segments, action label of FSC dataset.
| | pitch down | pitch up | stretch down | stretch up | reverb | noise |
|---|---|---|---|---|---|---|
| action | 0.04 | 0.03 | 0.13 | 0.09 | 0.27 | 0.59 |
| object | 0.02 | 0.01 | 0.07 | 0.05 | 0.17 | 0.69 |
| location | 0.01 | 0.01 | 0.06 | 0.04 | 0.11 | 0.35 |

Table 3: Average paralinguistic attributions for the FSC dataset. The higher the score, the more the corresponding change in the paralinguistic feature impacts the prediction probability.
Figure 2: Heatmap of the prediction differences when varying the paralinguistic information. The higher the value, the more the paralinguistic changes impact the prediction.
In cases where a term is associated with multiple labels, the summary plot can serve as a debugging tool. For instance, the spoken word '_pause_' is correctly linked to the predicted action '_deactivate_' but erroneously connected to '_decrease_'. Similar considerations apply to the other two labels we include in the repository.
Table 3 shows the average importance score of paralinguistic explanations aggregated for each label. The results reveal that adding background noise globally impacts the model prediction. The reverberation affects more the predictions of the action label than the ones of the location. We observe higher average importance scores for the action label for the time stretching component, specifically when compressing the utterance duration ('_stretch down_') and, therefore, increasing the audio speed. Conversely, the pitch transformation we introduce generally does not impact the predictions.
### Quantitative evaluation
In this section, we quantitatively evaluate the quality of our explanations. A critical requirement for explanations is their faithfulness to the model. Faithfulness measures evaluate how accurately the explanation reflects the model's inner workings (Jacovi and Goldberg, 2020).
Metrics.We generalize two widely adopted measures from the XAI literature: _comprehensiveness_ and _sufficiency_ (DeYoung et al., 2020). These notions were originally designed for token-level explanations for text classification, where explainers assign a relevance score to each token. This scenario is close to our word-level audio segment explanations. Intuitively, we consider audio segments rather than tokens. _Comprehensiveness_ evaluates whether the explanation captures the audio segments the model used to make the prediction. We measure it by progressively masking the audio segments highlighted by the explanation, observing the change in probability, and finally averaging the results. A high value of comprehensiveness indicates that the audio segments highlighted by the explanations are relevant to the prediction. Conversely, _sufficiency_ captures if the audio segments in the explanation are sufficient for the model to make the prediction. As opposed to comprehensiveness, we preserve only the relevant audio segments and compute the prediction difference. A low score indicates that the audio segments in the explanations indeed drive the prediction. We include the extended description of the two metrics in our repository.
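The sketch below shows how the two measures generalize to audio segments. The model wrapper `classify_proba` is a hypothetical placeholder, masking by zeroing samples is an assumption of this example, and the simple binning over top-k segments shown here stands in for the exact protocol detailed in our repository.

```python
# Sketch of comprehensiveness and sufficiency over audio segments.
# `segments` is a list of (start, end) sample ranges sorted by decreasing importance.
import numpy as np

def mask_segments(waveform, segments):
    out = waveform.copy()
    for start, end in segments:
        out[start:end] = 0.0                                   # assumed masking: zeroing
    return out

def comprehensiveness(waveform, segments, classify_proba, k_class):
    p_full = classify_proba(waveform)[k_class]
    drops = [p_full - classify_proba(mask_segments(waveform, segments[:k]))[k_class]
             for k in range(1, len(segments) + 1)]             # remove the top-k segments
    return float(np.mean(drops))                               # high = evidence captured

def sufficiency(waveform, segments, classify_proba, k_class):
    p_full = classify_proba(waveform)[k_class]
    drops = []
    for k in range(1, len(segments) + 1):                      # keep only the top-k segments
        removed = segments[k:]
        drops.append(p_full - classify_proba(mask_segments(waveform, removed))[k_class])
    return float(np.mean(drops))                               # low = kept segments suffice
```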
Baseline.We assess the quality of explanations compared to a random explainer. The random explainer assigns a random score in the range [-1, 1] to each word audio level segment.
Results.Table 4 shows the comprehensiveness and sufficiency results on the FSC, ITALIC, and IEMOCAP datasets, separately for each label. We generate our word-level explanations with respect to the predicted class. For the random baseline, we consider five rounds of generations, and we report average and standard deviation. The results show that our word-level audio segment explanations computed by leaving out one audio segment at a time (WA-L1O in Table 4) substantially outperform the random baseline for both metrics.
## 4 Related Work
### Interpretability for Speech Models
Multiple studies have adopted Layer-wise Relevance Propagation (LRP) (Bach et al., 2015), initially proposed for image classification explanations, to explain prediction across diverse audio analysis tasks. Most of these works represent explanations as time-frequency heatmaps over spectrograms, such as Becker et al. (2018) for gender and digit audio classification, Frommholz et al. (2023)
| Metric | Explainer | FSC action | FSC object | FSC location | ITALIC intent | IEMOCAP emotion |
|---|---|---|---|---|---|---|
| Comprehensiveness | WA-L1O | **0.619** | **0.623** | **0.465** | **0.693** | **0.508** |
| Comprehensiveness | random | 0.294±0.005 | 0.246±0.003 | 0.195±0.006 | 0.324±0.005 | 0.273±0.005 |
| Sufficiency | WA-L1O | **0.158** | **0.083** | **0.065** | **0.164** | **0.311** |
| Sufficiency | random | 0.474±0.004 | 0.444±0.008 | 0.339±0.006 | 0.557±0.004 | 0.450±0.002 |

Table 4: Comprehensiveness and Sufficiency results for our word attribution explanation via leave-one-out (WA-L1O) and random attribution for the FSC, ITALIC, and IEMOCAP datasets, separately for each label. For comprehensiveness, the closer to one, the better. For sufficiency, the closer to zero, the better.
for audio event classification, and Colussi and Ntalampiras (2021) for the task of urban sound classification. Wang et al. (2023) used heatmaps over ad-hoc terms (carrier and modulation frequency) for the specific task of audio classification of playing techniques (e.g., vibrato, trill, tremolo) in the context of music signal analysis. While experts can find spectrograms a familiar tool for understanding audio data, these visual representations can be challenging for laypersons to interpret.
Becker et al. (2018) also adopt the LRP method to derive the relevance score of individual samples with respect to the input waveform in the time domain. Interpreting explanations as sets of individual samples can pose challenges, such as the lack of abstraction and context of isolated data points. We advocate for prioritizing a more user-friendly and intuitive approach to explanation. Along this line, rather than samples, Wu et al. (2023) assign relevance scores to audio frames, i.e., raw data bins of predefined size in the time dimension. The work generalizes two XAI techniques from image classification and explains Automatic Speech Recognition (ASR) systems. Mishra et al. (2017) propose describing the data to be explained via interpretable representations. Their method involves segmenting the data into equal-width segments within the time, frequency, or time-frequency domains. Subsequently, they apply the LIME explanation method (Ribeiro et al., 2016) to these interpretable representations. However, these temporal explanations may be affected by the size of the audio segments chosen for analysis. Moreover, they are not grounded in spoken words or paralinguistic information, hindering interpretability for semantically intensive contexts such as speech classification.
The work by Wu et al. (2023) aligns with our direction, as it not only tests fixed-width audio segments but also audio segments aligned with phonemes. However, the approach requires phoneme-level annotations, and therefore, it is limited to evaluation purposes when such labeling is available. Moreover, the method is suitable for the phoneme recognition task. In contrast, our approach offers a more generalized solution to any Speech Language Understanding (SLU) classification model and data. We automatically derive audio segments at the word level, coupled with their transcriptions, via state-of-the-art speech transcription systems. Furthermore, our approach stands out as the first to offer explanations that study the impact of paralinguistic features on predictions, presenting these insights in an interpretable form.
### Explanation by Occlusion
Removing parts of input data to understand their impact is a well-established strategy in explainability (Covert et al., 2021). Different domains use various techniques for removing or masking parts of the data. Standard techniques for image data include noise addition, blurring, or replacing via a grey area. Using a special mask token or directly removing words is often employed in text analysis. For structured data, analyzing the effects based on average values is a typical approach (Covert et al., 2021). For speech data, Wu et al. (2023) have applied a similar technique to phonemes, using signal zeroing for masking. However, the masking is used to generate the perturbations required by the LIME explanation method (Ribeiro et al., 2016), and it operates at the phoneme level.
## 5 Conclusion
We propose a novel perturbation-based explanation method that explains the predictions of speech classification models regarding word-level audio segments and paralinguistic features. Our results show that our explanations can be a tool for model understanding.
### Limitations
Our work has some technical and design limitations. From the technical perspective, word-level segment attributions are computed by masking one word-level segment at a time, thus not considering the joint effect of multiple masked words. We plan to experiment with different masking strategies. Moreover, word-level explanations might not be the most helpful form of explanation for specific speech classification tasks, e.g., spoken language identification or speaker identification. We are accounting for this limitation by including paralinguistic explanations, but we will also explore new methods. We will also investigate the impact of the perturbation techniques and third-party speech libraries on paralinguistic explanations. From the experimental design perspective, we are currently reporting self-evaluation for plausibility. We will conduct a comprehensive user study to evaluate it thoroughly. |
2305.20059 | Exploiting Mechanics-Based Priors for Lateral Displacement Estimation in
Ultrasound Elastography | Tracking the displacement between the pre- and post-deformed radio-frequency
(RF) frames is a pivotal step of ultrasound elastography, which depicts tissue
mechanical properties to identify pathologies. Due to ultrasound's poor ability
to capture information pertaining to the lateral direction, the existing
displacement estimation techniques fail to generate an accurate lateral
displacement or strain map. The attempts made in the literature to mitigate
this well-known issue suffer from one of the following limitations: 1) Sampling
size is substantially increased, rendering the method computationally and
memory expensive. 2) The lateral displacement estimation entirely depends on
the axial one, ignoring data fidelity and creating large errors. This paper
proposes exploiting the effective Poisson's ratio (EPR)-based mechanical
correspondence between the axial and lateral strains along with the RF data
fidelity and displacement continuity to improve the lateral displacement and
strain estimation accuracies. We call our techniques MechSOUL
(Mechanically-constrained Second-Order Ultrasound eLastography) and L1-MechSOUL
(L1-norm-based MechSOUL), which optimize L2- and L1-norm-based penalty
functions, respectively. Extensive validation experiments with simulated,
phantom, and in vivo datasets demonstrate that MechSOUL and L1-MechSOUL's
lateral strain and EPR estimation abilities are substantially superior to those
of the recently-published elastography techniques. We have published the MATLAB
codes of MechSOUL and L1-MechSOUL at http://code.sonography.ai. | Md Ashikuzzaman, Ali K. Z. Tehrani, Hassan Rivaz | 2023-05-31T17:37:04Z | http://arxiv.org/abs/2305.20059v1 | # Exploiting Mechanics-Based Priors for Lateral Displacement Estimation in Ultrasound Elastography
###### Abstract
Tracking the displacement between the pre- and post-deformed radio-frequency (RF) frames is a pivotal step of ultrasound elastography, which depicts tissue mechanical properties to identify pathologies. Due to ultrasound's poor ability to capture information pertaining to the lateral direction, the existing displacement estimation techniques fail to generate an accurate lateral displacement or strain map. The attempts made in the literature to mitigate this well-known issue suffer from one of the following limitations: 1) Sampling size is substantially increased, rendering the method computationally and memory expensive. 2) The lateral displacement estimation entirely depends on the axial one, ignoring data fidelity and creating large errors. This paper proposes exploiting the effective Poisson's ratio (EPR)-based mechanical correspondence between the axial and lateral strains along with the RF data fidelity and displacement continuity to improve the lateral displacement and strain estimation accuracies. We call our techniques MechSOUL (Mechanically-constrained Second-Order Ultrasound eLastography) and \(L1\)-MechSOUL (\(L1\)-norm-based MechSOUL), which optimize \(L2\)- and \(L1\)-norm-based penalty functions, respectively. Extensive validation experiments with simulated, phantom, and _in vivo_ datasets demonstrate that MechSOUL and \(L1\)-MechSOUL's lateral strain and EPR estimation abilities are substantially superior to those of the recently-published elastography techniques. We have published the MATLAB codes of MechSOUL and \(L1\)-MechSOUL at [http://code.sonography.ai](http://code.sonography.ai).
Ultrasound elastography, Mechanical constraint, Effective Poisson's ratio, Analytic optimization, High-quality lateral estimation.
## I Introduction
Since its discovery in the 1950s, ultrasound has gradually established itself as one of the most commonly used medical imaging modalities thanks to its non-invasiveness, low expense, and portability. Elastography [1, 2] is an emerging clinical application of ultrasound that reveals tissue abnormalities by portraying hidden mechanical properties. Among different ultrasound elastography techniques [3, 4, 5], the free-hand palpation quasi-static [6] one has drawn the special attention of researchers over the last three decades, because it is low cost and requires no additional hardware. Consequently, it has been employed in successful assessments of breast [7, 8], liver [9, 10], thyroid [11], prostate [12], lymph node [13], uterine [14], blood vessels [15], and heart [16, 17]. Tracking the displacement (also known as time-delay estimation) between two radio-frequency (RF) frames collected before and after tissue deformation is the main step of quasi-static elastography. The estimated displacement field is spatially differentiated to obtain the strain maps, which show a color contrast between the healthy and abnormal tissues.
Several approaches have been followed thus far to solve the critical problem of displacement estimation. A common approach is to split the RF data into a certain number of windows and determine their displacements based on the peak normalized cross-correlation (NCC) [18, 19] or zero-phase crossing [20]. Although the window-based algorithms are straightforward, they are sensitive to noise and make a compromise between the tracking accuracy and the spatial resolution depending on the window size. Recently, machine learning-based techniques [21, 22, 23] have been employed to accomplish this task. This newly-introduced class includes both supervised [24] and unsupervised [25, 26, 27] training-based algorithms. Although the preliminary validation results of machine learning-based methods are promising, they are still in the feasibility stage. This paper focuses on regularized optimization-based or energy-based [28, 29, 30, 31, 32] algorithms, another established class of displacement tracking techniques that involve formulating and optimizing an energy function for obtaining the displacement fields. These techniques are mathematically complex but produce accurate and spatially smooth displacement and strain maps.
While many strides in improving axial displacement estimation have been made, accurate lateral displacement estimation remains an elusive problem. The existing techniques' sub-standard lateral strain imaging capability originates from the wider point-spread function [33] in this dimension. The lack of an echo carrier [34] and the low sampling rate [35] are two other mainstream contributors to the loss of lateral estimation accuracy. However, lateral strain carries important diagnostic information. In addition, an accurate lateral displacement estimation is vital for precise reconstructions of Young's modulus as well as poro- and rotation-elastograms [36]. Therefore, several attempts have been made to improve the lateral tracking quality. In [37], the number of RF lines is increased by interpolating the acquired data in the lateral direction. RF data has been enhanced at subpitch locations using a conventional linear array transducer in [36]. A multi-angle acquisition scheme has been incorporated in [38] to improve lateral estimation using beam-steered RF data. The data augmentation- and beam-steering-based techniques either require artificial enhancement of RF data or substantially increase the hardware and software complexities. Multi-step virtual source technique [39], which requires channel data acquisition and synthetic aperture beamforming for better lateral estimation, has been proposed. Other notable algorithms [40, 41] derive good quality lateral estimates from accurate axial and noisy lateral priors depending on some mechanical correspondence. These techniques disregard RF data while calculating the lateral strain; therefore, the lateral estimate follows the axial one, which might lead to incorrect results. In fact, we show in some of our results that if the Poisson's ratio (PR) and the elastic modulus vary independently, the lateral and axial strains are no longer correlated. Our proposed technique will exploit the data fidelity term to address this issue.
In this paper, we develop two novel speckle tracking techniques optimizing regularized cost functions that incorporate effective Poisson's ratio (EPR), which is defined as the negative of the sample-wise ratio of the lateral and axial strains, to leverage the mechanical relation between different strain components. The proposed techniques aim to exploit the newly-introduced mechanical, first- and second-order continuity, and the RF data fidelity constraints simultaneously (see Fig. 1 of the Supplemental Video) to produce highly accurate lateral strain maps without hampering the axial strain quality. Another purpose of the proposed algorithm is to iteratively improve the EPR estimate, which can be used as a contrast mechanism in addition to the strain images. We name our techniques **MechSOUL**: Mechanically-constrained Second-Order Ultrasound eLastography and **\(L1\)-MechSOUL**: \(L1\)-norm-based MechSOUL. The difference between these two proposed algorithms is that MechSOUL penalizes the \(L2\)-norms of the mechanical inconsistency and the displacement derivatives, whereas \(L1\)-MechSOUL employs the \(L1\)-norms. Note that in the case of an inhomogeneous tissue containing an inclusion, EPR is spatially varying (typically between 0.2 and 0.5) and technically different from the PR, which is a material property and spatially constant. Therefore, MechSOUL and \(L1\)-MechSOUL consider distinct EPR values for each RF sample and iteratively update the strain maps and the EPR distribution. It is worth mentioning that an EPR-driven physical constraint has been used in a deep-learning-based tracking technique [42]; unlike that work, the proposed algorithms incorporate EPR in regularized optimization-based frameworks to improve lateral strain and EPR simultaneously. The performance of the proposed techniques has been validated against _in silico_, phantom, and _in vivo_ datasets. Similar to our previous techniques [43, 44, 45], MechSOUL and \(L1\)-MechSOUL codes have been published at [http://code.sonography.ai](http://code.sonography.ai).
## II Methods
Our goal is to estimate the displacement field between two RF frames \(I_{1}(i,j)\) and \(I_{2}(i,j)\), \(1\leq i\leq m\), \(1\leq j\leq n\), collected before and after tissue deformation and spatially differentiate its components to obtain the axial and lateral strain fields. Dynamic Programming (DP) [46] provides \(a\in\mathbb{R}^{m\times n}\) and \(l\in\mathbb{R}^{m\times n}\), the initial guesses for the axial and lateral displacement fields. The vital step of estimating \(\Delta a\in\mathbb{R}^{m\times n}\) and \(\Delta l\in\mathbb{R}^{m\times n}\), the refinement displacement fields, is performed by a continuous optimization technique. This section first describes SOUL [44] and \(L1\)-SOUL [45], two such recently-published techniques, and then MechSOUL and \(L1\)-MechSOUL, the proposed algorithms.
### _Second-Order Ultrasound eLastography (SOUL)_
SOUL optimizes \(C_{l2}\), a non-linear cost function comprised of \(L2\)-norm data constancy as well as \(L2\)-norm first- and second-order continuity terms.
\[\begin{split}& C_{l2}(\Delta a_{1,1},...,\Delta a_{m,n},\Delta l _{1,1},...,\Delta l_{m,n})=\\ &\|D_{I}(i,j,a_{i,j},l_{i,j},\Delta a_{i,j},\Delta l_{i,j})\|_{2}^ {2}+\gamma\|\partial_{y}a_{f}\|_{2}^{2}+\\ &\alpha_{1}\|\partial_{y}a-\epsilon_{\mathbf{a}}\|_{2}^{2}+ \alpha_{2}\|\partial_{x}a-\epsilon_{\mathbf{a}}\|_{2}^{2}+\beta_{1}\|\partial _{y}l-\epsilon_{\mathbf{l}}\|_{2}^{2}+\\ &\beta_{2}\|\partial_{x}l-\epsilon_{\mathbf{l}}\|_{2}^{2}+w \alpha_{1}\|\partial_{y}^{2}a\|_{2}^{2}+w\alpha_{2}\|\partial_{x}^{2}a\|_{2}^ {2}+w\beta_{1}\|\partial_{y}^{2}l\|_{2}^{2}+\\ & w\beta_{2}\|\partial_{x}^{2}l\|_{2}^{2}\end{split} \tag{1}\]
where \(D_{I}\) denotes the data constancy term:
\[\begin{split}& D_{I}(i,j,a_{i,j},l_{i,j},\Delta a_{i,j},\Delta l _{i,j})=\\ &[I_{1}(i,j)-I_{2}(i+a_{i,j}+\Delta a_{i,j},j+l_{i,j}+\Delta l_{i, j})]^{2}\end{split} \tag{2}\]
The non-linearity present in the data function is removed by approximating \(I_{2}\) by its first-order Taylor series expansion:
\[\begin{split}& I_{2}(i+a_{i,j}+\Delta a_{i,j},j+l_{i,j}+\Delta l _{i,j})\approx\\ & I_{2}(i+a_{i,j},j+l_{i,j})+\Delta a_{i,j}I_{2,a}^{{}^{\prime}}+ \Delta l_{i,j}I_{2,l}^{{}^{\prime}}\end{split} \tag{3}\]
\(\gamma\), \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{1}\), \(\beta_{2}\), and \(w\) are tunable parameters. \(\epsilon_{\mathbf{a}}\) and \(\epsilon_{\mathbf{l}}\) contain the axial and lateral bias parameters that prevent displacement underestimation [9, 44, 45]. \(\partial_{y}a_{f}\) stands for the axial derivatives of the RF lines' first samples.
Fig. 1: Comparison among different strain imaging algorithms. (a) depicts the methodical differences among elastography techniques. (b) demonstrates the lateral strain imaging performance of four different tracking algorithms.
Considering that the imaginary sample prior to an RF line's first sample is zero, \((\partial_{y}a_{f})_{1,j}\) is defined as:
\[(\partial_{y}a_{f})_{1,j}=a_{1,j}+\Delta a_{1,j} \tag{4}\]
\((\partial_{y}a)_{i,j}\), \((\partial_{x}a)_{i,j}\), \((\partial_{y}l)_{i,j}\), and \((\partial_{x}l)_{i,j}\) denote the first-order axial and lateral displacement derivatives, whereas \((\partial_{y}^{2}a)_{i,j}\), \((\partial_{x}^{2}a)_{i,j}\), \((\partial_{y}^{2}l)_{i,j}\), and \((\partial_{x}^{2}l)_{i,j}\) refer to the second-order displacement derivatives.
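For intuition, the following NumPy sketch spells out the linearization of the data term in Eqs. (2)-(3): the post-deformed frame is warped with the initial displacement, and its intensity derivatives turn the residual into a linear function of the refinement displacements. It is an illustrative re-implementation for clarity, not the MATLAB code published at [http://code.sonography.ai](http://code.sonography.ai).

```python
# Sketch of the linearized data term of Eqs. (2)-(3).
import numpy as np
from scipy.ndimage import map_coordinates

def linearized_data_term(I1, I2, a, l):
    m, n = I1.shape
    ii, jj = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
    # I2 evaluated at the initially displaced coordinates (i + a, j + l)
    I2_warp = map_coordinates(I2, [ii + a, jj + l], order=1, mode="nearest")
    I2_y, I2_x = np.gradient(I2_warp)    # axial and lateral intensity derivatives
    residual = I1 - I2_warp              # data residual at the current estimate
    # D_I(i, j) is approximated by (residual - da * I2_y - dl * I2_x) ** 2
    return residual, I2_y, I2_x
```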
### _SOUL using \(L1\)-norm Regularization (\(L1\)-Soul)_
Unlike SOUL, \(L1\)-SOUL minimizes a penalty function \(C_{l1}\) consisting of \(L2\)-norm data and \(L1\)-norm continuity terms:
\[\begin{split}& C_{l1}(\Delta a_{1,1},...,\Delta a_{m,n},\Delta l_{1,1},...,\Delta l_{m,n})=\\ &\|D_{I}(i,j,a_{i,j},l_{i,j},\Delta a_{i,j},\Delta l_{i,j})\|_{2}^{2}+\gamma_{s}\|\partial_{y}a_{f}\|_{1}+\\ & w_{f}\alpha_{1s}\|\partial_{y}a-\epsilon_{\mathbf{a}}\|_{1}+w_{f}\alpha_{2s}\|\partial_{x}a-\epsilon_{\mathbf{a}}\|_{1}+\\ & w_{f}\beta_{1s}\|\partial_{y}l-\epsilon_{\mathbf{l}}\|_{1}+w_{f}\beta_{2s}\|\partial_{x}l-\epsilon_{\mathbf{l}}\|_{1}+\\ & w_{s}\alpha_{1s}\|\partial_{y}^{2}a\|_{1}+w_{s}\alpha_{2s}\|\partial_{x}^{2}a\|_{1}+w_{s}\beta_{1s}\|\partial_{y}^{2}l\|_{1}+w_{s}\beta_{2s}\|\partial_{x}^{2}l\|_{1}\end{split} \tag{5}\]
where \(\gamma_{s}\), \(\alpha_{1s}\), \(\alpha_{2s}\), \(\beta_{1s}\), \(\beta_{2s}\), \(w_{f}\), and \(w_{s}\) are tunable parameters. To facilitate analytic optimization, \(L1\)-SOUL replaces the \(L1\)-norm with the total variation distance (TVD) approximating the absolute value function with its smooth version. Therefore, \(L1\)-norm is defined as:
\[\|\cdot\|_{1}=\sum_{j=1}^{n}\sum_{i=1}^{m}\sqrt{(\cdot)_{i,j}^{2}+\eta^{2}} \tag{6}\]
where \(\eta\) is a sharpness controlling parameter. As detailed in [45], \(L1\)-SOUL iteratively optimizes Eq. 5 to obtain a sharp displacement map.
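A minimal sketch of this smooth surrogate and its derivative, which is what makes the \(L1\)-norm terms amenable to the same analytic optimization machinery, is given below (illustrative only):

```python
# Smooth approximation of |x| used in Eq. (6) and its derivative.
import numpy as np

def smooth_abs(x, eta):
    return np.sqrt(x ** 2 + eta ** 2)       # approaches |x| as eta -> 0

def smooth_abs_grad(x, eta):
    return x / np.sqrt(x ** 2 + eta ** 2)   # well-defined at x = 0, unlike sign(x)

def smooth_l1_norm(x, eta):
    return smooth_abs(x, eta).sum()         # TVD-style surrogate of ||x||_1
```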
### _Mechanically-constrained SOUL (MechSOUL)_
SOUL and \(L1\)-SOUL are not suitable for generating high-quality lateral strain maps. MechSOUL resolves this limitation by adding a mechanically-inspired constraint to SOUL's cost function. This newly-added constraint takes the EPR into account to impose the mechanical relation between the axial and lateral components (\(s_{yy}=\partial_{y}a\) and \(s_{xx}=\partial_{x}l\)) of the strain tensor. Note that optimizing a regularized cost function penalizing \(s_{xx}+\nu s_{yy}\) is different from estimating \(s_{yy}\) first and then multiplying it by \(-\nu\) to find \(s_{xx}\), where \(\nu\) is the EPR. Because in our work, \(s_{xx}+\nu s_{yy}\) is just a soft constraint in a cost function that contains data fidelity and spatial continuity terms as well. Therefore, the estimated lateral strain has the freedom to deviate from the axial strain's multiple depending on the RF data under investigation. A comprehensive analysis of this feature is presented in the Discussion Section. In addition, since EPR is expected to be spatially varying in real tissue, MechSOUL (and \(L1\)-MechSOUL) establishes an iterative scheme to employ a distinct EPR for each RF sample. The MechSOUL cost function is given by:
\[C_{l2m}(\Delta a_{1,1},...,\Delta a_{m,n},\Delta l_{1,1},..., \Delta l_{m,n})= \tag{7}\] \[C_{l2}(\Delta a_{1,1},...,\Delta a_{m,n},\Delta l_{1,1},..., \Delta l_{m,n})+\] \[\sum_{j=1}^{n}\sum_{i=1}^{m}\alpha_{3}[(\partial_{x}l)_{i,j}+\nu _{i,j}(\partial_{y}a)_{i,j}]^{2}\]
where \(\alpha_{3}\) is the mechanical constancy weight, whereas \(\nu_{i,j}\) stands for the EPR for sample \((i,j)\). We minimize \(C_{l2m}\) by setting \(\frac{\partial C_{l2m,i,j}}{\partial\Delta a_{i,j}}=0\) and \(\frac{\partial C_{l2m,i,j}}{\partial\Delta l_{i,j}}=0\) and obtain:
\[(H+D_{l2}+D_{l22}+M_{l2})\Delta d_{l2}=H_{1}\mu-(D_{l2}+D_{l22}+M_{l2})d+b_{s2} \tag{8}\]
where \(d\in\mathbb{R}^{2mn\times 1}\) and \(\Delta d_{l2}\in\mathbb{R}^{2mn\times 1}\), respectively, stack the initial and the fine-tuning displacements. \(D_{l2}\) and \(D_{l22}\), respectively, are sparse matrices of size \(2mn\times 2mn\) containing functions of first- and second-order regularization parameters. \(H\in\mathbb{R}^{2mn\times 2mn}\) and \(H_{1}\in\mathbb{R}^{2mn\times 2mn}\), respectively, are symmetric tridiagonal and diagonal matrices comprising data derivatives and their functions. \(\mu\in\mathbb{R}^{2mn\times 1}\) contains the data residuals. \(M_{l2}\in\mathbb{R}^{2mn\times 2mn}\) contains the functions of the EPR and the mechanical constancy weight. \(b_{s2}\in\mathbb{R}^{2mn\times 1}\) denotes the adaptive regularization vector.
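To make the role of the new constraint concrete, the sketch below evaluates the sample-wise mechanical penalty of Eq. (7) for given displacement fields and an EPR map. It is illustrative only: in MechSOUL this term is folded into the sparse system of Eq. (8) rather than evaluated in isolation, and the simple finite-difference strains used here are an assumption of the sketch.

```python
# Sample-wise mechanical penalty of Eq. (7): alpha3 * (s_xx + nu * s_yy)^2.
import numpy as np

def mechanical_penalty(axial_disp, lateral_disp, nu_map, alpha3):
    s_yy = np.gradient(axial_disp, axis=0)     # axial strain  (d a / d y)
    s_xx = np.gradient(lateral_disp, axis=1)   # lateral strain (d l / d x)
    violation = s_xx + nu_map * s_yy           # zero when s_xx = -nu * s_yy exactly
    return alpha3 * np.sum(violation ** 2)
```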
### _Mechanically-constrained \(L1\)-Soul (\(L1\)-MechSOUL)_
\(L1\)-MechSOUL is developed as the \(L1\) version of MechSOUL. \(L1\)-MechSOUL modifies the cost function of \(L1\)-SOUL by adding the \(L1\)-norm of the aforementioned mechanical constraint. As described in Eq. 6, the \(L1\)-norm is defined in terms of a differentiable approximation of the absolute value function. The \(L1\)-MechSOUL penalty function is given by:
\[C_{l1m}(\Delta a_{1,1},...,\Delta a_{m,n},\Delta l_{1,1},..., \Delta l_{m,n})= \tag{9}\] \[C_{l_{1}}(\Delta a_{1,1},...,\Delta a_{m,n},\Delta l_{1,1},..., \Delta l_{m,n})+\] \[\sum_{j=1}^{n}\sum_{i=1}^{m}\alpha_{3s}\sqrt{[(\partial_{x}l)_{i,j }+\nu_{i,j}(\partial_{y}a)_{i,j}]^{2}+\eta_{m}^{2}}\]
where \(\alpha_{3s}\) and \(\eta_{m}\) are mechanical and sharpness parameters, respectively. Optimizing \(C_{l1m}\) in the same fashion as [45] leads to:
\[(H+D_{l1}+D_{2l1}+M_{l1})\Delta d_{l1}=H_{1}\mu-(D_{l1}+D_{2l1}+M_{l1})d+b_{s1} \tag{10}\]
where \(\Delta d_{l1}\in\mathbb{R}^{2mn\times 1}\) stacks the refinement displacements. \(D_{l1}\) and \(D_{2l1}\), respectively, are sparse matrices of size \(2mn\times 2mn\) containing functions of first- and second-order continuity weights. \(M_{l1}\in\mathbb{R}^{2mn\times 2mn}\) consists of the functions of the EPR and the mechanical parameter. \(b_{s1}\in\mathbb{R}^{2mn\times 1}\) denotes the adaptive regularization vector.
Both MechSOUL and \(L1\)-MechSOUL initialize the EPRs with the organ- or material-specific nominal value of the PR (e.g., 0.3 for liver). The subsequent iterations update each sample's EPR using \(\nu_{i,j}=-(s_{xx,i,j})/(s_{yy,i,j})\), where \(s_{xx}\) and \(s_{yy}\) are lateral and axial strains calculated in the previous iteration.
The estimated fine-tuning displacement fields are added to the DP initial guesses to obtain the final displacements, which are spatially differentiated using a least-squares technique to estimate the axial and lateral strain fields. Fig. 1 illustrates methodical differences among SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL.
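The overall iterative scheme can be summarized as in the sketch below, where `solve_displacements` is a placeholder for the regularized solve of Eq. (8) or Eq. (10), `np.gradient` stands in for the least-squares strain estimator, and clipping the EPR to [0.2, 0.5] reflects the typical range mentioned above but is an assumption of this sketch rather than part of the published algorithms.

```python
# High-level sketch of the iterative EPR scheme (placeholders and assumptions noted above).
import numpy as np

def run_mech_elastography(I1, I2, a0, l0, solve_displacements,
                          nu_nominal=0.3, n_iters=3, eps=1e-6):
    nu = np.full(a0.shape, nu_nominal)                 # initialize with the nominal PR
    a, l = a0, l0
    for _ in range(n_iters):
        a, l = solve_displacements(I1, I2, a, l, nu)   # mechanically-constrained solve
        s_yy = np.gradient(a, axis=0)                  # axial strain
        s_xx = np.gradient(l, axis=1)                  # lateral strain
        nu = np.clip(-s_xx / (s_yy + eps), 0.2, 0.5)   # per-sample EPR update
    return a, l, s_yy, s_xx, nu
```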
### _Ultrasound Simulation and Data Acquisition_
#### Iii-E1 Hard-inclusion Simulated Phantom
A homogeneous tissue phantom containing a stiff cylindrical inclusion was simulated, setting the background and inclusion elastic moduli to 20 kPa and 40 kPa, respectively. Both the background and target PRs were set to 0.49. The aforementioned phantom was compressed by \(2\%\) using the finite-element (FEM) package ABAQUS (Providence, RI), and the pre- and post-deformed RF frames were simulated with Field II [47]. The center and sampling frequencies were set to 5 MHz and 50 MHz, respectively.
#### Iii-E2 Multi-inclusion Simulated Phantom
A tissue phantom containing three hard inclusions with different elasticities was simulated. While both the background and inclusion PRs were set to 0.49, Young's moduli corresponding to the background and the three inclusions were fixed at 20 kPa, 40 kPa, 60 kPa, and 80 kPa. Non-uniaxial displacement profiles were created using ABAQUS in two ways: 1) imposing an additional condition that set the lateral displacement of the phantom's left boundary to zero, and 2) deforming the phantom asymmetrically using a surface traction load containing both axial and lateral components. The pre- and post-compressed RF frames were simulated with Field II [47], setting the center frequency and the temporal sampling rate to 7.27 MHz and 40 MHz, respectively.
#### Iii-E3 Simulated Phantom with Different PRs
A phantom with the same background and target elasticity moduli (20 kPa) but different Poisson's ratios (0.45 for background and 0.25 for target) was simulated and compressed using ABAQUS. The RF frames were simulated with Field II using the same imaging setting as the multi-inclusion phantom.
#### Iii-E4 Real Breast Phantom
The experimental phantom data were collected at Concordia University's PERFORM Centre. A hand-held L3-12H linear array probe was used to compress a Zerdine-made CIRS breast phantom (Model 059). The Young's modulus of the soft tissue-like material was \(20\pm 5\) kPa, whereas the inclusion was at least twice as hard as the background. An Alpinion E-cube R12 research ultrasound system was employed to acquire RF data from the phantom while it was deformed. The transmit frequency and the temporal sampling rate, respectively, were fixed at 10 MHz and 40 MHz.
#### Iii-E5 In vivo Liver Cancer Datasets
The _in vivo_ experiments were conducted at the Johns Hopkins Hospital (Baltimore, MD), where three cancer patients' livers undergoing open-surgical RF thermal ablation were compressed using a hand-held VF 10-5 linear array probe. While the livers were deformed, time-series RF data were collected with an Antares Siemens research ultrasound machine setting the center and sampling frequencies to \(6.67\) MHz and \(40\) MHz, respectively. The Institutional Review Board approved this study, and written consent was obtained from all patients. Interested readers can find more details of this experiment in [48].
### _Parameter Selection_
The two proposed techniques' performances were compared with those of NCC, NCC refined by partial differential equation (PDE)-based technique (NCC + PDE) [49], SOUL, and \(L1\)-SOUL.
| | Axial | Lateral | EPR |
|---|---|---|---|
| NCC | \(2.1\times 10^{-3}\) | \(1.3\times 10^{-2}\) | 0.65 |
| NCC + PDE | \(6.93\times 10^{-4}\) | \(3.6\times 10^{-3}\) | 0.18 |
| SOUL | \(5.83\times 10^{-4}\) | \(5.7\times 10^{-3}\) | 0.30 |
| \(L1\)-SOUL | \(5.72\times 10^{-4}\) | \(1.16\times 10^{-2}\) | 0.58 |
| MechSOUL | \(8.42\times 10^{-4}\) | \(1.7\times 10^{-3}\) | 0.09 |
| \(L1\)-MechSOUL | \(\mathbf{5.17\times 10^{-4}}\) | \(\mathbf{1.5\times 10^{-3}}\) | **0.08** |

TABLE V: RMSE for the different PR simulated phantom.
| | Axial | Lateral | EPR |
|---|---|---|---|
| NCC | \(3.2\times 10^{-3}\) | \(2.59\times 10^{-2}\) | 1.37 |
| NCC + PDE | \(1.7\times 10^{-3}\) | \(9.4\times 10^{-3}\) | 0.52 |
| SOUL | \(8.61\times 10^{-4}\) | \(7.4\times 10^{-3}\) | 0.39 |
| \(L1\)-SOUL | \(7.36\times 10^{-4}\) | \(1.29\times 10^{-2}\) | 0.68 |
| MechSOUL | \(7.41\times 10^{-4}\) | \(\mathbf{1.1\times 10^{-3}}\) | **0.05** |
| \(L1\)-MechSOUL | \(\mathbf{7.35\times 10^{-4}}\) | \(\mathbf{1.1\times 10^{-3}}\) | 0.06 |

TABLE I: RMSE for the hard-inclusion simulated phantom. The best values are highlighted in bold.
TABLE IV: PSNR (dB) for the multi-inclusion simulated phantom with an additional lateral boundary condition.
| | Axial | Lateral | EPR |
|---|---|---|---|
| NCC | 49.94 | 31.73 | -2.74 |
| NCC + PDE | 55.18 | 40.51 | 5.73 |
| SOUL | 61.30 | 42.62 | 8.08 |
| \(L1\)-SOUL | 62.66 | 37.82 | 3.33 |
| MechSOUL | 62.60 | 59.29 | **25.23** |
| \(L1\)-MechSOUL | **62.67** | **59.39** | 25.07 |

TABLE II: PSNR (dB) for the hard-inclusion simulated phantom.
| | Axial | Lateral | EPR |
|---|---|---|---|
| NCC | \(3.6\times 10^{-3}\) | \(4.8\times 10^{-2}\) | 2.41 |
| NCC + PDE | \(2.4\times 10^{-3}\) | \(1.7\times 10^{-2}\) | 0.93 |
| SOUL | \(2.2\times 10^{-3}\) | \(1.2\times 10^{-2}\) | 0.61 |
| \(L1\)-SOUL | \(\mathbf{1.9\times 10^{-3}}\) | \(1.4\times 10^{-2}\) | 0.77 |
| MechSOUL | \(2\times 10^{-3}\) | \(1.8\times 10^{-3}\) | **0.09** |
| \(L1\)-MechSOUL | \(\mathbf{1.9\times 10^{-3}}\) | \(\mathbf{1.7\times 10^{-3}}\) | **0.09** |

TABLE III: RMSE for the multi-inclusion simulated phantom with an additional lateral boundary condition.
It is worth mentioning that we implemented both NCC and NCC + PDE in this work for comparison purposes. As predecessors of MechSOUL and \(L1\)-MechSOUL, both SOUL and \(L1\)-SOUL have been used as comparison benchmarks to assess the impacts of the proposed techniques.
The RF frames were upsampled by a factor of 3 using the MATLAB function _imresize_ for the implementation of NCC. The optimal window length and overlap, respectively, were determined as \(15\lambda(=3\times 5\lambda)\) and \(86\%\) by manually tuning NCC's performance on a validation set of input frames. The optimal parameter values obtained from the validation frames were used for generating the results for the test frame sets, which are reported in this paper. The estimated axial and lateral displacement fields were resized back to the RF frames' original size, and the displacement estimates were scaled by a factor of \(\frac{1}{3}\). As suggested in [49], the ratio of the lateral and axial fidelity weights was set to 100 for the PDE-based refinement technique.
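For reference, the sketch below shows a window-based NCC baseline in its simplest form: each reference window of the pre-deformed frame is matched against a small search neighborhood in the post-deformed frame, and the integer offset with the highest normalized cross-correlation is kept. The window size, step, and search range are illustrative defaults rather than the tuned values reported above, and subsample refinement is omitted.

```python
# Simplified window-based NCC tracking (integer-pixel offsets only).
import numpy as np

def ncc(w1, w2):
    w1 = w1 - w1.mean()
    w2 = w2 - w2.mean()
    denom = np.sqrt((w1 ** 2).sum() * (w2 ** 2).sum()) + 1e-12
    return float((w1 * w2).sum() / denom)

def ncc_displacement(I1, I2, win=(64, 8), step=(8, 4), search=(10, 4)):
    row_starts = range(search[0], I1.shape[0] - win[0] - search[0], step[0])
    col_starts = range(search[1], I1.shape[1] - win[1] - search[1], step[1])
    axial = np.zeros((len(row_starts), len(col_starts)))
    lateral = np.zeros_like(axial)
    for r, i in enumerate(row_starts):
        for c, j in enumerate(col_starts):
            ref = I1[i:i + win[0], j:j + win[1]]
            best, best_uv = -np.inf, (0, 0)
            for u in range(-search[0], search[0] + 1):       # axial search
                for v in range(-search[1], search[1] + 1):   # lateral search
                    cand = I2[i + u:i + u + win[0], j + v:j + v + win[1]]
                    score = ncc(ref, cand)
                    if score > best:
                        best, best_uv = score, (u, v)
            axial[r, c], lateral[r, c] = best_uv
    return axial, lateral
```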
The tunable parameters of SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL were optimized for simulated, phantom, and _in vivo_ datasets using validation sets of RF frames according to a cross-validation strategy to avoid any bias and data leakage. The axial and lateral strain images for a large range of possible parameter values were generated.
| | Axial | Lateral | EPR |
|---|---|---|---|
| NCC | 53.50 | 37.74 | 3.79 |
| NCC + PDE | 63.19 | 48.83 | 15.15 |
| SOUL | 64.68 | 44.88 | 10.52 |
| \(L1\)-SOUL | 64.85 | 38.71 | 4.79 |
| MechSOUL | 61.49 | 55.59 | 21.32 |
| \(L1\)-MechSOUL | **65.73** | **56.53** | **22.09** |

TABLE VI: PSNR (dB) for the different PR simulated phantom.
Fig. 2: Results for the hard-inclusion simulated phantom. Rows 1 and 2 show the axial and lateral strains, respectively, whereas, row 3 shows the EPR maps. Columns 1 to 7 correspond to ground truth, NCC, NCC + PDE, SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL, respectively.
The best parameter set was chosen by visually assessing the strain images' contrast, background smoothness, and boundary sharpness. This optimal parameter set was used to produce the results for test images, which are reported in this work. The optimal values for \(\{\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},w,\gamma\}\) and \(\{\alpha_{1s},\alpha_{2s},\beta_{1s},\beta_{2s},w_{f},w_{s},\gamma_{s}\}\) have been shown in Tables I and II of the Supplementary Video. For simulated, phantom, and _in vivo_ datasets, respectively, the sharpness controlling parameter \(\eta\) was set to 0.001, 0.0006, and 0.008 for the first-order terms and 0.0005, 0.0001, and 0.0013 for the second-order terms. The mechanical constancy weights \(\{\alpha_{3},\alpha_{3s}\}\) were set to \(\{20,0.045\}\), \(\{80,0.072\}\), and \(\{5,0.1\}\) for simulated, phantom, and _in vivo_ datasets. \(\eta_{m}\) was fixed at 0.001, 0.0006, and 0.008, respectively, for the simulated, phantom, and _in vivo_ datasets.
### _Quantitative Metrics_
The ground truth being available, the simulation results have been assessed using root-mean-square error (RMSE) and the peak signal-to-noise ratio (PSNR). RMSE is defined as:
\[\text{RMSE}=\sqrt{\frac{\sum\limits_{j=1}^{n}\sum\limits_{i=1}^{m}(\hat{q}_{i, j}-q_{i,j})^{2}}{mn}} \tag{11}\]
where \(\hat{q}_{i,j}\) and \(q_{i,j}\) denote the estimated and ground truth values (either strain or EPR) at \((i,j)\). For both simulated and real datasets, elastographic signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) have also been reported. SNR and CNR are given by:
\[\text{SNR}=\frac{\bar{s_{b}}}{\sigma_{b}}\qquad\text{CNR}=\frac{C}{N}=\sqrt{ \frac{2(\bar{s_{b}}-\bar{s_{t}})^{2}}{{\sigma_{b}}^{2}+{\sigma_{t}}^{2}}} \tag{12}\]
where \(\bar{s_{b}}\) and \(\bar{s_{t}}\) refer to the mean and \(\sigma_{b}\) and \(\sigma_{t}\) denote the standard deviations of the background and target windows, respectively.
| | SNR Axial | SNR Lateral | SNR EPR | CNR Axial | CNR Lateral | CNR EPR |
|---|---|---|---|---|---|---|
| NCC | 7.61 ± 2.28 | 0.27 ± 0.31 | 0.28 ± 0.31 | 4.04 ± 1.14 | 0.34 ± 0.26 | 0.39 ± 0.29 |
| NCC + PDE | 18.04 ± 4.75 | 1.15 ± 1.11 | 1.20 ± 1.17 | 10.07 ± 2.14 | 1.07 ± 0.88 | 0.91 ± 0.92 |
| SOUL | 45.32 ± 9.68 | 2.15 ± 1.56 | 2.16 ± 1.57 | 22.28 ± 3.61 | 1.27 ± 0.76 | 0.57 ± 0.69 |
| \(L1\)-SOUL | **61.42** ± 22.90 | 1.14 ± 1.04 | 1.14 ± 1.03 | 26.38 ± 3.99 | 0.73 ± 0.66 | 0.76 ± 0.60 |
| MechSOUL | 51.40 ± 12.78 | **39.84** ± 12.41 | **44.88** ± 13.62 | 26.20 ± 4.67 | **13.01** ± 4.33 | **3.48** ± 2.48 |
| \(L1\)-MechSOUL | 60.59 ± 21.20 | 37.72 ± 13.73 | 43.16 ± 19.91 | **27.13** ± 4.10 | 12.67 ± 4.09 | 3.27 ± 2.40 |

TABLE VII: SNR and CNR values for the hard-inclusion simulated phantom dataset. The best values are highlighted in bold.
Fig. 3: Results for the multi-inclusion simulated phantom with an additional lateral boundary condition. Rows 1 and 2 depict the axial and lateral strains, respectively, whereas, row 3 presents the EPR maps. Columns 1 to 7, respectively, correspond to ground truth, NCC, NCC + PDE, SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL.
Other metrics used in the beamforming community can also be used for quantitative comparisons [50, 51, 52].
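The metrics of Eqs. (11)-(12) reduce to a few lines of NumPy; in this sketch the caller supplies the background and target windows (3 mm \(\times\) 3 mm regions in our experiments).

```python
# Quantitative metrics of Eqs. (11)-(12) for strain or EPR maps.
import numpy as np

def rmse(estimate, ground_truth):
    return float(np.sqrt(np.mean((estimate - ground_truth) ** 2)))

def elastographic_snr(background_window):
    return float(background_window.mean() / background_window.std())

def elastographic_cnr(background_window, target_window):
    contrast = 2.0 * (background_window.mean() - target_window.mean()) ** 2
    noise = background_window.var() + target_window.var()
    return float(np.sqrt(contrast / noise))
```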
## III Results
Calculating the SNR on a single background window and the CNR between a target-background window pair is a common practice in quasi-static ultrasound elastography papers. Nevertheless, elastographic SNR and CNR are highly sensitive to window selection; therefore, single values often fail to quantify the differences among different techniques' performance properly. To tackle this issue, we sweep two 3 mm \(\times\) 3 mm spatial windows over the background and the target and calculate 50 SNR (50 background windows) and 150 CNR (3 target and 50 background windows) values. Finally, we summarize the quantitative performance by showing the box plots, mean, and standard deviations of the aforementioned SNR and CNR values.
Substantial improvements in lateral strain and EPR are the main strengths of the proposed algorithms. Therefore, the axial strain images, which are less attractive in this work, are shown in the Supplemental Video for most of the datasets.
Fig. 4: Results for the simulated phantom with different target and background PRs. Rows 1-3 show the axial and lateral strains and the EPR maps, respectively. Columns 1-7 correspond to FEM, NCC, NCC+PDE, SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL, respectively.
| | SNR Axial | SNR Lateral | SNR EPR | CNR Axial | CNR Lateral | CNR EPR |
|---|---|---|---|---|---|---|
| NCC | 4.99 ± 3.51 | 0.13 ± 0.31 | 0.14 ± 0.28 | 2.02 ± 1.32 | 0.23 ± 0.17 | 0.03 ± 0.01 |
| NCC + PDE | 14.54 ± 7.11 | 1.77 ± 1.37 | 1.91 ± 1.55 | 9.39 ± 5.80 | 1.49 ± 1.10 | 0.86 ± 0.80 |
| SOUL | 14.94 ± 5.78 | -0.44 ± 2.87 | -0.30 ± 2.51 | 11.33 ± 4.74 | 2.39 ± 2.34 | 2.25 ± 1.59 |
| \(L1\)-SOUL | 15.48 ± 5.19 | -0.23 ± 2.03 | -0.24 ± 2.00 | 13.25 ± 4.63 | 1.07 ± 1.35 | 0.71 ± 0.81 |
| MechSOUL | 15.96 ± 5.33 | 14.66 ± 7.65 | 22.00 ± 9.01 | 11.70 ± 4.69 | 7.16 ± 6.85 | 5.98 ± 2.90 |
| \(L1\)-MechSOUL | **17.69** ± 5.80 | **16.72** ± 7.99 | **31.09** ± 14.92 | **14.14** ± 5.02 | **8.93** ± 6.20 | 6.28 ± 2.82 |

TABLE VIII: SNR and CNR values for the experimental phantom dataset. Physically impossible values are highlighted in red.
### _Hard-inclusion Simulated Phantom Dataset_
Fig. 2 describes that all six techniques successfully distinguish the hard inclusion from the uniform background. NCC produces the noisiest axial strain image. The PDE-based refinement technique substantially improves the output of NCC. \(L1\)-SOUL and \(L1\)-MechSOUL obtain sharper axial strain images than the other four techniques. NCC, SOUL, and \(L1\)-SOUL fail to produce acceptable lateral strain images. However, NCC + PDE turns the lateral estimate of NCC into an acceptable one. The proposed techniques MechSOUL and \(L1\)-MechSOUL generate high-quality lateral strain maps. Although both MechSOUL and \(L1\)-MechSOUL show good target-background contrast, \(L1\)-MechSOUL exploits the power of \(L1\)-norm regularization to obtain a sharper lateral strain image. The RMSE and PSNR values reported in Tables I and II indicate substantially higher resemblance of the proposed techniques to the ground truth than NCC, NCC + PDE, SOUL, and \(L1\)-SOUL. In addition, the SNR and CNR box plots (Figs. 8 and 9) and the associated mean and standard deviation values (Table VII) demonstrate that MechSOUL and \(L1\)-MechSOUL substantially outperform the other algorithms in terms of lateral strain estimates.
The EPR maps depicted in Fig. 2 reveal that NCC, NCC + PDE, SOUL, and \(L1\)-SOUL estimate many EPR samples that are beyond the physically possible range (also see Fig. 2 of the Supplemental Video). MechSOUL and \(L1\)-MechSOUL resolve this issue by estimating EPR maps similar to the ground truth. The EPR RMSE (Table I), PSNR (Table II), SNR, and CNR (see Fig. 6 of the Supplementary Video and Table VII) substantiate our qualitative assessment.
### _Multi-inclusion Simulated Phantom_
The strain and EPR maps for the multi-inclusion simulated phantom data with an additional lateral boundary condition and surface traction-type loading are reported in Fig. 3 and Fig. 3 of the Supplementary Video, respectively. All six techniques detect the axial strain contrast between the background and the inclusions. The PDE-based technique refines NCC's axial estimate to reduce the noise. Due to the TV regularization, \(L1\)-SOUL and \(L1\)-MechSOUL obtain sharper axial strain images than SOUL and MechSOUL.
| | SNR Axial | SNR Lateral | SNR EPR | CNR Axial | CNR Lateral | CNR EPR |
|---|---|---|---|---|---|---|
| NCC | 3.50 ± 1.57 | 0.46 ± 0.38 | 0.33 ± 0.34 | 2.62 ± 1.45 | 0.32 ± 0.27 | 0.30 ± 0.31 |
| NCC + PDE | 11.16 ± 6.06 | 2.22 ± 1.58 | 2.48 ± 1.96 | 8.56 ± 4.98 | 1.11 ± 0.74 | 1.84 ± 1.01 |
| SOUL | 22.90 ± 9.22 | 5.15 ± 3.74 | 4.98 ± 3.80 | 16.53 ± 6.04 | 2.64 ± 1.59 | 3.39 ± 1.94 |
| \(L1\)-SOUL | **33.14** ± 13.13 | 6.35 ± 2.66 | 6.43 ± 2.93 | **22.36** ± 7.08 | 3.69 ± 2.15 | 1.83 ± 1.31 |
| MechSOUL | 21.27 ± 8.02 | 18.01 ± 9.11 | 21.16 ± 11.00 | 16.35 ± 6.16 | 8.23 ± 2.95 | 15.20 ± 4.93 |
| \(L1\)-MechSOUL | 31.85 ± 13.11 | **36.01** ± 17.21 | **30.38** ± 7.72 | 20.71 ± 6.50 | **12.01** ± 3.60 | **16.65** ± 4.78 |

TABLE IX: SNR and CNR values for the first liver cancer dataset.
Fig. 5: Results for the experimental breast phantom. Rows 1 and 2 show the lateral strain images and the estimated EPR maps, respectively, whereas columns 1 to 7 correspond to B-mode, NCC, NCC + PDE, SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL, respectively.
NCC, NCC + PDE, SOUL, and \(L1\)-SOUL fail to render satisfactory lateral strain and EPR maps. MechSOUL and \(L1\)-MechSOUL produce high-quality lateral strain maps showing proper contrast among the four (background and three inclusions) elastic regions in both loading conditions. The EPR maps generated by MechSOUL and \(L1\)-MechSOUL also correspond well with the ground truths. Tables III and IV and Tables III and IV of the Supplementary Video validate this statement quantitatively. Given the difficulty level of the datasets, this experiment highlights the potential of MechSOUL and \(L1\)-MechSOUL in simultaneous imaging of axial and lateral strains and the EPR.
### _Simulated Phantom with Different PRs_
Fig. 4 demonstrates that all competing techniques generate good-quality uniform axial strain images. However, NCC, SOUL, and \(L1\)-SOUL fail to visualize the inclusion in the lateral strain images and the EPR maps. NCC + PDE refines NCC estimates to generate good lateral strain and EPR maps. MechSOUL and \(L1\)-MechSOUL lateral strains do not follow the uniform axial strains blindly and properly delineate the inclusions. Although MechSOUL and \(L1\)-MechSOUL EPR maps do not replicate the ground truth fully, they are substantially better than the comparison techniques.
| | SNR Axial | SNR Lateral | SNR EPR | CNR Axial | CNR Lateral | CNR EPR |
|---|---|---|---|---|---|---|
| NCC | 5.77 ± 1.58 | 0.75 ± 0.29 | 0.73 ± 0.28 | 2.83 ± 1.23 | 0.67 ± 0.39 | 0.46 ± 0.27 |
| NCC + PDE | 14.20 ± 6.49 | 2.25 ± 0.72 | 2.41 ± 0.78 | 5.68 ± 2.96 | 1.95 ± 0.89 | 1.34 ± 0.73 |
| SOUL | 35.18 ± 21.45 | 3.02 ± 4.25 | 3.38 ± 4.92 | **11.36** ± 3.60 | 3.30 ± 2.75 | 2.69 ± 2.36 |
| \(L1\)-SOUL | **57.63** ± 69.73 | 7.70 ± 9.39 | 6.92 ± 10.23 | 10.92 ± 4.51 | 4.62 ± 2.89 | 2.99 ± 1.66 |
| MechSOUL | 31.80 ± 21.87 | 21.63 ± 13.24 | 29.36 ± 15.84 | 10.55 ± 4.76 | 6.20 ± 3.44 | 4.01 ± 1.52 |
| \(L1\)-MechSOUL | 49.59 ± 56.93 | **38.59** ± 37.35 | **97.66** ± 74.50 | 10.29 ± 4.54 | **7.28** ± 3.37 | **4.86** ± 1.83 |

TABLE X: SNR and CNR values for the second liver cancer dataset.
| | SNR Axial | SNR Lateral | SNR EPR | CNR Axial | CNR Lateral | CNR EPR |
|---|---|---|---|---|---|---|
| NCC | 2.59 ± 1.13 | 0.19 ± 0.35 | 0.12 ± 0.32 | 0.99 ± 0.69 | 0.22 ± 0.15 | 0.03 ± 0.03 |
| NCC + PDE | 5.93 ± 3.09 | 0.48 ± 0.77 | 0.45 ± 0.81 | 2.17 ± 1.69 | 0.79 ± 0.43 | 0.56 ± 0.33 |
| SOUL | 40.22 ± 26.19 | -2.20 ± 6.99 | -1.70 ± 6.57 | 2.35 ± 1.66 | 2.29 ± 1.97 | 2.27 ± 1.91 |
| \(L1\)-SOUL | **77.68** ± 58.70 | 5.41 ± 5.89 | 5.49 ± 6.13 | **14.63** ± 4.33 | 1.23 ± 0.58 | 1.50 ± 0.40 |
| MechSOUL | 66.31 ± 51.25 | 36.96 ± 23.85 | 57.23 ± 60.37 | 9.54 ± 3.87 | 6.23 ± 1.22 | 4.14 ± 0.63 |
| \(L1\)-MechSOUL | 76.94 ± 56.67 | **62.54** ± 61.59 | **123.32** ± 136.16 | 13.82 ± 3.73 | **8.82** ± 1.24 | **5.18** ± 0.65 |

TABLE XI: SNR and CNR values for the third liver cancer dataset. Impractical values are highlighted in red.
Fig. 6: Lateral strain results for the liver datasets. Rows 1, 2, and 3 show the lateral strain estimates for patients 1, 2, and 3, respectively. Columns 1 to 7 in each row correspond to B-mode, NCC, NCC + PDE, SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL, respectively. (a) Color bar (patient 1) for NCC, NCC + PDE, SOUL, and \(L1\)-SOUL (b) Color bar (patient 1) for MechSOUL and \(L1\)-MechSOUL (c) Color bar (patient 2) for NCC, NCC + PDE, SOUL, and \(L1\)-SOUL (d) Color bar (patient 2) for MechSOUL and \(L1\)-MechSOUL (e) Color bar (patient 3) for NCC and NCC + PDE (f) Color bar (patient 3) for SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL.
The RMSE and PSNR values reported in Tables V and VI substantiate our statements.
### _Real Breast Phantom Dataset_
The axial and lateral strain and the EPR results for the experimental breast phantom are shown in Fig. 4 of the Supplemental Video and Fig. 5 of the current document, respectively. All six axial strain images detect the hard inclusion. However, NCC's axial estimate lacks smoothness in the background. NCC + PDE resolves this issue at the cost of visual contrast between the inclusion and the uniform background. The axial strain images obtained by SOUL and \(L1\)-SOUL are superior to those by NCC-based techniques. MechSOUL and \(L1\)-MechSOUL axial strain estimates, respectively, marginally outperform the ones generated by SOUL and \(L1\)-SOUL. The total variation (TV) regularization-based techniques \(L1\)-SOUL and \(L1\)-MechSOUL render sharper axial strain images than the other algorithms. NCC, SOUL, and \(L1\)-SOUL produce noisy lateral strain images with unacceptable target-background contrast. In addition, large spatial regions exhibit lateral strains that are out of physical range when compared to axial strains. PDE refines NCC's lateral result to reduce the noise and visualize the inclusion. MechSOUL and \(L1\)-MechSOUL successfully estimate high-contrast lateral strain maps with smooth backgrounds and substantially outperform the other four techniques. Note that the lateral strain image provided by \(L1\)-MechSOUL is visually sharper than the one obtained by MechSOUL. The quantitative metric values reported in Figs. 8 and 9 and Table VIII corroborate our visual judgement.
Fig. 5, Table VIII, and Fig. 6 of the Supplementary Video demonstrate that NCC, SOUL, and \(L1\)-SOUL fail to produce viable EPR distribution. Although NCC + PDE performs better than NCC, it still contains a noticeable amount of EPR samples which are practically impossible. MechSOUL and \(L1\)-MechSOUL successfully restrict the EPR values to the physically possible range and exhibit higher EPR values around the inclusion than the uniform regions.
### _In vivo Liver Cancer Datasets_
Fig. 5 of the Supplemental Video and Fig. 6 of the current document, respectively, depict the axial and lateral strain results for the liver cancer datasets collected before the ablation. The B-mode image for patient 1 reveals the tumor by showing a lower echo amplitude than the healthy tissue. However, the target-background echogenic contrasts for the other two patients' B-mode images are negligible.
The axial strain images clearly distinguish the tumor and healthy tissue for all three patient cases. Similar to the _in silico_ and phantom cases, NCC obtains the noisiest axial strain images. The PDE-based refining step resolves this issue of
Fig. 7: EPR estimates for the liver datasets. Rows 1, 2, and 3 show the results for patients 1, 2, and 3, respectively. Columns 1 to 6 correspond to NCC, NCC + PDE, SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL, respectively.
NCC and highlights the important details of the strain images. SOUL and MechSOUL outperform NCC + PDE in terms of background smoothness and the clarity of strain estimation in the shallow tissue region. The TV-regularization feature of \(L1\)-SOUL and \(L1\)-MechSOUL enables them to estimate substantially sharper axial strain than SOUL and MechSOUL for patients 1 and 2. In the case of the third liver patient, \(L1\)-SOUL and \(L1\)-MechSOUL obtain brighter axial strain images than the other techniques. In general, it is visually evident that the axial strain imaging performance of MechSOUL and \(L1\)-MechSOUL is similar to that of SOUL and \(L1\)-SOUL, respectively. The box plots reported in Figs. 8 and 9 and the mean and standard deviation values (Tables IX, X, and XI) substantiate this observation.
NCC fails to produce acceptable lateral strain images in patients 1 and 3. However, it shows slight target-background contrast for patient 2. NCC + PDE notably improves the lateral estimates of NCC in all three patient cases. The lateral strain images for patients 1 and 2 obtained by SOUL and \(L1\)-SOUL show minimal contrast between the healthy and pathologic tissues. In addition, the estimated strains are markedly out of the feasible bound. Furthermore, they are highly corrupted by estimation noise. In the case of patient 3, both SOUL and \(L1\)-SOUL fail to generate appreciable lateral strain images. For all three patients, MechSOUL and \(L1\)-MechSOUL obtain high-contrast lateral strain maps and substantially outperform the other four algorithms. MechSOUL exhibits a horizontal striking artifact in patient 2's lateral strain images, which
Fig. 8: Box plots for 50 SNR values. Rows 1 and 2, respectively, correspond to axial and lateral, whereas columns 1 to 5 correspond to hard-inclusion simulated phantom, real phantom, and liver patients 1-3, respectively.
Fig. 9: Box plots for 150 CNR values. Rows 1 and 2 correspond to axial and lateral, respectively, whereas columns 1 to 5 correspond to hard-inclusion simulated phantom, real phantom, and liver patients 1, 2, and 3, respectively.
is removed by \(L1\)-MechSOUL. In addition, \(L1\)-MechSOUL yields sharper lateral strain estimates than MechSOUL in all patient cases. The SNR and CNR box plots (Figs. 8 and 9) and their mean and standard deviation values (Tables IX, X, and XI) align with our visual perception.
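The SNR and CNR values referenced throughout this section follow the standard strain-image quality definitions: the mean-to-standard-deviation ratio over a uniform window, and the contrast between target and background windows. A minimal sketch of how such values can be computed is given below; the function names, synthetic data, and window coordinates are illustrative and not part of the original implementation.

```python
import numpy as np

def strain_snr(strain, window):
    """SNR over a (presumably uniform) window of a strain image: mean / std."""
    r0, r1, c0, c1 = window
    roi = strain[r0:r1, c0:c1]
    return roi.mean() / roi.std()

def strain_cnr(strain, target_window, background_window):
    """CNR between a target (e.g., tumor) window and a background window."""
    t = strain[target_window[0]:target_window[1], target_window[2]:target_window[3]]
    b = strain[background_window[0]:background_window[1], background_window[2]:background_window[3]]
    return np.sqrt(2.0 * (b.mean() - t.mean()) ** 2 / (b.var() + t.var()))

# Example on a synthetic strain map containing a stiffer (lower-strain) inclusion.
rng = np.random.default_rng(0)
strain = 0.02 + 0.001 * rng.standard_normal((200, 200))   # ~2% background strain
strain[80:120, 80:120] *= 0.5                              # stiff inclusion
print(strain_snr(strain, (0, 40, 0, 40)))
print(strain_cnr(strain, (80, 120, 80, 120), (0, 40, 0, 40)))
```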
Fig. 7 demonstrates that NCC, SOUL, and \(L1\)-SOUL estimate physically impossible EPR maps. The PDE-based method substantially improves NCC estimates. The proposed techniques estimate smooth EPR maps with the individual EPR values confined to the practical range (also see Fig. 6 of the Supplementary Video and Tables IX-XI). MechSOUL and \(L1\)-MechSOUL yield higher tumor EPR for the first two patients and lower tumor EPR for the third patient. This opposing behavior of tumor EPRs might be related to the complicated deformation physics in patient 3 stemming from multiple blood vessels in the vicinity of the tumor.
## IV Discussion
The poor lateral displacement or strain estimation capability stemming from low data resolution in this direction is a well-known drawback of the existing ultrasound elastography techniques. Due to the imaging mechanism, ultrasound loses important information associated with the dimension perpendicular to the primary wave propagation. The existing strain imaging frameworks cannot make up for the lost lateral information and, therefore, end up providing lateral estimates substantially inferior to the axial ones. The techniques proposed herein incorporate the tissue deformation mechanics to couple the lateral strain to the axial one and compensate for the information lost by the imaging modality. As demonstrated in the validation examples, this coupled approach dramatically improves lateral strain imaging performance.
MechSOUL and \(L1\)-MechSOUL impose an EPR-driven relation between the axial and lateral strains along with data fidelity and spatial smoothness constraints. Employing the aforementioned mechanical constraint is not analogous to calculating the lateral strain as a multiple of the independently estimated axial strain, which is prone to mirroring the accurate axial estimates onto the less accurate lateral estimates. Instead, MechSOUL and \(L1\)-MechSOUL allow the lateral strains to deviate from the axial ones (see Fig. 4) depending on the underlying tissue properties, because the proposed techniques solve a unified optimization problem that investigates the mechanical and continuity constraints and the RF data simultaneously. In addition, this work iteratively updates each RF sample's EPR value.
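For reference, the sample-wise coupling referred to above follows directly from the definition of the effective Poisson's ratio; the exact penalty term of the MechSOUL and \(L1\)-MechSOUL cost functions is defined earlier in the paper, but in its simplest form the relation reads

\[\nu(i,j)=-\frac{\epsilon_{l}(i,j)}{\epsilon_{a}(i,j)}\quad\Longleftrightarrow\quad\epsilon_{l}(i,j)=-\,\nu(i,j)\,\epsilon_{a}(i,j),\]

where \(\epsilon_{a}\) and \(\epsilon_{l}\) denote the axial and lateral strains and \(\nu(i,j)\) is the EPR at sample \((i,j)\), updated iteratively during optimization (the symbols here are ours, chosen for illustration).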
The proposed techniques introduce new tunable parameters associated with the mechanical constancy terms. These mechanical parameters partly determine how strongly the estimated lateral strain follows the axial strain. On the one hand, a very high mechanical constancy weight might suppress the effect of data fidelity and force the lateral strain to follow the axial one blindly. On the other hand, a tiny parameter value restricts the impact of the mechanical constraint, defeating the very purpose of this study. Since the mechanical parameters are not correlated with the continuity ones, MechSOUL and \(L1\)-MechSOUL use the same continuity weights as SOUL and \(L1\)-SOUL, respectively, and tune only the newly-introduced parameters on validation images. In our experience, the optimal mechanical constancy weight is unrelated to the RF signal's SNR. While controlled by the material property and the deformation profile, a moderate value of the mechanical constancy parameter leads to a good estimation of the dis
Fig. 11: Individually tuned lateral strain results for the liver patient 3. (a) and (b) correspond to MechSOUL and \(L1\)-MechSOUL, respectively.
Fig. 10: Results for the first liver dataset after ablation. Rows 1 and 2 correspond to MechSOUL and \(L1\)-MechSOUL, respectively, whereas columns 1 to 4 correspond to B-mode, axial strain, lateral strain, and EPR, respectively.
placement fields. It is worth noting that tuning the mechanical parameters is not cumbersome since the proposed algorithms are not sensitive to reasonable alterations in their values, which is demonstrated in Fig. 7 of the Supplemental Video.
The proposed techniques' parameters were tuned on validation images different from the final test ones. Although parameter values are optimized for simulated, phantom, and _in vivo_ datasets, a single parameter set is used for all datasets of the same kind (i.e., the same parameter set for all liver datasets). Scatterer size and distribution, attenuation coefficient, imaging settings, noise statistics, and the deformation field's temporal behavior are the main deciding factors for the optimal set of continuity and mechanical weights. Since these properties are different for the different types of data used in this study, the optimal parameter values also vary from each other. Note that the simulated datasets employed in this work differ from the real phantom one in terms of both quantitative properties and imaging parameters, which leads to different sets of weights for simulated and real phantoms. Therefore, the parameters can be saved as presets in commercial ultrasound machines for imaging different organs such as thyroid, breast, _etc_. The proposed techniques exhibit good performance for all three simulation experiments (Figs. 2-4) using the same parameter values. A single parameter set leads to high-quality estimations in all _in vivo_ cases (Figs. 6 and 7 and Fig. 5 of the Supplementary Video) as well. To further justify our argument, we have conducted sensitivity analyses using two datasets: 1) the first liver patient (before ablation) but with different input frames from those in Fig. 6, and 2) the first liver patient after ablation. Note that the second dataset is an entirely new one collected from the first liver patient after a significant clinical procedure that alters the noise statistics. In addition, this dataset was not considered for tuning the parameters. Fig. 8 of the Supplementary Video and Fig. 10 demonstrate that MechSOUL and \(L1\)-MechSOUL perform well in both cases with the same parameter sets by properly delineating the tumor (before ablation) or coagulated tissue (after ablation).
Figs. 8 and 9 provide the opportunity to conduct a statistical test to determine whether the proposed techniques are significantly better than the existing ones. The comparison intervals of the group means obtained from the analysis of variance (ANOVA) followed by a multiple comparison statistical test are reported in Figs. 9 and 10 of the Supplemental Video. The intervals for SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL are close to each other in most axial cases. However, in the lateral cases, the MechSOUL and \(L1\)-MechSOUL comparison intervals yield significantly higher values than those of the existing techniques, reaffirming the proposed algorithms' superiority in lateral tracking.
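A hedged sketch of such an ANOVA-plus-multiple-comparison procedure, using standard Python statistics packages, is shown below; the group data are synthetic placeholders standing in for the per-method SNR samples of Fig. 8, and the exact test settings used in this work may differ.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Placeholder arrays standing in for the 50 SNR values per method.
groups = {name: rng.normal(loc, 1.0, size=50)
          for name, loc in [("SOUL", 5.0), ("L1-SOUL", 6.0),
                            ("MechSOUL", 8.0), ("L1-MechSOUL", 9.0)]}

f_stat, p_value = f_oneway(*groups.values())   # one-way ANOVA across methods

values = np.concatenate(list(groups.values()))
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
tukey = pairwise_tukeyhsd(endog=values, groups=labels, alpha=0.05)
print(f_stat, p_value)
print(tukey.summary())                          # pairwise group-mean comparison intervals
```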
The proposed \(L1\)-norm-based technique \(L1\)-MechSOUL exhibits sharper strain estimates than MechSOUL in the validation experiments presented in this study. It is worth noting that \(L1\)-norm regularization does not force a sharp strain map if the underlying strain map is not sharp. It can produce a sharp transition at the border of two organs where tissue properties change rapidly, and a smooth estimate where the changes in the underlying mechanical properties are gradual. In contrast, \(L2\)-norm regularization produces a smooth strain map even if the mechanical properties have a sharp transition.
Like \(L1\)-SOUL, \(L1\)-MechSOUL approximates the \(L1\)-norm with TVD, establishing a balance between smoothness and sharpness by penalizing the variation while simultaneously allowing sharp transitions. An alternating direction method of multipliers (ADMM)-based strategy can eliminate the requirement of the TVD approximation by optimizing the \(L1\)-norm's original formulation using the shrinkage function [53, 54], thereby utilizing the full potential of the \(L1\)-norm. ADMM offers this direct optimization feature at the cost of increased complexity and more sensitive parameter tuning. Therefore, ADMM-based optimization of \(L1\)-MechSOUL's penalty function will be explored in a future extension of this work.
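For reference, the shrinkage (soft-thresholding) function used in ADMM-style \(L1\) optimization has the closed form sketched below; this is a generic illustration rather than the specific update rule an ADMM version of \(L1\)-MechSOUL would use.

```python
import numpy as np

def shrinkage(x, kappa):
    """Soft-thresholding operator: the proximal map of kappa * |x|,
    i.e., sign(x) * max(|x| - kappa, 0), applied element-wise."""
    return np.sign(x) * np.maximum(np.abs(x) - kappa, 0.0)

print(shrinkage(np.array([-2.0, -0.3, 0.0, 0.4, 1.5]), kappa=0.5))
```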
The lateral strain estimation performance of the proposed techniques is substantially better than that of the existing techniques in all validation experiments conducted in this study. However, the MechSOUL and \(L1\)-MechSOUL lateral strain images for liver patient 3 are not as good as those for the other two liver patients. This performance degradation might stem from the complexity of the RF data acquired from patient 3. The field-of-view (FOV) contains blood flow through the annotated vessels, which introduces different types of noise into the RF data. In addition, being a combination of several vessels, healthy tissue, and tumor, the FOV exhibits high-variance distributions of elasticity and EPR. Furthermore, the tumor experiences complicated deformation physics since it is located underneath the easily-compressible portal vein. It is worth observing that both MechSOUL and \(L1\)-MechSOUL handle this challenging dataset promisingly and yield perceptible contrast among different tissues. As shown in Fig. 11, slightly better performance can be achieved when the strain imaging techniques' parameter sets are optimized specifically for this particular dataset. However, tuning the parameters for each dataset individually is not possible in the clinical context and affects the algorithms' generalizability.
Fig. 12 shows the lateral tracking performance of SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL for an additional phantom dataset. Despite being substantially outperformed by MechSOUL and \(L1\)-MechSOUL, SOUL and \(L1\)-SOUL generate reasonable lateral strain images in this experiment. We have also conducted a controlled experiment on the hard-inclusion simulated phantom, where the ground truth deformation field for \(4\%\) applied strain is obtained from FEM. The pre-deformed frame is generated by warping the Field II-simulated post-deformed RF data based on the ground truth displacements. Fig. 11 of the Supplementary Video shows that SOUL produces a good lateral strain image in this highly controlled environment where the applied strain is reasonably high. However, MechSOUL substantially outperforms SOUL in this case as well, demonstrating its strength in lateral tracking. These two experiments, in conjunction with the other validation experiments carried out in this work, demonstrate that SOUL produces reasonable lateral strain maps in a controlled and moderately high-strain scenario, whereas it often fails in realistic cases. Simultaneous exploitation of RF data and tissue deformation physics enables MechSOUL to resolve this issue by performing well in both controlled and realistic settings.
PDE-based refinement [49] is one of the comparison techniques used in this work. Duroy _et al._[55] also conducted a similar study in a recent work. Both of these techniques refine the initial axial and lateral estimates assuming tissue incompressibility. As demonstrated in this paper, the PDE-driven post-processing strategy improves the lateral estimation quality. However, the incompressibility constraint assumes the Poisson's ratio to be 0.5, which is not true for all biological tissues. In addition, the refinement techniques do not consider the RF data and the regularization constraints in a unified manner and are therefore vulnerable to failures of the initial estimation step. The proposed techniques MechSOUL and \(L1\)-MechSOUL investigate the data, continuity, and mechanical constraints simultaneously and update the EPR value at each sample iteratively to tackle the aforementioned issues.
Our recently accepted deep learning framework, self-supervised Physically Inspired ConsTraint for Unsupervised Regularized Elastography (sPICTURE) [42], serves as a strong competing technique for demonstrating MechSOUL and \(L1\)-MechSOUL's lateral tracking potential. Fig. 13 shows the sPICTURE lateral strain and EPR maps for the multi-inclusion simulated phantoms, the different-PR simulated phantom, and the first liver patient. For both multi-inclusion simulated phantoms, sPICTURE performs notably better than NCC, NCC+PDE, SOUL, and \(L1\)-SOUL. However, both MechSOUL and \(L1\)-MechSOUL substantially outperform sPICTURE in terms of contrast and resemblance to the ground truth. Note that the multi-inclusion phantoms contain non-uniaxial force, and sPICTURE was not trained for such a case during its development. Except for the red-marked outlier region, sPICTURE achieves performance similar to that of the proposed techniques in the case of the different-PR simulated phantom. The RMSE and PSNR values reported in Tables XII and XIII substantiate our visual assessments. sPICTURE shows a contrast between the tumor and the healthy tissue in the case of the first liver patient. Nevertheless, its lateral strain and EPR estimation quality is substantially lower than that of MechSOUL and \(L1\)-MechSOUL (also see Table XIV). This comparison against sPICTURE, a state-of-the-art deep learning-based lateral estimation technique, provides further evidence of MechSOUL and \(L1\)-MechSOUL's strength in lateral strain imaging.
The spatial distribution of EPR is directly correlated with tissue compressibility (i.e., ability to change the volume) [56, 57]. Specific pathologies such as cancer and lymphedema tend to alter the value of this mechanical parameter [56, 57]. In addition, compressibility often signifies the tissue's sensitivity to treatments or therapies [56]. Therefore, the EPR contrast between different regions can be used as a marker for tissue's pathologies or susceptibility to treatment. These potential applications of an EPR map make MechSOUL and \(L1\)-MechSOUL attractive for clinical translation since they substantially improve the EPR image quality.
\begin{table}
\begin{tabular}{l c c} \hline \hline & Lateral strain & EPR \\ \hline SNR & \(3.88\pm 1.88\) & \(2.74\pm 1.86\) \\ CNR & \(1.68\pm 1.43\) & \(3.44\pm 2.08\) \\ \hline \hline \end{tabular}
\end{table} TABLE XIV: SNR and CNR of sPICTURE for the first liver dataset.
Fig. 12: Lateral strain results for an additional phantom dataset. Columns 1 to 5 correspond to B-mode, SOUL, \(L1\)-SOUL, MechSOUL, and \(L1\)-MechSOUL, respectively.
Both PR and EPR range between 0 and 0.5 for uniform soft materials. An EPR greater than 0.5 in a uniform region under uniaxial compression corresponds to a negative bulk modulus, which is impossible in thermodynamic equilibrium. Therefore, validation results showing EPR values greater than 0.5 in uniform and uniaxial cases indicate possible errors in strain estimation.
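The thermodynamic argument can be made explicit through the isotropic linear-elastic relation between the bulk modulus \(K\), Young's modulus \(E\), and Poisson's ratio \(\nu\) (stated here for completeness; it is not part of this paper's derivation):

\[K=\frac{E}{3(1-2\nu)},\]

so \(\nu>0.5\) with \(E>0\) implies \(K<0\), i.e., a material that would expand under hydrostatic compression, which cannot exist in thermodynamic equilibrium.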
The success of the proposed algorithms is correlated with the accuracy of the EPR update. Although MechSOUL and \(L1\)-MechSOUL worked well in all validation experiments conducted in this work, their performance might degrade on a more challenging dataset where the EPR distribution evolves in the wrong direction. A potential solution to this problem is incorporating an EPR-independent, tensor geometry-driven mechanical constraint such as the compatibility condition. A recent work [58] has used the compatibility equation to improve lateral strain estimation. However, that work presents a post-processing algorithm that depends heavily on the initial axial and lateral estimation accuracy. Since the RF data and the mechanical constraint are not investigated simultaneously, this post-processing technique might fail in challenging scenarios like the different-PR phantom presented in this study. Simultaneous optimization of data and compatibility constraints in a direct strain imaging framework might resolve this issue.
## V Conclusion
Two novel algorithms, MechSOUL and \(L1\)-MechSOUL, have been proposed for high-accuracy lateral displacement estimation in ultrasonic strain imaging. MechSOUL and \(L1\)-MechSOUL, respectively, optimize \(L2\)- and \(L1\)-norm-based cost functions featuring mechanical as well as data similarity and spatial continuity constraints. The main contribution of the proposed techniques is emphasizing the EPR-inspired sample-wise mechanical congruence between the axial and lateral components of the strain tensor. Integrated optimization of mechanical and data fidelities leads to dramatic improvements in the lateral strain and EPR image quality, as demonstrated in the _in silico_, phantom, and _in vivo_ experiments conducted in this study.
## Acknowledgment
This work is funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). Md Ashikuzzaman holds PBEEE and B2X Doctoral Research Fellowships granted by the Fonds de Recherche du Quebec - Nature et Technologies (FRQNT). The purchase of the Alpinion ultrasound machine was partly funded by Dr. Louis G. Johnson Foundation. The authors thank Drs. E. Boctor, M. Choti, and G. Hager for allowing them to use the liver datasets and the anonymous reviewers for their constructive comments.
|
2309.06439 | Attention De-sparsification Matters: Inducing Diversity in Digital Pathology Representation Learning | We propose DiRL, a Diversity-inducing Representation Learning technique for histopathology imaging. Self-supervised learning techniques, such as contrastive and non-contrastive approaches, have been shown to learn rich and effective representations of digitized tissue samples with limited pathologist supervision. Our analysis of vanilla SSL-pretrained models' attention distribution reveals an insightful observation: sparsity in attention, i.e., models tend to localize most of their attention to some prominent patterns in the image. Although attention sparsity can be beneficial in natural images due to these prominent patterns being the object of interest itself, this can be sub-optimal in digital pathology; this is because, unlike natural images, digital pathology scans are not object-centric, but rather a complex phenotype of various spatially intermixed biological components. Inadequate diversification of attention in these complex images could result in crucial information loss. To address this, we leverage cell segmentation to densely extract multiple histopathology-specific representations, and then propose a prior-guided dense pretext task for SSL, designed to match the multiple corresponding representations between the views. Through this, the model learns to attend to various components more closely and evenly, thus inducing adequate diversification in attention for capturing context-rich representations. Through quantitative and qualitative analysis on multiple tasks across cancer types, we demonstrate the efficacy of our method and observe that the attention is more globally distributed. | Saarthak Kapse, Srijan Das, Jingwei Zhang, Rajarsi R. Gupta, Joel Saltz, Dimitris Samaras, Prateek Prasanna | 2023-09-12T17:59:10Z | http://arxiv.org/abs/2309.06439v1 | # Attention De-sparsification Matters: Inducing Diversity in Digital Pathology Representation Learning
###### Abstract
We propose _DiRL_, a **D**iversity-inducing **R**epresentation **L**earning technique for histopathology imaging. Self-supervised learning techniques, such as contrastive and non-contrastive approaches, have been shown to learn rich and effective representations of digitized tissue samples with limited pathologist supervision. Our analysis of vanilla SSL-pretrained models' attention distribution reveals an insightful observation: _sparsity in attention_, i.e., models tend to localize most of their attention to some prominent patterns in the image. Although attention sparsity can be beneficial in natural images due to these prominent patterns being the object of interest itself, this can be sub-optimal in digital pathology; this is because, unlike natural images, digital pathology scans are not object-centric, but rather a complex phenotype of various spatially intermixed biological components. Inadequate diversification of attention in these complex images could result in crucial information loss. To address this, we leverage cell segmentation to densely extract multiple histopathology-specific representations, and then propose a prior-guided dense pretext task for SSL, designed to match the multiple corresponding representations between the views. Through this, the model learns to attend to various components more closely and evenly, thus inducing adequate diversification in attention for capturing context-rich representations. Through quantitative and qualitative analysis on multiple tasks across cancer types, we demonstrate the efficacy of our method and observe that the attention is more globally distributed.
## 1 Introduction
Computational pathology is a rapidly emerging field that aims at analyzing high resolution images of biopsied or resected tissue samples. Advancements in computer vision and deep learning have enabled learning of the rich phenotypic information from whole slide images (WSIs) to understand mechanisms contributing to disease progression and patient outcomes. Acquiring crop-level localized annotations for WSIs is expensive and often not feasible; only slide-level pathologist labels are usually available. In such scenarios, weak supervision is a commonly utilized strategy, where crops are _embedded into representations_ in the first stage, and these WSI-crops' representations are then treated as a bag for multiple instance learning (MIL). Now the question remains, _how do we learn a model to effectively encode the crops into rich representations?_ Traditionally, ImageNet (Krizhevsky et al., 2017) pretrained neural networks are utilized to extract the representations (Lu et al., 2021; Lerousseau et al., 2021; Shao et al., 2021). However, ImageNet and pathology datasets are composed of different semantics; while the former contains object-centric natural images, the latter consists of images with spatially distributed biological components such as cells, glands, stroma, etc. Therefore, to learn domain-specific features of WSI-crops in the absence of localized annotations, various self-supervised learning (SSL) techniques are recently gaining traction (Ciga et al., 2022; Stacke et al., 2021; Boyd et al., 2021). These studies have shown the effectiveness of models pretrained through SSL on histopathology images in downstream classification tasks when compared to models trained on ImageNet.
To further analyze the role of SSL in computational pathology, we pretrained a vision transformer (Dosovitskiy et al., 2020) on various WSI datasets using vanilla SSL (Caron et al., 2021). An in-depth analysis of the pretrained models' attention maps on WSI-crops led us to a striking observation: **sparsity in attention maps**. The model tends to localize most of its attention to a small fraction of regions, leading to sub-optimal representation learning. To further validate our observation, we visualized the attention maps of a self-supervised ImageNet pretrained model on natural images (see Fig. 1). Similar observations led us to conclude that this is a property of SSL rather than of the data. We believe that sparsity in attention might potentially benefit the performance in some natural imaging tasks such as object classification. This stems from the fact that during SSL, the model is tasked with matching the two views, and optimizing this objective leads the model to focus on the prominent patterns. For example, in Fig. 1(a), for an object-centric ImageNet example, since the prominent pattern is the object (e.g., a bird) itself (Yun et al., 2022), the model tends to center
its attention towards the object, thus benefiting numerous downstream applications (e.g., bird classification). In contrast, WSI-crops are _not object-centric_; rather, they constitute a _spatial distribution of complex structures such as cells, glands, their clusters and organizations_, _etc_. (see Fig. 1(b)). Encoding this dense information into a holistic representation requires the model to attend more diversely to various histopathology primitives and not just to specific ones. Conversely, the vanilla SSL model pretrained on histopathology only sparsely attends to the important regions (Fig. 1(b)), i.e., there is inadequate diversity in attention. We _hypothesize that this sparsely attending model could result in encoding sub-optimal representations, as fine-grained context-rich details are often ignored_.
To address this issue of inadequate attention diversity, we propose _DiRL_, a diversity-inducing pre-training technique tailored to enhance representation learning in digital pathology. Each WSI-crop consists of _two regions: cellular regions (containing cells) and non-cellular regions (containing no cells)_. We leverage an off-the-shelf cell segmentation pipeline to identify these regions. This domain-specific knowledge is then utilized to extract **region-level representations** separately for the cellular and non-cellular regions. We further propose to encode the inter- and intra-spatial interplay of the two regions. This biologically-inspired step (Saltz et al., 2018; Fassler et al., 2022) is achieved through a transformer-based disentangle block that encodes the self-interaction within each region and the cross-interaction between the two regions, yielding what we term **disentangled representations**. In contrast to vanilla SSL frameworks that leverage one image-level representation for a WSI-crop, our prior-guided representation learning framework leverages histology-specific domain knowledge to densely extract a set of region-level and disentangled representations. We then task our framework to match all the corresponding representations between the views. We hypothesize that _optimizing this dense matching objective between the views would encourage the model to diversify its attention to various regions; matching assorted representations would then enforce the model to explore the diverse image-regions relevant to each such representation._ We validate this hypothesis through consistent improvements in performance on multiple downstream tasks such as slide-level and patch-level classification. Our qualitative analysis of the attention distribution of the pretrained models reveals that our _DiRL_ framework can effectively de-sparsify attention, thereby learning global context-rich representations, unlike existing methods. To summarize our main contributions, we:
* Demonstrate that attention sparsification in self-supervised learning may lead to learning sub-optimal representations in digital pathology classification tasks.
* Design a novel domain-aware pretext task to de-sparsify attention maps and achieve enhanced representations for digital pathology.
* Demonstrate the efficacy of our _DiRL_ through slide-level and patch-level classification tasks on three WSI datasets and two patch datasets.
Figure 1: **Diversification of attention for encoding dense information in digital pathology. View 1 and View 2 are two augmented views of the input image. a) Illustration of attention map from a model pretrained on ImageNet using vanilla SSL. b) Attention map of model pretrained on histopathology dataset with vanilla SSL, and with our proposed pre-training strategy. In both natural imaging and digital pathology, vanilla SSL pre-training creates _sparse_ attention maps, i.e., it attends largely to only some prominent patterns. Although attention sparsification can be beneficial in natural image tasks such as object classification, this could be sub-optimal for encoding representations in digital pathology as it leads to loss of important contextual information. Through a more diversified attention mechanism, _DiRL_ encodes dense information critical to non object-centric tasks.**
## 2 Related Work
In this section, we briefly discuss vision transformers, SSL and its dense counterpart, and their application in computational pathology.
**Vision transformers.** Inspired by the success of self-attention modules in language models (Vaswani et al., 2017), vision transformers (ViTs) Dosovitskiy et al. (2020); Liu et al. (2021); Touvron et al. (2021); Xu et al. (2022); Ali et al. (2021); Tu et al. (2022) have been proposed to exploit non-local spatial dependencies in the imaging domain. Recent studies Wang et al. (2021b); Chen and Krishnan (2022); Chen et al. (2022); Stegmuller et al. (2022); Gao et al. (2021); Chen et al. (2021) have demonstrated the promise of transformer-based architectures in modeling histopathology imaging for cancer diagnosis and prognosis. However, to the best of our knowledge, no existing work has leveraged the flexibility of the attention mechanism in transformers to instill biology-relevant domain knowledge into vision transformers. For example, the interaction between concepts/primitives such as tumor nuclei and stroma or between lymphocytic cells plays an important role in disease pathophysiology and treatment outcome. Our proposed method takes a step in this direction through a domain-driven pretext task.
**Image-level SSL** aims at learning visual representations through different pretext tasks. Contrastive and non-contrastive methods such as Chen et al. (2020); He et al. (2020); Caron et al. (2021), have shown tremendous potential in learning robust and rich representation in natural imaging. Building upon them, studies such as Ciga et al. (2022); Stacke et al. (2021); Li et al. (2021a); Kapse et al. (2022); Boyd et al. (2021); Chen and Krishnan (2022); Kurian et al. (2022) have explored SSL pre-training in histopathology image analysis.
**Region-level SSL** aims to further boost information encoding through dense pre-training techniques such as Li et al. (2021b); Yun et al. (2022); Wang et al. (2021a). These techniques impose additional constraints to match 1) region-level correspondences across the two views of the data or 2) neighbor-level intra-view correspondences within the data. Studies such as Wen et al. (2022); Yang et al. (2022); Henaff et al. (2021) have explored utilizing segmentation-based or clustering-based regions in self-supervision to enhance representation learning. However, the goal of these studies is mainly to improve the transfer performance for dense-prediction tasks such as _object detection_ and _segmentation_. In contrast, we tailor a dense pre-training strategy for histopathology that encourages the model to focus on diverse regions, thus diversifying its attention. This diversified attention helps the model effectively encode the complex information about various histology components, thereby improving _classification_ performance.
## 3 Methodology
In this section, we first describe a naive vision transformer framework for Whole Slide Images (WSIs). This is followed by explaining how cell segmentation can be used as a prior in pre-training for WSIs. Next, we present the extraction of region-specific representations using our proposed cell-back pooling and disentangle block. Finally, we present _DiRL_, our diversity-inducing pre-training technique, to learn discriminative features for WSI patches which are subsequently leveraged by a multiple instance learning (MIL) framework for downstream classification tasks. An overview of the proposed architecture is shown in Fig. 2(a).
**Preliminaries.** For an understanding of the primary components in a transformer such as MSA (Multi-head Self-Attention), LN (Layer Normalization), and MLP (Multi-Layer Perceptron), we refer the readers to Vaswani et al. (2017).
### Vision Transformer for WSI
From each WSI, \(\mathcal{W}\), \(w_{1}\), \(w_{2}\),...\(w_{N}\) crops are extracted, where \(N\) is variable for each \(\mathcal{W}\). Each \(w_{i}\) is then decomposed into \(n\) patches \(\mathcal{X}=[X^{1},X^{2},...,X^{n}]\in\mathcal{R}^{n\times p\times p\times 3}\), where (\(p\), \(p\)) is the spatial size of each patch. Each patch is transformed into a token using a shared linear projection layer,
\[\mathcal{T}_{0}=[X^{1}\mathbf{E};X^{2}\mathbf{E};...,X^{n}\mathbf{E}] \tag{1}\]
where \(\mathbf{E}\) denotes a set of \(d\) convolutional filters of size \(p\times p\) operating on each patch, thus extracting a \(d\)-dimensional feature vector for each patch. This is followed by adding a 1D learnable position embedding as in Vaswani et al. (2017). The transformer block models the relationship between the tokens using a multi-head self-attention block:
\[\mathcal{T}_{l}^{{}^{\prime}}=\mathcal{T}_{l-1}+\mathrm{MSA}(\mathrm{LN}( \mathcal{T}_{l-1}));\hskip 14.226378pt\mathcal{T}_{l}=\mathcal{T}_{l}^{{}^{ \prime}}+\mathrm{MLP}(\mathrm{LN}(\mathcal{T}_{l}^{{}^{\prime}})) \tag{2}\]
where \(l\) is the index of the \(l^{th}\) block of the transformer encoder, which is composed of \(L\) stacked transformer blocks. Thus, in each block, these tokens interact with each other to learn representations for each \(w_{i}\). The resulting \(\mathcal{T}_{L}\) of dimension (\(n\), \(d\)) is average pooled across all the \(n\) tokens to compute the image-level representation \(f\) of dimension (\(1\), \(d\)), as shown in Fig. 3(a).
### Cell segmentation as domain prior
Each WSI-crop \(w_{i}\) consists of **two regions**, one containing cells and the other without cells. There have been substantial advancements in deep learning research pertaining to cell segmentation; this stems from the important role of image analysis and machine learning algorithms in the visual interpretation of cellular biology (morphology and spatial arrangement) in digitized pathology scans (Lu et al., 2021; Shaban et al., 2022; Ding et al., 2022). Identifying the cellular and non-cellular regions in \(w_{i}\) can be achieved by exploiting the cell segmentation output as a prior via techniques such as Sahasrabudhe et al. (2020); Hou et al. (2020); Vahadane & Sethi (2013). Following cell segmentation, the centroids are extracted to yield the cell centroid map, a binary map (\(\mathcal{C}\)) of zeros and ones, with \(\mathcal{C}_{i,j}=1\) if the centroid of any cell is present at pixel (\(i,j\)) of the WSI-crop. We term this the cell prior mask, as shown in Fig. 2(c). Since cell segmentation is routinely used in computational pathology, we use off-the-shelf, well established cell segmentation pipelines instead of training a new model. To be consistent with the ViT, \(\mathcal{C}\) is decomposed into \(n\) patches \(\mathcal{C}=[C^{1},C^{2},...,C^{n}]\) of size (\(p,p\)). Each patch is transformed as follows: \(C^{i}=\mathrm{MaxPool}(C^{i})\), i.e., if the patch \(C^{i}\) contains one or more centroids, it becomes one, else it remains zero. Thus, the cell prior mask is downsampled into a binary vector of dimension (\(n\), \(1\)), denoting the presence of cells in each patch of \(w_{i}\). We term this vector the cell prior (\(P_{c}\)), which is invoked to extract the region-specific representations for each \(w_{i}\).
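A minimal sketch of this discretization step, assuming the cell centroid map is available as a binary array of the same spatial size as the WSI-crop (function and variable names are illustrative):

```python
import torch
import torch.nn.functional as F

def cell_prior(centroid_map: torch.Tensor, patch_size: int = 16) -> torch.Tensor:
    """Downsample a binary cell-centroid map (H, W) into the per-patch cell prior P_c of shape (n, 1).

    A patch is marked 1 if it contains at least one cell centroid, else 0."""
    c = centroid_map.float().unsqueeze(0).unsqueeze(0)                   # (1, 1, H, W)
    pooled = F.max_pool2d(c, kernel_size=patch_size, stride=patch_size)  # (1, 1, H/p, W/p)
    return pooled.flatten().unsqueeze(1)                                  # (n, 1)

# Example: a 224x224 crop with p=16 yields n = 196 patch tokens.
centroids = torch.zeros(224, 224)
centroids[40, 75] = 1.0
P_c = cell_prior(centroids)   # P_c.shape == (196, 1)
```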
### Prior-block for Cell-Back pooling
Following the \(L\) stacked transformer blocks, the set of tokens \(\mathcal{T}_{L}\in\mathcal{R}^{n\times d}\) is fed to a prior-block. In this block, the tokens can be categorized into (a) _cell_ tokens, i.e., tokens whose input patches contain at least one cell, and (b) background or _back_ tokens, whose input patches do not contain any cells.
Figure 2: **Overview of the proposed _DiRL_ framework:** a) A WSI-crop is patchified and fed into a linear projection layer followed by a transformer encoder. The output is fed to a Prior-block and to a Disentangle block. The Prior-block pools region-level representations separately for cellular and non-cellular regions. The Disentangle block encodes spatial interplay between the two regions followed by prior-block to extract region-level disentangled representations. b) Cell priors \(P_{c}\) and \(1-P_{c}\) pool the tokens associated with the cellular and the non-cellular regions, respectively, followed by average pooling to extract region-level features. c) Cell segmentation from WSI-crop followed by extraction of cell centroid map. Cell prior mask is generated by discretizing the cell centroid map into patches. A Cell prior vector \(P_{c}\) is then produced from the cell prior mask. After pre-training, modules above the red dashed line are discarded in the inference stage.
The cell and back tokens are separately encoded to represent region-level features from the cell prior \(P_{c}\) as follows:
\[f_{c}=\frac{P_{c}^{T}\mathcal{T}_{L}}{\sum P_{c}};\ \ \ \ \ f_{b}=\frac{(1-P_{c})^{T} \mathcal{T}_{L}}{\sum 1-P_{c}} \tag{3}\]
\(f_{c}\) is the average pooled representation of all the cell tokens, i.e., the representation of the cellular region. \(f_{b}\) is the average pooled representation of all the back tokens, representing the non-cellular region. In this way, _Cell-Back Pooling_ is exploited in the prior-block to extract two **region-level representations**, as shown in Fig. 3(b).
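Eq. 3 amounts to two masked average-pooling operations over the token matrix; a sketch in PyTorch is given below (the small epsilon guarding against empty regions is our addition, not part of the original formulation):

```python
import torch

def cell_back_pooling(tokens: torch.Tensor, P_c: torch.Tensor):
    """tokens: (n, d) output of the transformer encoder; P_c: (n, 1) binary cell prior.

    Returns f_c and f_b, the average-pooled representations of the
    cellular and non-cellular regions, each of shape (1, d)."""
    eps = 1e-6  # guards against crops with no (or only) cellular patches
    f_c = (P_c * tokens).sum(dim=0, keepdim=True) / (P_c.sum() + eps)
    f_b = ((1 - P_c) * tokens).sum(dim=0, keepdim=True) / ((1 - P_c).sum() + eps)
    return f_c, f_b

tokens = torch.randn(196, 192)                    # ViT-T tokens for one WSI-crop
P_c = (torch.rand(196, 1) > 0.5).float()
f_c, f_b = cell_back_pooling(tokens, P_c)
```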
### Disentangle block
We take a further step towards obtaining region-level representations by proposing a transformer block that disentangles the cellular and non-cellular regions. This disentanglement is performed to encode the self-interaction within each region and the cross-interaction between the two regions. To accomplish this disentanglement, we devise two attention masks, \(M_{self}\) and \(M_{cross}\), each of dimension (\(n\times n\)), as shown in Fig. 3(c). The goal of \(M_{self}\) is to only allow token interactions within the same region, i.e., cell-cell and back-back. In contrast, \(M_{cross}\) allows tokens to interact across the different regions (cell-back). The masks \(M_{self}\) and \(M_{cross}\) are computed as:
\[\begin{split} M_{self}(i,j)=\begin{cases}0,&\text{if }P_{c}(i)=P_{c}(j)\ ;\\ -\infty,&\text{otherwise}\end{cases};\\ M_{cross}(i,j)=\begin{cases}0,&\text{if }P_{c}(i)\neq P_{c}(j)\ or\ i=j\\ -\infty,&\text{otherwise}\end{cases}\end{split} \tag{4}\]
where indices \(i,j\in\{1,2,...n\}\). The intuition behind the diagonal elements in \(M_{cross}\) is to ensure the preservation of each token's information when interacting with tokens from the other region. Recall that in transformers, tokens are projected into three embeddings \(q\), \(k\), \(v\), and the output of an MSA block is computed as a weighted sum of the values \(v\), where the weight assigned to each value is determined by a self-attention operation \(\mathrm{softmax}(qk^{T})\). Unlike standard MSA in \(w_{i}\), where all the tokens from the cellular and non-cellular regions interact with each other through self-attention (see Fig. 3(a)), we propose to disentangle the interactions between the regions. \(M_{self}\) and \(M_{cross}\) are added to the attention matrix to obtain the disentangled self-attention matrices as follows:
\[\begin{split}\mathrm{MSA}_{self}&=\mathrm{softmax} (\frac{qk^{T}+M_{self}}{\sqrt{d}})\ v;\\ \mathrm{MSA}_{cross}&=\mathrm{softmax}(\frac{qk^{T}+M _{cross}}{\sqrt{d}})\ v\end{split} \tag{5}\]
Figure 3: a) Illustration of the \(n\times n\) attention matrix from the last layer of the transformer encoder, where \(q,k,v\) are projections of tokens in transformer block. Matrix multiplication of attention matrix with value \(v\), followed by average pooling across all the tokens generates the representation (\(f\)) for \(w_{i}\). b) Cell prior \(P_{c}\) is utilized to separately pool cell tokens and back tokens to extract region-level representations. c) Tokens from the \(L^{th}\) layer are interacted in the transformer-based disentangle block, forming the attention matrix. Attention masks \(M_{self}\) and \(M_{cross}\) are added with the attention matrix, generating desired matrices for disentanglement. Note that sf denotes \(\mathrm{softmax}\) activation. Matrices are then multiplied with \(v\) followed by prior-based pooling, thus extracting four representations encoding spatial interplay in \(w_{i}\). For clarity, the cellular and non-cellular region patches (\(n_{c}\) and \(n_{b}\) respectively) are arranged in separate groups.
Note that attention masks \(M_{self}\) and \(M_{cross}\) are linearly combined before the \(\mathrm{softmax}\) activation to ensure that the sum of each row in the self-attention matrix remains one. For the sake of brevity, we have illustrated self-attention with just one head instead of multi-head self-attention in the above equation. However, in practice, q,k,v are split into h (number of heads) parts and self-attention is then performed on all the h parts in parallel. The disentangle block operates at the output of the transformer encoder \(\mathcal{T}_{L}\) as:
\[\begin{split}\mathcal{T}^{{}^{\prime}}_{self}&= \mathcal{T}_{L}+\mathrm{MSA}_{self}(\mathrm{LN}(\mathcal{T}_{L}));\\ \mathcal{T}_{self}&=\mathcal{T}^{{}^{\prime}}_{self}+ \mathrm{MLP}(\mathrm{LN}(\mathcal{T}^{{}^{\prime}}_{self}))\\ \mathcal{T}^{{}^{\prime}}_{cross}&=\mathcal{T}_{L}+ \mathrm{MSA}_{cross}(\mathrm{LN}(\mathcal{T}_{L}));\\ \mathcal{T}_{cross}&=\mathcal{T}^{{}^{\prime}}_{cross }+\mathrm{MLP}(\mathrm{LN}(\mathcal{T}^{{}^{\prime}}_{cross}))\end{split} \tag{7}\]
Finally, similar to 3.3, the region-level features are encoded using the cell prior \(P_{c}\) using:
\[\begin{split} f_{cc}=\frac{P_{c}^{T}\mathcal{T}_{self}}{\sum P_{ c}};& f_{bb}=\frac{(1-P_{c})^{T}\mathcal{T}_{self}}{\sum 1-P_{c}};\\ f_{cb}=\frac{P_{c}^{T}\mathcal{T}_{cross}}{\sum P_{c}};& f_{bc}=\frac{(1-P_{c})^{T}\mathcal{T}_{cross}}{\sum 1-P_{c}} \end{split} \tag{8}\]
Thus, the prior-based pooling on \(\mathcal{T}_{self}\) and \(\mathcal{T}_{cross}\) results in four **disentangled representations**, \(f_{cc}\), \(f_{bb}\), \(f_{cb}\), and \(f_{bc}\), encoding the spatial interplay between the cellular and non-cellular regions. Thus, for each WSI-crop \(w_{i}\), _we encode six representations: two region-level representations using cell-back pooling, and four disentangled representations using the disentangle block._ Our prior-guided pre-training framework operates on these six representations to pretrain the model.
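A sketch of the mask construction (Eq. 4) and the single-head masked self-attention (Eq. 5) is given below; the full disentangle block additionally includes the residual and MLP sub-layers of Eq. 7, which are omitted here.

```python
import torch

def disentangle_masks(P_c: torch.Tensor):
    """Build the (n, n) attention masks M_self and M_cross from the (n, 1) binary cell prior."""
    n = P_c.shape[0]
    same = (P_c == P_c.T)                      # True where two tokens come from the same region
    M_self = torch.zeros(n, n)
    M_self[~same] = float("-inf")              # block cross-region interactions
    keep = ~same | torch.eye(n, dtype=torch.bool)
    M_cross = torch.zeros(n, n)
    M_cross[~keep] = float("-inf")             # block same-region interactions, keep the diagonal
    return M_self, M_cross

def masked_attention(q, k, v, mask):
    """Single-head masked self-attention: softmax((q k^T + mask) / sqrt(d)) v."""
    d = q.shape[-1]
    attn = torch.softmax((q @ k.transpose(-2, -1) + mask) / d ** 0.5, dim=-1)
    return attn @ v

# Example with random projections for one WSI-crop.
n, d = 196, 192
P_c = (torch.rand(n, 1) > 0.5).float()
q, k, v = torch.randn(n, d), torch.randn(n, d), torch.randn(n, d)
M_self, M_cross = disentangle_masks(P_c)
T_self = masked_attention(q, k, v, M_self)
T_cross = masked_attention(q, k, v, M_cross)
```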
### Diversity-inducing pre-training for WSI
In this section, we formulate our diversity-inducing representation learning (_DiRL_) using a widely used SSL framework for pre-training on histopathology data: DINO (Caron et al., 2021). However, in practice, our pre-training technique can be integrated with any pairwise SSL framework (Li et al., 2022), as demonstrated in Appendix A.2. DINO consists of a student and a teacher branch, where the teacher is a momentum-updated version of the student; thus, both have the same architecture. Different views of the input image are fed to the two branches to encode them into image-level representations. A projection head is applied on top of these representations with a \(\mathrm{softmax}\) activation. SSL is performed by matching the student's output with the teacher's probability distribution through a cross-entropy loss, \(\mathcal{L}^{CE}\).
In contrast to vanilla DINO, _DiRL_ yields six feature vectors from each branch (see Fig. 2). Therefore, the loss function is modified as:
\[\mathcal{L}^{CE}=\lambda_{1}\times(\mathcal{L}^{CE}_{c}+\mathcal{L}^{CE}_{b})+ \lambda_{2}\times(\mathcal{L}^{CE}_{cc}+\mathcal{L}^{CE}_{bb}+\mathcal{L}^{CE}_ {cb}+\mathcal{L}^{CE}_{bc}) \tag{9}\]
where \(\mathcal{L}^{CE}_{c}\) is the cross-entropy loss between the projection of representation \(f_{c}\) from the student branch and that from the teacher branch. Likewise, the projected distributions of all other corresponding representations from the student and teacher are matched. This linear combination of losses encourages the framework to perform a dense matching of the region-level and disentangled representations of the augmented views. Consequently, the dense matching promotes the model to globally diversify its attention map (refer to Fig. 1, Fig. 11).
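Treating each of the six representations as a separate DINO-style matching problem, the combined objective of Eq. 9 can be sketched as follows; the temperatures are illustrative defaults and DINO's teacher-centering step is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def dino_ce(student_logits, teacher_logits, teacher_temp=0.04, student_temp=0.1):
    """Cross-entropy between the (sharpened) teacher distribution and the student distribution."""
    t = F.softmax(teacher_logits / teacher_temp, dim=-1).detach()
    s = F.log_softmax(student_logits / student_temp, dim=-1)
    return -(t * s).sum(dim=-1).mean()

def dirl_loss(student_outs, teacher_outs, lam1=0.5, lam2=0.1 / 4):
    """student_outs / teacher_outs: dicts of projected logits keyed by
    'c', 'b' (region-level) and 'cc', 'bb', 'cb', 'bc' (disentangled)."""
    region = sum(dino_ce(student_outs[k], teacher_outs[k]) for k in ("c", "b"))
    disent = sum(dino_ce(student_outs[k], teacher_outs[k]) for k in ("cc", "bb", "cb", "bc"))
    return lam1 * region + lam2 * disent

# Example with random logits for a batch of 8 crops and a 4096-dim projection head.
keys = ("c", "b", "cc", "bb", "cb", "bc")
student = {k: torch.randn(8, 4096) for k in keys}
teacher = {k: torch.randn(8, 4096) for k in keys}
loss = dirl_loss(student, teacher)
```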
We propose another variant of _DiRL_ without the disentangle block, i.e., only the similarities of the projected distributions of \(f_{c}\) and \(f_{b}\) are maximized between the views. We name this variant _Cellback_. Following the pre-training, only the linear projection layer, the position embedding, and the transformer encoder of the teacher are retained. This pretrained ViT extracts the average pooled feature representation for all \(w_{i}\) belonging to WSI \(\mathcal{W}\), generating a feature matrix of dimension (\(N\), \(d\)), where \(N\) is the variable number of WSI-crops for each \(\mathcal{W}\). Note that the prior is used _only_ during pre-training. Finally, MIL operates over this matrix for WSI slide-level analysis, as discussed next.
### Preliminary extension to multiple cell types
In this study, we further propose extending _Cellback_ (Section 3.3) to incorporate multiple cell types. Precisely, in Equation 3, the cell prior (\(P_{c}\)) is replaced with multiple priors, one for each of the \(j\) cell classes: \(P_{c_{1}}\), \(P_{c_{2}}\),... \(P_{c_{j}}\). Note that in \(\mathcal{C}\) (please refer to 3.2), a patch can contain centroids of multiple cell types. Therefore, for a WSI-crop, the different cell priors can be overlapping vectors, i.e., multiple priors could pool the same tokens. The background prior is conceptually the same, i.e., it pools tokens that do not contain any cells. We denote this extended version as _Cellback-V2_. We use HoVer-Net for segmenting and classifying cells into the 5 cell types of PanNuke. In addition to the per-cell-type and background region-level representations, in this version we also perform average pooling of all tokens to extract a crop-level representation. Thus, Cellback-V2 extracts a total of 6 region-level and 1 crop-level representations. All the representations are projected and matched across the views with equal weightage.
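A hedged sketch of the Cellback-V2 pooling described above, extending the earlier cell-back pooling to overlapping, per-cell-type priors (the epsilon and helper name are our additions):

```python
import torch

def multi_prior_pooling(tokens, priors):
    """tokens: (n, d); priors: list of (n, 1) binary vectors, one per cell type
    (priors may overlap when a patch contains several cell types).

    Returns one representation per cell type, one background representation,
    and one crop-level representation, each of shape (1, d)."""
    eps = 1e-6
    feats = [(p * tokens).sum(0, keepdim=True) / (p.sum() + eps) for p in priors]
    covered = torch.clamp(torch.stack(priors).sum(0), max=1.0)   # patches containing any cell type
    back = 1 - covered
    feats.append((back * tokens).sum(0, keepdim=True) / (back.sum() + eps))
    feats.append(tokens.mean(0, keepdim=True))                   # crop-level representation
    return feats

tokens = torch.randn(196, 384)                                   # ViT-S tokens for one crop
priors = [(torch.rand(196, 1) > 0.7).float() for _ in range(5)]  # 5 cell-type priors
representations = multi_prior_pooling(tokens, priors)            # 5 + 1 + 1 = 7 vectors
```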
### Multiple Instance Learning for slide-level tasks
Multiple instance learning (MIL) is a widely used approach in WSI slide-level analysis. We refer the readers to Ilse et al. (2018); Lerousseau et al. (2021); Shao et al. (2021); Lu et al. (2021b) for an overview. We adopt the DSMIL (Li et al., 2021a) framework in this work. Following pre-training, the pretrained model is used to extract features for the WSI-crops in \(\mathcal{W}\). The MIL model takes these features as an input bag and optimizes its weights through slide-level label supervision.
## 4 Experiments and Results
Our pretrained models are evaluated on both slide-level and patch-level classification tasks. As a _Baseline_, we pretrain a vision transformer with DINO (Caron et al., 2021), a vanilla self-supervised framework, which optimizes the similarities between two views through just one image-level representation per view. This is compared to pre-training with our proposed _DiRL_ and _Cellback_ frameworks. The encoders in our frameworks are implemented with both ViT-Tiny (ViT-T, \(d=192\)) and ViT-Small (ViT-S, \(d=384\)), consisting of 5M and 22M parameters, respectively.
**Dataset and tasks:** For _slide-level classification_, we use the following datasets: 1) TCGA-Lung (Albertina et al., 2016; Kirk et al., 2016) at 5\(\times\), 2) TCGA-BRCA (Lingle et al., 2016) at 5\(\times\), and 3) BRIGHT (Brancati et al., 2021a) at 10\(\times\). Note that our proposed pre-training is performed separately for each dataset, followed by evaluation on slide-level classification. The classification tasks comprise 1) TCGA-Lung: Lung Adenocarcinoma (LUAD) versus Lung Squamous Cell Carcinoma (LUSC), 2) TCGA-BRCA: Invasive Ductal (IDC) versus Invasive Lobular Breast Carcinoma (ILC), and 3) two sub-tasks in BRIGHT: 3-class WSI-classification (noncancerous, precancerous, and cancerous) and 6-class WSI-classification, termed BRIGHT (3) and BRIGHT (6), respectively.
For _patch-level classification_, evaluations are performed on Chaoyang (Zhu et al., 2021) and MHIST (Wei et al., 2021) datasets, which contain localized annotation at crop-level. MHIST consists of two classes of colon cancer, whereas Chaoyang contains four classes of colon cancer.
Note that, for generating the cell prior \(P_{c}\), we employed HoVer-Net (Graham et al., 2019) for TCGA-Lung and, due to computational limitations, Cellpose (Stringer et al., 2021) for the other WSI datasets.
### Datasets split
**WSI Datasets: 1)**_TCGA-Lung_: This dataset consists of 940 diagnostic digital slides from two subtypes of Lung cancer - Lung adenocarcinoma (LUAD) and Lung Squamous cell carcinoma (LUSC). We split the data into 748 training (391 LUAD, 357 LUSC) and 192 (96 LUAD, 96 LUSC) testing samples randomly. The WSI-crops (670K train, 150K test) are extracted at 5\(\times\) magnification. 2) _TCGA-BRCA_: This dataset consists 1034 diagnostic digital slides of two subtypes of Breast cancer - Invasive ductal carcinoma (IDC) and Invasive lobular carcinoma (ILC). We split the data into 937 training (747 IDC, 190 ILC) and 97 (77 IDC, 20 ILC) testing samples randomly. The WSI-crops (790K train, 90K test) are extracted at 5\(\times\) magnification. 3) _BRIGHT_: Comprises 703 (423 training, 80 validation, 200 testing) diagnostic digital slides. This dataset contains two sub-tasks: 3-class WSI classification and 6-class WSI classification tasks. For the first sub-task, the 3 classes are as follows - Non cancerous (PB+UDH), Pre-cancerous or Atypical (ADH+FEA), and Cancerous (DCIS+IC). For the second sub-task, the 6 classes are as follows - Pathological Benign (PB), Usual Ductal Hyperplasia (UDH), Flat Epithelia Atypia (FEA), Atypical Ductal Hyperplasia (ADH), Ductal Carcinoma in Situ (DCIS), and Invasive Carcinoma (IC). The BRIGHT challenge contains train, validation, and test splits. Since the challenge is not active now, labels for the test set are not available. Therefore, we reported our results for this dataset on its validation set as our test set. Class-wise data split can be found here 1. The WSI-crops (1.24M train, 0.2M test) are extracted at 10\(\times\) magnification.
**Patch Datasets:** 1) _MHIST_(Wei et al., 2021): Consists of 3152 images of colon with tasks to classify the type of colorectal polyps into two types, benign and pre-cancerous. All the image resolutions are of 224 \(\times\) 224 pixels. 2) _Chaoyang_(Zhu et al., 2021): Consists of 6160 patches of size 512 \(\times\) 512 pixels from Colon cancer divided into four classes - normal, serrated, adenocarcinoma, and adenoma. These patches are resized to 224 \(\times\) 224 pixels in our experiments.
For these two patch datasets, we split their official training sets into 5-fold cross-validation sets. We train on 4 folds, validate on 1 fold, and test on their official test sets. Thus, we report our results (accuracy and AUC) as the mean over the 5 cross-validation trials.
### Implementation details.
For all our experiments, \(224\times 224\) sized crops are extracted from WSIs. We set the patch size for vision transformer input to \(p=16\). Therefore, the number of tokens per WSI crop are \(n=196\). For ViT-Tiny (ViT-T), the embedding dimension \(d=192\), whereas for ViT-small (ViT-S), \(d=384\).
_Pre-training:_ For pre-training with DINO, we follow the hyper-parameter initialization from its source code (Caron et al., 2021). We use a batch size of 256. In pre-training _DiRL_, we set the loss weighting factors \(\lambda_{1}=0.5\) and \(\lambda_{2}=\frac{0.1}{4}\), whereas for _DiRL_ without the disentangle block (_Cellback_), we set \(\lambda_{1}=0.5\). We use two different projection heads of default output size (65536) (Caron et al., 2021) for the region representations \(f_{c}\) and \(f_{b}\), and four different projection heads of smaller output size (4096) for the disentangled spatial interplay features. For pre-training with i) SimCLR, we adopted the implementation from Li et al. (2021) with a batch size of 512; for ii) EsViT, the implementation is adopted from Li et al. (2021); and for iii) SelfPatch, the implementation is adopted from Yun et al. (2022). Note that pre-training is only performed on the training samples of the WSI datasets. For slide-level classification tasks, all models are pretrained for 100 epochs on the Lung dataset, for 50 epochs on BRIGHT, and for 30 epochs on BRCA. To study the data-efficiency plot in Fig. 6, all the models are pretrained for 50 epochs on the Lung dataset.
_Multiple instance learning:_ We use DSMIL (Li et al., 2021) for slide-level classification throughout this study. For training DSMIL, we use a learning rate of \(2e^{-4}\), and weight decay of \(5e^{-2}\). Batch size is set to 1 to handle variable bag size for each WSI \(\mathcal{W}\). For each WSI-level MIL experiment, we run DSMIL with 10 different seeds and report the average performance. Other hyper-parameters such as number of epochs, and train-validation split ratio are kept consistent with Li et al. (2021). Note that training samples from the WSI datasets are split into train and validation for MIL training. We hope to explore the impact of other MIL frameworks such as Ilse et al. (2018); Shao et al. (2021); Lu et al. (2021); Lerousseau et al. (2021); Chen et al. (2022); Zhang et al. (2022); Pinckaers et al. (2020) on our _DiRL_ learned features in future.
_Patch classification:_ In our experiments of fine-tuning for patch classification, an average pooling layer (for averaging the tokens) followed by a fully connected layer is placed on top of the transformer-encoder backbone. For all experiments on MHIST dataset, we use a learning rate of \(3e^{-4}\), weight decay of \(1e^{-2}\), and batch size of 128. We train the network for 40 epochs and decay the learning rate by 0.1 at epoch 20 and epoch 30. For all experiments on the Chaoyang dataset, we use a learning rate of \(1e^{-4}\), weight decay of \(1e^{-2}\), and batch size of 128. We train the network for 45 epochs and decay the learning rate by 0.1 at epoch 20, 30 and 40.
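As a concrete illustration of the MHIST fine-tuning schedule described above (the choice of AdamW and the placeholder model are our assumptions; the paper specifies only the learning rate, weight decay, batch size, epoch count, and decay epochs):

```python
import torch

# Placeholder standing in for the pretrained ViT backbone with an
# average-pooling + fully connected classification head on top.
model = torch.nn.Linear(384, 2)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 30], gamma=0.1)

for epoch in range(40):
    # ... one training epoch over MHIST with batch size 128 ...
    scheduler.step()  # decays the learning rate by 0.1 at epochs 20 and 30
```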
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dataset & **Lung** & **BRCA** & **BRIGHT (3)** & **BRIGHT (6)** \\ Metric & Acc, AUC & Acc, AUC & Acc, AUC & Acc, AUC \\ \hline Baseline-T & 0.894, 0.960 & 0.907, 0.945 & 0.632, 0.850 & 0.474, 0.776 \\ Cellback-T & **0.908, 0.965** & 0.897, 0.940 & 0.646, 0.848 & 0.488, 0.769 \\ DiRL-T & 0.897, 0.957 & **0.927, 0.963** & **0.653, 0.852** & **0.500, 0.780** \\ \hline Baseline-S & 0.913, 0.967 & 0.907, 0.947 & 0.630, 0.840 & 0.474, 0.781 \\ Cellback-S & **0.922**, 0.967 & **0.928**, 0.957 & 0.667, 0.848 & **0.529**, 0.796 \\ Cellback V2-S & 0.912, **0.971** & **0.928**, 0.958 & **0.692, 0.861** & **0.529**, 0.799 \\ DiRL-S & 0.911, **0.971** & **0.928**, 0.963 & 0.662, 0.839 & **0.529**, **0.811** \\ IN-S & 0.826, 0.908 & 0.886, 0.953 & 0.586, 0.766 & 0.423, 0.712 \\ CTransPath & 0.897, 0.956 & 0.918, **0.971** & 0.632, 0.832 & 0.435, 0.757 \\ RetCCL & 0.870, 0.930 & **0.928**, 0.962 & 0.512, 0.852 & 0.337, 0.682 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results for slide-level classification tasks. T denotes ViT-Tiny, and S denotes ViT-Small.
### Slide-level and Patch-level classification
**Slide-level classification:** In Table 1, we show the slide-level classification results on the three datasets with tiny and small ViT models using the _Baseline_, _Cellback_ (_DiRL_ without the disentangle block), and _DiRL_ frameworks. For a state-of-the-art comparison, we employed the pan-cancer pretrained models provided by CTransPath (Wang et al., 2022) and RetCCL (Wang et al., 2023) as feature extractors instead of our pretrained models. For the feature sets extracted from both of these models, we find that a weight decay of \(5e^{-3}\) in DSMIL works best. We have also included the ImageNet-pretrained ViT-Small provided by Touvron et al. (2021) as a feature extractor to compare its performance on slide-level tasks with DSMIL.
It may be observed that for the Lung and BRCA datasets, _DiRL_ consistently surpasses the _Baseline_ (up to 6.9% relative accuracy gain) and _Cellback_ (up to 3.3% relative accuracy gain) models for both ViT-T and ViT-S architectures. In all cases, both _DiRL_ and _Cellback_ considerably outperform the vanilla-DINO _Baseline_ in accuracy and AUC. Interestingly, _DiRL_-T performs even better than _Baseline_-S on BRCA, substantiating the importance of efficiently encoding diversified information even in the smaller feature embedding (\(d=192\)) of _DiRL_-T, as opposed to inefficiently and sparsely encoding it into the larger feature embedding (\(d=384\)) of ViT-S. This paves the way for efficiently encoding domain information in smaller models. In the BRIGHT dataset, it is observed that _Cellback_-S achieves the best performance for both sub-tasks. Additional comparisons with SOTA SSL methods are provided in A.1.
**Patch-level classification:** For evaluating the generalizability of the learned representations, we use BRCA pretrained models and fine-tune them on MHIST (Wei et al., 2021) and Chaoyang (Zhu et al., 2021) datasets (because of visual similarities between breast and colon cancers (Bremond et al., 1984)). In Table 2, we report the 5-fold cross validation accuracy and AUC on the official test set. We observed that both our models (_Cellback_ and _DiRL_) outperform the _Baseline_ on the two datasets using ViT-T and ViT-S backbones.
Relative to the _Baseline_, _DiRL_ improves the accuracy by \(1.7-3.16\%\) on MHIST and \(0.8-1.1\%\) on the Chaoyang dataset. Similarly, it also improves the corresponding AUCs by \(1.5-2\%\) and \(0.3-0.4\%\), respectively.
**Comparison of _DiRL_ with other dense pre-training methods:** Note that our pre-training aligns with the dense pre-training literature, as we perform dense matching across two views through region-level and disentangled representations instead of matching only through one image-level representation. For a fair comparison, we re-implement dense pre-training methods closely related to our research: 1) SelfPatch (Yun et al., 2022) and 2) EsViT (Li et al., 2021b). In addition to image-level matching as in DINO, SelfPatch enforces invariance against each patch/token and its neighbors, whereas EsViT enforces matching between all the corresponding patch-based tokens across views.
\begin{table}
\begin{tabular}{c c c} \hline \hline Dataset & **MHIST** & **Chaoyang** \\ Metric & Acc, AUC & Acc, AUC \\ \hline Baseline-T & 0.758, 0.854 & 0.819, 0.942 \\ Cellback-T & 0.769, 0.864 & 0.823, 0.942 \\ DiRL-T & **0.782, 0.871** & **0.828, 0.945** \\ \hline Baseline-S & 0.757, 0.844 & 0.830, 0.946 \\ Cellback-S & 0.765, 0.852 & 0.831, **0.950** \\ DiRL-S & **0.770, 0.857** & **0.836, 0.950** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for crop-level classification tasks.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & **Lung** & **BRCA** & **BRIGHT (3)** & **BRIGHT (6)** \\ Metric & Acc, AUC & Acc, AUC & Acc, AUC \\ \hline Baseline-S & 0.913, 0.967 & 0.907, 0.947 & 0.630, 0.840 & 0.474, 0.781 \\ SelfPatch-S & 0.719, 0.785 & 0.897, 0.940 & 0.513, 0.684 & 0.363, 0.636 \\ EsViT-S & 0.914, 0.967 & **0.928**, 0.953 & 0.678, 0.859 & 0.528, 0.788 \\ Cellback-S & **0.922**, 0.967 & **0.928**, 0.957 & 0.667, 0.848 & **0.529**, 0.796 \\ Cellback V2-S & 0.912, **0.971** & **0.928**, 0.958 & **0.692**, **0.861** & **0.529**, 0.799 \\ DiRL-S & 0.911, **0.971** & **0.928**, **0.963** & 0.662, 0.839 & **0.529**, **0.811** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of _DiRL_ with existing dense pre-training SSL methods (SelfPatch and EsViT). Statistical significance analysis is provided in the Table 10.
Note that we use ViT-S as the encoder for both techniques. In Table 3, we showcase the results for SelfPatch and EsViT for slide-level classification tasks on all three datasets.
We find that our proposed models perform on par with or better than EsViT on all the slide-level datasets, whereas SelfPatch performs significantly worse in most tasks, possibly because neighborhood token invariance hardly exists in pathology images, unlike for well-defined objects in natural images. Thus, our proposed domain-inspired dense matching shows consistent improvements for slide-level classification compared to the other densely pretrained models.
### Analysis of learned Attention
Here we demonstrate the de-sparsification of the learned attention of our _DiRL_ pretrained models. Recall that the aggregated attention associated with a token is represented by the sum of all values across its corresponding column in the \(n\times n\) self-attention matrix.
The sum of the aggregated attention values over all \(n\) tokens should be \(n\). Due to this constraint, if the model attends to some tokens with high attention values, then the attention values associated with the other tokens are reduced significantly. In Fig. 4, we plot the distribution of aggregated attention values of tokens from the last layer of the transformer encoder pretrained by _Baseline_ (vanilla DINO), _Cellback_, _DiRL_, EsViT, and SelfPatch. We then split the aggregated attention values into three bins: 0-0.5, 0.5-2, \(>\)2. The 0-0.5 and \(>\)2 bins indicate sparse attention learned through low and high concentrated attention values, respectively. The 0.5-2 range, in contrast, is the desired bin with moderate attention values that would lead to a de-sparsified attention map (and hence, optimal encoding of context-rich information).
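For concreteness, the sketch below computes these aggregated (column-summed) attention values from a softmax-normalized self-attention tensor and histograms them into the three bins. The random tensor stands in for attention taken from the last encoder layer, and averaging over heads is an assumption about how multi-head attention is collapsed.

```python
import numpy as np

def aggregated_attention(attn: np.ndarray) -> np.ndarray:
    """attn: (n_heads, n, n) softmax-normalized attention (each row sums to 1).
    Returns per-token aggregated attention: column sums, averaged over heads."""
    per_head = attn.sum(axis=1)     # sum over query rows -> column sums, shape (n_heads, n)
    return per_head.mean(axis=0)    # average over heads, shape (n,)

def bin_fractions(agg: np.ndarray) -> dict:
    """Fraction of tokens in the sparse (0-0.5), desired (0.5-2), and sparse (>2) bins."""
    return {
        "0-0.5": float((agg < 0.5).mean()),
        "0.5-2": float(((agg >= 0.5) & (agg <= 2.0)).mean()),
        ">2":    float((agg > 2.0).mean()),
    }

# Toy example: random logits -> row-wise softmax, 6 heads, n = 196 tokens.
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 196, 196))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

agg = aggregated_attention(attn)
print(np.isclose(agg.sum(), 196))   # aggregated values of all tokens sum to n
print(bin_fractions(agg))
```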
The plots show that the _Baseline_ trained with vanilla DINO has around 20-30% of tokens in the lower-range sparse bin (0-0.5) and around 8-10% in the higher-range sparse bin (\(>\)2). For our _Cellback_ and _DiRL_ models, in contrast, fewer than 5-10% of tokens lie in the lower-range sparse bin, while 3-5% lie in the higher-range sparse bin. Importantly, our models yield significantly more diversified attention, with more than 80-90% of tokens in the desired bin (0.5-2), compared to 65% of tokens for the _Baseline_. Interestingly, SelfPatch is able to diversify the transformer attention well, avoiding the sparse bins. However, it still performs 2-20% lower than our models on various slide-level classification tasks. This might be due to the neighbor-invariant self-supervision (refer to Yun et al. (2022)) being noisy in the histopathology domain (as discussed in Section 4.3). EsViT consistently contains 10-15% more tokens in the desired bin compared to the _Baseline_. However, it still contains considerably more tokens in the sparse bins compared to _DiRL_ and _Cellback_. These observations justify our premise that dense matching can diversify the attention, which is crucial for learning representations for histopathology.
In Fig. 5, we visualize the attention overlay from models pretrained using the _Baseline_ (vanilla DINO) and our proposed _DiRL_ and _Cellback_. The regions containing tumor cells are outlined in white, while those with necrosis and immune cells are outlined in yellow and green, respectively. It is evident that the _Baseline_ model is sparsely attending to the WSI crop, often ignoring crucial tumor cell-dominant regions. In contrast, our models are able to globally diversify attention. Bar plots show that almost all tokens have moderate attention values ranging from 0.5 to 2 in _DiRL_. In contrast, the _Baseline_ has a large number of tokens with very low attention (\(<\)0.5). Note that all the attention values \(>1\) are clipped to 1 for visualization. Additional visualizations are provided in A.3.
Figure 4: **Attention distribution plot. The second bin (0.5-2) is the desired one. Here, the percentage values show the fraction of tokens with attention values in the desired range. The baseline method has a higher fraction of tokens in lower range and higher range sparse bins, which is not ideal for digital pathology applications.**
### Ablation studies
Here we study the utility of various components proposed in our framework. We perform our ablations on the Lung cancer dataset.
**Data efficiency:** We investigate the effect of pre-training with different fractions (20-100%) of the total training data. As seen in Fig. 6, the gain in both AUC and accuracy is around 6% when _DiRL_-based models are pretrained with significantly less data (20% of the data). These empirical findings show the importance of _DiRL_ especially in low-data regimes, e.g. for rare cancers.
**Adaptation of _DiRL_ with other SSL frameworks:** So far in this study we have adopted the DINO framework for the self-supervision of the WSIs. However, the _DiRL_ framework can be incorporated into any self-supervised learning strategy. In Table 4, we demonstrate the performance of our proposed _Cellback_ and _DiRL_ representations pretrained with either the BYOL (Grill et al., 2020) or the SimCLR (Chen et al., 2020) pipeline. We compare this adapted framework with _Baseline_ models pretrained with vanilla BYOL and vanilla SimCLR. Irrespective of the SSL framework, _Cellback_ and _DiRL_ consistently outperform the _Baseline_ in both accuracy and AUC.
**Multi-task adaptation of vanilla SSL:** An alternative approach to using cell segmentation as a prior could be to use the segmentation as an auxiliary task in the self-supervised pre-training strategy. Consequently, in this section, we evaluate the impact of pre-training the ViT with vanilla SSL jointly with a segmentation-related auxiliary task. In order to avoid the use of a heavy decoder for segmentation, we instead design a new 'cell prediction task' to predict the number of cells present in each \(p\times p\) patch of the WSI-crop. A linear layer is applied on top of the ViT encoder for this task.
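A minimal sketch of such an auxiliary head is shown below: a single linear layer regresses a cell count from each patch token, and its mean-squared-error loss is added to the SSL objective with a weighting factor. The loss form, the weight, and all variable names are illustrative assumptions rather than the exact formulation used here.

```python
import torch
import torch.nn as nn

class CellCountHead(nn.Module):
    """Predict the number of cells in each p x p patch from its ViT token."""
    def __init__(self, embed_dim: int):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 1)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, n_patches, embed_dim) -> (batch, n_patches) counts
        return self.fc(patch_tokens).squeeze(-1)

head = CellCountHead(embed_dim=384)

# Toy tensors standing in for encoder outputs and segmentation-derived count targets.
patch_tokens = torch.randn(4, 196, 384)                   # from the ViT encoder
target_counts = torch.randint(0, 20, (4, 196)).float()    # cells per patch (from segmentation)

l_sup = nn.functional.mse_loss(head(patch_tokens), target_counts)
l_ssl = torch.tensor(0.0)              # placeholder for the vanilla SSL loss
loss = l_ssl + 1.0 * l_sup             # the weighting factor is an assumed choice
```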
The joint optimization of vanilla SSL with the cell prediction task could hypothetically force the model to learn discriminative features from SSL and capture de-sparsification effects from the supervised loss (\(L_{sup}\)). We adopted the cell prediction supervised loss \(L_{sup}\) with the _Baseline_. We find that the model with \(L_{sup}\) could outperform the baseline model at early epochs (accuracy of 0.832 vs. 0.808 in baseline, AUC of 0.921 vs. 0.886 in baseline). However, with later training epochs, the supervised loss does not augment the vanilla SSL pre-training (accuracy of 0.913 vs. 0.913 in baseline, AUC of 0.968 vs. 0.967 in baseline). Thus, the multi-task adaptation leads to better convergence at lower epochs but under-performs our proposed pre-training strategies when trained for a longer training schedule (100 epochs).
Figure 5: **Attention visualization.** Depicts the sparse attention by _Baseline_, and its subsequent de-sparsification by our methods on a representative lung cancer patch. Bar plot shows the percentage of tokens in the three bins. _Baseline_ contains greater than 30% of tokens’ attention values in the sparse bins; comparatively our method contains fewer than 10%.
Figure 6: **Data efficiency study.** Illustrates the effect of pre-training with different amounts of training data.
We analyzed the effect of \(L_{sup}\) on the attention distribution in Fig. 7. It reveals that although the supervised loss helps to de-sparsify attention to an extent, the de-sparsification is still sub-par compared to _Cellback_ and _DiRL_.
This ablation analysis shows that although the cell prior could aid representation learning with token-level supervision, the benefits are more prominent when leveraging the cell prior in our proposed dense pretext-task framework, both in terms of performance and de-sparsification.
**Comparison with Masked Image Modeling:** In this study, we hypothesized that the vanilla pretext task of matching the two views in SSL causes the sparsity in attention. Therefore, a natural question that arises is: _why not utilize a pretext task that explicitly focuses on local-level objectives, such as the reconstruction loss in Masked Autoencoders (MAE) (He et al., 2022)?_ To investigate this, we pretrain the vision transformer with the MAE-based loss for 200 epochs. We compare its slide-classification performance and attention de-sparsification in Table 5 and Fig. 8, respectively. Our results reveal that although the attention de-sparsification is much better than with other inter-view pretext tasks, the performance is significantly worse than for all other methods. This is an expected observation, since MAE is known to have subpar linear-probing performance, as the network is not tasked to learn discriminative information. Since the pretrained network is directly utilized to extract the features for MIL, WSI-level tasks can be thought of as aligning with linear probing rather than fine-tuning. For more discussions on this phenomenon, we refer the reader to the exemplary work in (Park et al., 2023).
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{SSL framework} & Dataset & \multicolumn{2}{c}{**Lung**} \\ & Metric & Acc & AUC \\ \hline \multirow{3}{*}{BYOL} & Baseline-S & 0.750 & 0.821 \\ & Cellback-S & 0.802 & **0.859** \\ & DiRL-S & **0.812** & 0.854 \\ \hline \multirow{3}{*}{SimCLR} & Baseline-S & 0.791 & 0.868 \\ & Cellback-S & **0.805** & **0.892** \\ & DiRL-S & 0.792 & 0.891 \\ \hline \multirow{3}{*}{DINO} & Baseline-S & 0.808 & 0.886 \\ & Cellback-S & 0.823 & 0.903 \\ \cline{1-1} & DiRL-S & **0.825** & **0.911** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Pre-training _DiRL_ with other SSL frameworks. All the results are reported for models pretrained for 20 epochs.
Figure 7: Attention distribution plot. Vertical lines separate the three defined bins. The percentage values show the percentage of tokens with attention values in the desired bin (0.5-2) for _Baseline_, _Baseline w/ \(L_{sup}\)_, _Cellback_, and _DiRL_.
Other ablations, including 1) the _Baseline_ with an additional layer, 2) the effect of the cell segmentation pipeline, and 3) the effect of stronger augmentation for diversification, are discussed in A.2.
## 5 Discussion
In this work, we present a crucial requirement of domain-driven tailoring of SSL techniques (proposed in the natural imaging literature) to digital pathology tasks, through our observation about the sparsity of attention. We argued that since various natural imaging datasets are object-centric, the sparsity in attention does not have an adverse effect on encoding capabilities, particularly for global/shallow tasks like classification (Yun et al., 2022). However, unlike object-centric natural images, pathology images are rather a complex phenotype of various spatially intermixed biological components. Therefore, sparsity in attention leads to suboptimal encoding of this complex layout and thus could result in crucial information loss (see 11). To address this critical unmet need, we proposed _DiRL_, a framework that densely encodes various characteristics of regions and their co-occurrences in pathology by leveraging the region prior from cell segmentation. The proposed prior-guided pre-training utilizes densely extracted representations via a dense matching objective. We hypothesized that our pre-training strategy would make the attention more globally distributed, since matching multiple histopathology-specific representations would force the model to pay adequate attention to _each_ image-region relevant for the corresponding characteristics.
Through our thorough qualitative analysis, we showed that _DiRL_ de-sparsifies the attention map, thus boosting the capabilities to encode diverse information in complex histopathology imaging. We believe that this attention diversification leads to a more "complete" encoding of pathological components in WSI-crops. This was corroborated by consistent performance improvement on multiple slide-level and patch-level classification tasks by _DiRL_. We believe our work has the potential to augment future research in self-supervision/pre-training of vision-transformers for digital pathology domain. Our work opens exciting avenues toward utilizing domain-specific priors and instilling this domain knowledge in neural networks.
Figure 8: Attention distribution plot. Vertical lines separate the three defined bins. The percentage values show the percentage of tokens with attention values in the desired bin (0.5-2) for _Baseline_, _MAE_, _Cellback_, and _DiRL_.
\begin{table}
\begin{tabular}{c c c} \hline \hline Dataset & \multicolumn{2}{c}{**Lung**} \\ Metric & Acc & AUC \\ \hline Baseline-S & 0.913 & 0.967 \\ MAE-S & 0.820 & 0.910 \\ Cellback-S & 0.922 & 0.967 \\ Cellback V2-S & 0.912 & **0.971** \\ DiRL-S & 0.911 & **0.971** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison with Masked Image Modeling
A limitation of _DiRL_ is that, although it gracefully leverages the flexibility available in vision transformers, the approach cannot be trivially employed in CNNs. Since ViTs are computationally intensive, this also limits the scope of our method in edge computing or low-resource environments. In future work we will explore more fine-grained concepts based on characteristics of the tumor-immune microenvironment (Fassler et al., 2022; Abousamaro et al., 2022; Pati et al., 2020) to augment the priors.
## Declaration of competing interest
The authors do not have any competing interest.
## Data availability
Both slide-level and patch-level datasets are publicly available. Slide-level datasets are hosted on TCGA portal, whereas patch-level datasets are hosted on corresponding datasets' webpage.
## Acknowledgments
Reported research was supported by the OVPR seed grant and ProFund grant at Stony Brook University, and NIH 1R21CA258493-01A1. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. We also thank Maria Vakalopoulou and Ke Ma for their insights and valuable discussions.
|
2308.16503 | Semidiscrete optical vortex droplets in quasi-phase-matched photonic
crystals | A new scheme for producing semidiscrete self-trapped vortices
(\textquotedblleft swirling photon droplets\textquotedblright ) in photonic
crystals with competing quadratic ($\chi ^{(2)}$) and self-defocusing cubic
($\chi ^{(3)}$) nonlinearities is proposed. The photonic crystal is designed
with a striped structure, in the form of spatially periodic modulation of the
$\chi ^{(2)}$ susceptibility, which is imposed by the quasi-phase-matching
technique. Unlike previous realizations of semidiscrete optical modes in
composite media, built as combinations of continuous and arrayed discrete
waveguides, the semidiscrete vortex droplets are produced here in the fully
continuous medium. This work reveals that the system supports two types of
semidiscrete vortex droplets, \textit{viz}., onsite- and intersite-centered
ones, which feature, respectively, odd and even numbers of stripes,
$\mathcal{N}$. Stability areas for the states with different values of
$\mathcal{N}$ are identified in the system's parameter space. Some stability
areas overlap with each others, giving rise to multistability of states with
different $\mathcal{N}$. The coexisting states are mutually degenerate,
featuring equal values of the Hamiltonian and propagation constant. An
experimental scheme to realize the droplets is outlined, suggesting new
possibilities for the long-distance transmission of structured light carrying
orbital angular momentum in nonlinear media. | Xiaoxi Xu, Feiyan Zhao, Jiayao Huang, Hehe Xiang, Li Zhang, Zhaopin Chen, Zhongquan Nie, Boris A Malomed, Yongyao Li | 2023-08-31T07:27:01Z | http://arxiv.org/abs/2308.16503v2 | # Semidiscrete optical vortex droplets in quasi-phase-matched photonic crystals
###### Abstract
A new scheme for producing semidiscrete self-trapped vortices ("swirling photon droplets") in photonic crystals with competing quadratic (\(\chi^{(2)}\)) and self-defocusing cubic (\(\chi^{(3)}\)) nonlinearities is proposed. The photonic crystal is designed with a striped structure, in the form of spatially periodic modulation of the \(\chi^{(2)}\) susceptibility, which is imposed by the quasi-phase-matching technique. Unlike previous realizations of semidiscrete optical modes in composite media, built as combinations of continuous and arrayed discrete waveguides, the semidiscrete vortex "droplets" are produced here in the fully continuous medium. This work reveals that the system supports two types of semidiscrete vortex droplets, _viz_, onsite- and intersite-centered ones, which feature, respectively, odd and even numbers of stripes, \(\mathcal{N}\). Stability areas for the states with different values of \(\mathcal{N}\) are identified in the system's parameter space. Some stability areas overlap with each other, giving rise to the multistability of states with different \(\mathcal{N}\). The coexisting states are mutually degenerate, featuring equal values of the Hamiltonian and propagation constant. An experimental scheme to realize the droplets is outlined, suggesting new possibilities for the long-distance transmission of nontrivial vortex beams in nonlinear media.
**Key words**: Quasi-phase-matched photonic crystals, semidiscrete vortex droplets, striped modulation.
## I Introduction
Semidiscrete vortex quantum droplets, a new type of vortices, were initially predicted in binary Bose-Einstein condensates trapped in an array of tunnel-coupled quasi-1D potential wells [1]. Unlike vortex modes in fully continuous or fully discrete systems [2; 3; 4; 5; 6; 7], these are stripe-shaped localized states, which are continuous in one direction and discrete in the perpendicular one, and do not exhibit rotational symmetry. It is well known that the stability of self-trapped vortex modes in two-dimensional (2D) and three-dimensional (3D) geometries is a challenging problem because the self-attractive nonlinearity gives rise to strong splitting instability of vortex rings and tori, even if the collapse instability that affects fundamental (zero-vorticity) solitons in the same media may be suppressed [8; 9; 10; 11; 12; 13]. Due to the competition between the mean-field (MF) and beyond-MF effects in the bosonic condensate [14; 15; 16; 17; 18; 19; 20; 21; 22; 23], semidiscrete vortex quantum droplets may maintain stability in this setting against the azimuthal (splitting) perturbations. In the field of nonlinear optics, somewhat similar objects in the form of "photon droplets" were experimentally demonstrated in optical media with nonlocal (thermal) nonlinearity [24; 25]. Actually, the competition between different nonlinear terms is a common effect [26; 27; 28; 29; 30; 31; 32], which occurs in the propagation of high-power laser beams in various media [33; 32]. Optical semidiscrete vortex droplets can be maintained by the balance between the competing nonlinearities. In particular, stable self-bound semidiscrete vortex modes in the spatial domain were predicted in coupled planar waveguides with the cubic-quintic nonlinearity [34]. Similarly, self-bound spatiotemporal vortex modes can be predicted in coupled arrays of nonlinear fibers [35].
Recently, patterned quasi-phase matched (QPM) nonlinear photonic crystals in the 3D space have been produced by means of the thermoelectric field polarization [36], laser erasing [37], and femtosecond laser poling technique [38],
which provides more possibilities for the creation of vortex states. The QPM technique has developed to a well-known method for achieving accurate phase matching in \(\chi^{(2)}\) crystals for the nonlinear frequency conversion [39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49] and nonlinear beam shaping [50; 51; 52; 53; 54; 55; 56; 57] in different dimensions. Very recently, stable vortex solitons were predicted in 3D QPM photonic crystals [58]. The structure of the vortex solitons can be engineered by fixing different phase-matching conditions in different cells of the photonic crystals, thus inducing effective discreteness in this 3D optical medium. This technique offers a possibility of building semidiscrete vortex modes in the QPM-structured bulk photonic crystals. It is relevant to mention that effective 2D (but not 3D) discrete waveguiding structures for optical beams with the extraordinary polarization can be induced by means of a different technique in photorefractive materials illuminated by interfering beams with the ordinary polarization[59]. Such virtual photonic lattices were used for the creation of quasi-discrete 2D vortex solitons [60; 61; 62].
In this paper, we propose a scenario for the creation of semidiscrete vortex optical droplets in 3D photonic crystals with quadratic (\(\chi^{(2)}\)) and defocusing cubic (\(\chi^{(3)}\)) nonlinearities and a striped QPM-induced spatial structure, as shown in Fig. 1(a,b,c). The \(\chi^{(3)}\) nonlinearity is able to compete with the \(\chi^{(2)}\) interaction if intensities of the light fields reach the level of several GW/cm\({}^{2}\)[32], making it possible to create self-bound photon droplets. We demonstrate that the phase-matching condition, adjusted to the striped structure, induces effective discreteness between adjacent stripes, which is necessary for the design of semidiscrete states. The balance of the competition between quadratic and cubic nonlinearity allows the self-bound vortex modes to maintain their stability.
The subsequent material is arranged as follows. The model is introduced in Section 2. Numerical results and estimates for the experimental setup are presented in Section 3. The paper is concluded by Section 4.
Figure 1: (Color online) (a,b) The structures corresponding to OC and IC modulations, which are defined as per Eq. (3), \(\ell\) and \(\mathbf{O}\) representing the stripe’s width and center of the modulation pattern, respectively. The black and gray blocks represent \(\sigma=-1\) and \(+1\), respectively. (c) The periodic modulation along the \(Z\) axis defined as per Eq. (4), \(\Lambda\) being the period of the longitudinal modulation. (d) A schematic of the experimental setup for the creation of the semidiscrete vortex droplets: L1, L2, L3 – lenses, SLM1, SLM2 – spatial light modulators, PPLN – the periodically polarized lithium niobate crystal, FF – the fundamental frequency, SH – the second harmonic. Two bottom plots display the input and output intensity and phase patterns of the FF and SH beams.
## II The model
The paraxial propagation of light beams through the 3D QPM photonic crystals with the competing \(\chi^{(2)}\) and \(\chi^{(3)}\) nonlinearities is governed by coupled equations for the slowly varying fundamental frequency (FF) and second harmonic (SH) amplitudes, \(A_{1}\) and \(A_{2}\):
\[i\partial_{Z}A_{1} =-\frac{1}{2k_{1}}\nabla^{2}A_{1}-\frac{2d(X,Z)\omega_{1}}{cn_{1}} A_{1}^{*}A_{2}e^{-i\Delta k_{0}Z}+\frac{3\chi^{(3)}\omega_{1}}{2cn_{1}}(|A_{1}|^{ 2}+2|A_{2}|^{2})A_{1},\] \[i\partial_{Z}A_{2} =-\frac{1}{2k_{2}}\nabla^{2}A_{2}-\frac{d(X,Z)\omega_{2}}{cn_{2}} A_{1}^{2}e^{i\Delta k_{0}Z}+\frac{3\chi^{(3)}\omega_{2}}{2cn_{2}}(|A_{2}|^{2}+2|A_{ 1}|^{2})A_{2}, \tag{1}\]
where \(\nabla^{2}=\partial_{X}^{2}+\partial_{Y}^{2}\) is the paraxial-diffraction operator, \(c\) is the speed of light in vacuum, while \(n_{1,2}\), \(\omega_{1,2}\) (\(\omega_{2}=2\omega_{1}\)), and \(k_{1,2}\) are, respectively, the refractive indices, carrier frequencies, and wavenumbers of the FF and SH components, and \(\Delta k_{0}=2k_{1}-k_{2}\) is the phase-velocity mismatch. \(\chi^{(3)}>0\) is the third-order susceptibility, which accounts for the cubic self-defocusing. The local modulation of the second-order susceptibility \(\chi^{(2)}\) is determined by
\[d(X,Z)=\sigma(X)d(Z), \tag{2}\]
where \(\sigma(X)\) is the transverse striped OC (onsite-centered) or IC (intersite-centered) modulation pattern:
\[\sigma(X)=\begin{cases}-\text{sgn}\left[\cos(\pi X/\ell)\right]&\text{OC},\\ -\text{sgn}\left[\sin(\pi X/\ell)\right]&\text{IC},\end{cases} \tag{3}\]
\(\ell\) being the width of a stripe. The OC and IC patterns correspond to the pivot of the vortex beam located, respectively, at the center of a stripe or at the border between two stripes, see Fig. 1(a,b). Further, the factor accounting in Eq. (2) for the modulation in the \(Z\) direction, with amplitude \(d_{0}\) and period \(\Lambda\), is [63; 64; 65]
\[d(Z)=d_{0}\text{sgn}\left[\cos(2\pi Z/\Lambda)\right]\equiv d_{0}\sum_{m\neq 0 }\left(\frac{2}{m\pi}\right)\sin\left(\frac{m\pi}{2}\right)\exp\left(i\frac{2 \pi m}{\Lambda}Z\right), \tag{4}\]
see Fig. 1(c). Actually, only the terms with \(m=\pm 1\) are kept in Eq. (4), as they play the dominant role in the QPM effect. Thus, \(m=1\) and \(-1\) relate to the FF and SH components, respectively.
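To make the modulation geometry of Eqs. (2)-(4) explicit, the short sketch below evaluates the transverse OC/IC patterns and the longitudinal square-wave modulation on a grid; the numerical values of \(\ell\), \(\Lambda\) and \(d_{0}\) are arbitrary illustrative choices.

```python
import numpy as np

def sigma(x, ell, pattern="OC"):
    """Transverse striped modulation of Eq. (3): OC or IC pattern."""
    if pattern == "OC":
        return -np.sign(np.cos(np.pi * x / ell))
    return -np.sign(np.sin(np.pi * x / ell))   # IC

def d_longitudinal(z, d0, Lambda):
    """Longitudinal square-wave modulation of Eq. (4)."""
    return d0 * np.sign(np.cos(2.0 * np.pi * z / Lambda))

# Illustrative grid: stripe width ell = 10, QPM period Lambda = 1, amplitude d0 = 1.
x = np.linspace(-40.0, 40.0, 801)
z = np.linspace(0.0, 5.0, 501)
chi2_map = np.outer(sigma(x, ell=10.0, pattern="OC"), d_longitudinal(z, d0=1.0, Lambda=1.0))
# chi2_map[i, j] = d(X_i, Z_j) = sigma(X_i) * d(Z_j), as in Eq. (2).
```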
By means of rescaling [66; 67]
\[\zeta=\left(\frac{n_{1}}{\omega_{1}}+\frac{n_{2}}{\omega_{2}}\right),\quad z_{d}^{-1}=\frac{2}{c\pi}\frac{d_{0}^{2}}{\chi^{(3)}}\left(\frac{\omega_{1}^{2}\omega_{2}}{n_{1}^{2}n_{2}}\zeta\right)^{\frac{1}{2}},\] \[u_{p}=\frac{\chi^{(3)}}{d_{0}}\sqrt{\frac{n_{p}}{\omega_{p}\zeta}}A_{p}\exp\left[i(\Delta k_{0}-2\pi/\Lambda)Z\right],\quad p=1,2,\] \[z=Z/z_{d},\quad x=\sqrt{k_{1}/z_{d}}X,\quad y=\sqrt{k_{1}/z_{d}}Y,\quad\Omega=\left(\Delta k_{0}-2\pi/\Lambda\right)z_{d},\] \[g_{11}=\frac{3\pi}{4}\sqrt{\frac{\omega_{1}^{2}n_{2}}{n_{1}^{2}\omega_{2}}\zeta},\quad g_{22}=\frac{3\pi}{4}\sqrt{\frac{\omega_{2}^{3}n_{1}^{2}}{n_{2}^{3}\omega_{1}^{2}}\zeta},\quad g_{12}=\frac{3\pi}{2}\sqrt{\frac{\omega_{2}}{n_{2}}\zeta}, \tag{5}\]
Eqs. (1), which keep, as said above, the terms with \(m=\pm 1\) in Eq. (4), are simplified to the form of
\[i\partial_{z}u_{1}=-\frac{1}{2}\nabla^{\prime 2}u_{1}-\Omega u_{1}-2 \sigma(x)u_{1}^{*}u_{2}+\left(g_{11}|u_{1}|^{2}+g_{12}|u_{2}|^{2}\right)u_{1}, \tag{6}\] \[i\partial_{z}u_{2}=-\frac{1}{2\eta}\nabla^{\prime 2}u_{2}-\Omega u _{2}-\sigma(x)u_{1}^{2}+\left(g_{22}|u_{2}|^{2}+g_{12}|u_{1}|^{2}\right)u_{2}, \tag{7}\]
where \(\nabla^{{}^{\prime}2}=\partial_{xx}+\partial_{yy}\) and \(\eta=k_{2}/k_{1}\). Neglecting the slight difference in the FF and SH refractive indices, i.e., setting \(n_{1}=n_{2}\), results in coefficients \(g_{11}=3\pi\sqrt{3}/8\), \(g_{22}=g_{12}=4g_{11}\) and \(\eta=2\) in Eqs. (6) and (7).
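As a quick numerical cross-check of these values, the snippet below evaluates the couplings of Eq. (5) for \(n_{1}=n_{2}\) and \(\omega_{2}=2\omega_{1}\); the specific values of \(n_{1}\) and \(\omega_{1}\) are arbitrary, since they cancel in the dimensionless combinations.

```python
import numpy as np

n1 = n2 = 2.2          # arbitrary refractive index (cancels out)
w1 = 1.0               # arbitrary FF frequency (cancels out)
w2 = 2.0 * w1
zeta = n1 / w1 + n2 / w2

g11 = 0.75 * np.pi * np.sqrt(w1**2 * n2 / (n1**2 * w2) * zeta)
g22 = 0.75 * np.pi * np.sqrt(w2**3 * n1**2 / (n2**3 * w1**2) * zeta)
g12 = 1.5 * np.pi * np.sqrt(w2 / n2 * zeta)
eta = (n2 * w2) / (n1 * w1)            # k2 / k1 with k = n * omega / c

print(np.isclose(g11, 3.0 * np.sqrt(3.0) * np.pi / 8.0))            # True
print(np.isclose(g22, 4.0 * g11), np.isclose(g12, 4.0 * g11), eta)  # True True 2.0
```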
Equations (6) and (7) conserve two dynamical invariants, _viz._, the Hamiltonian and total power (alias the Manley-Rowe invariant [68; 69; 70]):
\[H=\int\int\ \left(\mathcal{H}_{k}+\mathcal{H}_{\Omega}+\mathcal{H} _{2}+\mathcal{H}_{3}\right)dxdy, \tag{8}\] \[P=\iint\left(|u_{1}|^{2}+2|u_{2}|^{2}\right)dxdy\equiv P_{1}+P_{2}, \tag{9}\]
where \(\mathcal{H}_{k}=\frac{1}{2}|\nabla u_{1}|^{2}+\frac{1}{2\eta}|\nabla u_{2}|^{2}\), \(\mathcal{H}_{\Omega}=-\Omega\left(|u_{1}|^{2}+|u_{2}|^{2}\right)\), \(\mathcal{H}_{2}=\sigma(x)\left(u_{1}^{*2}u_{2}+\text{c.c.}\right)\) and \(\mathcal{H}_{3}=\frac{1}{2}g_{11}|u_{1}|^{4}+g_{12}|u_{1}|^{2}|u_{2}|^{2}+\frac{1}{2}g_{22}|u_{2}|^{4}\). The power sharing between the FF and SH components is defined as the ratio \(r=P_{1}/P_{2}\). Control parameters for the subsequent analysis are \(P\), \(\ell\) and \(\Omega\) (the total power, the stripe's width, and the scaled detuning).
## III Results
### Numerical results
Stationary solutions to Eqs. (6) and (7) with propagation constant \(\beta\) were looked for as
\[u_{p}\left(x,y,z\right)=\phi_{p}\left(x,y\right)\mathrm{e}^{ip\beta z},\quad p =1,2 \tag{10}\]
where \(\phi_{1,2}\) are the stationary amplitudes of the FF and SH component. Vortex solutions were generated by means of the imaginary-time (imaginary-\(z\)) method, applied to Eqs. (6) and (7), with the input taken at \(z=0\) as
\[\phi_{1} =r^{|S|}\exp\left(-\alpha_{1}r^{2}+iS\theta\right), \tag{11}\] \[\phi_{2} =r^{2|S|}\exp\left[-\alpha_{2}r^{2}+i2S\tilde{\theta}(x)\right], \tag{12}\]
where \(r\) and \(\theta\) are the 2D polar coordinates, and
\[\tilde{\theta}(x)=\theta+\frac{1}{4S}[\sigma(x)-1]\pi, \tag{13}\]
with \(S\) and \(2S\) representing the winding numbers of FF and SH components, respectively. In Eq. (13), the matching condition between the phases of the FF and SH components, \(\varphi_{1,2}(x,y)\equiv\arg\{\phi_{1,2}\}\), is defined by setting
\[\varphi_{2}(x,y)=2\varphi_{1}(x,y)-\varphi_{d}(x,y), \tag{14}\]
with \(\varphi_{d}=0\) and \(\pi\) corresponding, respectively, to \(\sigma(x)=1\) and \(-1\) (i.e., \(\varphi_{d}=-[\sigma(x)-1]\pi/2\)). According to Eqs. (11) and (12), one has \(\varphi_{1}=S\theta\) and \(\varphi_{2}=2S\tilde{\theta}(x)\), hence Eq. (13) is derived via Eq. (14).
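Setting up the input of Eqs. (11)-(13) on a Cartesian grid is straightforward; a minimal sketch follows, where the grid extent, the Gaussian widths \(\alpha_{1,2}\) and the stripe width are illustrative choices and \(\sigma(x)\) is taken as the OC pattern of Eq. (3). The last lines also evaluate the total power of Eq. (9) for this input.

```python
import numpy as np

S, ell, alpha1, alpha2 = 1, 10.0, 0.02, 0.02      # illustrative parameters
x = np.linspace(-60.0, 60.0, 256)
y = np.linspace(-60.0, 60.0, 256)
X, Y = np.meshgrid(x, y, indexing="ij")
R = np.sqrt(X**2 + Y**2)
Theta = np.arctan2(Y, X)

sigma_x = -np.sign(np.cos(np.pi * X / ell))                   # OC pattern, Eq. (3)
theta_tilde = Theta + (sigma_x - 1.0) * np.pi / (4.0 * S)     # Eq. (13)

phi1 = R**abs(S) * np.exp(-alpha1 * R**2 + 1j * S * Theta)              # Eq. (11)
phi2 = R**(2 * abs(S)) * np.exp(-alpha2 * R**2 + 2j * S * theta_tilde)  # Eq. (12)

# Total power of Eq. (9) for the input, evaluated on the grid:
dx, dy = x[1] - x[0], y[1] - y[0]
P = np.sum(np.abs(phi1)**2 + 2.0 * np.abs(phi2)**2) * dx * dy
```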
The stability of the stationary vortex solitons was tested by direct real-\(z\) simulations of the perturbed evolution in the framework of Eqs. (6) and (7) up to \(z=10000\). Unstable solutions readily exhibit splitting in the course of the simulations.
Figure 2: (Color online) Typical examples of the intensity distribution, \(|\phi_{1}(x,y)|^{2}\) (the first row) and \(|\phi_{2}(x,y)|^{2}\) (the third row), and phase patterns of \(\phi_{1}(x,y)\) (the second row) and \(\phi_{2}(x,y)\) (the fourth row) of a semidiscrete vortex optical droplet with \(S=1\), which corresponds to point “A-D” in the stable area of the OC-type in Fig. 4(a). The parameters are \((P,\ell)=(80,18)\) in (a), \((80,11)\) in (b), \((80,8)\) in (c), and \((80,6.5)\) in (d). In this case, the effective detuning is fixed as \(\Omega=0\).
As the stripe modulation acts solely in the \(x\)-direction, the vortex solutions feature a similar modulation in the same direction, and can be characterized by the number of stripes, \(\mathcal{N}\), in the localized solution. According to Eq. (3), the solutions are also categorized into the OC and IC types. For the OC- and IC-type solutions, with the pivot located at the center of a stripe or at the border between adjacent stripes, the numbers \(\mathcal{N}\) are, respectively, odd or even. Typical examples of the OC-type vortex solutions with \(\mathcal{N}=3,5,7,9\) and IC-type ones with \(\mathcal{N}=2,4,6,8\), all carrying the winding number \(S=1\) in their FF component, are shown, respectively, in Figs. 2 and 3. All these states are stable, as confirmed by direct simulations up to \(z=10000\). Because the modulation is applied only along the \(x\)-direction, the vortex solutions feature a typical semidiscrete configuration similar to that reported in previous works [1; 34]. However, unlike those works, the stable vortex modes are elaborated here in bulk crystals with the spatially modulated local \(\chi^{(2)}\) susceptibility. The vortex phase patterns of the SH component exhibited in Figs. 2 and 3 are striped so as to obey the matching condition of Eq. (14); hence, the effective angular coordinate of this component is defined by Eq. (13), showing a complicated striped-mixed vorticity pattern of this component.
Stability areas for the vortex solutions of the OC and IC types, with different values of \(\mathcal{N}\), are shown in the \((P,\ell)\) plane, with \(\Omega=0\) and \(S=1\), in Fig. 4(a,b). These plots demonstrate that the vortex solutions with larger values of \(\mathcal{N}\) are also stable, as can be seen in Figs. 4(c) and 4(d).
Figure 4: (Color online) Stability areas of semidiscrete vortex optical droplets of the OC (a) and IC (b) types, in the \((P,\ell)\) plane with \(\Omega=0\) and \(S=1\). Digits in colored stability areas indicate the number of stripes in the droplets, which are odd in (a) and even in (b). Note the presence of bistability in the cyan and yellow areas. Stability ranges of multistable states are shown in (c) and (d) for the droplets of the OC and IC types, respectively, with \((\ell,\Omega)=(10,0)\) and varying values of the total power, \(P\).
Figure 3: (Color online) A typical example of a stable IC-type optical droplet with \((P,\Omega)=(80,0)\) and \(S=1\). The first and second rows display the intensity and phase distributions of the FF component, while the third and fourth rows exhibit the same for the SH component. The stripe’s widths in panels (a-d) are \(\ell=22,13,9,\) and \(7\), respectively, corresponding to points “A-D” in Fig. 4(b).
Larger values of \(P\) and smaller values of \(\ell\) produce overlaps between different stability areas [see the cyan and yellow areas in Fig. 4(a), and the cyan area in Fig. 4(b)]. The overlaps indicate the presence of multistability in the system. To further illustrate this feature, we select \(\ell=10\) and examine solutions with different values of \(\mathcal{N}\), up to \(P=200\). In this region, we find that four different values of \(\mathcal{N}\) stably coexist at \(P=200\) for both OC and IC types of the solutions. Note also that the solutions with larger values of \(\mathcal{N}\) require larger values of \(P\) to support the stability. Multistable semidiscrete vortex solutions are characterized by values of \(H\), \(\beta\) and \(r\), which are displayed, as functions of \(P\), in Fig. 5, while keeping \(\ell\) and \(\Omega\) fixed. Notably, curves \(H(P)\) and \(\beta(P)\) for these solutions overlap almost completely, indicating degenerate solutions. The nearly flat dependences, with \(d\beta/dP\approx 0\), indicate the existence of broad states, which may be considered as effectively liquid ones, cf. Ref. [14]. A nearly constant value \(r(P)\approx 2.7\) in Fig. 5(e,f) implies the domination of the FF component in the system.
Finally, in Fig. 6 we present the range of the effective detuning for a fixed total power, \(P=100\). The results indicate that stable semidiscrete vortex solitons exist for \(\Omega\neq 0\).
Exploring vortex states with \(S>1\) is a challenging problem, as they may be stable only for sufficiently large values of \(P\). Two typical examples of stable vortex solutions of the OC and IC types with \(S=2\), and \(\mathcal{N}=5\) and \(6\), are shown in Fig. 7. The stable solution with \(S=2\) exists at \(P>280\). This threshold is much higher than its counterpart for \(S=1\), in which case the stable solutions are found at \(P>40\).
### An outline of the experimental setup
The fabrication of 3D photonic crystals by means of femtosecond laser pulses is a mature technology [71; 72]. To estimate parameters of the setting under consideration, we consider the nonlinear photonic crystal implemented in LiNbO\({}_{3}\), which has the second-order nonlinearity coefficient \(d_{0}=d_{22}=2.1\) pm/V [73], and the third-order one \(\chi^{(3)}=6.6\times 10^{-22}\) m\({}^{2}\)/V\({}^{2}\)[74]. The wavelengths of the FF and SH components are selected as 1064 nm and 532 nm, respectively.
Figure 5: (Color online) (a,b) Hamiltonian \(H\), defined as per Eq.(8), (c,d) the FF propagation constant \(\beta\), and (e,f) ratio \(r=P_{1}/P_{2}\) vs. total power \(P\). The lines of blue stars, cyan circles, orange squares, and red balls in (a,c,e) indicate, respectively, semidiscrete vortex states with stripe numbers \(\mathcal{N}=5,7,9\), and \(11\), while in (b,d,f), they represent \(\mathcal{N}=6,8,10\), and \(12\). Other parameters are \(\ell=10,\Omega=0\), and vorticity of fundamental frequency \(S=1\).
Figure 6: (Color online) (a) and (b): The FF propagation constant \(\beta\) for stable vortex optical droplets of the OC and IC types vs. effective detuning \(\Omega\). Solid and dashed lines denote the number of stripes \(\mathcal{N}=5\) and \(\mathcal{N}=7\) for OC, or \(\mathcal{N}=6\) and \(\mathcal{N}=8\) for IC, respectively. Other parameters are \((P,\ell)=(100,10)\) and \(S=1\).
The relations between the scaled units, in which Eqs. (6) and (7) are written, and their physical counterparts can be established by means of Eq. (5), as summarized in Table 1.
According to the results of the simulations (see Figs. 2 and 3), the peak droplet intensities in the FF and SH components are \(I_{\rm FF}\approx 4.32\) GW/cm\({}^{2}\) and \(I_{\rm SH}\approx 0.432\) GW/cm\({}^{2}\), respectively. If we select the pulse width as 200 ps, the energy densities for the FF and SH components are 0.86 J/cm\({}^{2}\) and 0.086 J/cm\({}^{2}\), which are lower than the damage threshold (\(\sim\)1.5 J/cm\({}^{2}\)[75; 76]) of the PPLN crystals. The characteristic propagation distance, which is \(z=10000\), amounts to 2.8 m, which is several times the underlying diffraction length. Therefore, the stability of the solitons predicted by the simulations is a reliable prediction.
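These estimates follow directly from the conversion factors of Table 1; the short check below reproduces them. The peak scaled intensities used here are back-converted from the quoted GW/cm\({}^{2}\) values, so the snippet only illustrates the bookkeeping of units rather than adding new information.

```python
# Conversion factors from Table 1 (scaled unit -> physical value).
Z_UNIT_M = 280e-6             # z = 1  ->  280 micrometers
I_FF_PER_001 = 1.44e9         # |u1|^2 = 0.01  ->  1.44 GW/cm^2  (in W/cm^2)
I_SH_PER_001 = 0.72e9         # |u2|^2 = 0.01  ->  0.72 GW/cm^2  (in W/cm^2)

u1_sq_peak, u2_sq_peak = 0.03, 0.006       # peak scaled intensities (back-converted, indicative)
I_ff = u1_sq_peak / 0.01 * I_FF_PER_001    # ~4.32e9 W/cm^2
I_sh = u2_sq_peak / 0.01 * I_SH_PER_001    # ~0.43e9 W/cm^2

pulse_width = 200e-12                      # 200 ps
fluence_ff = I_ff * pulse_width            # ~0.86 J/cm^2, below the ~1.5 J/cm^2 damage threshold
fluence_sh = I_sh * pulse_width            # ~0.086 J/cm^2

propagation_length = 10000 * Z_UNIT_M      # z = 10000  ->  2.8 m
print(I_ff, fluence_ff, propagation_length)
```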
The sketch for the experimental observation of these droplets is shown in Fig. 1(d): a high-power sub-nanosecond laser (e.g., one with a pulse energy exceeding 0.6 mJ, 200 ps pulse width, and 400 Hz repetition rate) may be a suggested light source in the proposed experiment. The input patterns on the front surface of the PPLN can be generated by a spatial light modulator (SLM) and a lens. Input FF and SH components can be selected with energies 0.37 mJ and 0.16 mJ per pulse, which are close to the FF/SH power ratio \(r\approx 2.6\) for stationary semidiscrete vortex droplets in Figs. 5(e) and (f). If the input's power ratio is essentially different from this value, it naturally gives rise to strong oscillations between the FF and SH components. The power and phase patterns of the FF component of the input, which are shown in the left inset of Fig. 1(d), can be produced by a properly designed SLM1 and L1, and the input SH patterns with 0.14 mJ per pulse can be produced by SLM2 and L2, respectively. The FF and SH beams are coupled by the dichroic mirror, which sends them onto the PPLN coaxially. The necessary PPLN crystal may be 4 cm long along the axial direction. Finally, the beams transmitted through the PPLN are imaged onto a camera by lens L3. The simulated output patterns at the back surface of the crystal are shown in the right inset in Fig. 1(d).
## IV Conclusion
We have proposed 3D photonic crystals with the striped structure and the combination of the \(\chi^{(2)}\) and defocusing \(\chi^{(3)}\) nonlinearities. The results of the analysis predict the creation of two types of stable semidiscrete vortex solutions, OC and IC (onsite- and intersite-centered ones), which exhibit an odd or even number of stripes in their structure, respectively. The smallest number of stripes is \(\mathcal{N}_{\rm OC}=3\) or \(\mathcal{N}_{\rm IC}=2\). The setting admits multistability, _viz._, the coexistence of stable solutions with different numbers of stripes for the same parameters.
\begin{table}
\begin{tabular}{c c} \(x=1\) \& \(y=1\) & 4.64 \(\mu\)m \\ \(\ell=10\sim 15\) & 46.4 \(\sim\) 69.6 \(\mu\)m \\ \(z=1\) & 280 \(\mu\)m \\ \(P=1\) & 31 kW \\ \(|u_{1}|^{2}=0.01\) \& \(|u_{2}|^{2}=0.01\) & 1.44 GW/cm\({}^{2}\) \& 0.72 GW/cm\({}^{2}\) \\ \end{tabular}
\end{table}
Table 1: Relations between scaled and physical units of the coordinates, stripe’s width, total power, and intensity.
Figure 7: (Color online) (a1)-(a4) A typical example of a stable semidiscrete vortex droplet of the OC type, with vorticity \(S=2\) of the FF component. Other parameters are \((P,\ell,\Omega)=(300,15,0)\). (b1)-(b4) An example of a stable semidiscrete vortex droplet of the IC type, with vorticity \(S=2\) of the FF component, for the same parameters as in (a1)-(a4). The four columns of panels from left to right display the intensity and phase patterns of the FF and SH components. These solitons with \(S=2\) remain stable for the propagation distance \(z=10000\), up to which the simulations were running.
Unlike the multistability in systems with the competing cubic-quintic nonlinear interactions, the Hamiltonian and propagation constant of these coexisting states are equal, thus featuring degeneracy. The range of the multistability has been found. The stable solutions for semidiscrete vortices exist within a certain range of positive or negative values of the phase mismatch of the \(\chi^{(2)}\) interaction in the medium.
The scheme proposed in this paper can be developed in other settings, such as photonic crystals with ring- or fan-shaped structures. Those settings can be used, in particular, for implementation of various scenarios of beam shaping.
###### Acknowledgements.
This work was supported by the NNSFC (China) through Grants No. 12274077, 11874112, by the Research Fund of Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology through grant No.2020B1212030010 and the Graduate Innovative Talents Training Program of the Foshan University. The work of B.A.M. is supported, in part, by the Israel Science Foundation through grant No. 1695/22.
|
2303.00013 | What neutron stars tell about the hadron-quark phase transition: a
Bayesian study | The existence of quark matter inside the heaviest neutron stars has been the
topic of numerous recent studies, many of them suggesting that a phase
transition to strongly interacting conformal matter inside neutron stars is
feasible. Here we examine this hybrid star scenario using a soft and a stiff
hadronic model, a constituent quark model with three quark flavours, and
applying a smooth crossover transition between the two. Within a Bayesian
framework, we study the effect of up-to-date constraints from neutron star
observations on the equation-of-state parameters and various neutron star
observables. Our results show that a pure quark core is only possible if the
maximum mass of neutron stars is below $\sim2.35~M_\odot$. However, we also
find, consistently with other studies, that a peak in the speed of sound,
exceeding $1/3$, is highly favoured by astrophysical measurements, which might
indicate the percolation of hadrons at $\sim3-4n_0$. Even though our prediction
for the phase transition parameters varies depending on the specific
astrophysical constraints utilized, the position of the speed of sound peak
only changes slightly, while the existence of pure quark matter below $\sim4
n_0$, using our parameterization, is disfavoured. On the other hand, the
preferred range for the EoS shows signs of conformality above $\sim4n_0$.
Additionally, we present the difference in the upper bounds of radius estimates
using the full probability density data and sharp cut-offs, and stress the
necessity of using the former. | János Takátsy, Péter Kovács, György Wolf, Jürgen Schaffner-Bielich | 2023-02-28T19:00:02Z | http://arxiv.org/abs/2303.00013v2 | # What neutron stars tell about the hadron-quark phase transition: a Bayesian study
###### Abstract
The existence of quark matter inside the heaviest neutron stars has been the topic of numerous recent studies, many of them suggesting that a phase transition to strongly interacting conformal matter inside neutron stars is feasible. Here we examine this hybrid star scenario using various hadronic models, a constituent quark model with three quark flavours, and applying a smooth crossover transition between the two. Within a Bayesian framework, we rigorously study the effect of up-to-date constraints from neutron star observations on the equation-of-state parameters and various neutron star observables. Our results show that a pure quark core is only possible if the maximum mass of neutron stars is below \(\sim 2.3~{}M_{\odot}\). We also find, however, consistently with other studies, that a peak in the speed of sound, exceeding \(1/3\), is highly favoured by astrophysical measurements, which might indicate the percolation of hadrons at \(\sim 3-4n_{0}\). Even though our prediction for the phase transition parameters varies depending on the specific astrophysical constraints utilized, the position of the speed of sound peak only changes slightly, while the existence of pure quark matter below \(\sim 4n_{0}\) is disfavoured. Additionally, we present the difference in the upper bounds of radius estimates using the full probability density data and sharp cut-offs, and stress the necessity of using the former.
## I Introduction
Neutron stars (NSs) are one of the endpoints of stellar evolution formed in core-collapse supernovae with a progenitor mass of \(8M_{\odot}\) or more. NSs are so compact that the central energy density can reach several times the one of nuclear matter at saturation. At these high densities new particles could emerge and/or matter is transformed to a new phase characterised by approximate chiral symmetry restoration, which is dubbed the hadron-quark phase transition (see e.g. Ref. [1] for an introduction).
In the last few years, observations of NSs revealed several breakthrough measurements of their global properties. So it is now well established that pulsars, rotation-powered NSs, can have masses of around two solar masses [2; 3; 4; 5; 6], as determined from the timing of the pulses of NSs in binary systems with corrections from general relativity, such as the pulsar PSR J0740+6620 with a mass of \(M=2.08\pm 0.07M_{\odot}\)[6]. NSs with a low-mass stellar companion, so called black-widow or red-back pulsars, can have even higher masses. These masses are extracted from the observation of the stellar companion and amount to \(M=2.11\pm 0.04M_{\odot}\) for PSR J1810+1744, \(M=2.22\pm 0.10M_{\odot}\) for PSR 1311-3430 and even \(M=2.35\pm 0.17M_{\odot}\) for PSR J0952-0607 [7; 8; 9].
Furthermore, the mass and radius of NSs could be constrained directly with the phase-resolved observations of the hot spots on the surface of the NS with the NICER mission for PSR J0030+0451 [10; 11; 12] and PSR J0740+6620 [13; 14; 15]. Analysis of the gravitational wave (GW) event GW170817 of a binary NS merger reveals that NSs must have a rather small radius. The limit on the tidal deformability inferred from the GW has been extracted for low and high spins of the merging NSs and under different assumptions about the properties of NS matter, i.e. the equation of state (EoS), by the LIGO/Virgo scientific collaboration [16; 17; 18]. The limit on the radius has been inferred by various groups to be \(R\leq 13.2\) to \(13.7\) km for a \(1.4M_{\odot}\) NS (see Refs. [19; 20; 21; 22]).
Due to the high densities present in the cores of the most massive NSs it is possible that hybrid stars exist with cores containing deconfined quark matter. The possibility of such a hadron to quark phase transition in NSs has been the topic of numerous recent studies. Many of them have investigated the impact of a strong first-order phase transition on astrophysical observables [23; 24], as such a phase transition is proposed by effective quark-meson models. However, recent astrophysical measurements seem to rule out strong first-order phase transitions at low densities while making their existence unlikely at higher densities as well [25; 26; 27; 28]. Another way to look for an indication of deconfinement inside NSs is to investigate if the conformal limit is approached. Multiple studies suggest that the existence of conformal matter inside NSs is feasible, which might indicate the existence of hybrid stars [29; 30]. Contrary to a first-order phase transition many recent studies propose an alternative scenario with a peak appearing in the speed of sound and
reaching above the conformal limit [30; 31; 32]. This possibility is naturally achieved in models of the so-called quarkyonic matter (e.g. [33; 34]). Recent investigations of color superconductivity using functional methods also independently found such a peak in their models [35; 36; 37].
Despite recent developments in the fields of the complex Langevin method or alternative expansion schemes [38; 39; 40], the sign problem still poses a huge challenge for first-principle calculations of quantum chromodynamics (QCD). Therefore, when trying to describe strongly interacting matter at finite densities and low temperature, an effective treatment of the strong degrees of freedom is reasonable.
The nuclear EoS below saturation density is well established. Here, two- and three-body interactions are determined by experimental data, mostly based on nucleon-nucleon scattering and properties of light nuclei (e.g. [41; 42]). For these microscopic methods the sources of uncertainty usually stem from the interactions applied, as well as from the calculation methods themselves [43; 44]. In addition to higher-body interactions becoming more important at higher densities, one might also need to account for new degrees of freedom, such as hyperons or quarks. Chiral Effective Field Theory (EFT) provides a robust way to estimate these uncertainties [45; 46]. According to state-of-the-art calculations, the uncertainties of the nuclear EoS become increasingly significant above \(\sim 1.1\,n_{0}\), with \(n_{0}\) being the baryon density at nuclear saturation [47; 48]. Hadron resonance gas models provide a different way to calculate the low-density EoS, while accommodating lattice data [49; 50]. Another common way to account for the low-density behaviour of hadronic matter is to apply a relativistic mean field model with parameters set by nuclear properties at saturation (e.g. [51; 52; 53]).
On the opposite side of the phase diagram, at very high densities, due to the asymptotic freedom, one can resort to perturbative QCD methods [54; 55]. This area, however, despite what its name might suggest, is remarkably challenging due to an infinite number of diagrams that need to be accounted for at a given order. In fact in the past several decades, only few advancements have been reported in this field, with recent studies calculating the leading contributions to N\({}^{3}\)LO [56]. Considering the zero temperature EoS, this method gives reliable results at \(\mu_{B}\gtrsim 2.5\) GeV, or equivalently \(n_{B}\gtrsim 40\,n_{0}\).
Therefore, there is a range of more than an order of magnitude in density where the EoS is largely uncertain. Several approaches exist in this region as well: apart from the ones that extrapolate from saturation properties of nuclear matter [57], one might also use Nambu-Jona-Lasinio (NJL) or linear sigma type models, which are based on the global symmetries of QCD, especially on chiral symmetry and the premise of chiral symmetry restoration at high densities (e.g. [58; 59; 60; 61; 62]). In this region the phase boundaries are also ambiguous. Quark deconfinement can occur at virtually any density in this uncertain domain, while there are also strong indications for the existence of a color superconducting phase at densities reachable inside the cores of massive NSs. Extensive studies of NJL type models exist in the literature, with non-local interactions also being considered in recent studies [63; 64; 65].
In this paper we use a hybrid approach by utilizing relativistic mean field models at low densities (the SFHo and the DD2 models, [51; 52; 53]), quark EoSs at high densities, while a smooth interpolation is applied between the two. For the high-density part we use EoSs derived from a \(\rm U_{L}(3)\times U_{R}(3)\) chirally symmetric constituent quark-vector meson model developed by our group [66; 67; 68; 69].
This paper is organized in the following way. In Sec. II we review the method we used to construct hybrid EoSs for cold NS matter, then after summarizing recent results from NS observations we introduce our Bayesian framework. In Sec. III we show the results from our Bayesian analysis and demonstrate how different astrophysical measurements influence the outcome of these.
## II Methods
In this section we first review how our hybrid EoS is constructed, then after an overview of the calculation of NS observables and recent observations we proceed to describe the details of our Bayesian analysis, devoting special attention to how our posterior probabilities are calculated.
### Equation of state
To be able to investigate the effect of variations in the properties of our quark model on stable sequences of NSs, we need to construct a reliable EoS covering a large range in density from below saturation density up to \(n_{B}\approx 5-6\,n_{0}\) where \(n_{0}\) is the baryon density at nuclear saturation.
As already mentioned in Sec. I we use a hybrid approach combining EoSs from hadronic and quark matter. For the hadronic part we use two relativistic mean field models, the Steiner-Fischer-Hempel (SFHo) model [51] and the density-dependent model of Typel et al. (DD2) [52; 53]. Both models are consistent with chiral EFT calculations, but differ in the stiffness at higher densities; while the SFHo EoS is relatively soft, the DD2 EoS is quite stiff.
For the quark part we utilize the (axial)vector meson extended linear sigma model (eLSM), developed and thoroughly investigated in several previous papers [66; 67; 68; 69] with investigations about the large-\(N_{c}\) behaviour as well [70]. This is a three-flavour quark-meson model containing constituent quarks and the complete nonets of (pseudo)scalar and (axial)vector mesons. The advantage of this model - altogether with the parameterization procedure and the approximations that were used - is that it reproduces the meson spectrum (and also various decay widths) quite well at \(T=\mu_{B}=0\), and moreover, its finite
temperature version also agrees well with various lattice results [67]. The detailed description of the approximation we use in this paper can be found in Ref. [71], where we have also already provided a comparison between the sequences of static (non-rotating) NSs predicted by the model and astrophysical observations.
It is worth noting that due to various particle mixings in the scalar sector, there is more than one possibility to assign scalar mesons to experimental resonances [66]. Possibly as a consequence of this mixing, the mass of the sigma meson needs to be very low in our model in order to achieve a correct finite temperature behaviour and to reproduce other meson masses correctly. This problem might be resolved by considering additional bound quark states, such as tetraquarks. However, as we showed in Ref. [71], this low sigma meson mass, within the framework of this model, is also consistent with astrophysical observations.
Nevertheless, similarly to the analysis in Ref. [71] we leave \(m_{\sigma}\) as a free parameter and let it vary between \(290-700\) MeV. Another important parameter, which is not fixed by experimental data in the approximation we use, is the coupling between vector mesons and constituent quarks. We vary this parameter in the range \(g_{V}\in[0,10]\).
Since the low- and high-density models operate with different degrees of freedom, we need to utilize some effective method to transition from one phase to the other. The simplest way to do this, which we will also follow in our paper, is to simply interpolate between the two zero temperature EoSs in some intermediate-density region (see e.g. Refs. [72; 73; 74; 75]). Similarly to other studies, we use a polynomial interpolation; however, in contrast to using the pressure, \(p(\mu_{B})\), as a thermodynamic potential for the interpolation, we use the energy density, \(\varepsilon(n_{B})\). We do this since we find that this way the sound speed in the intermediate region shows a less sharp peak, and therefore a larger ensemble of EoSs will turn out to be causal.
In case the hadronic EoS, \(\varepsilon_{\rm H}(n_{B})\), is valid up to \(n_{BL}\), and the quark EoS, \(\varepsilon_{\rm Q}(n_{B})\), can be utilized above \(n_{BU}\), the interpolating polynomial looks like:
\[\varepsilon(n_{B})=\sum_{k=0}^{N}C_{k}n_{B}^{k}\,,\quad n_{BL}<n_{B}<n_{BU}, \tag{1}\]
where the coefficients \(C_{k}\) are determined so that the energy density and several of its derivatives remain continuous in the whole region. In our case, we use a fifth-order polynomial, so we need the energy density and its first and second derivatives to be continuous at the boundaries. The first derivative of the energy density with respect to the baryon number density is the baryon chemical potential, so this condition will ensure that the pressure is also continuous at the boundaries. The condition for the second derivative, in addition, ensures a continuous speed of sound.
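As an illustration of how these matching conditions fix the six coefficients, the following minimal sketch sets up and solves the corresponding linear system; the callables `eps_H` and `eps_Q` (returning the energy density and its first two derivatives with respect to \(n_{B}\)) are hypothetical helpers standing in for the hadronic and quark EoSs.

```python
import numpy as np

def interpolation_coefficients(n_L, n_U, eps_H, eps_Q):
    """Coefficients C_k of the fifth-order polynomial in Eq. (1), obtained by
    matching eps(n_B) and its first two derivatives to the hadronic EoS at n_L
    and to the quark EoS at n_U (six conditions for six coefficients)."""
    rows, rhs = [], []
    for n, eos in ((n_L, eps_H), (n_U, eps_Q)):
        values = eos(n)                         # (eps, d eps/dn, d^2 eps/dn^2)
        for d in range(3):                      # derivative order 0, 1, 2
            rows.append([np.prod(range(k - d + 1, k + 1)) * n**(k - d)
                         if k >= d else 0.0 for k in range(6)])
            rhs.append(values[d])
    return np.linalg.solve(np.array(rows), np.array(rhs))
```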
From \(n_{BL}\) and \(n_{BU}\) we define the central density and width of the phase transition as \(\bar{n}=(n_{BU}+n_{BL})/2\) and \(\Gamma=(n_{BU}-n_{BL})/2\), respectively.
As a further remark, we add that a Maxwell construction is also a common way to get from one phase to the other, however, this method limits the possible range of concatenations by requiring the \(p(\mu_{B})\) curves of the two phases to cross each other, and also removing the freedom of choosing the density at which the phase transition occurs. From a philosophical point of view one might also argue that since both models have a limited region of validity, at intermediate densities one can only assume some interpolation between the two models.
### Neutron stars and observations
Here we briefly summarize how one can calculate different equilibrium properties of NSs given a specific EoS. For details of these calculations we refer the reader to Ref. [71], as well as references therein.
Once we have an EoS, \(p(\varepsilon)\), we can obtain mass-radius relations of NSs by solving the Tolman-Oppenheimer-Volkoff (TOV) equations [76; 77]:
\[\frac{\mathrm{d}m(r)}{\mathrm{d}r} =4\pi r^{2}\varepsilon(r), \tag{2}\] \[\frac{\mathrm{d}p(r)}{\mathrm{d}r} =-[\varepsilon(r)+p(r)]\frac{m(r)+4\pi r^{3}p(r)}{r^{2}-2m(r)r}, \tag{3}\]
where \(m(r)\) is the gravitational mass enclosed within a sphere with radius \(r\), and \(p(r)\) is the pressure related to the energy density \(\varepsilon(r)\) by the EoS. These equations can usually only be integrated numerically. The total mass (\(M\)) and radius (\(R\)) of the NS for a certain central energy density \(\varepsilon_{c}\) is obtained through the boundary conditions, \(\varepsilon(r=0)=\varepsilon_{c}\), \(p(R)=0\) and \(m(R)=M\).
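A minimal sketch of such a numerical integration is given below, assuming geometrized units (\(G=c=1\)) and a hypothetical callable `eps_of_p` that inverts the EoS; the stopping criterion and tolerances are illustrative choices, not the ones used in our analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tov_mass_radius(eps_of_p, p_c, r_max=30.0):
    """Integrate Eqs. (2)-(3) outward from the centre for a central pressure p_c
    and return the total mass M and radius R (geometrized units, G = c = 1)."""
    def rhs(r, y):
        p, m = y
        eps = eps_of_p(p)
        dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        dmdr = 4.0 * np.pi * r**2 * eps
        return [dpdr, dmdr]

    def surface(r, y):                 # the surface is reached when p drops to ~0
        return y[0] - 1e-12 * p_c
    surface.terminal = True

    r0 = 1e-6                          # start slightly off-centre to avoid r = 0
    m0 = 4.0 / 3.0 * np.pi * r0**3 * eps_of_p(p_c)
    sol = solve_ivp(rhs, (r0, r_max), [p_c, m0], events=surface,
                    rtol=1e-8, atol=1e-12)
    R = sol.t_events[0][0]
    M = sol.y_events[0][0][1]
    return M, R
```

Scanning the central pressure over a range of values then traces out the mass-radius relation, with the maximum of \(M\) along the sequence marking the last stable configuration.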
Another property of NSs that is becoming more and more important due to the recent and future observations of inspirals of NSs with GW detectors, is the \(\lambda\) tidal deformability parameter (e.g. [78; 16; 79]). This parameter is related to the \(k_{2}\) quadrupole tidal Love number through
\[k_{2}=\frac{3}{2}\lambda R^{-5}, \tag{4}\]
where \(k_{2}\) can usually be obtained by numerical integration (see e.g. Refs. [80; 81; 82]). The dimensionless parameter \(\tilde{\Lambda}\) measurable through GW observations of binary NSs can then be calculated by
\[\tilde{\Lambda}=\frac{16}{13}\Lambda_{1}\frac{M_{1}^{4}}{M_{\rm tot}^{4}} \left(12-11\frac{M_{1}}{M_{\rm tot}}\right)+1\longleftrightarrow 2, \tag{5}\]
where \(M_{\rm tot}=M_{1}+M_{2}\), and where \(\tilde{\Lambda}\) is directly determined by the phase shift in the GW signal of circular NS binaries due to tidal effects [83]. Here \(\Lambda_{i}\) is the dimensionless tidal parameter of component \(i\), which can be obtained through
\[\Lambda_{i}=\frac{\lambda_{i}}{M_{i}^{5}}. \tag{6}\]
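For concreteness, a short sketch of Eqs. (4)-(6) combined is shown below (geometrized units, \(G=c=1\)); the function names are ours and only illustrate the bookkeeping.

```python
def dimensionless_lambda(k2, mass, radius):
    """Lambda = lambda / M^5 with lambda = (2/3) k2 R^5, from Eqs. (4) and (6)."""
    return 2.0 / 3.0 * k2 * (radius / mass)**5

def lambda_tilde(m1, m2, lam1, lam2):
    """Combined dimensionless tidal deformability of Eq. (5)."""
    m_tot = m1 + m2
    term = lambda m, lam: lam * m**4 / m_tot**4 * (12.0 - 11.0 * m / m_tot)
    return 16.0 / 13.0 * (term(m1, lam1) + term(m2, lam2))
```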
There are already several stringent observational constraints on how the EoS should look like, with more expected to come in the near future. These constraints stem from various sources, such as electromagnetic, GW, or combined, multi-messenger observations.
Masses of NSs, in case they form a binary with another object, might be measured with remarkable precision, using, for example, the Shapiro time delay effect. In the past decade, multiple highly massive NSs have been observed, providing robust constraints on the stiffness of the EoS [84; 85; 86]. From these, until recently, the most massive was PSR J0740+6620, with a mass of \(2.08\pm 0.07~{}M_{\odot}\), and a \(95.4\%\) lower bound of \(1.95~{}M_{\odot}\)[87]. Since then, however, several other observations have also raised notable interest [88; 7]. Observations of the black-widow pulsar PSR J0952-0607 have measured its mass to be \(2.35\pm 0.17~{}M_{\odot}\)[8].
A recent observation discovered a very light central compact object within the supernova remnant HESS J1731-347. It is interpreted as the lightest NS observed so far or a quark star with a mass of \(0.77^{+0.20}_{-0.17}~{}M_{\odot}\) and radius \(10.4^{+0.86}_{-0.78}\) km.
Unlike masses, the measurement of radii of NSs is extremely challenging, and so far the most accurate X-ray measurements were able to achieve a precision of \(\sim 10\%\). Recent measurements of the NICER collaboration use the ingenious idea of examining the rotation-resolved X-ray spectrum of NSs with hot spots. This, for the first time, enables the simultaneous measurement of the mass and radius of a single NS. Two NSs have been measured with this method so far. One is PSR J0030+0451 with a mass and equatorial radius of \(1.44^{+0.15}_{-0.14}~{}M_{\odot}\) and \(13.02^{+1.24}_{-1.06}\) km [89], or \(1.34^{+0.15}_{-0.16}~{}M_{\odot}\) and \(12.71^{+1.14}_{-1.19}\) km [90], according to different collaborations using slightly different methods. The other was the massive pulsar PSR J0740+6620, with mass \(2.08\pm 0.07~{}M_{\odot}\) and reported radii of \(13.7^{+2.6}_{-1.5}\) km [91] or \(12.39^{+1.30}_{-0.98}\) km [92] at the \(68\%\) credible interval.
In addition to these measurements, we have also witnessed the first multi-messenger observation of a binary NS merger, with its GW signal being labeled GW170817. The first analysis of GW170817 performed by the LIGO-Virgo Collaboration (LVC) inferred a value of \(\Lambda<800\) for \(1.4~{}M_{\odot}\) NSs in the low-spin limit [16]. A thorough investigation of this constraint performed by Ref. [19] using a generic family of EoSs found an upper radius limit of \(13.6\) km for \(1.4~{}M_{\odot}\) NSs, while Ref. [20] arrived at a radius limit of \(13.7\) km with higher statistics. A subsequent study was also performed by the LVC, in which a combined analysis of tidal deformabilities and NS radii was performed, utilizing various assumptions for the EoSs. Here the values of \(\Lambda(1.4M_{\odot})=190^{+390}_{-120}\) and \(R(1.4M_{\odot})=10.8^{+2.0}_{-1.7}\) km were found [17]. An additional assumption of this study was to use a single EoS to describe both objects, whereas in Ref. [16] the two EoSs were varied independently. A similar study, also using a single EoS ansatz, was performed by Ref. [21], where the authors arrived at a slightly higher upper limit (\(\Lambda<642\), \(\Lambda<698\) or \(\Lambda<681\) depending on the prior assumption on the component masses). A companion study of Ref. [17] was also published by the LVC at around the same time, where an EoS agnostic approach was applied [18]. In their study they investigated the effect of using various waveform templates, and under minimal assumptions they found for the upper limit of the tidal deformability \(\Lambda(1.4M_{\odot})<720\)[18].
In this paper we chose to utilize the results of this analysis with the upper limit of \(\Lambda(1.4M_{\odot})<720\). The minimal prior assumptions of this study make it suitable to use as a conservative upper limit for the tidal deformability. Other recent studies also utilize this constraint (e.g. [93]). Ref. [94] examines previous studies [17; 18; 95; 96], investigates the impact of prior assumptions, and argues that upper and especially lower limits on \(\Lambda\) can be misleading without a more detailed discussion. Another reanalysis has also been done by Dietrich et al., which found similar upper limits for \(\Lambda\) (see Table S2 of Ref. [97]).
The electromagnetic properties of the source of GW170817 were also used to put constraints on NSs. A lower radius constraint was inferred by Ref. [98] from the absence of prompt collapse during this event, while an upper mass limit of \(2.16^{+0.17}_{-0.15}~{}M_{\odot}\) was proposed by Ref. [99] using a quasi-universal relation between the maximum mass of static and the maximum mass of uniformly rotating NSs. This conclusion rests upon the assumption that the merging NSs first formed a differentially rotating hypermassive NS and not a uniformly rotating supermassive one. This hypothesis is supported by simulations of the dynamical ejecta and kilonova modeling (e.g. [100]), albeit other scenarios are not completely ruled out either.
The GW signal of another binary NS merger, GW190425, was also observed by the LVC; however, no clear tidal signature or electromagnetic signal was measured. Due to this, the binary NS classification only rests on the estimated masses of the binary components. Yet another notable GW event was GW190814, where one of the binary components resided in the so-called 'mass gap', with a mass of \(2.5-2.67~{}M_{\odot}\)[101], which could either mean it was the lightest black hole (BH), or the heaviest NS observed. Although the NS scenario seems unlikely, it should not be ruled out until further evidence is found against it.
Several studies exist that combine all these astrophysical measurements with nuclear physics and heavy-ion data to give stringent constraints on the nuclear EoS and the \(M-R\) relation of NSs (e.g. [96; 102; 103; 104; 105; 106]). Bayesian investigations are also available in this field (e.g. [32]). In this paper we also apply a Bayesian approach. However, we concentrate on hybrid stars, where the properties of quark matter are calculated from an effective model of QCD, and among others we focus on the restriction of quark model parameters and the parameters of the concatenation, and on the conditions for the existence of a
pure quark core.
### Bayesian inference
Suppose our EoS, and hence the properties of NSs can be described by a set of parameters, \(\boldsymbol{\vartheta}\). The probability of a specific data being measured, given a specific EoS is \(p(\text{data}|\boldsymbol{\vartheta})\). Then we can use Bayes' theorem to determine the probability of a specific parameter set, given data from a measurement:
\[p(\boldsymbol{\vartheta}|\text{data})=\frac{p(\text{data}|\boldsymbol{\vartheta })\,p(\boldsymbol{\vartheta})}{p(\text{data})}\:, \tag{7}\]
where \(p(\boldsymbol{\vartheta})\) is our prior assumption about the parameter sets, and \(p(\text{data})\) is just a normalization constant.
Our parameter space consists of the four parameters: \(m_{\sigma}\), \(g_{V}\), \(\bar{n}\) and \(\Gamma\). We vary \(\bar{n}\) between \(2n_{0}\) and \(5n_{0}\), and \(\Gamma\) between \(n_{0}\) and \(4n_{0}\). We also ensure that the low density EoS is described by the hadronic EoS by discarding parameter sets with \(\bar{n}-\Gamma<n_{0}\). For our prior we assume a uniform distribution in the parameter space, implying we do not have any prior knowledge about the probability of each parameter set. In addition, when calculating posterior probabilities for different astrophysical observations, the prior for the NS mass distribution is assumed to be uniform. We note here that a choice for the NS mass prior that does not match the observed distribution can lead to large biases in the Bayesian inference after \(\mathcal{O}(25)\) observational events (see e.g. Refs. [107; 108]). For the time being, we can safely assume a uniform prior without having to worry about these biases. Eventually, however, a self-consistent hierarchical framework that simultaneously models EoSs and NS populations will be necessary [109]. For further discussion about the uniform population prior we refer the reader to Refs. [109; 110; 111].
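As a rough sketch of how such a uniform prior can be set up in practice, the grid below enumerates parameter sets and applies the \(\bar{n}-\Gamma\geq n_{0}\) exclusion; the grid spacings are illustrative assumptions and not necessarily the ones used in our scans.

```python
import itertools
import numpy as np

m_sigma_grid = np.linspace(290.0, 700.0, 5)     # MeV
g_V_grid     = np.linspace(0.0, 10.0, 21)
nbar_grid    = np.arange(2.0, 5.01, 0.25)       # in units of n0
Gamma_grid   = np.arange(1.0, 4.01, 0.25)       # in units of n0

# Uniform prior: every surviving parameter set carries the same prior weight.
prior_sets = [(ms, gv, nb, G)
              for ms, gv, nb, G in itertools.product(m_sigma_grid, g_V_grid,
                                                     nbar_grid, Gamma_grid)
              if nb - G >= 1.0]                  # hadronic EoS valid up to at least n0
```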
The conditional probability \(p(\text{data}|\boldsymbol{\vartheta})\) can be obtained as a product of several independent astrophysical observations:
\[p(\text{data}|\boldsymbol{\vartheta})=p(M_{\text{max}}|\boldsymbol{\vartheta })\,p(\text{NICER}|\boldsymbol{\vartheta})\,p(\tilde{\Lambda}|\boldsymbol{ \vartheta})\:, \tag{8}\]
where we detail the specific observational constraints below. Also note that since only the proportions of the probabilities for different parameter sets are meaningful, we can neglect constant normalization factors in front of our conditional probabilities.
#### ii.3.1 Compatibility with perturbative QCD
Even without any astrophysical constraints, our EoS should comply with some basic physical requirements. First and foremost, our EoS should be causal, meaning
\[c_{s}^{2}=\frac{\text{d}p}{\text{d}\varepsilon}\leq 1\:. \tag{9}\]
In addition, however, we can also use input from perturbative QCD calculations, similarly to Refs. [112; 113]. We know that at some density \(n_{\text{QCD}}\) strongly interacting matter should have a baryon chemical potential \(\mu_{\text{QCD}}\) and a pressure \(p_{\text{QCD}}\). On the other hand, we require our hybrid EoS to be valid up to the density present in the center of the most massive NS described by that specific EoS. This point is described by \(n_{\text{NS}}\), \(\mu_{\text{NS}}\) and \(p_{\text{NS}}\). However, these EoSs should be in accord with each other, and therefore there should exist a thermodynamically allowed connection between the two. We therefore assume that in the core of the NS \(\mu_{\text{NS}}\leq\mu_{\text{QCD}}\), and require stability and causality:
\[n_{\text{NS}}\leq n_{\text{QCD}}\:,\quad p_{\text{NS}}\leq p_{\text{QCD}}\:, \tag{10}\]
\[\frac{n_{\text{NS}}}{\mu_{\text{NS}}}\leq\frac{n_{\text{QCD}}}{\mu_{\text{QCD} }}\:, \tag{11}\]
where we used the fact that a causal EoS crossing the point \((\mu,n)\) has a slope \(\text{d}n/\text{d}\mu\geq n/\mu\).
We obtain an additional integral constraint from the definition of the difference in pressure:
\[\Delta p=p_{\text{QCD}}-p_{\text{NS}}=\int\limits_{\mu_{\text{NS}}}^{\mu_{ \text{QCD}}}n(\mu)\text{d}\mu\:. \tag{12}\]
The integral here depends on the specific way we connect the two points, however it can be easily shown that it falls between the two limiting cases (see e.g. Ref. [112]):
\[\Delta p_{\text{min}}\leq\Delta p\leq\Delta p_{\text{max}}\:, \tag{13}\]
with
\[\Delta p_{\text{min}}=\frac{\mu_{\text{QCD}}^{2}-\mu_{\text{NS}}^{2}}{2} \frac{n_{\text{NS}}}{\mu_{\text{NS}}} \tag{14}\]
\[\Delta p_{\text{max}}=\frac{\mu_{\text{QCD}}^{2}-\mu_{\text{NS}}^{2}}{2} \frac{n_{\text{QCD}}}{\mu_{\text{QCD}}}\:. \tag{15}\]
For the perturbative QCD EoS we use the values calculated by Ref. [114] and utilized in Ref. [112] with a renormalization scale parameter \(X=2\), hence, \(\mu_{\text{QCD}}=2.6\) GeV, \(n_{\text{QCD}}=6.47\)\(1/\text{fm}^{3}\) and \(p_{\text{QCD}}=3823\) MeV/fm\({}^{3}\).
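A compact sketch of this consistency check, combining Eqs. (10)-(15) with the quoted reference point, might look as follows (densities in fm\({}^{-3}\), chemical potentials in MeV, pressures in MeV/fm\({}^{3}\)); the function name and interface are ours.

```python
def pqcd_consistent(n_NS, mu_NS, p_NS,
                    n_QCD=6.47, mu_QCD=2600.0, p_QCD=3823.0):
    """True if the central point of the heaviest NS admits a stable and causal
    thermodynamic connection to the perturbative QCD point, Eqs. (10)-(15)."""
    if not (n_NS <= n_QCD and p_NS <= p_QCD and mu_NS <= mu_QCD):
        return False
    if n_NS / mu_NS > n_QCD / mu_QCD:            # causality of the connecting EoS
        return False
    dp = p_QCD - p_NS
    dp_min = 0.5 * (mu_QCD**2 - mu_NS**2) * n_NS / mu_NS
    dp_max = 0.5 * (mu_QCD**2 - mu_NS**2) * n_QCD / mu_QCD
    return dp_min <= dp <= dp_max
```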
#### ii.3.2 Mass constraints
We use PSR J0348+0432 with a mass \(2.01\pm 0.04\)\(M_{\odot}\) and PSR J1614-2230 with a mass \(1.908\pm 0.016\)\(M_{\odot}\) to put a lower limit on the maximum mass of NS mass-radius relations. In order to avoid double counting, we do not include here the mass measurement of PSR J0740+6620, since it is included as a NICER measurement. We also similarly include the upper mass bound from Ref. [99]. We then approximate the likelihood functions by error functions:
\[p(M_{\text{max}}|\boldsymbol{\vartheta})\propto\prod_{i=1,2}\frac{1}{2}\left[ 1+\text{erf}\left(\frac{M_{\text{max}}(\boldsymbol{\vartheta})-M_{i}}{\sqrt{2 }\sigma_{i}}\right)\right]\]
\[\times\frac{1}{2}\left[1-\text{erf}\left(\frac{M_{\text{max}}(\mathbf{\vartheta})-M_{ U}}{\sqrt{2}\sigma_{U}}\right)\right]\,, \tag{16}\]
where erf is the error function. For the upper mass limit from the hypermassive NS scenario we use \(M_{U}=2.16~{}M_{\odot}\) and set the standard deviation conservatively to \(\sigma_{U}=0.17~{}M_{\odot}\). For the lower mass limits we use \(M_{1}=2.01~{}M_{\odot}\), \(\sigma_{1}=0.04~{}M_{\odot}\) and \(M_{2}=1.908~{}M_{\odot}\), \(\sigma_{2}=0.016~{}M_{\odot}\).
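A direct transcription of Eq. (16) is sketched below (all masses in solar masses); the upper-bound factor is only switched on in the scenarios that include the hypermassive NS hypothesis.

```python
from math import erf, sqrt

def mass_likelihood(M_max, lower=((2.01, 0.04), (1.908, 0.016)),
                    upper=(2.16, 0.17), use_upper=False):
    """Unnormalized likelihood of Eq. (16) for the maximum mass M_max."""
    like = 1.0
    for M_i, sigma_i in lower:
        like *= 0.5 * (1.0 + erf((M_max - M_i) / (sqrt(2.0) * sigma_i)))
    if use_upper:
        M_U, sigma_U = upper
        like *= 0.5 * (1.0 - erf((M_max - M_U) / (sqrt(2.0) * sigma_U)))
    return like
```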
#### ii.1.3 NICER measurements
For the two NICER measurements we use the kernel density estimated probability density \(p_{\text{N}}(M,R)\), utilizing the data provided by Refs. [91, 89]. The likelihood for a single measurement is then given by
\[p(\text{NICER}|\mathbf{\vartheta}) \propto \int\mathrm{d}M\mathrm{d}R\,p_{\text{N}}(M,R)\delta(R-R(M,\mathbf{ \vartheta})) \tag{17}\] \[= \int\mathrm{d}M\,p_{\text{N}}(M,R=R(M,\mathbf{\vartheta}))\;.\]
Note that the uniform mass population prior is already included in this formula.
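As a sketch of Eq. (17), one can estimate \(p_{\text{N}}(M,R)\) from the publicly released posterior samples with a kernel density estimate and integrate it along the mass-radius curve predicted by a given EoS; the names `mr_samples` and `radius_of_mass` below are placeholders for those inputs.

```python
import numpy as np
from scipy.stats import gaussian_kde

def nicer_likelihood(mr_samples, radius_of_mass, m_grid):
    """Unnormalized likelihood of Eq. (17).  mr_samples is a (2, N) array of
    (M, R) posterior samples, radius_of_mass(M) is the EoS prediction, and
    m_grid spans the stable branch of the mass-radius curve."""
    p_N = gaussian_kde(mr_samples)                       # estimate of p_N(M, R)
    integrand = [p_N([m, radius_of_mass(m)])[0] for m in m_grid]
    return np.trapz(integrand, m_grid)
```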
#### ii.1.4 Tidal deformability measurement
The chirp mass of the source of GW170817 was measured very precisely by the LVC to be
\[\mathcal{M}=\frac{(M_{1}M_{2})^{3/5}}{(M_{1}+M_{2})^{1/5}}=(1.186\pm 0.001)\,M_{ \odot}\;, \tag{18}\]
where \(M_{1}\) is conventionally considered to be the mass of NS with the larger mass. We then use the joint posterior probability density \(p_{\text{GW}}(\tilde{\Lambda},q)\), provided by Ref. [101], where \(q=M_{1}/M_{2}\) is the mass ratio. The accurate measurement essentially determines the secondary mass \(M_{2}\) for a specific primary mass \(M_{1}\). Then, utilizing the EoS, \(\Lambda_{1}\) and \(\Lambda_{2}\) can be determined, and therefore \(\tilde{\Lambda}\) as well. We then calculate the conditional probability as
\[p(\tilde{\Lambda}|\mathbf{\vartheta})\propto\int\limits_{M_{\text{eq}}}^{M_{\text {max}}}\mathrm{d}M_{1}\,p_{\text{GW}}(\tilde{\Lambda}(M_{1},\mathcal{M},\mathbf{ \vartheta}),q(M_{1},\mathcal{M}))\;, \tag{19}\]
where \(M_{\text{eq}}=1.362~{}M_{\odot}\) corresponds to a mass ratio of \(q=1\), and \(M_{\text{max}}\) is the mass of the maximally stable NS.
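In practice the main numerical step is inverting Eq. (18) for the secondary mass at a given \(M_{1}\); a minimal sketch using a root finder is shown below, with the bracketing interval chosen only for illustration and \(M_{1}\) assumed to lie strictly above \(M_{\text{eq}}\).

```python
from scipy.optimize import brentq

M_CHIRP = 1.186  # chirp mass of GW170817 in solar masses

def secondary_mass(m1, m_chirp=M_CHIRP):
    """Solve (m1 m2)^(3/5) / (m1 + m2)^(1/5) = m_chirp for m2 <= m1, Eq. (18)."""
    f = lambda m2: (m1 * m2)**0.6 / (m1 + m2)**0.2 - m_chirp
    return brentq(f, 0.5, m1)      # m2 is bracketed below m1 by convention
```

Given \(M_{2}\), the EoS provides \(\Lambda_{1}\) and \(\Lambda_{2}\), from which \(\tilde{\Lambda}\) and the mass ratio \(q\) entering \(p_{\text{GW}}\) follow directly.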
#### ii.1.5 BH hypothesis
Based on some properties of the electromagnetic counterpart of GW170817 some previous works have suggested that the remnant collapsed to a BH (e.g. [100, 115, 99]). We refer to this as the BH hypothesis. In order to incorporate this assumption in our analysis we utilize baryon number conservation during the merger event:
\[N_{1}+N_{2}=N_{\text{remn}}+N_{\text{ej}} \tag{20}\]
where \(N_{1}\) and \(N_{2}\) are the baryon numbers of the two component NSs, while \(N_{\text{remn}}\) and \(N_{\text{ej}}\) are the baryon numbers corresponding to the remnant and the ejecta, respectively. Similarly to Refs. [113, 93] we use the assumption \(N_{\text{ej}}\approx 0\). Hence, in order for the remnant to collapse to a BH we must have \(N_{1}+N_{2}>N_{\text{max}}\), where \(N_{\text{max}}\) is the baryon number of the maximally massive stable NS. To add this assumption to our analysis we discard every pair of NSs during the integral in Eq. (19) for which \(N_{1}+N_{2}\leq N_{\text{max}}\). Since the values that \(N_{1}+N_{2}\) can take are primarily determined by the measurement, with higher values becoming increasingly improbable, this places an upper bound on \(N_{1}+N_{2}\), which in turn gives an upper bound on the maximum mass of NSs, \(M_{\text{max}}\lesssim 2.53~{}M_{\odot}\).
Somewhat more speculatively one can assume that the remnant for a brief time remained a hypermassive NS, after which it quickly collapsed to a BH (e.g. [99]). We implement this assumption using the upper mass bound \(M_{U}\) mentioned earlier in this section. We refer to this scenario as the hypermassive NS hypothesis.
In addition, we can include the assumption that the inspiral did not end in a prompt collapse to a BH. In this case we discard pairs of NSs, for which the total mass is above the threshold mass for prompt collapse to a BH. We use this assumption in all of our results that include the BH constraint in any form. Several approaches exist to calculate this threshold mass [116, 117]. Here we utilize the nonlinear relation given by Ref. [117], calibrated by numerical relativity simulations:
\[\frac{M_{\text{th}}}{M_{\text{max}}}=a-\frac{b}{1-c\cdot C_{\text{max}}}\;, \tag{21}\]
where \(C_{\text{max}}=M_{\text{max}}/R_{\text{max}}\) is the compactness of the maximum mass configuration, and the parameters are \(b=1.01\), \(c=1.34\) and \(a=2b/(2-c)\). Hence, we only perform the integration in Eq. (19) for configurations where \(M_{\text{tot}}=M_{1}+M_{2}<M_{\text{th}}\).
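The threshold-mass cut of Eq. (21) only needs the maximum-mass configuration; a small sketch, with the solar mass expressed in km so that the compactness is dimensionless, is given below.

```python
M_SUN_KM = 1.4766   # G M_sun / c^2 in km

def threshold_mass(M_max, R_max_km, b=1.01, c=1.34):
    """Threshold mass for prompt collapse from Eq. (21), in solar masses;
    M_max in solar masses, R_max_km is the radius of that configuration in km."""
    a = 2.0 * b / (2.0 - c)
    C_max = M_max * M_SUN_KM / R_max_km          # dimensionless compactness
    return M_max * (a - b / (1.0 - c * C_max))
```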
#### ii.1.6 Mass-gap compact object
As we discussed in Sec. II.2, an object in the mass gap was observed in the event GW190814 with a mass \(M=2.59^{+0.08}_{-0.09}~{}M_{\odot}\) in the 90% credible interval [101]. In our analysis we also investigate what happens when we require this object to be described as a NS. We use a similar error function as for the other mass constraints with a mean \(M_{\text{gap}}=2.59~{}M_{\odot}\) and a standard deviation \(\sigma_{\text{gap}}=0.055~{}M_{\odot}\), assuming normal distribution.
## III Results
In this section we discuss our results from our analyses. In Sec. III.1 we investigate how well the maximum mass of hybrid stars can be predicted by the parameters of the quark component. In Sec. III.2 we show our results from our Bayesian analysis with various constraints included.
### Dependence of \(M_{\rm max}\) on quark parameters
In Ref. [71] we already showed in selected cases how the maximum mass of hybrid star sequences produced by using the eLSM for the quark component correlates with the parameters chosen for our quark model. Here we also investigate this correlation over the whole span of the parameter space.
The results are shown in Fig. 1. The three different panels show results for three different values of the sigma meson mass, with \(m_{\sigma}=290\) MeV being the one preferred by the parameterization. For a specific parameter set \(\{m_{\sigma},g_{V}\}\), we have gathered the maximum masses from all the different concatenations and plotted the median and the 90% confidence intervals. The width of these intervals can be even lower than \(\pm 0.05\ M_{\odot}\) for \(M_{\rm max}\sim 2\ M_{\odot}\), while they moderately increase for higher median masses to \(\pm 0.15-0.2\ M_{\odot}\). We can also observe that changing the hadronic EoS (red and yellow points) does not make any significant change in the maximum masses.
We try to quantify this correlation by making a linear fit to the points above \(g_{V}=1\), since below that the dependence is clearly non-linear. The fitting function is then
\[\frac{M_{\rm max}}{M_{\odot}}=\alpha\left(1+\gamma\cdot\overline{m}_{\sigma} \right)+\beta\cdot g_{V}\left(1+\delta\cdot\overline{m}_{\sigma}\right)\,, \tag{22}\]
where
\[\overline{m}_{\sigma}=\frac{m_{\sigma}}{500\,{\rm MeV}}\,, \tag{23}\]
and where the cross-term with the coefficient \(\delta\) is necessary since the slope of the linear fit changes for different sigma masses.
The parameters obtained from the fit are
\[\alpha =0.962\pm 0.010\] \[\beta =0.284\pm 0.003\] \[\gamma =0.780\pm 0.013\] \[\delta =-0.426\pm 0.014\,, \tag{24}\]
with a goodness-of-fit value \(R^{2}=0.952\). The fitted function is shown in Fig. 1 by the purple lines. Due to \(\delta\) being negative, we get the largest slope for \(m_{\sigma}=290\) MeV.
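For convenience, the fitted relation of Eqs. (22)-(24) can be evaluated directly; the short helper below is only a restatement of the fit and is valid in the fitted range \(g_{V}>1\).

```python
def m_max_fit(m_sigma, g_V,
              alpha=0.962, beta=0.284, gamma=0.780, delta=-0.426):
    """Fitted maximum mass in solar masses, Eq. (22) with the parameters of
    Eq. (24); m_sigma in MeV, valid for g_V > 1."""
    m_bar = m_sigma / 500.0
    return alpha * (1.0 + gamma * m_bar) + beta * g_V * (1.0 + delta * m_bar)

# e.g. m_max_fit(290.0, 6.9) is roughly 2.9, in line with Fig. 1
```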
### Bayesian analysis results
During our Bayesian analysis we incorporate constraints from astrophysical observations in a specific order. After establishing our prior we include the minimal constraints, namely, the requirement for consistency with pQCD calculations, and lower mass limits from the \(2\ M_{\odot}\) NSs. Then we apply the two NICER measurements, since these are the least constraining on our prior. After that, as another well-established constraint, we apply the tidal deformability measurement of GW170817, which, generally speaking, constrains the radii of \(1.4\ M_{\odot}\) NSs from above. These measurements constitute our canonical set of constraints.
On top of these, we also investigate the effect of other measurements as well. First, based on the hypermassive NS hypothesis, we put an upper limit on the maximum mass of NSs. As an alternative scenario, we explore the consequence of assuming the mass-gap object in GW190814 was a very massive NS. Finally, we also briefly review the effect of adding the recent measurement of the light compact object in HESS J1731-347 to our canonical set of constraints.
During each step we show how the posterior probabilities evolve in the parameter space when we consider a new measurement, investigate the radius distribution of 1.4 \(M_{\odot}\) and 2 \(M_{\odot}\) NSs, and also show the region where, given the specific constraints, hybrid stars with pure quark matter in their cores can exist. For this we define matter to be in a pure quark state when the baryon density satisfies \(n_{B}>n_{BU}=\bar{n}+\Gamma\), so that the EoS is described by our quark model.
Figure 1: The maximum mass of stable NSs as a function of the \(g_{V}\) vector meson coupling, for different sigma masses. For specific constituent quark model parameters the circles denote the median, while the errorbars denote the 90% confidence interval of maximum masses obtained by applying the complete ensemble of different concatenation parameters. The two different colors correspond to the SFHo (red) and the DD2 (yellow) hadronic EoS. The fitted relation is visualized by the purple lines.
The black outer contours on each \(M-R\) diagram display the boundaries of the complete ensemble of \(M-R\) curves (solid for the SFHo and dashed for the DD2 EoS) after applying cuts based on the corresponding astrophysical constraints. For the 2 \(M_{\odot}\) constraint this is \(M_{\rm max}>1.95\)\(M_{\odot}\), which corresponds to the 2-sigma lower bound for the mass of PSR J0740+6620 [6]. For the NICER measurements and the HESS object the requirement is that the \(M-R\) curves should cross the 2-sigma contour lines of the given measurement. The cut for the tidal deformability measurement of GW170817 is established in the following way. For a given EoS we calculate all the possible \(\tilde{\Lambda}\) values between \(M_{\rm eq}<M_{1}<1.6\)\(M_{\odot}\), and keep the EoS if \(\tilde{\Lambda}<720\) for any configuration. Pairs of NSs with \(M_{\rm tot}>M_{\rm th}\) are discarded, while pairs with \(N_{1}+N_{2}\leq N_{\rm max}\) are also discarded when the BH hypothesis is included. The upper mass bound from the hypermassive NS hypothesis is taken to be 2.33 \(M_{\odot}\), while the lower mass bound from the mass-gap object is taken as 2.5 \(M_{\odot}\).
The results for our canonical set of measurements are shown in Fig. 2. Our prior can be seen on the left with the minimal constraints included. The prior is taken to be uniform in the parameter space, as can be seen in the bottom panel, apart from the leftmost region, where parameter sets with low \(g_{V}\) are not preferred by the 2 \(M_{\odot}\) constraint. Other regions of zero probability are caused by exclusions. Values of \(\bar{n}\) for which \(\bar{n}-\Gamma<n_{0}\) are excluded since we require our EoS to be described by the hadronic EoS at least up until \(n_{0}\). The upper right regions with high \(\bar{n}\) and \(g_{V}\) are mostly excluded by the instability or acausality of the intermediate interpolated region, and some of them are excluded due to the pQCD constraint. The sharp edges are the result of our finite grid on the parameter space.
Examining the top panel one can observe that the highest mass NSs with masses of \(\sim 3.5\)\(M_{\odot}\) are excluded by the pQCD constraint (grey contour). Our construction for the EoS results in a stiffening in the intermediate-density region, as we showed in Ref. [71]. A result of this is that even with the relatively soft SFHo model as the hadronic EoS we get radii \(R(1.4\)\(M_{\odot})\gtrsim 12\) km and hence NSs with \(M_{\rm max}\gtrsim 2\)\(M_{\odot}\) and \(R(1.4\)\(M_{\odot})\lesssim 12\) km are absent from our prior. With the DD2 EoS this lower limit is even higher. Another interesting feature is the region of NSs with a pure quark core, since at first it might not seem obvious why there is an upper mass bound for these objects. However, one can understand this by examining the sound speed squared in the intermediate-density region (at \(\sim 3-5n_{0}\), see e.g. Fig. 10. in Ref. [71]). First there is a stiff peak that is then followed by a valley, which, in some cases, can get close to \(c_{s}^{2}\approx 0\). After this valley, the sound speed increases and only then the quark EoS is reached. Therefore, EoSs that exhibit a large peak in the beginning and hence create high-mass NSs have a deep valley, which makes the NS sequences prone to becoming unstable before they can develop a pure quark core.
In Fig. 2 the probabilities displayed correspond to the SFHo hadronic EoS. Even though our sampling in the parameter space is uniform, the probability distribution in the \(M-R\) plane might still exhibit irregularities, which is visible in the radius distributions as well. During the creation of the probability density plot in the \(M-R\) plane we have to introduce a metric to be able to sample the mass-radius curves evenly. Since there is no unique way to introduce this metric, the distribution obtained will be somewhat arbitrary. Hence, the prior distribution in itself will not present definitive information. However, the change between the prior and posterior distributions is independent of the chosen metric and hence portrays faithful information about the posterior probabilities.
After taking into account the two NICER measurements (middle panel in Fig. 2) the probabilities are only slightly modified, since, as shown in the top panel, even the 1-sigma contours (solid yellow lines) of the two measurements completely overlap with the whole set of \(M-R\) diagrams. The EoS parameters for the maximum posterior probability case are \(m_{\sigma}=290\) MeV, \(g_{V}=6.9\), \(\bar{n}=4n_{0}\) and \(\Gamma=2.5n_{0}\). The \(M-R\) curve for this parameter set is displayed in the middle panel of Fig. 2.
The change is more drastic when the tidal deformability measurement is taken into account as well. This measurement significantly constrains radii from above (see the indication by the yellow arrow) and consequently reduces the maximum possible NS mass as well, to max(\(M_{\rm max}\)) \(<2.8\)\(M_{\odot}\) for the SFHo hadronic EoS and, even more significantly, to max(\(M_{\rm max}\)) \(<2.2\)\(M_{\odot}\) for the DD2 EoS. The region of hybrid stars with a pure quark core also shrinks and it even disappears for the DD2 EoS. In the parameter space, this measurement constrains the value of the vector coupling from above, since large values of \(g_{V}\) would correspond to stiff EoSs, which in turn would create NS sequences with large maximum masses and radii. One can observe that the probability density plot in the \(M-R\) diagram extends over the black contour to the right. Examining the distribution of \(R(1.4\)\(M_{\odot})\) one can also verify that while the black contour - corresponding to the 90% bound of \(\tilde{\Lambda}<720\) - crosses the 1.4 \(M_{\odot}\) line at \(\sim 13\) km, the 90% bound of the radius distribution is \(\sim 13.2\) km. This phenomenon was also reported in e.g. Ref. [32] and reinforces the necessity of taking the complete data from a given measurement into account instead of only the bounds from some credible intervals. The parameter set corresponding to the maximum probability EoS in this case is \(m_{\sigma}=290\) MeV, \(g_{V}=6.3\), \(\bar{n}=4.75n_{0}\) and \(\Gamma=3n_{0}\).
We also study the amount of quark matter contained in hybrid stars that have a quark core. We identify a
NS core being made out of quark matter in case the baryon density rises above \(n_{BU}=\bar{n}+\Gamma\). In addition to this condition, we also require the chiral phase transition to have occurred by that density. We define this by requiring the non-strange scalar condensate in our constituent quark model to drop below 10% of its vacuum value. Even though this seems like an overly strict definition, this only excludes a few percent of NSs that would have a quark core by the first requirement only. In Fig. 3 we show the amount of quark matter contained in hybrid stars that develop such a quark core. Here, no additional constraints were added on top of the minimal ones. In many cases the quark core is light with masses of \(M_{\rm quark}<0.05\ M_{\odot}\). However, some hybrid stars can develop a sizeable quark core, with radii \(R_{\rm quark}\gtrsim 5\) km (see the inset in Fig. 3). More massive cores correspond to NS sequences with lower maximum masses, with the most massive core having a mass \(M_{\rm quark}\approx 0.33\ M_{\odot}\). This corresponds to a NS with a mass of 1.96 \(M_{\odot}\).
In addition to the canonical measurements we can investigate the constraint imposed by the hypermassive NS hypothesis. Specifically, we can use an upper mass bound based on Ref. [99]. We did not include this measurement in our canonical set, since there is still some ambiguity around the modeling of the kilonova signal AT2017gfo, and therefore the ejected mass in the merger event. The results for this scenario are shown in Fig. 4. The black contour for the SFHo EoS in the top panel that encompasses all the possible \(M-R\) curves that meet all the requirements does not shrink at lower masses, which is expected in case our ensemble of EoSs is robust enough. On the other hand, the 90% credible interval for \(R(1.4M_{\odot})\) shrinks from a width of 1.09 km to 1.02 km and shifts to lower values from an upper bound of 13.24 km to 13.08 km. Adding this measurement does not constrain \(M-R\) curves with the DD2 EoS any further, neither does
it reduce the region of hybrid stars with a quark core. The main effect of this step on the parameter space, as expected from Sec. III.1, is an upper bound on the vector coupling, as can be seen in the lower panel of Fig. 4, which shows the probability densities for \(m_{\sigma}=290\) MeV. The maximum posterior probability corresponds to the parameter set \(m_{\sigma}=290\) MeV, \(g_{V}=3.1\), \(\bar{n}=3.5n_{0}\) and \(\Gamma=2n_{0}\), which means \(n_{BU}=5.5n_{0}\). Despite this moderate value, this NS does not develop a quark core.

Figure 2: Prior and posterior probabilities from our Bayesian analysis in the mass–radius plane (top), as well as in the parameter space (bottom). The probabilities displayed correspond to the SFHo hadronic EoS, and darker colors indicate higher probabilities. On the mass–radius diagrams the outer black contours represent the boundaries of all the possible M–R curves using the given constraints, while the inner contours contain NSs with pure quark matter inside their cores with the SFHo (solid) and the DD2 (dashed) hadronic EoSs. Below the mass-radius diagrams we show the radius distribution of 1.4 \(M_{\odot}\) (blue) and 2 \(M_{\odot}\) (red) NSs, with the 90% confidence intervals indicated by the vertical dashed lines. In the bottom, the prior and posterior probabilities in the parameter space are shown in a contour plot for only the SFHo hadronic EoS, with the two levels indicating the 68% (black) and the 95% (grey) credible intervals, while different contour styles represent different \(\Gamma\)s. The contours and probabilities correspond to \(m_{\sigma}=290\) MeV. The three panels side-by-side correspond to the prior with the pQCD and the 2 \(M_{\odot}\) minimal constraints (left), the posterior with the two NICER measurements (middle), and the posterior with the NICER and tidal deformability measurements from GW170817 (right). On the left, the grey contours represent the region excluded by the pQCD constraint (top), while on the posteriors the dark green curves display the maximum probability configurations (top), with the corresponding parameter set indicated by crosses (bottom).
Alternatively, going against the hypermassive NS hypothesis, we can keep the BH hypothesis and assume that the mass-gap object in GW190814 was an extremely massive NS. Even though by our current understanding of the nuclear EoS such a massive NS seems unlikely, it is still allowed by astrophysical measurements, as we show in Fig. 5, although together with the BH hypothesis they leave only a narrow region for the maximum mass of NS sequences (see the yellow band between 2.5 \(M_{\odot}\) and 2.53 \(M_{\odot}\)). One of the consequences of this narrow allowed region is the irregular shape of the posterior radius distributions, which indicate the limits of our EoS ensemble. Note that the difference here between the black contour in the \(M-R\) diagram and the edge of the posterior distribution is even more pronounced than in the right panel of Fig. 2. The 90% upper bound on \(R(1.4M_{\odot})\) here is \(\sim 13.3\) km, in contrast to the value of \(\sim 12.9\) km predicted by the black contour. Interestingly, EoSs created by using the relatively stiff DD2 EoS for the hadronic part cannot produce any \(M-R\) curves that can satisfy the conditions \(\tilde{\Lambda}<720\) and \(M_{\rm max}>2.5\)\(M_{\odot}\) at the same time. This is due to the fact that the first of these two conditions limits the radii of low-mass NSs from above, and since a stiff hadronic EoS generates larger radii in general, this condition will limit the maximum mass of NSs (as can be seen in the right panel of Fig. 2). Therefore, the existence of very massive NSs is only possible if the hadronic EoS is soft enough. Another interesting consequence of the mass-gap object interpreted as a NS is that none of the NSs would have a core consisting of pure quark matter in this case. This can be understood by looking at Fig. 2, where the maximum mass of such hybrid stars is \(\sim 2.3\)\(M_{\odot}\). The parameters that correspond to the maximum posterior probability are \(m_{\sigma}=290\) MeV, \(g_{V}=4.7\), \(\bar{n}=4n_{0}\) and \(\Gamma=2.5n_{0}\).
So far we have only investigated the effect of various measurements on the mass-radius diagram and different \(g_{V}-\bar{n}\) slices of the parameter space. It is also instructive to look at different \(\bar{n}-\Gamma\) slices and inspect what can be inferred about the parameters of the hadron-quark phase transition. This is shown in Fig. 6, where we calculated the posteriors on a finer grid compared to the previous figures. The two rows correspond to two different slices with a fixed \(m_{\sigma}\) and \(g_{V}\). These two slices were chosen to match the parameters with the maximum posterior probability case in the hypermassive NS hypothesis scenario (top) and the mass-gap NS scenario (bottom).
Figure 3: Masses of quark cores for hybrid stars that develop such a core. The inset shows the radial dependence of the baryon density inside one of the hybrid stars that have a sizeable quark core. The vertical dashed line represents the boundary between the quark core and the outer layers.
Figure 4: Same as in Fig. 2 but with the upper mass constraint from the hypermassive NS hypothesis also applied, in addition to the NICER and GW170817 measurements.
The first three panels in both rows from left to right show the prior, the posterior with the NICER measurements, and the posterior with NICER and tidal deformability measurements, respectively. The rightmost panel at the top has the upper mass constraint from the hypermassive NS hypothesis included as well, while the one at the bottom contains the lower mass bound from the mass-gap object and exclusions from the BH hypothesis instead. Hence, these parameter planes can be viewed as an evolution of posterior probabilities with more and more constraints in these two scenarios. The upper left excluded region corresponds to the requirement \(\bar{n}-\Gamma\geq n_{0}\), while the lower right part is due to acausality or instability. Looking at the top row, at first it might seem like the hypermassive NS hypothesis broadens the region of high-probability parameters. However, this illusion is due to the fact that these probabilities were normalized by dividing every probability by the maximum posterior probability of the entire parameter space, and hence probabilities in the second and third panels at the top are suppressed, since the maximum posterior probabilities in these cases correspond to \(g_{V}=6.9\) and \(g_{V}=6.3\), respectively. Note, however, that this is not the case for the bottom panels, where the maximum probability case is always sufficiently close. Interestingly, the maximum posterior probability cases are always situated at the edges of the allowed regions, adjacent to unstable or acausal EoSs. This means that the lowest possible value of \(\Gamma\) is preferred for a given \(\bar{n}\), which also means that astrophysical observations prefer a very stiff intermediate-density region. Such stiff intermediate regions are also predicted by the theory of the so-called quarkyonic matter (see e.g. Refs. [33; 34]). The two scenarios depicted in Fig. 6 end up with different preferred values for \(\bar{n}\) and \(\Gamma\), and the preferred value of these parameters also varies from step to step, hence, no robust statement can be made about the values of the phase transition parameters. However, very low values of \(\bar{n}\) and \(\Gamma\) are disfavoured after taking into account the tidal deformability measurement, hence the existence of quark matter at densities below \(\sim 4n_{0}\) is also disfavoured.
The left panel of Fig. 7 shows the 90% credible intervals of the sound speed squared as a function of energy density for various astrophysical constraints, using the SFHo model for the hadronic EoS. The prior in itself exhibits a peak in the sound speed, which is located at \(\varepsilon\approx 400-500\) MeV/fm\({}^{3}\) for the lower bound. This is due to the eLSM and the concatenation between the hadronic and quark EoSs. This peak translates to the lack of NSs with small radii in the \(M-R\) plane in Fig. 2. However, this lower bound is slightly increased when we include the NICER and tidal deformability measurements as well. This can be attributed to the NICER measurements, which disfavour small radii. The effect of the tidal deformability measurement of GW170817 is the reduction of the upper bound for energy densities \(\varepsilon\lesssim 500\) MeV/fm\({}^{3}\), which roughly translates to a reduction in the radii of 1.4 \(M_{\odot}\). With the upper mass bound from the hypermassive NS hypothesis included as well, the upper bound of the sound speed squared is reduced from \(\sim 0.8\) to \(\sim 0.6\). The effect of astrophysical constraints on the upper bound above \(\varepsilon\gtrsim 1200\) MeV/fm\({}^{3}\) is minor, while it is negligible for the lower bound. The maximum posterior curve for the case with the hypermassive NS hypothesis is also shown in Fig. 7, which almost touches the 90% band several times. This can be understood if we consider that although this curve possesses the maximum probability on a global scale, it can still happen to be outside of the 90% bound locally. In this case, it approaches zero around 900 MeV/fm\({}^{3}\). This, however, is in line with the implications of Fig. 6, where the maximum posterior probability cases possess the lowest allowed values of \(\Gamma\) for a given \(\bar{n}\), which means they are marginally stable. This also implies that the maximum posterior EoSs have a region similar to a first-order phase transition, where the sound speed drops to nearly zero. However, note that these regions are usually not reached in NSs, since they become unstable at an earlier point. We also stress that some EoSs with sound speeds
far from zero have similar, only slightly lower posterior probabilities.

Figure 5: Same as in Fig. 4 but instead of taking into account the upper mass bound based on the hypermassive NS hypothesis we only include the constraint from the BH hypothesis, while identifying the mass-gap object in GW190814 as a NS.
In the middle panel of Fig. 7 we compare the two alternative scenarios with the hypermassive NS hypothesis and the mass-gap NS included, respectively. In both cases an intermediate-density peak in the sound speed squared is preferred. The position of this peak is similar in the two cases, with \(\varepsilon_{\rm p}=567^{+71}_{-103}\) MeV/fm\({}^{3}\) for the hypermassive NS and \(\varepsilon_{\rm p}=587^{+53}_{-86}\) MeV/fm\({}^{3}\) for the mass-gap NS scenario. The values of the peaks are \(0.48^{+0.08}_{-0.06}\) and \(0.64^{+0.07}_{-0.07}\), respectively. These numbers correspond to medians and 68% credible intervals. We note that Ref. [30] arrive at a similar result for the position of the peak in an independent analysis, however their median value of the peak is higher than ours. The energy density reached in the center of the maximally stable NSs in the two cases are \(\varepsilon_{\rm max}=1089^{+95}_{-117}\) MeV/fm\({}^{3}\) and \(\varepsilon_{\rm max}=1011^{+64}_{-72}\) MeV/fm\({}^{3}\), respectively. Note that the maximum energy density is lower for the mass-gap NS case, in which the maximally massive NSs have larger masses. This follows from the fact that a larger peak in the speed of sound, which creates heavier NSs, will lead to an earlier destabilisation after the speed of sound drops.
One can interpret the peak in the speed of sound as a dominance of repulsive interactions, opposed to the finite temperature case, where it never exceeds the conformal limit [118; 119; 67]. This might be interpreted as an indication of deconfinement, which might be linked to the percolation of hadrons. Using a simple model one can estimate the density at which percolation occurs (see e.g. Ref. [30]). In Ref. [30] the authors use an average mass radius of protons of \(r_{p}=(0.80\pm 0.05)\) fm (taken from Ref. [120]) that is directly extracted from experimental data of \(\phi\) meson photoproduction, yielding a density of \(n_{\rm B,p}=0.57^{+0.12}_{-0.09}\) fm\({}^{-3}\). This density is obtained from percolation theory through the expression \(n_{\rm B,p}=1.22/V_{0}\), where \(V_{0}=4r_{p}^{3}\pi/3\)[121; 122]. Ref. [30] arrives at a value \(n_{\rm B,p}=0.56^{+0.09}_{-0.08}\) fm\({}^{-3}\), which is remarkably close to the estimated value of the percolation density. For the hypermassive NS and the mass-gap NS scenarios we calculate the density of the peak to be at \(0.54^{+0.06}_{-0.09}\) fm\({}^{-3}\) and \(0.54^{+0.04}_{-0.07}\) fm\({}^{-3}\), respectively, which are slightly lower but still consistent with the estimated density of percolation. It is worth noting that for the mass radius of the proton there are several competing results on the market, starting from as low as \(r_{p}=(0.55\pm 0.03)\) fm [123] up to \(r_{p}=(0.86\pm 0.08)\) fm [124]. Moreover, beside the mass radius of the proton there is also its charge radius, which can be measured accurately by electron scattering experiments. Currently, there are two competing, non-overlapping values of \(r_{\rm Ep}=0.84\) fm and 0.88 fm [125; 126; 127]. Thus, the size of the proton is still under debate, and can be as low as 0.55 fm, which would give a much higher percolation density.
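As a quick numerical cross-check of this geometrical estimate, the percolation density for a given proton mass radius follows directly from \(n_{\rm B,p}=1.22/V_{0}\):

```python
import numpy as np

def percolation_density(r_p_fm):
    """Percolation estimate n_Bp = 1.22 / V0 with V0 = 4 pi r_p^3 / 3, in fm^-3."""
    return 1.22 / (4.0 / 3.0 * np.pi * r_p_fm**3)

# r_p = 0.80 fm gives ~0.57 fm^-3, while r_p = 0.55 fm would give ~1.75 fm^-3
```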
Figure 6: Prior and posterior probabilities of different parameter sets in the plane of the phase transition parameters \(\bar{n}\) and \(\Gamma\). Darker colors indicate higher probabilities (white areas correspond to \(\sim 0\) probability), with parameters having been normalized by the maximum probability in the whole parameter space and not in the specific plane shown in these panels. Parameter sets with an orange color are excluded by the requirement \(\bar{n}-\Gamma\geq n_{0}\) or by causality, and/or by stability. The top panels correspond to slices in the parameter space with \(m_{\sigma}=290\) MeV and \(g_{V}=3.1\), while the bottom panels have \(m_{\sigma}=290\) MeV and \(g_{V}=4.7\). The first three panels from left to right both for the top and bottom panels show the prior, the posterior with the NICER measurements, and the posterior with NICER and tidal deformability measurements, respectively. The rightmost panel at the top has the upper mass constraint from the hypermassive NS hypothesis included as well, while the one at the bottom contains instead the BH hypothesis together with the lower mass bound from the mass-gap object in GW190814. Parameter sets with the maximum probability are marked with a ring.
In the right panel of Fig. 7 we also show the limits for the trace anomaly, defined as
\[\Delta=\frac{1}{3}-\frac{p}{\varepsilon}. \tag{25}\]
This was recently proposed as a measure of conformality [128]. As we approach the conformal limit the value of \(\Delta\) will tend to zero. Similarly to the results of Ref. [30] we find that the value of \(\Delta\) approaches zero from above in the hypermassive NS scenario for large \(\varepsilon\) values. This is to be compared to the mass-gap NS scenario, where there is no such trend, in fact, quite remarkably, a negative value for \(\Delta\) is highly favoured at \(\varepsilon\approx 800\) MeV/fm\({}^{3}\).
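Both diagnostics shown in Fig. 7 can be obtained directly from a tabulated EoS; a minimal sketch using finite differences for the sound speed is given below.

```python
import numpy as np

def cs2_and_trace_anomaly(eps, p):
    """Sound speed squared c_s^2 = dp/d(eps) via finite differences and the
    trace anomaly Delta = 1/3 - p/eps of Eq. (25), for tabulated arrays eps, p."""
    cs2 = np.gradient(p, eps)
    delta = 1.0 / 3.0 - p / eps
    return cs2, delta
```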
Finally, Fig. 8 shows the posteriors when in addition to the NICER and tidal deformability measurements, the BH hypothesis and the constraint from the central compact object inside HESS J1731-347 is also taken into account. The two-sigma credible interval of the measurement barely overlaps with our set of mass-radius curves, however, a considerable region is still allowed in the \(M-R\) plane. As a consequence of this constraint the region of hybrid stars with a quark core shrinks to a narrow strip. None of the NSs with the DD2 model used for the hadronic EoS is allowed, since they generally predict large radii for low mass NSs. Comparing the posterior probabilities on the parameter space to those in the bottom right panel in Fig. 2, we see that with this constraint included parameter sets with a low value of \(\bar{n}\) are less probable, and the distributions shift upwards. The maximum posterior probability corresponds to the parameter set \(m_{\sigma}=290\) MeV, \(g_{V}=4.7\), \(\bar{n}=4.25n_{0}\) and \(\Gamma=2.5n_{0}\).
We summarize our results for the calculated posterior radius distributions of 1.4 \(M_{\odot}\) and 2 \(M_{\odot}\) NSs for various astrophysical constraints in Table 1 and Fig. 9. Note, however, that these results should be taken with a grain of salt, since our prior was not preprocessed in order to acquire a uniform radius prior, which should be done in order to obtain meaningful results (see e.g. Ref. [97]). Note also, that although it is not mentioned in the table and figure explicitly, our prior includes constraints from our constituent quark model implicitly, which restrict radii to values \(R_{1.4}\gtrsim 12\) km.
## IV Conclusion
In this paper we have investigated what can be inferred from astrophysical observations about the properties of quark matter inside NSs and the phase transition between the hadronic and quark phases. For this purpose we have utilized the (axial)vector meson extended linear sigma-model to describe quark matter at high densities, the SFHo and DD2 models as hadronic EoSs representing softer and stiffer hadronic models, respectively. To transition from the hadronic to the quark model we have used a general polynomial concatenation, the parameters of which can be varied to create phase transitions at different densities. The whole parameter space with 4 parameters (2 from the concatenation, and 2 from the constituent quark model) was studied by applying the complete measurement data from recent astrophysical observations.
First of all, we have shown that there is a tight correlation between the parameters of the constituent quark model and the maximum mass attainable by heavy NSs described by that model, even though many of the maximum mass NSs do not have pure quark matter in their cores. Hence, some properties of quark matter at high densities might be inferred by only gaining information about the intermediate-density region, and therefore determining the maximum mass of NSs might be used to deduce information about the properties of strongly interacting matter at high densities.
Figure 7: The 90% credible intervals of the sound speed squared (left and middle) and the trace anomaly (right) as a function of energy density under various constraints, using the SFHo model for the hadronic EoS. Different contours show results for the prior with the 2 \(M_{\odot}\) and the pQCD minimal constraints (blue), the posterior with the NICER and tidal deformability measurements (red), the posterior with the hypermassive NS hypothesis included as well (yellow), and the posterior for the alternative scenario where the mass-gap object in GW190814 is considered a NS (green). On the left, for the yellow contour, the maximum posterior probability EoS is shown as well. The position of the speed of sound peaks with symmetric error bars (middle) and the energy density at the center of maximally stable NSs (middle and right) is also displayed with circles and vertical lines, respectively.
In our Bayesian analysis, we have investigated the effect of different astrophysical measurements on mass-radius curves, the radius distribution of NSs with specific masses, and on the posterior probabilities of different parameter sets. In addition to the lower mass limit from 2 \(M_{\odot}\) stars and constraints from pQCD, we have also considered the two NICER measurements, and the tidal deformability data obtained from GW170817. Moreover, we have studied the effect of additional constraints, such as the upper mass bound inferred from the hypermassive NS hypothesis, interpreting the mass-gap object in GW190814 as a very massive NS, or the mass-radius data obtained from the light compact object in HESS J1731-34.
We have shown that the 90% credible regions on the mass-radius diagram obtained by using the complete observational data of GW170817 differ slightly from those originating from a sharp cut-off using the 90% bound on the parameters of the binary corresponding to GW170817. This was also discussed in Ref. [32] and suggests the use of the the complete data for more precise predictions.
We have also found that the maximum mass of hybrid stars with a pure quark core is below \(2.3\ M_{\odot}\). This is caused by the fact that there is a successive stiffening and softening in the intermediate-density region of our EoSs and in order to reach the density of our quark model, the NS sequence needs to go through the soft region without becoming unstable. Further constraints narrow down this region even more, leaving only a small space for hybrid stars with a pure quark core for EoSs with a soft hadronic part and none for ones with a stiff hadronic part. This is in line with the findings of some other studies [129; 130; 106; 131], which also suggest the possible existence of pure quark matter inside massive NSs, although in a restricted parameter region, highly dependent on the hadronic EoS.
\begin{table}
\begin{tabular}{|c||c|c|} \hline Measurement & \(R_{1.4}\) [km] & \(R_{2.0}\) [km] \\ \hline \hline Prior (pQCD + 2\(M_{\odot}\)) & \(12.93^{+0.88}_{-0.74}\) & \(13.12^{+1.24}_{-1.23}\) \\ \hline NICER & \(12.97^{+0.78}_{-0.74}\) & \(13.18^{+1.10}_{-1.07}\) \\ \hline NICER + \(\bar{\Lambda}\) & \(12.63^{+0.61}_{-0.48}\) & \(12.76^{+0.84}_{-0.78}\) \\ \hline NICER + \(\bar{\Lambda}\) + \(M_{U}\) & \(12.52^{+0.56}_{-0.46}\) & \(12.41^{+0.79}_{-0.64}\) \\ \hline NICER + \(\bar{\Lambda}_{\rm BH}\) + \(M_{\rm gap}\) & \(12.79^{+0.50}_{-0.50}\) & \(13.01^{+0.55}_{-0.53}\) \\ \hline NICER + \(\bar{\Lambda}_{\rm BH}\) + HESS & \(12.43^{+0.55}_{-0.38}\) & \(12.44^{+0.67}_{-0.63}\) \\ \hline \end{tabular}
\end{table}
Table 1: Median values of radii for 1.4 \(M_{\odot}\) and 2 \(M_{\odot}\) NSs for the different astrophysical constraints investigated in this paper. The errors represent the 90% credible intervals. All values correspond to NSs with hadronic EoSs given by the SFHo model.
Figure 8: Same as in Fig. 2 but with the constraint from the central compact object inside HESS J1731-347 also applied, in addition to the NICER and tidal deformability measurements, and the BH hypothesis.
Figure 9: Radius intervals for 1.4 \(M_{\odot}\) (blue) and 2 \(M_{\odot}\) (red) NSs. The circles represent the median values, while error bars correspond to the 90% credible intervals. The vertical dashed lines separate alternative scenarios and additional, recent measurements. All data correspond to hybrid EoSs created using the SFHo hadronic model. Data for these intervals can be found in Table 1.
other studies [129; 130; 106; 131], which also suggest the possible existence of pure quark matter inside massive NSs, although in a restricted parameter region, highly dependent on the hadronic EoS.
Additionally, we have also shown how the parameters of the hadron-quark concatenation are affected by the various astrophysical constraints. For the two main scenarios considered in this paper, we have found that the parameter encoding the central density of the phase transition falls between \(3n_{0}<\bar{n}<5n_{0}\), and that the appearance of pure quark matter at densities below \(\sim 4n_{0}\) is disfavoured in both scenarios.
Even though the presence of pure quark matter is restricted to a limited region in the mass-radius diagram, we find that a peak in the speed of sound is preferred by astrophysical observations, which might be interpreted as a consequence of reaching percolation densities. We have found that this peak is at \(\varepsilon_{\rm p}=567^{+71}_{-103}\) MeV/fm\({}^{3}\) and \(587^{+53}_{-86}\) MeV/fm\({}^{3}\) for the hypermassive NS and the mass-gap NS cases, respectively, or at \(n_{\rm B,p}=0.54^{+0.06}_{-0.09}\) fm\({}^{-3}\) and \(0.54^{+0.04}_{-0.07}\) fm\({}^{-3}\) in terms of baryon density. This is consistent with the findings of Ref. [30] and with estimations based on a geometrical picture. We have also shown the dependence of the \(\Delta\) trace anomaly on the energy density and found that in the mass-gap NS scenario a negative \(\Delta\) is preferred in some energy density range.
###### Acknowledgements.
The authors would like to thank Michal Marczenko for their useful advice on Fig. 7 and the discussion about the speed of sound peak. J. T., P. K. and G. W. acknowledge support by the National Research, Development and Innovation (NRDI) fund of Hungary, financed under the FK_19 funding scheme, Project No. FK 131982 and under the K_21 funding scheme, Project No. K 138277. J. T. was supported by the UNKP-22-3 New National Excellence Program of the Ministry for Culture and Innovation from the source of the NRDI fund. J. S. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 'Strong-interaction matter under extreme conditions'- project number 315477589 - TRR 211.
|
2309.15323 | Non-commutative gauge symmetry from strong homotopy algebras | We explicitly construct an L$_\infty$ algebra that defines U$_{\star}(1)$
gauge transformations on a space with an arbitrary non-commutative and even
non-associative star product. Matter fields are naturally incorporated in this
scheme as L$_\infty$ modules. Some possibilities for including P$_\infty$
algebras are also discussed. | Vladislav Kupriyanov, Fernando Oliveira, Alexey Sharapov, Dmitri Vassilevich | 2023-09-27T00:06:17Z | http://arxiv.org/abs/2309.15323v2 | # Non-commutative gauge symmetry from strong homotopy algebras
###### Abstract
We explicitly construct an L\({}_{\infty}\) algebra that defines U\({}_{\star}(1)\) gauge transformations on a space with an arbitrary noncommutative and even nonassociative star product. Matter fields are naturally incorporated in this scheme as L\({}_{\infty}\) modules. Some possibilities for including P\({}_{\infty}\) algebras are also discussed.
## 1 Introduction
Constructing gauge theories on noncommutative spaces is a notoriously difficult problem. There are many proposals and schemes on the market, which have their advantages and disadvantages. To us, the L\({}_{\infty}\) approach to gauge theories [1, 2] applied to noncommutative spaces in [3, 4] seems to be the most promising. However, there is a caveat: so far there is no proof that desired L\({}_{\infty}\)
algebras exist to all orders in the parameter of noncommutativity. Filling this gap is the principal goal of the present work.
In this paper, we do not restrict ourselves to associative noncommutative spaces. Nonassociative star products appear in the context of magnetic backgrounds in field theory [5], non-geometric fluxes [6, 7, 8] and D-branes [9, 10, 11] in string theory, see [12] for a review. To the readers who are not interested in nonassociative geometries, we mention that associativity does not bring drastic simplifications to our L\({}_{\infty}\) construction.
The strong homotopy Lie algebras, or the L\({}_{\infty}\) algebras, are being used in many areas of theoretical physics and are especially efficient in the treatment of quantization problems (see [13] for an introduction). In contrast to the Lie algebras, the L\({}_{\infty}\) algebras are defined through infinitely many \(n\)-linear maps \(\ell_{n}\) on a \(\mathbb{Z}\)-graded vector space \(V\). The maps are defined to satisfy an infinite sequence of relations called the generalized Jacobi identities. In the present work, we are interested in a noncommutative U\((1)\) gauge theory. Namely, we define gauge transformations of the gauge connections. Therefore we take \(V=V_{0}\oplus V_{1}\) with \(V_{0}\) containing zero-forms identified with the gauge parameters and \(V_{1}\) containing gauge field one-forms. The map \(\ell_{2}\) on \(V_{0}\otimes V_{0}\) is defined by the star product and used as an input. Other maps are determined (non-uniquely) through the generalized Jacobi identities. The procedure of solving the generalized Jacobi identities (which is called the L\({}_{\infty}\) bootstrap) is organized as a sequence of steps. At each step, we solve two equations. The construction exploits the standard homological perturbation theory1. We would like to stress that in this way we construct noncommutative U\((1)\) gauge transformations for an arbitrary star product. In this setting, it is natural to interpret the mater fields as an L\({}_{\infty}\) module. The construction of such a module is completely similar to the construction of the L\({}_{\infty}\) algebra itself.
Footnote 1: Equivalent results may be also achieved by solving directly the recurrence relations similarly to the construction of star products in [14, 15]. This latter method is much more technically demanding. We do not present any details here.
Since we allow for nonassociative star products we also allow for almost Poisson structures which do not need to be Lie-Poisson meaning that the corresponding biviector does not need to satisfy the Jacobi identity. In this context, it is natural to ask the following question. Which algebraic condition may replace the associativity in the quantization of almost Poisson structures which are not Lie-Poisson? The possibilities of imposing identities in the deformed algebra have been considerably reduced by the no-go results [16, 17, 18] and practically closed by the recent work [19]. A promising algebraic framework seems to be provided by the strong homotopy associative algebras A\({}_{\infty}\)[20, 21]. The relation between these algebras and the L\({}_{\infty}\) algebras is similar to that between the associative and Lie algebras. The A\({}_{\infty}\) algebras describe a controlled (up to homotopy) violation of associativity. In contrast to the L\({}_{\infty}\) algebras the multiplication maps \(m_{n}\) of A\({}_{\infty}\) algebras do not have any prescribed symmetry properties, which makes the analysis of corresponding homotopy relations much more complicated. A counterpart of A\({}_{\infty}\)
algebras in the Poisson setting is given by the strong homotopy Poisson algebras P\({}_{\infty}\) introduced by Cattaneo and Felder [22]. In the present work, we attempt to construct P\({}_{\infty}\) algebras starting with an almost Poisson structure with two different choices for a graded commutative product on \(V\) (which is another ingredient of P\({}_{\infty}\) algebras). We show that in both cases, the P\({}_{\infty}\) relations imply the Jacobi identity for the almost Poisson bracket.
The paper is organized as follows. In the next Section, we present the main definitions and explain the logic behind our construction of L\({}_{\infty}\) algebras. The construction itself is contained in Section 3. In Section 4, we introduce L\({}_{\infty}\) modules associated with matter fields and prove their existence. In Section 5, we discuss the A\({}_{\infty}\) and P\({}_{\infty}\) algebras. Section 6 contains some concluding remarks.
## 2 Setup and main definitions
An L\({}_{\infty}\)-algebra is a \(\mathbb{Z}\)-graded vector space \(V=\bigoplus_{k\in\mathbb{Z}}\,V_{k}\) together with a sequence of graded-antisymmetric multilinear maps, \(\ell_{n}:V^{\otimes n}\to V\). If one defines the degree of \(v\in V_{k}\) as \(|v|=k\), then the degree \(|\ell_{n}|=2-n\). The graded antisymmetry means that
\[\ell_{n}(\ldots,v_{j},v_{j+1},\ldots)=-(-1)^{|v_{j}|\cdot|v_{j+1}|}\ell_{n}( \ldots,v_{j+1},v_{j},\ldots). \tag{1}\]
The maps should satisfy the identities \({\cal J}_{n}(v_{1},\ldots,v_{n})=0\) for each \(n\geq 1\), called the generalized Jacobi identities, with
\[{\cal J}_{n}(v_{1},\ldots,v_{n}):=\sum_{i+j=n+1}(-1)^{i\,(n-i)}\sum_{\sigma\in{\rm Sh}_{i,n-i}}\chi(\sigma;v_{1},\ldots,v_{n})\,\ell_{j}\big{(}\ell_{i}(v_{\sigma(1)},\ldots,v_{\sigma(i)}),v_{\sigma(i+1)},\ldots,v_{\sigma(n)}\big{)}\,. \tag{2}\]
Here, the second sum runs over \((i,n-i)\)-shuffled permutations \(\sigma\in S_{n}\) of degree \(n\) which are restricted as
\[\sigma(1)<\cdots<\sigma(i)\,,\qquad\sigma(i+1)<\cdots<\sigma(n)\]
and \(\chi(\sigma;v_{1},\ldots,v_{n})=\pm\,1\) is the Koszul sign defined by the relation
\[\ell_{n}(v_{1},\ldots,v_{n})=\chi(\sigma;v_{1},\ldots,v_{n})\,\ell_{n}(v_{ \sigma(1)},\ldots,v_{\sigma(n)})\,. \tag{3}\]
This completes the definition of generic L\({}_{\infty}\) algebras.
We will be interested only in the so-called flat L\({}_{\infty}\) algebras for which \(\ell_{0}=0\). Later we will impose some further restrictions on the class of such algebras.
The arena for our construction is a formal quantization of \(\mathbb{R}^{p}\) in the direction of an almost Poisson structure defined by a smooth bivector \(P\) on \(\mathbb{R}^{p}\). This bivector in turn defines an almost Poisson bracket
\[\{f,g\}=P({\rm d}f,{\rm d}g)=P^{ij}\partial_{i}f\cdot\partial_{j}g \tag{4}\]
for \(f,\,g\in C^{\infty}(\mathbb{R}^{p})\). A formal noncommutative (and nonassociative) structure on \(\mathbb{R}^{p}\) is defined through a star product. Let \(\mathcal{A}=C^{\infty}(\mathbb{R}^{p})[[\lambda]]\) be an algebra of formal power series in the parameter \(\lambda\) with coefficients in \(C^{\infty}(\mathbb{R}^{p})\). A star product is a deformation of the pointwise product on \(\mathcal{A}\). It is a bilinear map \(\star:\mathcal{A}\times\mathcal{A}\to\mathcal{A}\) of the form
\[f\star g=f\cdot g+\sum_{r=1}^{\infty}\lambda^{r}C_{r}(f,g)\,, \tag{5}\]
where the \(C_{r}\)'s are bidifferential operators and
\[C_{1}(f,g)=\{f,g\}\,. \tag{6}\]
We shall consider exclusively the star products which satisfy the stability of unity condition
\[1\star f=f\star 1=f \tag{7}\]
for all \(f\in C^{\infty}(\mathbb{R}^{p})[[\lambda]]\). We do not impose any further conditions on the star product. In particular, we do _not_ assume that this product is associative. Thus, we do not need to assume that the bivector \(P\) satisfies the Jacobi identity.
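As a concrete illustration, the following sympy sketch implements a first-order truncation of (5) on \(\mathbb{R}^{2}\) with a constant bivector (the choice of \(P\) and the truncation order are illustrative assumptions, not the general product) and checks the stability of unity (7) together with the leading term of the star commutator:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda')
P = sp.Matrix([[0, 1], [-1, 0]])      # constant antisymmetric bivector on R^2
X = [x1, x2]

def star(f, g):
    """First-order truncation f*g = f g + lam * P^{ij} d_i f d_j g of (5)."""
    corr = sum(P[i, j] * sp.diff(f, X[i]) * sp.diff(g, X[j])
               for i in range(2) for j in range(2))
    return sp.expand(f * g + lam * corr)

f, g = x1**2, x1 * x2
print(sp.simplify(star(sp.Integer(1), f) - f))   # 0: stability of unity (7)
print(sp.simplify(star(f, g) - star(g, f)))      # 4*lambda*x1**2 = 2*lam*{f,g}: leading term of the star commutator
```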
Our purpose is to describe U\((1)\) gauge transformations on noncommutative spaces. The minimal requirement is to have gauge parameters (which are 0-forms) and gauge fields (which are 1-forms). Therefore, we take \(V_{0}=C^{\infty}(\mathbb{R}^{p})[[\lambda]]\), \(V_{1}=\Omega^{1}(\mathbb{R}^{p})[[\lambda]]\) and set \(V_{k}=\{0\}\) for \(k\neq 0,1\). The elements of \(V_{0}\) will be denoted by \(f,g,h,\dots\) and the elements of \(V_{1}\) - by \(A,B,\dots\).
The star product enters the game through our choice of the bilinear map \(\ell_{2}\) on \(V_{0}\):
\[\ell_{2}(f,g)=[f,g]_{\star}\equiv f\star g-g\star f\,. \tag{8}\]
In this formula, only the antisymmetric part of the star appears. The complete product will appear in Section 4 where we will study the matter fields. The stability of unity condition (7) yields \(\ell_{2}(1,f)=0\). We also require that
\[\ell_{2}(1,A)=0 \tag{9}\]
for all \(A\in V_{1}\). Then the condition \(\mathcal{J}_{2}(1,f)=0\) gives
\[\ell_{2}(\ell_{1}(1),f)=0,\]
which is satisfied by
\[\ell_{1}(g)=\mathrm{d}g, \tag{10}\]
where \(\mathrm{d}\) is the de Rham differential extended to formal series by linearity.
By counting the degrees, we obtain that non-zero multilinear maps \(\ell_{n}\) may contain either one scalar and \(n-1\) one-forms or two scalars and \(n-2\) one forms. Similarly, non-trivial conditions \(\mathcal{J}_{n}\) may contain either two or three scalars. The L\({}_{\infty}\) relations will be solved according to the following scheme,
For example, in the first step, we will use the relation \({\cal J}_{2}(f,g)=0\) to determine \(\ell_{2}(f,A)\) and the relation \({\cal J}_{3}(f,g,h)=0\) to define \(\ell_{3}(f,g,A)\). We call this iterative procedure the L\({}_{\infty}\) bootstrap.
Some comments are in order. First, the solutions for \(\ell_{n}\) will be highly non-unique. Second, it will be enough to consider only the "diagonal" elements of the multilinear maps computed for coinciding arguments from \(V_{1}\). These elements are in one-to-one correspondence with non-diagonal elements. Indeed, for any symmetric \(n\)-linear map \({\cal L}_{n}\) one has the identity
\[{\cal L}_{n}(A_{1},\ldots,A_{n})=\frac{1}{n!}\frac{\partial^{n}}{\partial z^{ 1}\ldots\partial z^{n}}\,{\cal L}_{n}(\hat{A},\ldots,\hat{A}), \tag{11}\]
where \(\hat{A}=z^{1}A_{1}+\cdots+z^{n}A_{n}\) with real variables \(z^{j}\). Furthermore, when evaluating the \(\ell_{n}\)'s or \({\cal J}_{n}\)'s on \(V^{\otimes n}\), we can restrict ourselves to expressions in which the arguments from \(V_{0}\) precede those from \(V_{1}\), as in the table above. The other distributions of arguments are obtained by graded antisymmetry (1).
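As a toy symbolic check of the polarization identity (11) for \(n=2\) (the bilinear map and the symbols below are illustrative placeholders):

```python
import sympy as sp

z1, z2, a1, a2 = sp.symbols('z1 z2 a1 a2')
L2 = lambda u, v: u * v                      # toy symmetric bilinear map standing in for L_2
A_hat = z1 * a1 + z2 * a2                    # A-hat = z^1 A_1 + z^2 A_2
recovered = sp.Rational(1, 2) * sp.diff(L2(A_hat, A_hat), z1, z2)
print(sp.simplify(recovered - L2(a1, a2)))   # prints 0, as predicted by (11) with n = 2
```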
Once the L\({}_{\infty}\) algebra has been constructed, the gauge transformations can be defined as [1]
\[\delta_{f}A:={\rm d}f-[\![f,A]\!]_{A}=\sum_{n=0}^{\infty}\frac{1}{n!}(-1)^{ \frac{n(n-1)}{2}}\,\ell_{n+1}(f,A,\ldots,A)\,. \tag{12}\]
The generalized Jacobi identities with two gauge parameters, \({\cal J}_{n+2}(f,g,A^{\otimes n})=0\), imply that
\[[\delta_{f},\delta_{g}]A=\delta_{[\![f,g]\!]_{A}}A\,, \tag{13}\]
where
\[[\![f,g]\!]_{A}:=-\sum_{n=0}^{\infty}\,\frac{1}{n!}\,(-1)^{\frac{n(n-1)}{2}} \,\ell_{n+2}\left(f,g,A^{\otimes n}\right)\,. \tag{14}\]
In addition, the identities with three gauge parameters, \({\cal J}_{n+3}(f,g,h,A^{\otimes n})=0\), imply the relation [23]
\[[\![h,[\![f,g]\!]_{A}]\!]_{A}+\delta_{h}[\![f,g]\!]_{A}+\mbox{cycl}(f,g,h)=0\,, \tag{15}\]
which in turn guarantees the Jacobi identity for gauge variations \(\delta_{f}\),
\[\left[\delta_{f},\left[\delta_{g},\delta_{h}\right]\right]+\text{ cycl}\equiv 0\,. \tag{16}\]
In the zero order in \(A\), one recovers the usual formulas
\[\delta_{f}(A)=\text{d}f+\mathcal{O}(A),\qquad\llbracket f,g\rrbracket_{A} = -[f,g]_{\star}+\mathcal{O}(A)\,, \tag{17}\]
which are shared by most of the approaches to noncommutative \(\text{U}(1)\) gauge theories.
## 3 Construction of the \(\text{L}_{\infty}\) algebra
In this section, we apply the method of \(\text{L}_{\infty}\) bootstrap [3, 4] to construct an \(\text{L}_{\infty}\) algebra underlying the noncommutative gauge symmetry. We show that, under mild assumptions, the equations of \(\text{L}_{\infty}\) bootstrap can be solved explicitly up to any given order in the gauge field \(A\).
### The first step
At the first step of \(\text{L}_{\infty}\) bootstrap, we define \(\ell_{2}(f,A)\) and \(\ell_{3}(f,g,A)\) which solve the conditions \(\mathcal{J}_{2}(f,g)=0\) and \(\mathcal{J}_{3}(f,g,h)=0\), respectively.
The relation \(\mathcal{J}_{2}(f,g)=0\) reads
\[\ell_{2}\left(\ell_{1}(f),g\right)+\ell_{2}\left(f,\ell_{1}(g)\right)=\ell_{1 }\left(\ell_{2}(f,g)\right)\,. \tag{18}\]
We can write the right-hand side of (18) as
\[\ell_{1}\left(\ell_{2}(f,g)\right)=\text{d}\ell_{2}(f,g)=\sum_{p,q=1}^{\infty }G_{a}^{(i)^{p}(j)^{q}}\left(\partial_{i}\right)^{p}f\left(\partial_{j}\right) ^{q}g\,\text{d}x^{a}, \tag{19}\]
where we used multi-index notations
\[(i)^{p}=\left(i_{1}\ldots i_{p}\right),\qquad\left(\partial_{i}\right)^{p}= \partial_{i_{1}}\ldots\partial_{i_{p}}\,. \tag{20}\]
If a multi-index is included in round brackets, this means symmetrization. By definition,
\[G_{a}^{(i)^{p}(j)^{q}}=-G_{a}^{(j)^{q}(i)^{p}}\,. \tag{21}\]
Besides, due to the stability of unity condition, \(\ell_{2}(f,g)\) contains no terms without derivatives of \(f\) or \(g\). Therefore, the summation in (19) starts with \(p,q=1\).
Now, one easily finds the following solution to (18):
\[\ell_{2}(f,A)=\frac{1}{2}\sum_{p,q=1}^{\infty}G_{a}^{(i)^{p}(j)^{q}}\left( \partial_{i}\right)^{p}f\left(\partial_{j}\right)^{q-1}A_{j_{q}}\,\text{d}x^{ a}. \tag{22}\]
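Indeed, a short check that (22) solves (18) goes as follows: by the graded antisymmetry (1), \(\ell_{2}(\mathrm{d}f,g)=-\ell_{2}(g,\mathrm{d}f)\), and relabelling the two groups of indices and using (21) one finds
\[\ell_{2}(\mathrm{d}f,g)+\ell_{2}(f,\mathrm{d}g)=2\cdot\frac{1}{2}\sum_{p,q=1}^{\infty}G_{a}^{(i)^{p}(j)^{q}}\left(\partial_{i}\right)^{p}f\left(\partial_{j}\right)^{q}g\,\mathrm{d}x^{a}=\mathrm{d}\ell_{2}(f,g)\,.\]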
To determine \(\ell_{3}(f,g,A)\in V_{0}\) we need to solve the equation
\[\ell_{3}(\ell_{1}(f),g,h)+\ell_{3}(f,\ell_{1}(g),h)+\ell_{3}(f,g, \ell_{1}(h))= \tag{23}\] \[-\ell_{2}(\ell_{2}(f,g),h)-\ell_{2}(\ell_{2}(g,h),f)-\ell_{2}(\ell _{2}(h,f),g)\,.\]
The right hand side is clearly antisymmetric in \(f\), \(g\) and \(h\) and we denote it by
\[\Pi_{\star}(f,g,h)=[[f,g]_{\star},h]_{\star}+[[g,h]_{\star},f]_{\star}+[[h,f]_ {\star},g]_{\star}\,. \tag{24}\]
Moreover, the stability of unity implies that
\[\Pi_{\star}(f,g,h)=\hat{\Pi}_{\star}(\mathsf{d}f,\mathsf{d}g,\mathsf{d}h)\,, \tag{25}\]
where the polydifferential operator
\[\hat{\Pi}_{\star}(\mathsf{d}f,\mathsf{d}g,\mathsf{d}h)=\sum_{p,q,r=1}^{\infty }F^{(i)^{p}(j)^{q}(k)^{r}}\left(\partial_{i}\right)^{p}f\left(\partial_{j} \right)^{q}g\left(\partial_{k}\right)^{r}h \tag{26}\]
is antisymmetric with respect to the permutation of \(f\), \(g\) and \(h\), so the coefficients \(F^{(i)^{p}(j)^{q}(k)^{r}}\) are antisymmetric with respect to the permutations of the groups of indices. Now, one can see that the expression
\[\ell_{3}(f,g,A)=\frac{1}{3}\hat{\Pi}_{\star}(\mathsf{d}f,\mathsf{d}g,A) \tag{27}\]
satisfies Eq. (23). This completes the first step of induction.
### All order solution
At the \((m+1)\)-th step of the L\({}_{\infty}\) bootstrap, we are trying to determine the structure maps
\[\ell_{m+2}(f,A^{1},\ldots,A^{m+1})\quad\text{and}\quad\ell_{m+3}(f,g,A^{1}, \ldots,A^{m+1}) \tag{28}\]
from the generalized Jacobi identities
\[\mathcal{J}_{m+2}\left(f,g,A^{1},\ldots,A^{m}\right)=0\quad\text{and}\quad \mathcal{J}_{m+3}\left(f,g,h,A^{1},\ldots,A^{m}\right)=0\,. \tag{29}\]
The complexity of the analysis increases rapidly with \(m\). Therefore, to proceed further, we need some preparation. First of all, it is convenient to replace the scalar parameters \(f,g,h\in C^{\infty}(\mathbb{R}^{n})\) by a single _odd_ function \(C\) on \(\mathbb{R}^{n}\). By definition,
\[C(x)C(x^{\prime})=-C(x^{\prime})C(x)\,.\]
From the viewpoint of BRST theory [24], the function \(C\) is nothing else but the ghost field associated with the gauge symmetry transformations (12). Then the structure maps (28) define and are defined by the following homogenous functions of \(C\) and \(A\):
\[\ell_{m+2}(C,A,\ldots,A)\quad\text{and}\quad\ell_{m+3}(C,C,A,\ldots,A)\,. \tag{30}\]
As a next step, we isolate the terms involving \(\ell_{1}(C)={\sf d}C\) and rewrite equations (29) as
\[\begin{array}{rcl}(\delta\ell_{m+2})(C,C;A,\ldots,A)&=&{\cal J}^{R}_{m+2}\,(C, C;A,\ldots,A)\,\\ (\delta\ell_{m+3})(C,C,C;A,\ldots,A)&=&{\cal J}^{R}_{m+3}\,(C,C,C;A,\ldots,A). \end{array} \tag{31}\]
The operator \(\delta\) on the left is defined by the formula
\[(\delta{\cal L})(\underbrace{C,\ldots,C}_{p+1};\underbrace{A,\ldots,A}_{q-1} )=q{\cal L}(C,\ldots,C;{\sf d}C,A,\ldots,A) \tag{32}\]
for any homogeneous function
\[{\cal L}(\underbrace{C,\ldots,C}_{p};\underbrace{A,\ldots,A}_{q})\,. \tag{33}\]
The functions \({\cal J}^{R}_{m+2}\) and \({\cal J}^{R}_{m+3}\) on the right are given by compositions of \(\ell_{n+2}(f,A^{\otimes(n+1)})\) and \(\ell_{n+3}(f,g,A^{\otimes(n+1)})\) with \(n<m\) that have been defined in previous stages. It is straightforward to see that the operator (32) squares to zero, making the space of functions (33) into a cochain complex. System (31) assumes thus the standard form of homological perturbation theory (see, e.g. [24, Ch. 8.4]) and its solvability is controlled by the cohomology of the coboundary operator \(\delta\). Applying \(\delta\) to both sides of (31) yields the cocycle conditions
\[(\delta{\cal J}^{R}_{m+2})(C,C,C;A,\ldots,A)=0\quad\mbox{and}\quad(\delta{ \cal J}^{R}_{m+3})(C,C,C,C;A,\ldots,A)=0\,. \tag{34}\]
These conditions are necessary for the equations to have a solution. Sufficiency requires more: both the cocycles must be trivial. Direct calculation shows that the right-hand sides of equations (31) are indeed \(\delta\)-closed, provided that all previous equations are satisfied (see [25] for the first non-trivial orders). If the differential were acyclic, this would immediately imply that equations (31) are solvable. However, this is not the case. For instance, the functions
\[{\cal L}(C)=C\,,\qquad{\cal L}(A,A)=F^{2}\,,\qquad{\cal L}(C,A,A)=CF^{2}\,, \tag{35}\]
where \(F_{ij}{\rm d}x^{i}\wedge{\rm d}x^{j}={\rm d}A\) and \(F^{2}=F_{ij}F^{ij}\), are nontrivial \(\delta\)-cocycles and one can easily construct more examples. To avoid possible obstructions to solvability we impose certain restrictions on the structure maps of the \(L_{\infty}\)-algebra to be found. First, we are looking for the \(\ell_{n}\)'s that are polydifferential operators with coefficients in \(C^{\infty}(\mathbb{R}^{n})\oplus\Omega^{1}(\mathbb{R}^{n})\). Second, the polydifferential operators should respect the unit, meaning that both \(\ell_{m+1}(f,A^{\otimes m})\) and \(\ell_{m+2}(f,g,A^{\otimes m})\) must vanish whenever one of their arguments \(f\) and \(g\) is equal to \(1\). We will refer to such polydifferential operators as _unital_. It is significant that the polydifferential operators defining the generalized Jacobi identities (29) are always unital if constructed from unital \(\ell_{n}\)'s and the same is true for the right-hand sides of equations (31).
Let us now introduce the following infinite sets of fields
\[\begin{array}{c}z^{\alpha}=\left\{\partial_{i_{1}}\ldots\partial_{i_{n+1}}C\right\}_{n=0}^{\infty},\qquad u^{\alpha}=\left\{\partial_{(i_{1}}\ldots\partial_{i_{n}}A_{i_{n+1})}\right\}_{n=0}^{\infty},\qquad w^{J}=\left\{\partial_{(i_{1}}\ldots\partial_{i_{n}}F_{i_{n+1})j}\right\}_{n=0}^{\infty}.\end{array} \tag{36}\]
It is easy to see that any unital polydifferential operator evaluated on \(A\) and \(C\) can be written uniquely as a polynomial in \(y\)'s, \(z\)'s, and \(w\)'s,
\[\mathcal{L}(\underbrace{C,\ldots,C}_{p};\underbrace{A,\ldots,A}_{q})=\sum_{k+ l=q}\lambda_{\alpha_{1}\cdots\alpha_{p}\beta_{1}\cdots\beta_{k}J_{1}\cdots J_{l}}z^{ \alpha_{1}}\cdots z^{\alpha_{p}}u^{\beta_{1}}\cdots u^{\beta_{k}}w^{J_{1}} \cdots w^{J_{l}}\,, \tag{37}\]
where only a finite number of the coefficients \(\lambda\) are different from zero. In this notation, the action of the differential (32) is given by the formula
\[\delta\mathcal{L}=z^{\alpha}\frac{\partial\mathcal{L}}{\partial u^{\alpha}}\,. \tag{38}\]
In order to show the acyclicity of \(\delta\) in the space of unital polydifferential operators (33) with \(p>0\), we introduce the operator
\[\delta^{*}\mathcal{L}=u^{\alpha}\frac{\partial\mathcal{L}}{\partial z^{\alpha }}\,. \tag{39}\]
Clearly,
\[\delta\delta^{*}+\delta^{*}\delta=N\,,\qquad N=z^{\alpha}\frac{\partial}{ \partial z^{\alpha}}+u^{\alpha}\frac{\partial}{\partial u^{\alpha}}\,. \tag{40}\]
The operator \(N\) counts the total degree of a polynomial in the variables \(z^{\alpha}\) and \(u^{\alpha}\). In particular, \(N\) is invertible on the subspace of polynomials at least linear in \(z\)'s.
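As a minimal illustration of (40), consider a single pair \((z,u)\) with \(z\) odd (so \(z^{2}=0\)) and \(\mathcal{L}=a(u)+z\,b(u)\). Then \(\delta\mathcal{L}=z\,a^{\prime}(u)\), \(\delta^{*}\mathcal{L}=u\,b(u)\), and
\[(\delta\delta^{*}+\delta^{*}\delta)\mathcal{L}=z\big{(}b(u)+u\,b^{\prime}(u)\big{)}+u\,a^{\prime}(u)=\Big{(}z\frac{\partial}{\partial z}+u\frac{\partial}{\partial u}\Big{)}\mathcal{L}=N\mathcal{L}\,,\]
in agreement with the homotopy relation.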
Using (40) one can easily see that the expressions2
Footnote 2: The operator \(h=\delta^{*}N^{-1}=N^{-1}\delta^{*}\) is the standard contracting homotopy for \(\delta\) often used in local BRST cohomology, see e.g. [26, App. A]. It is also obtained by the symmetrization of the homotopy operator of [27].
\[\begin{array}{rcl}\ell_{m+2}(C,A,\ldots,A)&=&\delta^{*}N^{-1}\mathcal{J}_{m+ 2}^{R}(C,C,A,\ldots,A)\,,\\ \ell_{m+3}(C,C,A,\ldots,A)&=&\delta^{*}N^{-1}\mathcal{J}_{m+3}^{R}(C,C,C,A, \ldots,A)\end{array} \tag{41}\]
solve Eqs. (31). This solution is by no means unique as one can add to the cochains (41) any coboundaries \(\delta\mathcal{L}\) with appropriate numbers of arguments. Notice that the operators \(\delta\), \(\delta^{*}\), \(N\), and \(N^{-1}\) respect unitality and the structure maps (41) are unital whenever \(\mathcal{J}_{m+2}^{R}(f,g,A^{\otimes m})\) and \(\mathcal{J}_{m+3}^{R}(f,g,h,A^{\otimes m})\) are so. Thus, the recurrence relations (41) allow one to algorithmically extend the de Rham differential \(\ell_{1}=\mathsf{d}\) and _any_ unital bidifferential operator \(\ell_{2}(f,g)\) to an entire \(\mathrm{L}_{\infty}\)-structure on \(C^{\infty}(\mathbb{R}^{n})\oplus\Omega^{1}(\mathbb{R}^{n})\).
### Example
By way of illustration, let us reconstruct the trilinear map \(\ell_{3}(f,A,A)\) from the generalized Jacobi identity \(\mathcal{J}_{3}(f,g,A)=0\). Explicitly,
\[\ell_{3}(\ell_{1}(f),g,A)+\ell_{3}(f,\ell_{1}(g),A)=\mathcal{J}_{3 }^{R}(f,g,A)\,, \tag{42}\] \[\mathcal{J}_{3}^{R}(f,g,A)=-\ell_{1}(\ell_{3}(f,g,A))-\ell_{2}( \ell_{2}(f,g),A)-\ell_{2}(\ell_{2}(A,f),g)-\ell_{2}(\ell_{2}(g,A),f)\,.\]
As explained in the previous section, we first replace \(f,g\to C\) and write \(\mathcal{J}_{3}^{R}(C,C,A)\) as a polynomial in the formal variables (36) with coefficients in one-forms:
\[\mathcal{J}_{3}^{R}(C,C,A)=\big{(}G_{a\alpha\beta\gamma}z^{\alpha}z^{\beta}u^ {\gamma}+\bar{G}_{a\alpha\beta J}z^{\alpha}z^{\beta}w^{J}\big{)}\text{d}x^{a}\,. \tag{43}\]
The cocycle condition
\[(\delta\mathcal{J}_{3}^{R})(C,C,C)=G_{a\alpha\beta\gamma}z^{\alpha}z^{\beta}z ^{\gamma}\text{d}x^{a}=0 \tag{44}\]
implies that the coefficients \(G_{a\alpha\beta\gamma}\) have the symmetry of the hook-shaped Young diagram in the indices \(\alpha,\beta,\gamma\), that is,
\[G_{a\alpha\beta\gamma}=-G_{a\beta\alpha\gamma}\,,\qquad G_{a\alpha\beta\gamma} +G_{a\beta\gamma\alpha}+G_{a\gamma\alpha\beta}=0\,. \tag{45}\]
The operator \(N^{-1}\) applied to (43) multiplies the first summand by \(1/3\) and the second by \(1/2\). Applying then the operator \(\delta^{*}\), we finally get
\[\ell_{3}(C,A,A)=-\frac{2}{3}G_{a\alpha\beta\gamma}z^{\alpha}u^{\beta}u^{ \gamma}\text{d}x^{a}+\bar{G}_{a\alpha\beta J}z^{\alpha}u^{\beta}w^{J}\text{d} x^{a} \tag{46}\]
or, equivalently,
\[\ell_{3}(f,A,A)=-\frac{2}{3}\sum_{p,q,r=1}^{\infty}G_{a}^{(i)^{p }(j)^{q}(k)^{r}l}\left(\partial_{i}\right)^{p}f\left(\partial_{j}\right)^{q-1 }A_{j_{q}}\left(\partial_{k}\right)^{r-1}A_{k_{r}}\text{d}x^{a} \tag{47}\] \[-\sum_{p,q,r=1}^{\infty}\bar{G}_{a}^{(i)^{p}(j)^{q}(k)^{r}l} \left(\partial_{i}\right)^{p}f\left(\partial_{j}\right)^{q-1}A_{j_{q}}\left( \partial_{k}\right)^{r-1}(\partial_{l}A_{k_{r}}-\partial_{k_{r}}A_{l})\text{d }x^{a}\,.\]
## 4 Matter fields as an \(\text{L}_{\infty}\) module
In this section, we suggest a simple way to incorporate matter fields in the \(\text{L}_{\infty}\) approach to noncommutative gauge theories. In conventional gauge theories, matter fields belong to a representation of the Lie algebra of gauge transformations. Similarly, passing to noncommutative spaces, we assume the matter fields to form an \(\text{L}_{\infty}\) module [32].
**Definition 4.1**.: _Consider an \(L_{\infty}\) algebra \((V,\ell_{n})\). Let \(M\) be a graded vector space and let \(k_{n}\) be multilinear maps_
\[k_{n}:V^{\otimes(n-1)}\otimes M\to M. \tag{48}\]
_of degree \(2-n\). We extend \(k_{n}\) to the elements \(v_{1},\ldots,v_{n}\in V\) by the equation \(k_{n}(v_{1},\ldots,v_{n}):=\ell_{n}(v_{1},\ldots,v_{n})\). Then \((M,k_{n})\) is an \(L_{\infty}\) module if \((V\oplus M,k_{n})\) is an \(L_{\infty}\) algebra._
We consider the case of a single scalar field \(\varphi\). The space \(M\) has a single homogeneous component. The grading assignment for this component does not play any role. We choose \(|\varphi|=1\). The only non-vanishing multilinear maps involving \(\varphi\) and compatible with the degree counting are \(k_{n+2}(f,A^{\otimes n},\varphi)\). We define the gauge transformation of \(\varphi\) as
\[\delta_{f}\varphi=-[\![f,\varphi]\!]_{A} := \sum_{n=0}^{\infty}\frac{1}{n!}(-1)^{\frac{n(n+1)}{2}}\,k_{n+2}( f,A,\ldots,A,\varphi)\] \[= k_{2}(f,\varphi)-k_{3}(f,A,\varphi)-\frac{1}{2}k_{4}(f,A,A, \varphi)+\ldots\,.\]
It is easy to check that the homotopy relations
\[{\cal J}_{n+3}(f,g,A,\ldots,A,\varphi)=0\,, \tag{50}\]
imply the closure condition
\[[\delta_{f},\delta_{g}]\varphi=\delta_{[\![f,g]\!]_{A}}\varphi \tag{51}\]
so that the algebra of gauge transformations of \(\varphi\) is consistent with the algebra of gauge transformations of \(A\) with the bracket \([\![f,g]\!]_{A}\) given by (14).
Let us outline the construction of the lower degree maps \(k_{n}\). Since \(M\) has a single component, \(k_{1}(\varphi)=0\). For the lowest order gauge variation of \(\varphi\), we make the choice
\[\delta_{f}\varphi=[\![\varphi,f]\!]_{A}=f\star\varphi+{\cal O}(A)\,, \tag{52}\]
which is compatible with (17) and implies
\[k_{2}(f,\varphi)=f\star\varphi\,. \tag{53}\]
This choice is motivated by correspondence with \({\rm U}(1)\) gauge theories on simple noncommutative spaces (e.g. the Moyal plane). The correspondence principle, however, does not allow one to fix \(k_{2}\) uniquely. One can replace (53) with the right product by the gauge parameter or with the star-commutator. The latter option is less interesting since the corresponding gauge transformation vanishes in the commutative limit.
The map \(k_{3}(f,A,\varphi)\) is now determined from the homotopy relation \({\cal J}_{3}(f,g,\varphi)=0\). This leads to the equation
\[k_{3}(\ell_{1}(f),g,\varphi)+k_{3}(f,\ell_{1}(g),\varphi)={\cal A}_{\star}(f,g,\varphi)-{\cal A}_{\star}(g,f,\varphi)\,, \tag{54}\]
where the star associator is given by
\[\mathcal{A}_{\star}(f,g,\varphi):=f\star(g\star\varphi)-(f\star g)\star\varphi\,. \tag{55}\]
Since we are working with a unital star product the associator contains at least one derivative of \(f\), \(g\), and \(\varphi\). Therefore, we can write
\[\mathcal{A}_{\star}(f,g,\varphi)=\hat{\mathcal{A}}_{\star}(\mathsf{d}f, \mathsf{d}g,\mathsf{d}\varphi)\,. \tag{56}\]
It is easy to check that
\[k_{3}(f,A,\varphi)=\frac{1}{2}\left(\hat{\mathcal{A}}_{\star}(\mathsf{d}f,A, \mathsf{d}\varphi)-\hat{\mathcal{A}}_{\star}(A,\mathsf{d}f,\mathsf{d}\varphi)\right) \tag{57}\]
satisfies Eq. (54).
With the general technique described in Section 3, it is not hard to write explicit formulas for determining the maps \(k_{n}\). Introducing the ghost field \(C\), we can bring the chain of homotopy relations \(\mathcal{J}_{m+2}(f,A^{\otimes m},\varphi)=0\) into the form of homological perturbation theory:
\[(\delta k_{m+2})(C,C;A,\ldots,A;\varphi)=\mathcal{J}_{m+2}^{R}(C,C;A,\ldots,A; \varphi)\,. \tag{58}\]
Here the right-hand side involves \(k_{n+2}\) with \(n<m\) and the differential \(\delta\) essentially coincides with (32):
\[(\delta\mathcal{L})(C,\ldots,C;A,\ldots,A;\varphi)=q\mathcal{L}(C,\ldots C; \mathsf{d}C,A,\ldots,A;\varphi) \tag{59}\]
for all
\[\mathcal{L}(\underbrace{C,\ldots C}_{p};\underbrace{A,\ldots,A}_{q};\varphi)\,. \tag{60}\]
Notice that \(\varphi\in M\) enters these relations as an external parameter and is not affected by \(\delta\). Again, one can see by induction that \(\delta\mathcal{J}_{m+2}^{R}\equiv 0\) provided that all previous equations \(\delta k_{n+2}=\mathcal{J}_{n+2}^{R}\) with \(n<m\) are satisfied. Furthermore, by construction, the polydifferential operator \(\mathcal{J}_{m+2}^{R}\) is unital and we may use the same homotopy operator as in (41) to write down the general solution to (58):
\[k_{m+2}(C;A,\ldots,A;\varphi)=\delta^{*}N^{-1}\mathcal{J}_{m+2}^{R}(C,C;A, \ldots,A;\varphi)+\mathcal{L}(\mathsf{d}C,A,\ldots,A;\varphi)\,, \tag{61}\]
\(\mathcal{L}\) being an arbitrary polydifferential operator on \(\wedge^{m+1}V_{1}\otimes M\) with values in \(M\).
## 5 A\({}_{\infty}\) and P\({}_{\infty}\) algebras
**Definition 5.1**.: _A (flat) \(A_{\infty}\) algebra is a \(\mathbb{Z}\) graded vector space \(V=\bigoplus_{k\in\mathbb{Z}}V_{k}\) together with a system of multilinear maps \(m_{n}:V^{\otimes n}\to V\), \(n\in\mathbb{N}\) of degree \(2-n\) satisfying the Stasheff
_identities_
\[\sum_{\begin{subarray}{c}r+s=n+1\\ 1\leq i\leq r\end{subarray}}(-1)^{\epsilon(n,r,s,i)}m_{r}(v_{1},\ldots,v_{i-1},m_{s}(v_{i},\ldots,v_{i+s-1}),v_{i+s},\ldots,v_{n})=0\,, \tag{62}\]
_where_
\[\epsilon(n,r,s,i)=(s+1)i+sn+s(|v_{1}|+\cdots+|v_{i-1}|) \tag{63}\]
_for \(n\in\mathbb{N}\)._
We took the sign conventions from [28]. Flatness means the absence of \(m_{0}\) map, i.e. \(m_{0}=0\). By writing the Stasheff identity (62) involving \(m_{2}(f,m_{2}(g,h))\) one can see that the binary product \(m_{2}(f,g)\) is associative up to homotopy.
Let \((V,m_{n})\) be an A\({}_{\infty}\) algebra. Define
\[\ell_{n}(v_{1},\ldots,v_{n})=\sum_{\sigma\in S_{n}}\chi(\sigma;v_{1},\ldots,v _{n})m_{n}(v_{\sigma(1)},\ldots,v_{\sigma(n)}). \tag{64}\]
Then, \((V,\ell_{n})\) is an L\({}_{\infty}\) algebra. Thus, A\({}_{\infty}\) algebras give L\({}_{\infty}\) algebras in a way much similar to the one in which associative algebras give Lie algebras.
The usual approach [29] to deformation quantization starts with a commutative algebra \(C^{\infty}(M)\) of smooth functions on some manifold \(M\) with the point-wise product \(f,g\to f\cdot g\) and a Poisson bracket \(\{\,\cdot\,,\cdot\}\) which is a derivation on this algebra, \(\{f,g\cdot h\}=\{f,g\}\cdot h+g\cdot\{f,h\}\). It is also assumed that the bracket satisfies the Jacobi identity, so that \((C^{\infty}(M),\{\,\cdot\,,\cdot\})\) becomes a Lie algebra. The existence of deformation quantization and an explicit construction follow from the Kontsevich formality theorem.
To extend this picture to our setting let us fix a graded commutative product on \(V\), i.e., a map \(\mu:V\otimes V\to V\). A degree \(k\) derivation \(D\) on \(V\) is a linear map such that (i) \(DV^{j}\subset V^{j+k}\) and (ii) \(D\mu(u,v)=\mu(Du,v)+(-1)^{k|u|}\mu(u,Dv)\).
Another ingredient, namely, a strong homotopy Poisson algebra (or a P\({}_{\infty}\) algebra) was defined by Cattaneo and Felder in [22], see also [30], [31].
**Definition 5.2**.: _A flat P\({}_{\infty}\) algebra is a flat L\({}_{\infty}\) algebra defined on a \(\mathbb{Z}\)-graded vector space \(V=\bigoplus_{k\in\mathbb{Z}}\,V_{k}\) with a graded commutative product \(\mu\) and multilinear maps \(p_{n}\), \(n\geq 1\), such that the maps_
\[v\to p_{n}(v_{1},\ldots,v_{n-1},v) \tag{65}\]
_are derivations on \(V\) of degree \(2-n+\sum_{i=1}^{n-1}|v_{i}|\)._
Let \(V^{(0)}\) be a graded vector space, and let \(V=V^{(0)}[[\lambda]]\). Consider an A\({}_{\infty}\) algebra on \(V\). The multilinear maps \(m_{n}\) can be represented as formal power series in \(\lambda\), \(m_{n}=m_{n}^{(0)}+\lambda m_{n}^{(1)}+\ldots\)
Let us assume that \(m_{n}^{(0)}=0\) for \(n\neq 2\) and \(m_{2}^{(0)}\) is a graded commutative multiplication on \(V^{(0)}\). Define \(p_{n}=\sum_{\sigma\in S_{n}}\chi(\sigma)\,m_{n}^{(1)}\circ\sigma\). It was demonstrated in [22] that \((V^{(0)},p_{n})\) is a P\({}_{\infty}\) algebra with \(\mu=m_{2}^{(0)}\). This means that quasiclassical limits of some formal A\({}_{\infty}\) algebras are P\({}_{\infty}\) algebras.
This result allows us to derive some no-go statements. Consider a graded vector space of formal power series of 0- and 1-forms on \(\mathbb{R}^{N}\), but now we want to interpret the 0-forms as scalar fields rather than gauge parameters. We are interested in an A\({}_{\infty}\) algebra which in the classical limit \(\lambda\to 0\) describes just this scalar field, i.e. the only non-vanishing \(m_{n}^{(0)}\) is \(m_{2}^{(0)}(f,g)=f\cdot g\). Thus, the condition from the previous paragraph is satisfied and we may try to construct a P\({}_{\infty}\) algebra. We additionally assume that
\[p_{2}(f,g)=\{f,g\}, \tag{66}\]
where \(\{\cdot\,,\cdot\}\) is an almost Poisson bracket. Since \(p_{1}\) is a derivation of \(m_{2}^{(0)}\), we have
\[p_{1}(f\cdot g)=m_{2}^{(0)}(p_{1}f,g)+m_{2}^{(0)}(f,p_{1}g).\]
The right hand side of this equation vanishes since \(m_{2}^{(0)}(f,A)=0\) for any \(A\). By setting \(g=1\) we obtain that \(p_{1}\) vanishes identically.
\[p_{2}(p_{2}(f,g),h)+p_{2}(p_{2}(g,h),f)+p_{2}(p_{2}(h,f),g)=0,\]
i.e. the almost Poisson bracket \(\{\cdot,\cdot\}\) has to satisfy the Jacobi identity.
The paper [22] (especially the Relative Formality Theorem) suggests that P\({}_{\infty}\) algebras are natural objects in the context of deformation quantization. Therefore, it makes sense to study other P\({}_{\infty}\) algebras even if they do not fit into the construction of quasiclassical limits of A\({}_{\infty}\) algebras described above. Our next example operates with the same graded vector space \(V\) and uses the same ansatz (66) for \(p_{2}(f,g)\). The product \(\mu\) is taken to be the usual product on the truncated de Rham complex. Namely,
\[\mu(f,g)=f\cdot g,\qquad\mu(f,A)=\mu(A,f)=f\cdot A. \tag{67}\]
We take also \(p_{1}(f)={\rm d}f\). No further conditions are imposed. This example is closer to our construction of L\({}_{\infty}\) algebras in the preceding sections.
The Leibniz rule for \(p_{2}(f,\cdot)\) yields
\[p_{2}(f,gA)=gp_{2}(f,A)+p_{2}(f,g)A. \tag{68}\]
Thus, \(p_{2}(f,A)\) can be represented as a sum of first-order and zeroth-order differential operators
\[p_{2}(f,A)_{k}={\cal L}_{2,0}(f)_{k}^{i}A_{i}+{\cal L}_{2,1}(f)_{k}^{ij} \partial_{i}A_{j}\,. \tag{69}\]
The condition (68) does not restrict \({\cal L}_{2,0}\) but fixes
\[{\cal L}_{2,1}(f)^{ij}_{k}=\delta^{j}_{k}(\partial_{l}f)P^{li}. \tag{70}\]
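As a quick consistency check, inserting (69) with (70) into the left-hand side of (68) gives
\[p_{2}(f,gA)_{k}=g\,{\cal L}_{2,0}(f)^{i}_{k}A_{i}+{\cal L}_{2,1}(f)^{ij}_{k}\big{(}\partial_{i}g\,A_{j}+g\,\partial_{i}A_{j}\big{)}=g\,p_{2}(f,A)_{k}+P^{li}\partial_{l}f\,\partial_{i}g\,A_{k}=g\,p_{2}(f,A)_{k}+p_{2}(f,g)A_{k}\,,\]
which is exactly the Leibniz rule (68).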
A further \(L_{\infty}\) condition reads
\[{\rm d}\{f,g\}=p_{2}({\rm d}f,g)+p_{2}(f,{\rm d}g). \tag{71}\]
The general solution to this equation is
\[{\cal L}_{2,0}(f)^{i}_{k}=-{{1\over 2}}\partial_{k}P^{ij} \partial_{j}f+Q^{ij}_{k}\partial_{j}f\,, \tag{72}\]
where \(Q^{ij}_{k}\) is any \(x\)-dependent function symmetric with respect to \(i\leftrightarrow j\). The conditions \({\cal J}_{3}(f,g,h)=0\) combined with the derivative conditions for \(p_{3}(f,g,A)\) can be easily solved yielding
\[p_{3}(f,g,A)=\left({{1\over 3}}\hat{J}^{ijk}_{3}+Q^{\prime}\;^{ ijk}\right)\partial_{i}f\cdot\partial_{j}g\cdot A_{k}, \tag{73}\]
where
\[\hat{J}^{ijk}_{3}=P^{il}\partial_{l}P^{jk}+P^{jl}\partial_{l}P^{ki}+P^{kl} \partial_{l}P^{ij} \tag{74}\]
is the jacobiator and \(Q^{\prime}\;^{ijk}\) is any tensor with the symmetry of the hook-shaped Young diagram.
Looking at the way \(Q\) and \(Q^{\prime}\) appear in the formulas one concludes that \(Q\) should be linear in \(P\) while \(Q^{\prime}\) should be quadratic in \(P\). There are no tensor structures with required symmetry properties of these orders in \(P\). Thus, in what follows we set \(Q=Q^{\prime}=0\).
The derivation condition
\[p_{3}(f,A,gB)=gp_{3}(f,A,B)+p_{3}(f,A,g)B \tag{75}\]
has the general solution
\[p_{3}(f,A,B)_{k}={\cal L}_{3,0}(f,A)^{i}_{k}B_{i}+{\cal L}_{3,1}(f,A)^{ij}_{k }\partial_{i}B_{j}\,, \tag{76}\]
where
\[{\cal L}_{3,1}(f,A)^{ij}_{k}={{1\over 3}}\delta^{j}_{k}\hat{J}^{ipl} \partial_{p}f\cdot A_{l}. \tag{77}\]
\({\cal L}_{3,0}\) is not restricted by the condition (75). Let us take the equation
\[0={\cal J}_{3}(f,g,A)=p_{1}(p_{3}(f,g,A))+p_{3}({\rm d}f,g,A)+p_{ 3}(f,{\rm d}g,A)\] \[+p_{2}(p_{2}(f,g),A)+p_{2}(p_{2}(A,f),g)+p_{2}(p_{2}(g,A),f), \tag{78}\]
fix a point \(x\), and choose \(A\) such that \(A(x)=0\). Then, at this point,
\[0={{1\over 3}}\hat{J}^{ijl}\partial_{i}f\cdot\partial_{j}g\cdot(\partial_{k}A_{l}-\partial_{l}A_{k})\,, \tag{79}\]
Since \(x\) is arbitrary, and the derivatives of \(A\) are arbitrary at this point, the jacobiator has to vanish everywhere.
We are led to conclude that the P\({}_{\infty}\) relations imply that the almost Poisson structure is Lie-Poisson.
Let us summarize the results of this section. We have considered two possible choices for the graded commutative product for two-term P\({}_{\infty}\) algebras with \(p_{2}(f,g)=\{f,g\}\) given by some almost Poisson bracket. For both choices, the Jacobi identity on this bracket necessarily follows from the P\({}_{\infty}\) relations. In that case, the construction of the corresponding P\({}_{\infty}\) algebra was given in [23]. We chose the graded commutative product \(\mu\) to be an undeformed pointwise product on some truncation of the de Rham complex, as such a product describes the classical geometry in some sense. Other choices for \(\mu\) may change our conclusions regarding the existence of the P\({}_{\infty}\) algebra.
## 6 Conclusion
Let us briefly summarize the main results of the paper. We proved the existence and derived inductive formulas for the structure maps of an L\({}_{\infty}\) algebra describing U\({}_{\star}(1)\) gauge transformations on an arbitrary noncommutative (and even nonassociative) space. The same is done with an L\({}_{\infty}\) module describing matter fields. We also attempted to include P\({}_{\infty}\) algebras in this approach and ended up in two no-go statements.
Our approach is rather general and the formulas obtained are very simple. In fact, due to the use of proper mathematical machinery, they look even simpler than particular results for a few lower order \(\ell_{n}\) existing in the literature [3, 23, 25]. (Of course, after bringing our formulas to an expanded component form they are equivalent.) This suggests that the method will be efficient for studying various extensions of the scheme considered here. One possible extension is the inclusion of the space of two-forms containing the field strength [1, 33]. Another extension is to noncommutative deformations of non-abelian gauge theories, which may help to overcome the restrictions found in [34].
Acknowledgements.This work was supported in parts by the Sao Paulo Research Foundation (FAPESP), grants 2021/09313-8 (V.K.), 2021/10128-0 (D.V.) and 2022/13596-8 (A.Sh), and by the National Council for Scientific and Technological Development (CNPq), grants 304130/2021-4 (V.K.) and 304758/2022-1 (D.V.). |
2309.03541 | Thiele's PIDE for unit-linked policies in the Heston-Hawkes stochastic
volatility model | The main purpose of the paper is to derive Thiele's differential equation for
unit-linked policies in the Heston-Hawkes stochastic volatility model
introduced in arXiv:2210.15343. This model is an extension of the well-known
Heston model that incorporates the volatility clustering feature by adding a
compound Hawkes process in the volatility. Since the model is arbitrage-free,
pricing unit-linked policies via the equivalence principle under a risk neutral
probability measure is possible. Studying the moments of the variance and
certain stochastic exponentials, a suitable family of risk neutral probability
measures is found. The established and practical method to compute reserves in
life insurance is by solving Thiele's equation, which is crucial to guarantee
the solvency of the insurance company. | David R. Baños, Salvador Ortiz-Latorre, Oriol Zamora Font | 2023-09-07T07:59:26Z | http://arxiv.org/abs/2309.03541v2 | # Thiele's PIDE for unit-linked policies in the Heston-Hawkes stochastic volatility model
###### Abstract
The main purpose of the paper is to derive Thiele's differential equation for unit-linked policies in the Heston-Hawkes stochastic volatility model presented in [13]. This model is an extension of the well-known Heston model that incorporates the volatility clustering feature by adding a compound Hawkes process in the volatility. Since the model is arbitrage-free, pricing unit-linked policies via the equivalence principle under \(\mathbb{Q}\) is possible. Some integrability conditions are checked and a suitable family of risk neutral probability measures is found to obtain Thiele's differential equation. The established and practical method to compute reserves in life insurance is by solving Thiele's equation, which is crucial to guarantee the solvency of the insurance company.
_Keywords:_ Thiele's equation, reserve, unit-linked policy, life insurance policy, equivalence principle, stochastic volatility, risk neutral measure, Hawkes process, volatility with jumps.
_AMS classification MSC2020:_ 60G55, 60H30, 91G05, 91G15.
## 1 Introduction
A life insurance policy is a contract that defines the evolution of the cash flow between the insurance company and the insured. In traditional life insurance contracts, the benefit provided to the insured if the pre-agreed conditions are met is fixed. In contrast, the distinctive property of unit-linked life insurance policies is that the benefit is linked to the market value of some fixed portfolio composed of stocks, bonds or other financial assets. Consequently, two types of risk prevail in unit-linked policies, the mortality risk that controls the future flow of payments between the two parties and the financial risk arising from the future returns on the financial investments.
The computation of the so-called reserve is the basis to guarantee the solvency of the insurance company against the two aforementioned risks. Namely, the reserve is the present value of future potential payments from the insurance company to the insured and is the ordinary procedure to determine the premiums. The established method to calculate the reserve is by solving the celebrated Thiele's differential equation, which dates back to 1875. It was first published in the obituary on Thiele in [31] and presented in the scientific paper [38] in 1913.
Literature on pricing unit-linked policies is vast and we mention here a non-exhaustive list. Brennan and Schwartz [16, 17] and Boyle and Schwartz [15] conducted, to the best of our knowledge, the first studies of unit-linked contracts using modern financial techniques. Delbaen [26] and Bacinello and Ortu [7] examined unit-linked contracts employing the martingale-based theory developed by Harrison and Kreps [33]. See also [1] where Aase and Persson revisit the principle
of equivalence under a risk neutral probability measure for the pricing of unit-linked policies after presenting some historical context on the topic.
It is acknowledged in life insurance that in the case of long-term insurance contracts interest rate risk plays an important role. To incorporate that risk into the valuation of life insurance policies, Norberg and Moller obtained Thiele's differential equation assuming stochastic interest rates in [41] and, later on, Persson derived the risk adjusted version of it in [45]. For more fundamental references about interest rate risk in unit-linked policies see [8, 10, 40, 44] and some further extensions and generalizations [3, 9, 12, 14].
It is worth pointing out that also policyholder behaviour has a significant impact in life insurance. For instance, the insured may surrender the contract, cancel all future cash flows and receive just a single payment. Similarly, the policyholder may cancel the future premiums and accept a reduction in the benefits. See [6, 18, 19, 25, 29, 32, 36, 46] for references where behavioural risk in life insurance policies is studied.
The scope of this paper is to study the financial risk in unit-linked policies coming from stock returns. Precisely, we derive Thiele's differential equation under the assumption that the benefit is linked to a stock that follows the Heston-Hawkes stochastic volatility model presented in [13]. This model is an extension of the well-known Heston model that incorporates the volatility clustering feature by adding a compound Hawkes process component in the volatility process. A Hawkes process is a self-exciting point process introduced in 1971 by Hawkes, see [34, 35], with many applications in high-frequency finance, insurance, seismology and other disciplines. It is worth mentioning that there is notable literature supporting the inclusion of jumps in the volatility, for instance, see [22, 27, 28, 42]. By exploiting the tractability of the model, it is shown in [13] that the Heston-Hawkes model is arbitrage-free and incomplete. Furthermore, the passage from the historical probability to the risk neutral one is made explicit via a family of equivalent martingale measures.
It is a common practice in actuarial science to consider two independent probability spaces, one to describe the states of the insured and the other one to describe the evolution of the financial investments. As explained in [1], the right methodology to price unit-linked policies is through the principle of equivalence under \(\mathbb{Q}\), a risk neutral probability measure in the financial probability space. On account of that, the arbitrage-free property of the stochastic volatility model proven in [13] is crucial and required to make sense of the pricing of unit-linked policies.
The paper is organized as follows. In Section 2 we summarize the Heston-Hawkes stochastic volatility model presented in [13]. Then, we give some preliminary results in Section 3 that are needed to obtain Thiele's differential equation. Essentially, there are several technical results regarding the compensator of the Hawkes process under the risk neutral probability measure. The proofs are postponed in the Appendix for the sake of clarity. Finally, in Section 4 we obtain the desired Thiele's differential equation for unit-linked policies under this stochastic volatility model.
## 2 Stochastic volatility model
In this section, we outline the Heston-Hawkes stochastic volatility model given in [13]. Let \(T\in\mathbb{R}\), \(T>0\) be a fixed time horizon. On a complete probability space \(\left(\Omega,\mathcal{A},\mathbb{P}\right)\), we consider a two-dimensional standard Brownian motion \(\left(B,W\right)=\left\{\left(B_{t},W_{t}\right),t\in\left[0,T\right]\right\}\) and its minimally augmented filtration \(\mathcal{F}^{\left(B,W\right)}=\left\{\mathcal{F}_{t}^{\left(B,W\right)},t\in \left[0,T\right]\right\}\). On \(\left(\Omega,\mathcal{A},\mathbb{P}\right)\), we also consider a Hawkes process \(N=\left\{N_{t},t\in\left[0,T\right]\right\}\) with stochastic intensity given by
\[\lambda_{t}=\lambda_{0}+\alpha\int_{0}^{t}e^{-\beta\left(t-s \right)}dN_{s},\]
or, equivalently,
\[d\lambda_{t}=-\beta(\lambda_{t}-\lambda_{0})dt+\alpha dN_{t}, \tag{2.1}\]
where \(\lambda_{0}>0\) is the initial intensity, \(\beta>0\) is the speed of mean reversion and \(\alpha\in(0,\beta)\) is the self-exciting factor. Note that the stability condition
\[\alpha\int_{0}^{\infty}e^{-\beta s}ds=\frac{\alpha}{\beta}<1,\]
holds. See [11, Section 2] and [37, Section 3.1.1] for the definition of \(N\). Then, we consider a sequence of i.i.d., strictly positive and integrable random variables \(\{J_{i}\}_{i\geqslant 1}\) and the compound Hawkes process \(L=\{L_{t},t\in[0,T]\}\) given by
\[L_{t}=\sum_{i=1}^{N_{t}}J_{i}.\]
We assume that \((B,W),N\) and \(\{J_{i}\}_{i\geqslant 1}\) are independent of each other. We write \(\mathcal{F}^{L}=\left\{\mathcal{F}^{L}_{t},t\in[0,T]\right\}\) for the minimally augmented filtration generated by \(L\) and
\[\mathcal{F}=\{\mathcal{F}_{t}=\mathcal{F}^{(B,W)}_{t}\vee\mathcal{F}^{L}_{t}, t\in[0,T]\},\]
for the joint filtration. We assume that \(\mathcal{A}=\mathcal{F}_{T}\) and we will work with \(\mathcal{F}\). Since \((B,W)\) and \(L\) are independent processes, \((B,W)\) is also a two-dimensional \((\mathcal{F},\mathbb{P})\)-Brownian motion.
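For illustration, the following Python sketch simulates one path of \(N\), \(\lambda\) and \(L\) by Ogata thinning, exploiting the exponential kernel in (2.1); the parameter values and the exponential law for the jumps \(J_{i}\) are illustrative assumptions, not calibrated quantities.

```python
import numpy as np

def simulate_compound_hawkes(lambda0, alpha, beta, T, jump_sampler, rng):
    """Ogata thinning for the Hawkes intensity (2.1) with exponential kernel,
    returning the jump times and i.i.d. marks J_i, so that L_t is their running sum."""
    t, lam = 0.0, lambda0
    times, marks = [], []
    while True:
        lam_bar = lam                        # the intensity decays between events, so this bounds it
        t_new = t + rng.exponential(1.0 / lam_bar)
        if t_new > T:
            break
        lam_new = lambda0 + (lam - lambda0) * np.exp(-beta * (t_new - t))   # decayed intensity
        t, lam = t_new, lam_new
        if rng.uniform() <= lam_new / lam_bar:   # accept the candidate jump
            times.append(t)
            marks.append(jump_sampler(rng))
            lam += alpha                         # self-excitation: each jump of N adds alpha to lambda
    return np.array(times), np.array(marks)

rng = np.random.default_rng(0)
jump_times, jump_sizes = simulate_compound_hawkes(
    lambda0=1.0, alpha=0.5, beta=1.0, T=1.0,
    jump_sampler=lambda r: r.exponential(0.1), rng=rng)
print(len(jump_times), jump_sizes.sum())         # N_T and L_T along this path
```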
We assume that the interest rate is deterministic and constant equal to \(r\). Finally, with all these ingredients, we introduce the Heston-Hawkes model. The stock price \(S=\{S_{t},t\in[0,T]\}\) and its variance \(v=\{v_{t},t\in[0,T]\}\) are given by
\[\frac{dS_{t}}{S_{t}} =\mu_{t}dt+\sqrt{v_{t}}\left(\sqrt{1-\rho^{2}}dB_{t}+\rho dW_{t} \right), \tag{2.2}\] \[dv_{t} =-\kappa\left(v_{t}-\bar{v}\right)dt+\sigma\sqrt{v_{t}}dW_{t}+ \eta dL_{t}, \tag{2.3}\]
where \(S_{0}>0\) is the initial price of the stock, \(\mu:[0,T]\to\mathbb{R}\) is a measurable and bounded function, \(\rho\in(-1,1)\) is the correlation factor, \(v_{0}>0\) is the initial value of the variance, \(\kappa>0\) is the variance's mean reversion speed, \(\bar{v}>0\) is the long-term variance, \(\sigma>0\) is the volatility of the variance and \(\eta>0\) is a scaling factor. We assume that the Feller condition \(2\kappa\bar{v}\geqslant\sigma^{2}\) is satisfied, see [4, Proposition 1.2.15]. For more details and further results on the model see [13].
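A simple Euler scheme (with full truncation of the variance before taking square roots, a standard but here merely illustrative discretization) can then be used to visualize paths of (2.2)-(2.3); the sketch below reuses the compound Hawkes sampler above, and all parameter values are again illustrative.

```python
import numpy as np

def heston_hawkes_euler(S0, v0, mu, rho, kappa, vbar, sigma, eta,
                        jump_times, jump_sizes, T, n_steps, rng):
    """Euler step for (2.2)-(2.3); the compound Hawkes jumps eta*J_i are added to v."""
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    S = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    S[0], v[0] = S0, v0
    dL = np.zeros(n_steps)                        # eta * jump mass falling in each grid interval
    for tj, J in zip(jump_times, jump_sizes):
        dL[min(int(tj / dt), n_steps - 1)] += eta * J
    for k in range(n_steps):
        vp = max(v[k], 0.0)                       # full truncation keeps the square roots well defined
        dB, dW = rng.normal(0.0, np.sqrt(dt), size=2)
        S[k + 1] = S[k] * (1.0 + mu(t[k]) * dt
                           + np.sqrt(vp) * (np.sqrt(1.0 - rho**2) * dB + rho * dW))
        v[k + 1] = v[k] - kappa * (vp - vbar) * dt + sigma * np.sqrt(vp) * dW + dL[k]
    return t, S, v

rng = np.random.default_rng(1)
jt, js = simulate_compound_hawkes(1.0, 0.5, 1.0, 1.0, lambda r: r.exponential(0.1), rng)
t, S, v = heston_hawkes_euler(S0=100.0, v0=0.04, mu=lambda s: 0.02, rho=-0.5,
                              kappa=2.0, vbar=0.04, sigma=0.3, eta=0.05,
                              jump_times=jt, jump_sizes=js, T=1.0, n_steps=1000, rng=rng)
```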
## 3 Preliminary results
Some preliminary results are needed to derive Thiele's PIDE under this stochastic volatility model. First, we summarize results proven in [13] regarding the existence and positivity of the variance process and the change of measure. Then, we study the existence of (positive and negative) moments of the variance and (positive) moments of the Radon-Nikodym derivative of the change of measure. To conclude the section, we prove that the compensator of the Hawkes process is the same under the historical and the risk neutral probability measures. At first glance, this may seem a straightforward result. However, independence between the Hawkes process and the Radon-Nikodym derivative is not obvious since the counting process is part of the latter. As a consequence, this result is not immediate and needs to be proven, requiring the aforementioned integrability results. For the sake of clarity, all the proofs of this section are postponed to the Appendix.
### Arbitrage-free property
Existence and positivity of the variance process are given in Proposition 3.1 and Proposition 3.2, respectively. All the results involving the change of measure are collected in Theorem 3.4.
**Proposition 3.1**.: _Equation (2.3) has a pathwise unique strong solution._
Proof.: See [13, Proposition 2.1].
**Proposition 3.2**.: _Let \(\widetilde{v}=\{\widetilde{v}_{t},t\in[0,T]\}\) be the pathwise unique strong solution of_
\[\widetilde{v}_{t}=v_{0}-\kappa\int_{0}^{t}\left(\widetilde{v}_{s}- \bar{v}\right)ds+\sigma\int_{0}^{t}\sqrt{\widetilde{v}_{s}}dW_{s}. \tag{3.1}\]
_Then,_
\[\mathbb{P}\left(\{\omega\in\Omega:\widetilde{v}_{t}(\omega)\leqslant v_{t}( \omega)\ \forall t\in[0,T]\}\right)=1,\]
_where \(v\) is the pathwise unique strong solution of (2.3). In particular, \(v\) is a strictly positive process._
Proof.: See [13, Proposition 2.2].
**Assumption 1**.: There exists \(\epsilon_{J}>0\) such that the moment generating function \(M_{J}(t)=\mathbb{E}[\exp(tJ_{1})]\) of \(J_{1}\) is well defined in \((-\infty,\epsilon_{J})\). Moreover, \((-\infty,\epsilon_{J})\) is the maximal domain in the sense that
\[\lim_{t\to\epsilon_{J}^{*}}M_{J}(t)=\infty.\]
Since \(\epsilon_{J}>0\), all positive moments of \(J_{1}\) are finite.
**Proposition 3.3**.: _For \(c\leqslant\frac{\kappa^{2}}{2\sigma^{2}}\), define \(D(c):=\sqrt{\kappa^{2}-2\sigma^{2}c}\), \(\Lambda(c):=\frac{2\eta c(e^{D(c)T}-1)}{D(c)-\kappa+(D(c)+\kappa)e^{D(c)T}}\) and_
\[c_{l}:=\sup\left\{c\leqslant\frac{\kappa^{2}}{2\sigma^{2}}:\Lambda(c)< \epsilon_{J}\ \ \text{and}\ \ M_{J}\left(\Lambda(c)\right)\leqslant\frac{\beta}{\alpha}\exp\left(\frac{ \alpha}{\beta}-1\right)\right\}.\]
_Then, \(c_{l}>0\) and for \(c<c_{l}\)_
\[\mathbb{E}\left[\exp\left(c\int_{0}^{T}v_{u}du\right)\right]<\infty.\]
Proof.: See [13, Lemma 3.1] and [13, Proposition 3.5].
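Proposition 3.3 defines \(c_{l}\) implicitly through \(D(c)\), \(\Lambda(c)\) and the moment generating function \(M_{J}\). The following sketch shows one possible way to approximate \(c_{l}\) numerically by scanning over \(c\); it assumes exponentially distributed jump sizes \(J_{1}\sim\mathrm{Exp}(\theta)\) (so that \(M_{J}(t)=\theta/(\theta-t)\) and \(\epsilon_{J}=\theta\), as in the example after Assumption 1), and all parameter values are purely illustrative.

```python
import numpy as np

def estimate_c_l(kappa=3.0, sigma=0.3, eta=0.1, alpha=0.5, beta=2.0,
                 theta=50.0, T=1.0, n_grid=100000):
    """Rough grid search for c_l in Proposition 3.3, assuming J_1 ~ Exp(theta).

    For exponential jump sizes M_J(t) = theta / (theta - t) with epsilon_J = theta.
    The scan assumes the admissible set of c is an interval starting at 0, which
    is a simplification; a more careful search may be needed in general.
    """
    def D(c):
        return np.sqrt(kappa**2 - 2.0 * sigma**2 * c)

    def Lam(c):
        d = D(c)
        return 2.0 * eta * c * (np.exp(d * T) - 1.0) / (d - kappa + (d + kappa) * np.exp(d * T))

    def M_J(t):
        return theta / (theta - t)

    bound = beta / alpha * np.exp(alpha / beta - 1.0)
    c_max = kappa**2 / (2.0 * sigma**2)
    best = 0.0
    for c in np.linspace(1e-8, c_max * (1.0 - 1e-9), n_grid):
        lam_c = Lam(c)
        if lam_c < theta and M_J(lam_c) <= bound:
            best = c
        else:
            break
    return best

if __name__ == "__main__":
    print("approximate c_l:", estimate_c_l())
```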
**Theorem 3.4**.: _Let \(a\in\mathbb{R}\) and define \(\theta_{t}^{(a)}:=\frac{1}{\sqrt{1-\rho^{2}}}\left(\frac{\mu_{t}-r}{\sqrt{v_{ t}}}-a\rho\sqrt{v_{t}}\right)\),_
\[Y_{t}^{(a)} :=\exp\left(-\int_{0}^{t}\theta_{u}^{(a)}dB_{u}-\frac{1}{2}\int_{ 0}^{t}(\theta_{u}^{(a)})^{2}du\right),\] \[Z_{t}^{(a)} :=\exp\left(-a\int_{0}^{t}\sqrt{v_{u}}dW_{u}-\frac{1}{2}a^{2}\int _{0}^{t}v_{u}du\right)\]
_and \(X_{t}^{(a)}:=Y_{t}^{(a)}Z_{t}^{(a)}\). Recall the definition of \(c_{l}\) in Proposition 3.3._
1. \(X^{(a)}\) _is a_ \((\mathcal{F},\mathbb{P})\)_-martingale for_ \(|a|<\sqrt{2c_{l}}\)_._
2. _The set_ \[\mathcal{E}:=\left\{\mathbb{Q}(a)\ \ \text{given by}\ \frac{d\mathbb{Q}(a)}{d \mathbb{P}}=X_{T}^{(a)}\ \ \text{with}\ \ |a|<\sqrt{2c_{l}}\right\}\] (3.2) _is a set of equivalent local martingale measures._
3. _Let_ \(\mathbb{Q}(a)\in\mathcal{E}\)_, the process_ \((B^{\mathbb{Q}(a)},W^{\mathbb{Q}(a)})\) _defined by_ \[dB_{t}^{\mathbb{Q}(a)} :=dB_{t}+\theta_{t}^{(a)}dt,\] \[dW_{t}^{\mathbb{Q}(a)} :=dW_{t}+a\sqrt{v_{t}}dt\] (3.3) _is a two-dimensional standard_ \((\mathcal{F},\mathbb{Q}(a))\)_-Brownian motion._
4. _Let_ \(\mathbb{Q}(a)\in\mathcal{E}\)_, the dynamics of_ \(S\) _and_ \(v\) _are given by_ \[\frac{dS_{t}}{S_{t}} =rdt+\sqrt{v_{t}}\left(\sqrt{1-\rho^{2}}dB_{t}^{\mathbb{Q}(a)}+\rho dW_{t}^{\mathbb{Q}(a)}\right),\] (3.4) \[dv_{t} =-\kappa^{(a)}(v_{t}-\bar{v}^{(a)})dt+\sigma\sqrt{v_{t}}dW_{t}^{\mathbb{Q}(a)}+\eta dL_{t},\] (3.5) _where_ \(\kappa^{(a)}=\kappa+a\sigma\) _and_ \(\bar{v}^{(a)}=\frac{\kappa\bar{v}}{\kappa+a\sigma}\)_._
5. _If_ \(\rho^{2}<c_{l}\)_, the set_ \[\mathcal{E}_{m}:=\left\{\mathbb{Q}(a)\in\mathcal{E}:|a|<\min\left\{\frac{\sqrt {2c_{l}}}{2},\sqrt{c_{l}-\rho^{2}}\right\}\right\}\] (3.6) _is a set of equivalent martingale measures._
Proof.: See [13, Theorem 3.6], [13, Observation 3.8] and [13, Theorem 3.9].
### Moments of the variance and the Radon-Nikodym derivative
First, we prove the existence and integrability of all positive moments of the variance process. Essentially, this is a consequence of the fact that all positive moments of the standard Heston variance and the compound Hawkes process exist (see Lemma A.1 in the Appendix).
**Lemma 3.5**.: _Let \(s\geqslant 1\). Then, \(\mathbb{E}\left[v_{t}^{s}\right]<\infty\) for all \(t\in[0,T]\) and \(\int_{0}^{T}\mathbb{E}\left[v_{t}^{s}\right]dt<\infty\)._
Proof.: See Lemma A.2 in the Appendix.
Now, we check the existence and integrability of the \(s\)th negative moment of the variance process under the condition \(2\kappa\bar{v}>s\sigma^{2}\). Note that the larger \(s\) is, the larger the fraction \(\frac{\kappa\bar{v}}{\sigma^{2}}\) must be, that is, the product of the mean reversion speed and the long-term variance divided by the squared volatility of the variance. Informally, the larger the fraction \(\frac{\kappa\bar{v}}{\sigma^{2}}\) is, the less likely it is that the variance approaches zero, which contributes to the existence of negative moments.
**Lemma 3.6**.: _Let \(s\geqslant 1\). If \(2\kappa\bar{v}>s\sigma^{2}\), \(\mathbb{E}\left[\frac{1}{v_{t}^{s}}\right]<\infty\) for all \(t\in[0,T]\) and \(\int_{0}^{T}\mathbb{E}\left[\frac{1}{v_{t}^{s}}\right]dt<\infty\)._
Proof.: See Lemma A.4 in the Appendix.
In addition to the positive and negative moments of the variance, we study the existence of positive moments of the Radon-Nikodym derivative \(\frac{d\mathbb{Q}(a)}{d\mathbb{P}}=X_{T}^{(a)}\), given in Theorem 3.4. The proof boils down to checking that expectations of the type
\[\mathbb{E}\left[\exp\left(A\int_{0}^{T}\frac{1}{v_{u}}du\right)\right]\ \ \text{and}\ \ \mathbb{E}\left[\exp\left(B(a)\int_{0}^{T}v_{u}du\right)\right]\]
are finite, where \(A\) is a constant independent of \(a\) and \(B(a)\) is a constant depending on \(a\). To ensure that the first expectation is finite we require the following two conditions on the model parameters: \(2\kappa\bar{v}>\sigma^{2}\) and \(\frac{1-\rho^{2}}{D(s^{2}-s)}\left(\frac{2\kappa\bar{v}-\sigma^{2}}{2\sigma}\right)^{2}>1\) (see Lemma A.5 in the Appendix), where \(s>1\) is the order of the moment that we want to study. To check that the second expectation is finite we use Proposition 3.3, obtaining the two conditions on \(a\) in (3.7).
**Lemma 3.7**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}\), \(s>1\), \(D:=\sup_{t\in[0,T]}(\mu_{t}-r)^{2}<\infty\) and \(X^{(a)}\) defined in Theorem 3.4. Assume that \(2\kappa\bar{v}>\sigma^{2}\) and \(\frac{1-\rho^{2}}{D(s^{2}-s)}\left(\frac{2\kappa\bar{v}-\sigma^{2}}{2\sigma} \right)^{2}>1\). Consider \(q_{2}\) such that \(1<q_{2}<\frac{1-\rho^{2}}{D(s^{2}-s)}\left(\frac{2\kappa\bar{v}-\sigma^{2}}{2 \sigma}\right)^{2}\) and define \(q_{1}:=\frac{q_{2}}{q_{2}-1}>1\). If_
\[|a|<\min\Bigg{\{}\frac{1}{q_{1}s}\sqrt{\frac{c_{l}}{2}},\sqrt{\frac{(1-\rho^{2} )c_{l}}{q_{1}s\left[2q_{1}s(1-\rho^{2})+\rho^{2}s-1\right]}}\Bigg{\}}, \tag{3.7}\]
_then_
\[\mathbb{E}\left[\left(X_{T}^{(a)}\right)^{s}\right]<\infty\quad\text{ and }\quad \mathbb{E}\left[\left(X_{t}^{(a)}\right)^{s}\right]\leqslant\left(\frac{s}{s-1} \right)^{s}\mathbb{E}\left[\left(X_{T}^{(a)}\right)^{s}\right]<\infty,\]
_for all \(t\in[0,T]\)._
Proof.: See Lemma A.6 in the Appendix.
**Observation 3.8**.: _Using that \(q_{1},s>1\) and that \(\rho^{2}<1\) one can check that the expression \(q_{1}s\left[2q_{1}s(1-\rho^{2})+\rho^{2}s-1\right]\) appearing in the second expression inside the minimum (3.7) is strictly positive. Indeed,_
\[2q_{1}s(1-\rho^{2})+\rho^{2}s-1>2(1-\rho^{2})+\rho^{2}-1=1-\rho^{2}>0.\]
The existence of the two following moments is required to prove that the compensator of \(N\) is the same under the historic probability measure and a suitable family of risk neutral probability measures:
\[\mathbb{E}[(X_{T}^{(a)})^{2+\varepsilon_{1}}]<\infty\quad\text{ and }\quad\mathbb{E}\left[\left(\frac{1}{v_{t}}\right)^{1+\varepsilon_{2}}\right]<\infty,\]
where \(\varepsilon_{1},\varepsilon_{2}>0\) are arbitrarily small. To ensure that the previous expectations are finite we assume some conditions on the model parameters and define a suitable family of risk neutral probability measures.
**Assumption 2**.: We fix \(\varepsilon_{1},\varepsilon_{2}>0\) and assume that
1. \(\rho^{2}<c_{l}\).
2. \(\frac{1-\rho^{2}}{D[(2+\varepsilon_{1})^{2}-(2+\varepsilon_{1})]}\left(\frac {2\kappa\bar{v}-\sigma^{2}}{2\sigma}\right)^{2}>1\).
3. \(2\kappa\bar{v}>(1+\varepsilon_{2})\sigma^{2}\).
**Definition 3.9**.: _Let \(q,s>1\), we define a subset of \(\mathcal{E}_{m}\) by_
\[\mathcal{E}_{m}(q,s):=\left\{\mathbb{Q}(a)\in\mathcal{E}_{m}:|a|<\min\left\{ \frac{1}{qs}\sqrt{\frac{c_{l}}{2}},\sqrt{\frac{(1-\rho^{2})c_{l}}{qs\left[2qs (1-\rho^{2})+\rho^{2}s-1\right]}}\right\}\right\}.\]
From now on, we fix \(Q_{2}\) such that \(1<Q_{2}<\frac{1-\rho^{2}}{D[(2+\varepsilon_{1})^{2}-(2+\varepsilon_{1})]} \left(\frac{2\kappa\bar{v}-\sigma^{2}}{2\sigma}\right)^{2}\) and \(Q_{1}:=\frac{Q_{2}}{Q_{2}-1}>1\).
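As an illustration of how Assumption 2 and Definition 3.9 interact, the sketch below checks conditions 1-3 of Assumption 2 for a given parameter set and then evaluates the bound on \(|a|\) that defines \(\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\). The value of \(c_{l}\), the bound \(D=\sup_{t\in[0,T]}(\mu_{t}-r)^{2}\), the particular choice of \(Q_{2}\) and all numerical inputs are assumptions supplied for the example; nothing here is prescribed by the paper beyond the formulas themselves.

```python
import numpy as np

def check_assumption_2(c_l, D, kappa, v_bar, sigma, rho, eps1=0.1, eps2=0.1):
    """Check conditions 1-3 of Assumption 2 for user-supplied parameters."""
    s = 2.0 + eps1
    lhs = (1.0 - rho**2) / (D * (s**2 - s)) * ((2.0 * kappa * v_bar - sigma**2) / (2.0 * sigma))**2
    cond1 = rho**2 < c_l
    cond2 = lhs > 1.0
    cond3 = 2.0 * kappa * v_bar > (1.0 + eps2) * sigma**2
    return cond1, cond2, cond3

def a_bound(c_l, D, kappa, v_bar, sigma, rho, eps1=0.1):
    """Bound on |a| defining E_m(Q_1, 2 + eps1) in Definition 3.9.

    Q_2 is chosen (arbitrarily) as the midpoint between 1 and its upper limit,
    Q_1 = Q_2 / (Q_2 - 1) and s = 2 + eps1.  Assumption 2 is taken for granted.
    """
    s = 2.0 + eps1
    upper = (1.0 - rho**2) / (D * (s**2 - s)) * ((2.0 * kappa * v_bar - sigma**2) / (2.0 * sigma))**2
    Q2 = 0.5 * (1.0 + upper)
    Q1 = Q2 / (Q2 - 1.0)
    b1 = (1.0 / (Q1 * s)) * np.sqrt(c_l / 2.0)
    b2 = np.sqrt((1.0 - rho**2) * c_l / (Q1 * s * (2.0 * Q1 * s * (1.0 - rho**2) + rho**2 * s - 1.0)))
    # membership in E_m itself additionally requires |a| < min(sqrt(2 c_l)/2, sqrt(c_l - rho^2))
    b3 = min(np.sqrt(2.0 * c_l) / 2.0, np.sqrt(c_l - rho**2))
    return min(b1, b2, b3)

if __name__ == "__main__":
    params = dict(c_l=0.8, D=0.0025, kappa=3.0, v_bar=0.04, sigma=0.3, rho=-0.5)
    print("Assumption 2 holds:", check_assumption_2(**params))
    print("admissible |a| <", a_bound(**params))
```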
**Observation 3.10**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\). By Lemma 3.6, Lemma 3.7 and Assumption 2 the following holds_
\[\mathbb{E}[(X_{T}^{(a)})^{2+\varepsilon_{1}}]<\infty,\ \ \mathbb{E}\left[\left(\frac{1}{v_{t}}\right)^{1+ \varepsilon_{2}}\right]<\infty\ \ \text{and}\ \ \int_{0}^{T}\mathbb{E}\left[\left(\frac{1}{v_{t}}\right)^{1+ \varepsilon_{2}}\right]dt<\infty.\]
### Compensator of \(N\) under the historic and the risk neutral measures
First, we give the compensators of \(N\) and \(L\) under \(\mathbb{P}\). By definition, the compensator \(\Lambda^{N}\) is a \(\mathcal{F}^{N}\)-predictable process such that \(N-\Lambda^{N}\) is a \((\mathcal{F}^{N},\mathbb{P})\)-local martingale. Using the independence between \((B,W)\) and \(N\) one can prove that the martingale property of \(N-\Lambda^{N}\) is preserved when considering the joint filtration \(\mathcal{F}=\{\mathcal{F}_{t}=\mathcal{F}_{t}^{(B,W)}\vee\mathcal{F}_{t}^{L},t\in[0,T]\}\).
**Lemma 3.11**.: _The following holds_
1. _Define_ \(\Lambda^{N}_{t}:=\int_{0}^{t}\lambda_{u}du\)_, then_ \(N-\Lambda^{N}\) _is a square integrable_ \((\mathcal{F},\mathbb{P})\)_-martingale._
2. _Define_ \(\Lambda_{t}^{L}:=\mathbb{E}[J_{1}]\int_{0}^{t}\lambda_{u}du\)_, then_ \(L-\Lambda^{L}\) _is a square integrable_ \((\mathcal{F},\mathbb{P})\)_-martingale._
Proof.: See Lemma A.7 in the Appendix.
Finally, we prove that the compensators of \(N\) and \(L\) under \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\) are the same as under \(\mathbb{P}\). As previously mentioned, this would be a straightforward result if there were independence between the Hawkes process \(N\) and the Radon-Nikodym derivative \(X^{(a)}\). However, this is not clear and we require the existence of the moments in Observation 3.10.
**Proposition 3.12**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\), then_
1. \(N-\Lambda^{N}\) _is a_ \((\mathcal{F},\mathbb{Q}(a))\)_-martingale._
2. \(L-\Lambda^{L}\) _is a_ \((\mathcal{F},\mathbb{Q}(a))\)_-martingale._
Proof.: See Proposition A.9 in the Appendix.
## 4 Derivation of Thiele's PIDE for unit-linked policies
The objective of this section is to derive Thiele's differential equation for unit-linked policies under the Heston-Hawkes stochastic volatility model. First, we find the drift of a process of the form \(t\to Z(t,S_{t},v_{t},\lambda_{t})\) under a risk neutral probability measure, where \(Z\) is a regular enough function. To do so, the compensator of \(N\) under \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\) is needed. In order to lighten the notation, we first define some spaces of functions. From now on, \(\mathbb{R}_{+}:=(0,\infty)\).
**Definition 4.1**.: _We define \(\mathcal{D}:=\mathbb{R}_{+}^{2}\times[\lambda_{0},\infty)\) and \(\mathcal{C}^{1,2}:=\mathcal{C}^{1,2}\left([0,T]\times\mathcal{D}\right)\) the space of functions \(Y\colon[0,T]\times\mathcal{D}\to\mathbb{R}_{+}\) that are jointly continuous, continuously differentiable on the first variable, twice continuously differentiable on the last three variables and all derivatives are jointly continuous._
_We define \(\mathcal{C}^{0,1,2}:=\mathcal{C}^{0,1,2}\left([0,T]^{2}\times\mathcal{D}\right)\) the space of functions \(Z\colon[0,T]^{2}\times\mathcal{D}\to\mathbb{R}_{+}\) that are jointly continuous, continuous on the first variable, continuously differentiable on the second variable, twice continuously differentiable on the last three variables and all derivatives are jointly continuous._
Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}\) and let \(\varphi\colon[0,T]\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a payoff such that \(\mathbb{E}^{\mathbb{Q}(a)}[|\varphi(s,S_{s})|]<\infty\) for all \(s\in[0,T]\). Recall that the price at time \(t\in[0,T]\) of the payoff function \(\varphi(s,S_{s})\) with maturity \(s\in[t,T]\) is given by \(e^{-r(s-t)}\mathbb{E}^{\mathbb{Q}(a)}[\varphi(s,S_{s})|\mathcal{F}_{t}]\). One example of such a payoff is \(\varphi(s,S_{s})=\max\{G,S_{s}\}\), where \(G\) is called the guaranteed amount. This means that at time \(s\) the insured is paid the maximum between the guaranteed amount and the stock price.
As a consequence of the Markov property of the process \((N,\lambda)\), see [39, Remark 1.22], we prove in the next lemma that \(\mathbb{E}^{\mathbb{Q}(a)}[\varphi(s,S_{s})|\mathcal{F}_{t}]\) is a deterministic function of the joint process \((t,S_{t},v_{t},\lambda_{t})\). Due to the presence of jumps in \(v\) and \(\lambda\), such a function is the solution of a partial integro-differential equation, PIDE from now on. By applying Ito's formula and using the compensators of \(N\) and \(L\) under \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\) we obtain this PIDE.
**Lemma 4.2**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\), \(\varphi\colon[0,T]\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that \(\mathbb{E}^{\mathbb{Q}(a)}[|\varphi(s,S_{s})|]<\infty\) for all \(s\in[0,T]\). Then, there exists a function \(Z^{\varphi,a}\colon[0,T]^{2}\times\mathcal{D}\to\mathbb{R}_{+}\) such that_
\[\mathbb{E}^{\mathbb{Q}(a)}[\varphi(s,S_{s})|\mathcal{F}_{t}]=Z^{\varphi,a}_{s} (t,S_{t},v_{t},\lambda_{t}).\]
_where \(s,t\in[0,T]\). Note that \(Z^{\varphi,a}_{s}(t,x,y,z)=\varphi(s,x)\) for \(t\in[s,T]\)._
_Furthermore, fix \(s\in[0,T]\), if \(Z^{\varphi,a}_{s}\in\mathcal{C}^{1,2}\), it satisfies the following PIDE_
\[\partial_{t}Z^{\varphi,a}_{s}(t,x,y,z)+rx\partial_{x}Z^{\varphi,a}_{s}(t,x,y, z)-\kappa^{(a)}(y-\bar{v}^{(a)})\partial_{y}Z^{\varphi,a}_{s}(t,x,y,z)\]
\[-\beta(z-\lambda_{0})\partial_{z}Z^{\varphi,a}_{s}(t,x,y,z)+\frac{1}{2}x^{2}y \partial_{xx}^{2}Z^{\varphi,a}_{s}(t,x,y,z)+\frac{1}{2}\sigma^{2}y\partial_{yy }^{2}Z^{\varphi,a}_{s}(t,x,y,z)\]
\[+\sigma\rho xy\partial_{xy}^{2}Z^{\varphi,a}_{s}(t,x,y,z)+z\int_{(0,\infty)} \left[Z^{\varphi,a}_{s}(t,x,y+\eta u,z+\alpha)-Z^{\varphi,a}_{s}(t,x,y,z)\right] P_{J_{1}}(du)=0, \tag{4.1}\]
_for \((t,x,y,z)\in[0,s]\times\mathcal{D}\) and final condition \(Z^{\varphi,a}_{s}(s,x,y,z)=\varphi(s,x)\)._
Proof.: By definition \(\mathcal{F}_{t}=\mathcal{F}_{t}^{(B,W)}\vee\mathcal{F}_{t}^{L}\) and since \(S\) and \(v\) are strong solutions, we have \(\mathcal{F}_{t}^{(B,W)}\vee\mathcal{F}_{t}^{L}=\mathcal{F}_{t}^{S}\vee\mathcal{ F}_{t}^{v}\). Moreover, since \(\mathcal{F}_{t}^{\Lambda}\subset\mathcal{F}_{t}^{v}\) we have that \(\mathcal{F}_{t}=\mathcal{F}_{t}^{S}\vee\mathcal{F}_{t}^{v}\vee\mathcal{F}_{t} ^{\Lambda}\). Since \((N,\lambda)\) is a Markov process, see [39, Remark 1.22], \((S,v,\lambda)\) is also a Markov process. We conclude that there exists a function \(Z^{\varphi,a}\colon[0,T]^{2}\times\mathcal{D}\to\mathbb{R}_{+}\) such that
\[\mathbb{E}^{\mathbb{Q}(a)}[\varphi(s,S_{s})|\mathcal{F}_{t}]=Z_{s}^{\varphi,a }(t,S_{t},v_{t},\lambda_{t}),\]
where \(s,t\in[0,T]\).
Moreover, fix \(s\in[0,T]\), if \(Z_{s}^{\varphi,a}\in\mathcal{C}^{1,2}\) we can apply Ito formula to the process \(t\to Z_{s}^{\varphi,a}(t,S_{t},v_{t},\lambda_{t})\) for \(t\in[0,s]\). For convenience, we define \(Y_{t}:=(t,S_{t},v_{t},\lambda_{t})\). Applying Ito formula to \(Z_{s}^{\varphi,a}\) we get
\[Z_{s}^{\varphi,a}(Y_{t})= Z_{s}^{\varphi,a}(Y_{0})+\int_{0}^{t}\partial_{t}Z_{s}^{\varphi,a}(Y_{u-})du+\int_{0}^{t}\partial_{x}Z_{s}^{\varphi,a}(Y_{u-})dS_{u}+\int_{0}^{t}\partial_{y}Z_{s}^{\varphi,a}(Y_{u-})dv_{u}\] \[+\int_{0}^{t}\partial_{z}Z_{s}^{\varphi,a}(Y_{u-})d\lambda_{u}+\frac{1}{2}\int_{0}^{t}\partial_{xx}^{2}Z_{s}^{\varphi,a}(Y_{u-})d[S]_{u}+\frac{1}{2}\int_{0}^{t}\partial_{yy}^{2}Z_{s}^{\varphi,a}(Y_{u-})d[v]_{u}^{\mathrm{c}}\] \[+\frac{1}{2}\int_{0}^{t}\partial_{zz}^{2}Z_{s}^{\varphi,a}(Y_{u-})d[\lambda]_{u}^{\mathrm{c}}+\int_{0}^{t}\partial_{xy}^{2}Z_{s}^{\varphi,a}(Y_{u-})d[S,v]_{u}^{\mathrm{c}}+\int_{0}^{t}\partial_{xz}^{2}Z_{s}^{\varphi,a}(Y_{u-})d[S,\lambda]_{u}^{\mathrm{c}}\] \[+\int_{0}^{t}\partial_{yz}^{2}Z_{s}^{\varphi,a}(Y_{u-})d[v,\lambda]_{u}^{\mathrm{c}}\] \[+\sum_{0<u\leqslant t}\left[Z_{s}^{\varphi,a}(Y_{u})-Z_{s}^{\varphi,a}(Y_{u-})-\partial_{y}Z_{s}^{\varphi,a}(Y_{u-})\Delta v_{u}-\partial_{z}Z_{s}^{\varphi,a}(Y_{u-})\Delta\lambda_{u}\right]. \tag{4.2}\]
Recall that the dynamics of \(S\) is given in (3.4), the dynamics of \(v\) in (3.5) and the dynamics of \(\lambda\) in (2.1). Then,
\[d[S]_{u} =S_{u}^{2}v_{u}du,\] \[d[v]_{u} =\sigma^{2}v_{u}du+\eta^{2}d[L]_{u}\implies d[v]_{u}^{\mathrm{c} }=\sigma^{2}v_{u}du,\] \[[\lambda]_{u} =\alpha^{2}[N]_{u}=\alpha^{2}N_{u}\implies[\lambda]_{u}^{\mathrm{ c}}=0,\] \[d[S,v]_{u} =\sigma\rho S_{u}v_{u}du,\] \[[S,\lambda]_{u} =0,\] \[[v,\lambda]_{u} =\alpha\eta[L,N]_{u}=\alpha\eta L_{u}\implies[v,\lambda]_{u}^{ \mathrm{c}}=0.\]
Note that \(\Delta v_{u}=\eta\Delta L_{u}\) and \(\Delta\lambda_{u}=\alpha\Delta N_{u}\). Replacing everything in (4.2) we have
\[Z_{s}^{\varphi,a}(Y_{t})= Z_{s}^{\varphi,a}(Y_{0})+\int_{0}^{t}\Big{[}\partial_{t}Z_{s}^{ \varphi,a}(Y_{u})+rS_{u}\partial_{x}Z_{s}^{\varphi,a}(Y_{u})-\kappa^{(a)}(v_ {u}-\bar{v}^{(a)})\partial_{y}Z_{s}^{\varphi,a}(Y_{u})\] \[-\beta(\lambda_{u}-\lambda_{0})\partial_{z}Z_{s}^{\varphi,a}(Y_{ u})+\frac{1}{2}S_{u}^{2}v_{u}\partial_{xx}^{2}Z_{s}^{\varphi,a}(Y_{u})+\frac{1}{2} \sigma^{2}v_{u}\partial_{yy}^{2}Z_{s}^{\varphi,a}(Y_{u})\] \[+\sigma\rho S_{u}v_{u}\partial_{xy}^{2}Z_{s}^{\varphi,a}(Y_{u}) \Big{]}du+\sqrt{1-\rho^{2}}\int_{0}^{t}S_{u}\sqrt{v_{u}}\partial_{x}Z_{s}^{ \varphi,a}(Y_{u-})dB_{u}^{\mathbb{Q}(a)}\] \[+\int_{0}^{t}\left[\rho S_{u}\sqrt{v_{u}}\partial_{x}Z_{s}^{ \varphi,a}(Y_{u-})+\sigma\sqrt{v_{u}}\partial_{y}Z_{s}^{\varphi,a}(Y_{u-}) \right]dW_{u}^{\mathbb{Q}(a)}\] \[+\eta\int_{0}^{t}\partial_{y}Z_{s}^{\varphi,a}(Y_{u-})dL_{u}+ \alpha\int_{0}^{t}\partial_{z}Z_{s}^{\varphi,a}(Y_{u-})dN_{u}\] \[+\sum_{0<u\leqslant t}\left[Z_{s}^{\varphi,a}(Y_{u})-Z_{s}^{ \varphi,a}(Y_{u-})-\eta\partial_{y}Z_{s}^{\varphi,a}(Y_{u-})\Delta L_{u}- \alpha\partial_{z}Z_{s}^{\varphi,a}(Y_{u-})\Delta N_{u}\right]. \tag{4.3}\]
Note that
\[\int_{0}^{t}\partial_{y}Z_{s}^{\varphi,a}(Y_{u-})dL_{u}=\sum_{0<u \leqslant t}\partial_{y}Z_{s}^{\varphi,a}(Y_{u-})\Delta L_{u}\]
\[\int_{0}^{t}\partial_{z}Z_{s}^{\varphi,a}(Y_{u-})dN_{u}=\sum_{0<u\leqslant t} \partial_{z}Z_{s}^{\varphi,a}(Y_{u-})\Delta N_{u}.\]
Next, we can write
\[\sum_{0<u\leqslant t}[Z_{s}^{\varphi,a}(Y_{u})-Z_{s}^{\varphi,a}(Y_{u-})] =\sum_{0<u\leqslant t}[Z_{s}^{\varphi,a}(u,S_{u},v_{u-}+\Delta v_{u},\lambda_{u-}+\Delta\lambda_{u})-Z_{s}^{\varphi,a}(Y_{u-})]\] \[=\sum_{0<u\leqslant t}[Z_{s}^{\varphi,a}(u,S_{u},v_{u-}+\eta\Delta L_{u},\lambda_{u-}+\alpha\Delta N_{u})-Z_{s}^{\varphi,a}(Y_{u-})]\] \[=\sum_{0<u\leqslant t}g_{s}^{\varphi,a}(u,\Delta L_{u},\Delta N_{u}),\]
where
\[g_{s}^{\varphi,a}(u,b_{1},b_{2}):=Z_{s}^{\varphi,a}(u,S_{u},v_{u-}+\eta b_{1}, \lambda_{u-}+\alpha b_{2})-Z_{s}^{\varphi,a}(u,S_{u},v_{u-},\lambda_{u-}).\]
We now define \(M_{u}=(L_{u},N_{u})\) and, for \(t\in[0,T]\) and \(A\in\mathcal{B}(\mathbb{R}^{2}\setminus\{(0,0)\})\),
\[N^{M}(t,A)=\#\{0<s\leqslant t,\Delta M_{s}\in A\}.\]
We add and subtract the compensator of the counting measure \(N^{M}\) to split the expression into a \((\mathcal{F},\mathbb{Q}(a))\)-local martingale plus a predictable process of finite variation. By Proposition 3.12, the compensators of \(N\) and \(L\) under \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\) are \(\Lambda_{t}^{N}=\int_{0}^{t}\lambda_{u}du\) and \(\Lambda_{t}^{L}=\mathbb{E}[J_{1}]\int_{0}^{t}\lambda_{u}du\) respectively. Thus,
\[\sum_{0<u\leqslant t}[Z_{s}^{\varphi,a}(Y_{u})-Z_{s}^{\varphi,a} (Y_{u-})] =\int_{0}^{t}\int_{(0,\infty)^{2}}g_{s}^{\varphi,a}(u,b_{1},b_{2})N ^{M}(du,db)\] \[=\int_{0}^{t}\int_{(0,\infty)^{2}}g_{s}^{\varphi,a}(u,b_{1},b_{2}) \left(N^{M}(du,db)-\lambda_{u}P_{J_{1}}(db_{1})\delta_{1}(db_{2})du\right)\] \[\quad+\int_{0}^{t}\lambda_{u}\int_{(0,\infty)}g_{s}^{\varphi,a}(u,b_{1},1)P_{J_{1}}(db_{1})du.\]
Replacing everything in (4.3) we finally get
\[Z_{s}^{\varphi,a}(Y_{t})= \ Z_{s}^{\varphi,a}(Y_{0})+\int_{0}^{t}\Big{[}\partial_{t}Z_{s}^ {\varphi,a}(Y_{u})+rS_{u}\partial_{x}Z_{s}^{\varphi,a}(Y_{u})-\kappa^{(a)}(v_{ u}-\bar{v}^{(a)})\partial_{y}Z_{s}^{\varphi,a}(Y_{u})\] \[-\beta(\lambda_{u}-\lambda_{0})\partial_{z}Z_{s}^{\varphi,a}(Y_{u })+\frac{1}{2}S_{u}^{2}v_{u}\partial_{xx}^{2}Z_{s}^{\varphi,a}(Y_{u})+\frac{1} {2}\sigma^{2}v_{u}\partial_{yy}^{2}Z_{s}^{\varphi,a}(Y_{u})\] \[+\sigma\rho S_{u}v_{u}\partial_{xy}^{2}Z_{s}^{\varphi,a}(Y_{u})+ \lambda_{u}\int_{(0,\infty)}g_{s}^{\varphi,a}(u,b_{1},1)P_{J_{1}}(db_{1})\Big{]}du\] \[+\sqrt{1-\rho^{2}}\int_{0}^{t}S_{u}\sqrt{v_{u}}\partial_{x}Z_{s}^ {\varphi,a}(Y_{u-})dB_{u}^{\mathbb{Q}(a)}\] \[+\int_{0}^{t}[\rho S_{u}\sqrt{v_{u}}\partial_{x}Z_{s}^{\varphi,a} (Y_{u-})+\sigma\sqrt{v_{u}}\partial_{y}Z_{s}^{\varphi,a}(Y_{u-})]\,dW_{u}^{ \mathbb{Q}(a)}\] \[+\int_{0}^{t}\int_{(0,\infty)^{2}}g_{s}^{\varphi,a}(u,b_{1},b_{2} )\left(N^{M}(du,db)-\lambda_{u}P_{J_{1}}(db_{1})\delta_{1}(db_{2})du\right). \tag{4.4}\]
Recall that \(t\to Z_{s}^{\varphi,a}(Y_{t})\) is a \((\mathcal{F},\mathbb{Q}(a))\)-martingale and note that the last three terms in (4.4) are \((\mathcal{F},\mathbb{Q}(a))\)-local martingales. Moving these three terms to the left hand side we see that a local martingale is equal to a continuous process of finite variation. This implies that the integral of the drift is \(0\) on every interval \([0,t]\subset[0,T]\) and that the sum of the last three terms in (4.4) is
a \((\mathcal{F},\mathbb{Q}(a))\)-martingale. As a consequence, the drift is constant equal to \(0\) and \(Z_{s}^{\varphi,a}\) satisfies the following PIDE
\[\partial_{t}Z_{s}^{\varphi,a}(t,x,y,z)+rx\partial_{x}Z_{s}^{\varphi,a}(t,x,y,z)-\kappa^{(a)}(y-\bar{v}^{(a)})\partial_{y}Z_{s}^{\varphi,a}(t,x,y,z)\] \[-\beta(z-\lambda_{0})\partial_{z}Z_{s}^{\varphi,a}(t,x,y,z)+\frac {1}{2}x^{2}y\partial_{xx}^{2}Z_{s}^{\varphi,a}(t,x,y,z)+\frac{1}{2}\sigma^{2}y \partial_{yy}^{2}Z_{s}^{\varphi,a}(t,x,y,z)\] \[+\sigma\rho xy\partial_{xy}^{2}Z_{s}^{\varphi,a}(t,x,y,z)+z\int_ {(0,\infty)}\left[Z_{s}^{\varphi,a}(t,x,y+\eta u,z+\alpha)-Z_{s}^{\varphi,a}(t,x,y,z)\right]P_{J_{1}}(du)=0,\]
for \((t,x,y,z)\in[0,s]\times\mathcal{D}\) and final condition \(Z_{s}^{\varphi,a}(s,x,y,z)=\varphi(s,x)\). Note that \(\mathcal{D}\) is the support of the process \((S_{t},v_{t},\lambda_{t})\).
**Definition 4.3**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\), \(f\in\mathcal{C}^{1,2}\) satisfying_
\[\int_{(0,\infty)}|f(t,x,y+\eta u,z+\alpha)|P_{J_{1}}(du)<\infty. \tag{4.5}\]
_We define the following partial integro-differential operator \(\mathcal{L}^{a}\) by_
\[\mathcal{L}^{a}f(t,x,y,z):= \ rx\partial_{x}f(t,x,y,z)-\kappa^{(a)}(y-\bar{v}^{(a)})\partial_ {y}f(t,x,y,z)-\beta(z-\lambda_{0})\partial_{z}f(t,x,y,z)\] \[+\frac{1}{2}x^{2}y\partial_{xx}^{2}f(t,x,y,z)+\frac{1}{2}\sigma^{ 2}y\partial_{yy}^{2}f(t,x,y,z)+\sigma\rho xy\partial_{xy}^{2}f(t,x,y,z)\] \[+z\int_{(0,\infty)}\left[f(t,x,y+\eta u,z+\alpha)-f(t,x,y,z) \right]P_{J_{1}}(du).\]
_Since all positive moments of \(J_{1}\) exist (see Assumption 1), condition (4.5) automatically holds if \(f\) has polynomial growth in the third variable, that is, if \(f(t,x,y,z)\leqslant\sum_{i=0}^{n}C_{i}(t,x,z)y^{i}\) where the constants \(C_{i}(t,x,z)\) do not depend on \(y\). More generally, if \(|f(t,x,y+\eta u,z+\alpha)|\leqslant C(t,x,y,z+\alpha)e^{cu}\) with \(c<\epsilon_{J}\), then condition (4.5) holds._
**Observation 4.4**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\) and \(f\in\mathcal{C}^{1,2}\). Note that we have proved in Lemma 4.2 that the drift of the Ito differential of \(f(t,S_{t},v_{t},\lambda_{t})\) is \(\partial_{t}f(t,S_{t},v_{t},\lambda_{t})+\mathcal{L}^{a}f(t,S_{t},v_{t}, \lambda_{t})\)._
**Observation 4.5**.: _Note that the PIDE in (4.1) is just_
\[\partial_{t}Z_{s}^{\varphi,a}(t,x,y,z)+\mathcal{L}^{a}Z_{s}^{\varphi,a}(t,x,y, z)=0. \tag{4.6}\]
_for \((t,x,y,z)\in[0,s]\times\mathcal{D}\) and final condition \(Z_{s}^{\varphi,a}(s,x,y,z)=\varphi(s,x)\)_
From the PIDE obtained in Lemma 4.2 we derive the PIDE that satisfies \(e^{-r(s-t)}\mathbb{E}^{\mathbb{Q}(a)}[\varphi(s,S_{s})|\mathcal{F}_{t}]\), which is the price at time \(t\) of the payoff \(\varphi(s,S_{s})\) where \(0\leqslant t\leqslant s\leqslant T\).
**Lemma 4.6**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\), \(\varphi\colon[0,T]\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that \(\mathbb{E}^{\mathbb{Q}(a)}[|\varphi(s,S_{s})|]<\infty\) for all \(s\in[0,T]\). Then, there exists a function \(U^{\varphi,a}\colon[0,T]^{2}\times\mathcal{D}\to\mathbb{R}_{+}\) such that_
\[e^{-r(s-t)}\mathbb{E}^{\mathbb{Q}(a)}[\varphi(s,S_{s})|\mathcal{F}_{t}]=U_{s} ^{\varphi,a}(t,S_{t},v_{t},\lambda_{t}) \tag{4.7}\]
_where \(s,t\in[0,T]\). Note that \(U_{s}^{\varphi,a}(t,x,y,z)=e^{-r(s-t)}\varphi(s,x)\) for \(t\in[s,T]\). Furthermore, fix \(s\in[0,T]\), if \(U_{s}^{\varphi,a}\in\mathcal{C}^{1,2}\), it satisfies the following PIDE_
\[\partial_{t}U_{s}^{\varphi,a}(t,x,y,z)+\mathcal{L}^{a}U_{s}^{\varphi,a}(t,x,y, z)=rU_{s}^{\varphi,a}(t,x,y,z), \tag{4.8}\]
_where \(\mathcal{L}^{a}\) is defined in Definition 4.3, \((t,x,y,z)\in[0,s]\times\mathcal{D}\) and final condition \(U_{s}^{\varphi,a}(s,x,y,z)=\varphi(s,x)\)._
Proof.: See Lemma A.10 in the Appendix.
### Thiele's PIDE for unit-linked policies
We now have all the preliminary results needed to obtain Thiele's PIDE for unit-linked policies under the Heston-Hawkes stochastic volatility model. Let \(\mathcal{X}=\{\mathcal{X}_{t},t\in[0,T]\}\) be a regular Markov process with finite state space \(\mathcal{J}\) that describes the insured's state. Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\) and \(f_{j},g_{j},h_{jk}\colon[0,T]\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) be policy functions where \(j,k\in\mathcal{J}\), \(j\neq k\), satisfying \(\mathbb{E}^{\mathbb{Q}(a)}[f_{j}(t,S_{t})]<\infty\), \(\mathbb{E}^{\mathbb{Q}(a)}[g_{j}(t,S_{t})]<\infty\), \(\mathbb{E}^{\mathbb{Q}(a)}[h_{jk}(t,S_{t})]<\infty\) for all \(t\in[0,T]\),
\[\int_{0}^{T}e^{-rs}p_{ij}(t,s)\mathbb{E}^{\mathbb{Q}(a)}[g_{j}(s,S_{s})| \mathcal{F}_{t}]ds<\infty,\]
and
\[\int_{0}^{T}e^{-rs}p_{ij}(t,s)\mu_{jk}(s)\mathbb{E}^{\mathbb{Q}(a)}[h_{jk}(s, S_{s})|\mathcal{F}_{t}]ds<\infty,\]
a.s. for all \(j,k\in\mathcal{J}\), \(j\neq k\). The mathematical reserve \(V^{+,a}_{i,\mathcal{F}}(t)\) of a contract with policy functions \(f_{j}\), \(g_{j}\) and \(h_{jk}\) given that the insured is in state \(i\in\mathcal{J}\) at time \(t\) and the information \(\mathcal{F}_{t}\), is given by
\[V^{+,a}_{i,\mathcal{F}}(t) =e^{rt}\Bigg{[}\sum_{j\in\mathcal{J}}e^{-rT}p_{ij}(t,T)\mathbb{E} ^{\mathbb{Q}(a)}[f_{j}(T,S_{T})|\mathcal{F}_{t}]\] \[\quad+\sum_{j\in\mathcal{J}}\int_{t}^{T}e^{-rs}p_{ij}(t,s)\mathbb{ E}^{\mathbb{Q}(a)}[g_{j}(s,S_{s})|\mathcal{F}_{t}]ds\] \[\quad+\sum_{\begin{subarray}{c}j,k\in\mathcal{J}\\ j\neq k\end{subarray}}\int_{t}^{T}e^{-rs}p_{ij}(t,s)\mu_{jk}(s)\mathbb{E}^{ \mathbb{Q}(a)}[h_{jk}(s,S_{s})|\mathcal{F}_{t}]ds\Bigg{]}. \tag{4.9}\]
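As a simple illustration of (4.9), consider a pure endowment contract: the only nonzero policy function is \(f_{*}(T,x)=\max\{G,x\}\) in the "alive" state \(*\), and \(g_{j}=h_{jk}=0\) for all states. Then the sums in (4.9) collapse to a single term and the reserve, given that the insured is alive at time \(t\), reduces to
\[V^{+,a}_{*,\mathcal{F}}(t)=e^{-r(T-t)}p_{**}(t,T)\,\mathbb{E}^{\mathbb{Q}(a)}[\max\{G,S_{T}\}|\mathcal{F}_{t}],\]
that is, the survival probability times the price at time \(t\) of the guaranteed payoff \(\max\{G,S_{T}\}\) introduced before Lemma 4.2.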
In the following result we derive Thiele's PIDE.
**Proposition 4.7**.: _(Thiele's PIDE) Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\) and \(i\in\mathcal{J}\), then, there exists a function \(V^{a}_{i}\colon[0,T]\times\mathcal{D}\to\mathbb{R}_{+}\) such that_
\[V^{+,a}_{i,\mathcal{F}}(t)=V^{a}_{i}(t,S_{t},v_{t},\lambda_{t}).\]
_Furthermore, assume that \(U^{f_{j},a}_{T}\in\mathcal{C}^{1,2}\) and \(U^{g_{j},a},U^{h_{jk},a}\in\mathcal{C}^{0,1,2}\) for all \(j,k\in\mathcal{J}\), \(j\neq k\). Then, \(V^{a}_{i}\) satisfies the following PIDE_
\[\partial_{t}V^{a}_{i}(t,x,y,z) =rV^{a}_{i}(t,x,y,z)-g_{i}(t,x)-\sum_{\begin{subarray}{c}k\in \mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(h_{ik}(t,x)+V^{a}_{k}(t,x,y,z)-V^{a}_ {i}(t,x,y,z)\right)\] \[\quad-\mathcal{L}^{a}V^{a}_{i}(t,x,y,z),\]
_where \(\mathcal{L}^{a}\) is the operator defined in Definition 4.3, \((t,x,y,z)\in[0,T]\times\mathcal{D}\) and final condition \(V^{a}_{i}(T,x,y,z)=f_{i}(T,x)\)._
Proof.: Applying Lemma 4.6 there exist functions \(U^{f_{j},a},U^{g_{j},a},U^{h_{jk},a}\colon[0,T]^{2}\times\mathcal{D}\to \mathbb{R}_{+}\) for all \(j,k\in\mathcal{J}\), \(j\neq k\) such that the mathematical reserve \(V^{+,a}_{i,\mathcal{F}}(t)\) in (4.9) can be rewritten as
\[V^{+,a}_{i,\mathcal{F}}(t) =\sum_{j\in\mathcal{J}}p_{ij}(t,T)U^{f_{j},a}_{T}(t,S_{t},v_{t}, \lambda_{t})+\sum_{j\in\mathcal{J}}\int_{t}^{T}p_{ij}(t,s)U^{g_{j},a}_{s}(t,S_{ t},v_{t},\lambda_{t})ds\] \[\quad+\sum_{\begin{subarray}{c}j,k\in\mathcal{J}\\ j\neq k\end{subarray}}\int_{t}^{T}p_{ij}(t,s)\mu_{jk}(s)U^{h_{jk},a}_{s}(t,S_{ t},v_{t},\lambda_{t})ds. \tag{4.10}\]
Defining \(V_{i}^{(a)}\colon[0,T]\times\mathcal{D}\to\mathbb{R}_{+}\) by
\[V_{i}^{(a)}(t,x,y,z):= \ \sum_{j\in\mathcal{J}}p_{ij}(t,T)U_{T}^{f_{j},a}(t,x,y,z)+\sum_{j \in\mathcal{J}}\int_{t}^{T}p_{ij}(t,s)U_{s}^{g_{j},a}(t,x,y,z)ds\] \[\ +\sum_{\begin{subarray}{c}j,k\in\mathcal{J}\\ j\neq k\end{subarray}}\int_{t}^{T}p_{ij}(t,s)\mu_{jk}(s)U_{s}^{h_{jk},a}(t,x,y, z)ds.\]
we see that \(V_{i,\mathcal{F}}^{+,a}(t)=V_{i}^{a}(t,S_{t},v_{t},\lambda_{t})\) and the first part is proved.
Assume now that \(U_{T}^{f_{j},a}\in\mathcal{C}^{1,2}\) and \(U^{g_{j},a},U^{h_{jk},a}\in\mathcal{C}^{0,1,2}\) for all \(j,k\in\mathcal{J}\), \(j\neq k\). Since \(\mathcal{X}\) is regular, we see that \(\partial_{t}V_{i}^{a}\) and \(\mathcal{L}^{a}V_{i}^{a}\) are well defined by applying differentiation under the integral sign several times. For the sake of clarity, we define
\[V_{i,\mathcal{F}}^{+,a}(t)=V_{i}^{a}(t,S_{t},v_{t},\lambda_{t})=G_{i,T}^{a}(t,S_{t},v_{t},\lambda_{t})+\int_{t}^{T}F_{i,s}^{a}(t,S_{t},v_{t},\lambda_{t})ds, \tag{4.11}\]
where
\[G_{i,T}^{a}(t,x,y,z): =\sum_{j\in\mathcal{J}}p_{ij}(t,T)U_{T}^{f_{j},a}(t,x,y,z),\] \[F_{i,s}^{a}(t,x,y,z): =\sum_{j\in\mathcal{J}}p_{ij}(t,s)U_{s}^{\theta_{j},a}(t,x,y,z), \tag{4.12}\] \[\theta_{j}(s,x): =g_{j}(s,x)+\sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq j\end{subarray}}\mu_{jk}(s)h_{jk}(s,x).\]
Since \(\mathcal{X}\) is regular, note that \(U^{\theta_{j},a},F_{i}^{a}\in\mathcal{C}^{0,1,2}\). Fix \(s\in[0,T]\), we can apply Ito's formula to the processes \(t\to U_{s}^{\theta_{j},a}(t,S_{t},v_{t},\lambda_{t})\) and \(t\to F_{i,s}^{a}(t,S_{t},v_{t},\lambda_{t})\). Now, we compute the drift of the Ito differential of \(t\to F_{i,s}^{a}(t,S_{t},v_{t},\lambda_{t})\) in two ways. First, by direct definition. By Observation 4.4, the drift of the Ito differential of \(t\to F_{i,s}^{a}(t,S_{t},v_{t},\lambda_{t})\) is
\[\partial_{t}F_{i,s}^{a}(t,S_{t},v_{t},\lambda_{t})+\mathcal{L}^{a}F_{i,s}^{a} (t,S_{t},v_{t},\lambda_{t}). \tag{4.13}\]
On the other hand
\[dF_{i,s}^{a}(t,S_{t},v_{t},\lambda_{t})=\sum_{j\in\mathcal{J}}\partial_{t}p_{ ij}(t,s)U_{s}^{\theta_{j},a}(t,S_{t},v_{t},\lambda_{t})dt+\sum_{j\in \mathcal{J}}p_{ij}(t,s)dU_{s}^{\theta_{j},a}(t,S_{t},v_{t},\lambda_{t}).\]
Using Kolmogorov's backward equation in the first term and then the definition of \(F_{i,s}^{a}(t,x,y,z)\) given in (4.12) we get
\[dF_{i,s}^{a}(t,S_{t},v_{t},\lambda_{t})= \ \sum_{j\in\mathcal{J}}\sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(p_{ij}(t,s)-p_{kj}(t,s)\right)U_{s}^{ \theta_{j},a}(t,S_{t},v_{t},\lambda_{t})dt\] \[\ +\sum_{j\in\mathcal{J}}p_{ij}(t,s)dU_{s}^{\theta_{j},a}(t,S_{t}, v_{t},\lambda_{t})\] \[= \ \sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(F_{i,s}^{a}(t,S_{t},v_{t},\lambda_{t})-F_ {k,s}^{a}(t,S_{t},v_{t},\lambda_{t})\right)dt\] \[\ +\sum_{j\in\mathcal{J}}p_{ij}(t,s)dU_{s}^{\theta_{j},a}(t,S_{t}, v_{t},\lambda_{t}).\]
We know by Observation 4.4 that the drift part of the Ito differential of \(t\to U_{s}^{\theta_{j},a}(t,S_{t},v_{t},\lambda_{t})\) is \(\partial_{t}U_{s}^{\theta_{j},a}(t,S_{t},v_{t},\lambda_{t})+\mathcal{L}^{a}U_{s }^{\theta_{j},a}(t,S_{t},v_{t},\lambda_{t})\). Moreover, \(U_{s}^{\theta_{j},a}(t,S_{t},v_{t},\lambda_{t})\) satisfies the PIDE in
(4.8). Thus, the drift part of the Ito differential of \(t\to F^{a}_{i,s}(t,S_{t},v_{t},\lambda_{t})\) can also be written as
\[\sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(F^{a}_{i,s}(t,S_{t},v_{t},\lambda_{t})-F^ {a}_{k,s}(t,S_{t},v_{t},\lambda_{t})\right)\] \[+\sum_{j\in\mathcal{J}}p_{ij}(t,s)\left(\partial_{t}U^{\theta_{j},a}_{s}(t,S_{t},v_{t},\lambda_{t})+\mathcal{L}^{a}U^{\theta_{j},a}_{s}(t,S_{t}, v_{t},\lambda_{t})\right)\] \[= \sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(F^{a}_{i,s}(t,S_{t},v_{t},\lambda_{t})-F^ {a}_{k,s}(t,S_{t},v_{t},\lambda_{t})\right)+\sum_{j\in\mathcal{J}}p_{ij}(t,s)rU ^{\theta_{j},a}_{s}(t,S_{t},v_{t},\lambda_{t})\] \[= \sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(F^{a}_{i,s}(t,S_{t},v_{t},\lambda_{t})-F^ {a}_{k,s}(t,S_{t},v_{t},\lambda_{t})\right)+rF^{a}_{i,s}(t,S_{t},v_{t},\lambda _{t}). \tag{4.14}\]
In the last step we have used again the definition of \(F^{a}_{i,s}\) in (4.12). Equating the two equivalent expressions of the drift of the Ito differential in (4.13) and (4.14) we get
\[\partial_{t}F^{a}_{i,s}(t,S_{t},v_{t},\lambda_{t})+\mathcal{L}^{a }F^{a}_{i,s}(t,S_{t},v_{t},\lambda_{t})\] \[=\sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(F^{a}_{i,s}(t,S_{t},v_{t},\lambda_{t})-F ^{a}_{k,s}(t,S_{t},v_{t},\lambda_{t})\right)+rF^{a}_{i,s}(t,S_{t},v_{t}, \lambda_{t}).\]
We deduce that
\[\partial_{t}F^{a}_{i,s}(t,x,y,z)+\mathcal{L}^{a}F^{a}_{i,s}(t,x,y,z)\] \[=\sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(F^{a}_{i,s}(t,x,y,z)-F^{a}_{k,s}(t,x,y, z)\right)+rF^{a}_{i,s}(t,x,y,z), \tag{4.15}\]
for \((t,x,y,z)\in[0,s]\times\mathcal{D}\). Since \(F^{a}_{i}\in\mathcal{C}^{0,1,2}\) we can apply differentiation under the integral sign to get
\[\partial_{t}\left(\int_{t}^{T}F^{a}_{i,s}(t,x,y,z)ds\right)=\int_{t}^{T} \partial_{t}F^{a}_{i,s}(t,x,y,z)ds-F^{a}_{i,t}(t,x,y,z). \tag{4.16}\]
Taking the derivative with respect to \(t\) in (4.11) and using (4.16) we get
\[\partial_{t}V^{a}_{i}(t,x,y,z)=\partial_{t}G^{a}_{i,T}(t,x,y,z)+\int_{t}^{T} \partial_{t}F^{a}_{i,s}(t,x,y,z)ds-F^{a}_{i,t}(t,x,y,z).\]
Writing the explicit expression of \(F^{a}_{i,t}(t,x,y,z)\) we obtain
\[\int_{t}^{T}\partial_{t}F^{a}_{i,s}(t,x,y,z)ds=\partial_{t}V^{a}_{i}(t,x,y,z) -\partial_{t}G^{a}_{i,T}(t,x,y,z)+g_{i}(t,x)+\sum_{\begin{subarray}{c}k\in \mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)h_{ik}(t,x). \tag{4.17}\]
We now integrate (4.15) with respect to \(s\) on the region \([t,T]\) to obtain
\[\int_{t}^{T}\partial_{t}F^{a}_{i,s}(t,x,y,z)ds+\int_{t}^{T}\mathcal{L}^{a}F^{a}_{i,s}(t,x,y,z)ds\] \[=\sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(\int_{t}^{T}F^{a}_{i,s}(t,x,y,z)ds-\int_{t}^{T}F^{a}_{k,s}(t,x,y,z)ds\right)+r\int_{t}^{T}F^{a}_{i,s}(t,x,y,z)ds.\]
Since \(F^{a}_{i,s}\in\mathcal{C}^{1,2}\) and it is positive we can apply differentiation under the integral sign several times and Tonelli's theorem to conclude that \(\int_{t}^{T}\mathcal{L}^{a}F^{a}_{i,s}(t,x,y,z)ds=\mathcal{L}^{a}\int_{t}^{T}F^ {a}_{i,s}(t,x,y,z)ds\). Now,
we write the expression of \(\int_{t}^{T}\partial_{t}F_{i,s}^{a}(t,x,y,z)ds\) from (4.17), we use that \(\int_{t}^{T}\mathcal{L}^{a}F_{i,s}^{a}(t,x,y,z)ds=\mathcal{L}^{a}\int_{t}^{T}F_{i,s}^{a}(t,x,y,z)ds\) and that \(\int_{t}^{T}F_{i,s}^{a}(t,x,y,z)ds=V_{i}^{a}(t,x,y,z)-G_{i,T}^{a}(t,x,y,z)\) to get
\[\partial_{t}\left(V_{i}^{a}(t,x,y,z)-G_{i,T}^{a}(t,x,y,z)\right)+g_{i}(t,x)+ \sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)h_{ik}(t,x)\]
\[+\mathcal{L}^{a}\left(V_{i}^{a}(t,x,y,z)-G_{i,T}^{a}(t,x,y,z)\right)\]
\[= \sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(V_{i}^{a}(t,x,y,z)-V_{k}^{a}(t,x,y,z)+ G_{k,T}^{a}(t,x,y,z)-G_{i,T}^{a}(t,x,y,z)\right) \tag{4.18}\] \[+r\left(V_{i}^{a}(t,x,y,z)-G_{i,T}^{a}(t,x,y,z)\right).\]
Now we prove that the terms involving \(G_{T}^{a}\) will cancel each other. Indeed, observe that using Kolmogorov's backward equation we have
\[\partial_{t}G_{i,T}^{a}(t,x,y,z)= \sum_{j\in\mathcal{J}}\partial_{t}p_{ij}(t,T)U_{T}^{f_{j},a}(t,x,y,z)+\sum_{j\in\mathcal{J}}p_{ij}(t,T)\partial_{t}U_{T}^{f_{j},a}(t,x,y,z)\] \[= \sum_{j\in\mathcal{J}}\sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(p_{ij}(t,T)-p_{kj}(t,T)\right)U_{T}^{f_{j},a}(t,x,y,z)\] \[+\sum_{j\in\mathcal{J}}p_{ij}(t,T)\partial_{t}U_{T}^{f_{j},a}(t,x,y,z)\] \[= \sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(G_{i,T}^{a}(t,x,y,z)-G_{k,T}^{a}(t,x,y,z)\right)+\sum_{j\in\mathcal{J}}p_{ij}(t,T)\partial_{t}U_{T}^{f_{j},a}(t,x,y,z).\]
Moreover, using that \(\mathcal{L}^{a}G_{i,T}^{a}(t,x,y,z)=\sum_{j\in\mathcal{J}}p_{ij}(t,T)\mathcal{ L}^{a}U_{T}^{f_{j},a}(t,x,y,z)\) and that \(U_{T}^{f_{j},a}\) satisfies the PIDE in (4.8) we have
\[\partial_{t}G_{i,T}^{a}(t,x,y,z)+\mathcal{L}^{a}G_{i,T}^{a}(t,x,y,z) = \sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(G_{i,T}^{a}(t,x,y,z)-G_{k,T}^{a}(t,x,y, z)\right)\] \[+\sum_{j\in\mathcal{J}}p_{ij}(t,T)\left(\partial_{t}U_{T}^{f_{j},a}(t,x,y,z)+\mathcal{L}^{a}U_{T}^{f_{j},a}(t,x,y,z)\right)\] \[= \sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(G_{i,T}^{a}(t,x,y,z)-G_{k,T}^{a}(t,x,y, z)\right)\] \[+r\sum_{j\in\mathcal{J}}p_{ij}(t,T)U_{T}^{f_{j},a}(t,x,y,z)\] \[= \sum_{\begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(G_{i,T}^{a}(t,x,y,z)-G_{k,T}^{a}(t,x,y, z)\right)\] \[+rG_{i,T}^{a}(t,x,y,z).\]
Replacing this last equality in (4.18) we obtain
\[\partial_{t}V_{i}^{a}(t,x,y,z)= rV_{i}^{a}(t,x,y,z)-g_{i}(t,x)-\sum_{ \begin{subarray}{c}k\in\mathcal{J}\\ k\neq i\end{subarray}}\mu_{ik}(t)\left(h_{ik}(t,x)+V_{k}^{a}(t,x,y,z)-V_{i}^{a }(t,x,y,z)\right)\] \[-\mathcal{L}^{a}V_{i}^{a}(t,x,y,z),\]
finishing the proof.
**Acknowledgements**
The authors would like to acknowledge financial support by the Research Council of Norway under the SCROLLER project, project number 299897.
## Appendix A Appendix: Technical results
We give all the proofs that were postponed.
### Moments of the variance and the Radon-Nikodym derivative
**Lemma A.1**.: _Let \(s\geqslant 1\). Then, \(\mathbb{E}\left[L_{t}^{s}\right]<\infty\) for all \(t\in[0,T]\) and \(\int_{0}^{T}\mathbb{E}\left[L_{t}^{s}\right]dt<\infty\)._
_Moreover, \(\mathbb{E}\left[\left[L\right]_{t}^{s}\right]<\infty\) for all \(t\in[0,T]\)._
Proof.: Applying Holder's inequality for sums we get
\[L_{t}^{s}\leqslant N_{t}^{s-1}\sum_{i=1}^{N_{t}}J_{i}^{s}.\]
Using that \(N\) and \(\{J_{i}\}_{i\geqslant 1}\) are independent we obtain
\[\mathbb{E}[L_{t}^{s}] \leqslant\mathbb{E}\left[N_{t}^{s-1}\sum_{i=1}^{N_{t}}J_{i}^{s}\right]\] \[=\mathbb{E}\left[\mathbb{E}\left[N_{t}^{s-1}\sum_{i=1}^{N_{t}}J_{i}^{s}\Big{|}\mathcal{F}_{t}^{N}\right]\right]\] \[=\mathbb{E}\left[N_{t}^{s-1}\mathbb{E}\left[\sum_{i=1}^{N_{t}}J_{i}^{s}\Big{|}\mathcal{F}_{t}^{N}\right]\right]\] \[=\mathbb{E}\left[N_{t}^{s-1}\sum_{i=1}^{N_{t}}\mathbb{E}[J_{i}^{s}|\mathcal{F}_{t}^{N}]\right]\] \[=\mathbb{E}\left[N_{t}^{s-1}\sum_{i=1}^{N_{t}}\mathbb{E}[J_{i}^{s}]\right]\] \[=\mathbb{E}\left[N_{t}^{s}\mathbb{E}[J_{1}^{s}]\right]\] \[=\mathbb{E}\left[N_{t}^{s}\right]\mathbb{E}\left[J_{1}^{s}\right]<\infty,\]
where we have used that \(\mathbb{E}[N_{t}^{s}]<\infty\) and \(\mathbb{E}[J_{1}^{s}]<\infty\). For a reference for \(\mathbb{E}[N_{t}^{s}]<\infty\) see [23, Theorem 1] or [24, Corollary 3.2], where their condition \(\delta>\mu_{1_{G}}\) corresponds to our stability condition \(\beta>\alpha\). For \(\mathbb{E}[J_{1}^{s}]<\infty\) see Assumption 1. Note that \(\int_{0}^{T}\mathbb{E}\left[L_{t}^{s}\right]dt<\infty\) is just a consequence of \(\mathbb{E}[L_{t}^{s}]\leqslant\mathbb{E}[L_{T}^{s}]<\infty\) for all \(t\in[0,T]\).
Finally, in order to prove that \(\mathbb{E}\left[\left[L\right]_{t}^{s}\right]<\infty\) one can repeat the same argument with \([L]_{t}=\sum_{i=1}^{N_{t}}J_{i}^{2}\).
**Lemma A.2**.: _Let \(s\geqslant 1\). Then, \(\mathbb{E}\left[v_{t}^{s}\right]<\infty\) for all \(t\in[0,T]\) and \(\int_{0}^{T}\mathbb{E}\left[v_{t}^{s}\right]dt<\infty\)._
Proof.: Recall that
\[v_{t}=v_{0}-\kappa\int_{0}^{t}(v_{u}-\bar{v})du+\sigma\int_{0}^{t}\sqrt{v_{u}}dW_{u}+\eta L_{t}.\]
Applying Holder's inequality for sums we get
\[v_{t}^{s}\leqslant 4^{s-1}\left[v_{0}^{s}+\kappa^{s}\left|\int_{0}^{t}(v_{u}- \bar{v})du\right|^{s}+\sigma^{s}\left|\int_{0}^{t}\sqrt{v_{u}}dW_{u}\right|^{s} +\eta^{s}L_{t}^{s}\right].\] (A.1)
By applying Jensen's inequality and again Holder's inequality for sums we obtain
\[\left|\int_{0}^{t}(v_{u}-\bar{v})du\right|^{s} \leqslant t^{s}\left(\frac{1}{t}\int_{0}^{t}|v_{u}-\bar{v}|du \right)^{s}\] \[\leqslant t^{s-1}\int_{0}^{t}|v_{u}-\bar{v}|^{s}du\] \[\leqslant(2t)^{s-1}\int_{0}^{t}(v_{u}^{s}+\bar{v}^{s})\,du\] \[\leqslant(2T)^{s-1}\int_{0}^{T}v_{u}^{s}du+2^{s-1}(T\bar{v})^{s}.\]
Replacing the last inequality in (A.1) we get
\[v_{t}^{s}\leqslant 4^{s-1}\left[v_{0}^{s}+(2T)^{s-1}\kappa^{s}\int_{0}^{T}v _{u}^{s}du+2^{s-1}(T\bar{v}\kappa)^{s}+\sigma^{s}\left|\int_{0}^{t}\sqrt{v_{u }}dW_{u}\right|^{s}+\eta^{s}L_{T}^{s}\right].\]
Define now \(A(t):=4^{s-1}\left[v_{0}^{s}+2^{s-1}(T\bar{v}\kappa)^{s}+\sigma^{s}\left|\int_ {0}^{t}\sqrt{v_{u}}dW_{u}\right|^{s}+\eta^{s}L_{T}^{s}\right]\), \(B:=(8T)^{s-1}\kappa^{s}\). Then,
\[v_{t}^{s}\leqslant A(t)+B\int_{0}^{t}v_{u}^{s}du,\]
for all \(t\in[0,T]\). For each \(\omega\in\Omega\), the functions \(t\to v_{t}^{s}(\omega)\) and \(t\to A(t,\omega)\) are measurable in \(t\). Moreover, \(\int_{0}^{T}v_{u}^{s}du<\infty\) a.s. because \(v\) is a continuous process except for a finite number of finite jumps. Note that \(A(t)>0\) for all \(t\in[0,T]\). Then, all the conditions to apply Gronwall's inequality hold and we get
\[v_{t}^{s}\leqslant A(t)+B\int_{0}^{t}A(u)e^{B(t-u)}du\leqslant A(t)+Be^{BT} \int_{0}^{T}A(u)du,\] (A.2)
for all \(t\in[0,T]\).
By Proposition 3.3, there exists \(c_{l}>0\) such that for \(c<c_{l}\), \(\mathbb{E}\left[\exp\left(c\int_{0}^{T}v_{u}du\right)\right]<\infty\). In particular, \(\mathbb{E}\left[\int_{0}^{T}v_{u}du\right]<\infty\) and the process \(t\to\int_{0}^{t}\sqrt{v_{u}}dW_{u}\) is a \((\mathcal{F},\mathbb{P})\)-martingale. By the Burkholder-Davis-Gundy inequality, there exists a constant \(C_{s}\) independent of the martingale and \(t\) such that
\[\mathbb{E}\left[\left|\int_{0}^{t}\sqrt{v_{u}}dW_{u}\right|^{s}\right]\leqslant C _{s}\mathbb{E}\left[\left(\int_{0}^{t}v_{u}du\right)^{\frac{s}{2}}\right] \leqslant C_{s}\mathbb{E}\left[\left(\int_{0}^{T}v_{u}du\right)^{\frac{s}{2}} \right]<\infty,\]
where the last expectation is finite because all positive moments of \(\int_{0}^{T}v_{u}du\) exist. By Lemma A.1 we have that \(\mathbb{E}[L_{T}^{s}]<\infty\) and, therefore
\[\mathbb{E}\left[A(t)\right]\leqslant 4^{s-1}\left[v_{0}^{s}+2^{s-1}(T\bar{v} \kappa)^{s}+\sigma^{s}C_{s}\mathbb{E}\left[\left(\int_{0}^{T}v_{u}du\right)^{ \frac{s}{2}}\right]+\eta^{s}\mathbb{E}[L_{T}^{s}]\right]=:E<\infty,\]
for all \(t\in[0,T]\). Finally, by taking expectations in (A.2) we obtain
\[\mathbb{E}[v_{t}^{s}]\leqslant E+Be^{BT}TE<\infty,\]
for all \(t\in[0,T]\) and it follows that \(\int_{0}^{T}\mathbb{E}[v_{t}^{s}]dt<\infty\)
**Definition A.3**.: _For \(a,b,z\in\mathbb{R}\), we define the confluent hypergeometric function of the first kind \({}_{1}F_{1}(a,b;z)\) in the following way_
\[{}_{1}F_{1}(a,b;z):=\sum_{n=0}^{\infty}\frac{a^{(n)}}{b^{(n)}}\frac{z^{n}}{n!}\]
_where \(q^{(0)}=1\) and \(q^{(n)}=q\cdot(q+1)\cdot...\cdot(q+n-1)\) for \(n\geqslant 1\) is the rising factorial. See [43, Section 5.3] and [2, Section 13] for more details about the confluent hypergeometric function of the first kind._
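The following sketch evaluates the series in Definition A.3 by direct truncation and compares the result with scipy.special.hyp1f1; it is only a quick numerical illustration of the definition, and the truncation level is an arbitrary choice.

```python
from scipy.special import hyp1f1

def hyp1f1_series(a, b, z, n_terms=60):
    """Truncated series for the confluent hypergeometric function 1F1(a, b; z)."""
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        # next term of the series: multiply by (a + n) / (b + n) * z / (n + 1)
        term *= (a + n) / (b + n) * z / (n + 1)
    return total

if __name__ == "__main__":
    a, b, z = 1.5, 3.2, 0.7
    print(hyp1f1_series(a, b, z), hyp1f1(a, b, z))
```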
**Lemma A.4**.: _Let \(s\geqslant 1\). If \(2\kappa\bar{v}>s\sigma^{2}\), \(\mathbb{E}\left[\frac{1}{v_{t}^{s}}\right]<\infty\) for all \(t\in[0,T]\) and \(\int_{0}^{T}\mathbb{E}\left[\frac{1}{v_{t}^{s}}\right]dt<\infty\)._
Proof.: Recall that \(\widetilde{v}=\{\widetilde{v}_{t},t\in[0,T]\}\) is the pathwise unique strong solution of
\[\widetilde{v}_{t}=\widetilde{v}_{0}-\kappa\int_{0}^{t}\left(\widetilde{v}_{s} -\bar{v}\right)ds+\sigma\int_{0}^{t}\sqrt{\widetilde{v}_{s}}dW_{s}.\]
In [30, Section 2, Equation (2.3)], we see that for \(t\in(0,T]\)
\[\widetilde{v}_{t}\sim\frac{e^{-\kappa t}v_{0}}{k(t)}\chi_{\delta}^{{}^{\prime }2}\left(k(t)\right)\ \ \text{with}\ \ k(t):=\frac{4\kappa v_{0}e^{-\kappa t}}{\sigma^{2}(1-e^{-\kappa t})}\ \ \text{and}\ \ \delta:=\frac{4\kappa\bar{v}}{\sigma^{2}},\]
where \(\chi_{\delta}^{{}^{\prime}2}(k(t))\) denotes a noncentral chi-square random variable with \(\delta\) degrees of freedom and noncentrality parameter \(k(t)\). Since \(2\kappa\bar{v}>s\sigma^{2}\), \(-s>-\frac{2\kappa\bar{v}}{\sigma^{2}}=-\frac{\delta}{2}\). Then, we can use [43, Section 10.1, Equation (10.9)] to get that for \(t>0\)
\[\mathbb{E}\left[\left(\frac{1}{\widetilde{v}_{t}}\right)^{s}\right]=\left( \frac{k(t)}{e^{-\kappa t}v_{0}}\right)^{s}\frac{1}{2^{s}e^{k(t)/2}}\frac{ \Gamma\left(\frac{\delta}{2}-s\right)}{\Gamma\left(\frac{\delta}{2}\right)}{} _{1}F_{1}\left(\frac{\delta}{2}-s,\frac{\delta}{2};\frac{k(t)}{2}\right)<\infty,\]
where \({}_{1}F_{1}\) is the confluent hypergeometric function of the first kind given in Definition A.3. By Proposition 3.2, this proves that we have \(\mathbb{E}\left[\frac{1}{v_{t}^{s}}\right]<\infty\) for all \(t\in[0,T]\).
To prove that the \(s\)th negative moment is integrable we check that \(t\to\mathbb{E}\left[\left(\frac{1}{\widetilde{v}_{t}}\right)^{s}\right]\) is a continuous function for \(t\in[0,T]\). Since \(\frac{\delta}{2}\) is strictly positive, and \(k\colon(0,T]\to\mathbb{R}\) is continuous, the function \(t\to{}_{1}F_{1}\left(\frac{\delta}{2}-s,\frac{\delta}{2};\frac{k(t)}{2}\right)\) is continuous for \(t\in(0,T]\). Therefore, \(t\to\mathbb{E}\left[\left(\frac{1}{\widetilde{v}_{t}}\right)^{s}\right]\) is continuous at least for \(t\in(0,T]\). To check continuity at \(t=0\) observe that \(\lim_{t\to 0^{+}}k(t)=\infty\) and by [2, Section 13, Equation 13.1.4] it is known that
\[\lim_{t\to 0^{+}}\frac{{}_{1}F_{1}\left(\frac{\delta}{2}-s,\frac{\delta}{2}; \frac{k(t)}{2}\right)}{\Gamma\left(\frac{\delta}{2}-s\right)}e^{k(t)/2}\left( \frac{k(t)}{2}\right)^{-s}=1.\]
Then,
\[\lim_{t\to 0^{+}}\mathbb{E}\left[\left(\frac{1}{\widetilde{v}_{t}} \right)^{s}\right] =\lim_{t\to 0^{+}}\left(\frac{k(t)}{e^{-\kappa t}v_{0}}\right)^{s} \frac{1}{2^{s}e^{k(t)/2}}\frac{\Gamma\left(\frac{\delta}{2}-s\right)}{\Gamma \left(\frac{\delta}{2}\right)}{}_{1}F_{1}\left(\frac{\delta}{2}-s,\frac{\delta }{2};\frac{k(t)}{2}\right)\] \[=\lim_{t\to 0^{+}}\left(\frac{k(t)}{e^{-\kappa t}v_{0}}\right)^{s} \frac{1}{2^{s}e^{k(t)/2}}\frac{\Gamma\left(\frac{\delta}{2}-s\right)}{\Gamma \left(\frac{\delta}{2}\right)}\frac{\Gamma\left(\frac{\delta}{2}\right)}{ \Gamma\left(\frac{\delta}{2}-s\right)}e^{k(t)/2}\left(\frac{k(t)}{2}\right)^ {-s}\] \[=\frac{1}{v_{0}^{s}}=\mathbb{E}\left[\left(\frac{1}{\widetilde{v} _{0}}\right)^{s}\right].\]
This proves that \(t\to\mathbb{E}\left[\left(\frac{1}{\widetilde{v}_{t}}\right)^{s}\right]\) is a continuous function for \(t\in[0,T]\). Thus, \(\int_{0}^{T}\mathbb{E}\left[\left(\frac{1}{\widetilde{v}_{t}}\right)^{s} \right]dt<\infty\), which by Proposition 3.2 implies that \(\int_{0}^{T}\mathbb{E}\left[\frac{1}{v_{t}^{s}}\right]dt<\infty\)
**Lemma A.5**.: _If \(2\kappa\bar{v}>\sigma^{2}\) and \(c\leqslant\frac{1}{2}\left(\frac{2\kappa\bar{v}-\sigma^{2}}{2\sigma}\right)^{2}\), then_
\[\mathbb{E}\left[\exp\left(c\int_{0}^{T}\frac{1}{v_{u}}du\right)\right]<\infty.\]
Proof.: We define the process \(Z=\{Z_{t}=\frac{1}{\widetilde{v}_{t}},t\in[0,T]\}\), where \(\widetilde{v}=\{\widetilde{v}_{t},t\in[0,T]\}\) is the pathwise unique strong solution of
\[\widetilde{v}_{t}=\widetilde{v}_{0}-\kappa\int_{0}^{t}\left( \widetilde{v}_{u}-\bar{v}\right)du+\sigma\int_{0}^{t}\sqrt{\widetilde{v}_{u}} dW_{u}.\]
Note that the Feller condition \(2\kappa\bar{v}\geqslant\sigma^{2}\) ensures that the process \(\widetilde{v}\) is strictly positive. Therefore, the process \(Z\) is well defined. Applying Ito formula we get
\[dZ_{t}=\left[\kappa Z_{t}+\left(\sigma^{2}-\kappa\bar{v}\right)Z _{t}^{2}\right]dt-\sigma Z_{t}^{3/2}dW_{t}.\] (A.3)
Following the notation in [20, Theorem 3], \(Z\) is a quadratic drift \(3/2\) process with parameters \(p(t)=\kappa\), \(q=\sigma^{2}-\kappa\bar{v}\) and \(\epsilon=-\sigma\). The condition \(q<\frac{\epsilon^{2}}{2}\) is satisfied because \(2\kappa\bar{v}>\sigma^{2}\). Applying [20, Theorem 3] with \(u=0\) and \(s=-c\) we get that for \(c\leqslant\frac{1}{2}\left(\frac{2\kappa\bar{v}-\sigma^{2}}{2\sigma}\right)^{2}\)
\[\mathbb{E}\left[\exp\left(c\int_{0}^{T}\frac{1}{\widetilde{v}_{u }}du\right)\right]=\frac{\Gamma(\gamma-\widetilde{\alpha})}{\Gamma(\widetilde{ \alpha})}\left(\frac{2}{\sigma^{2}y(0,1/v_{0})}\right)^{\widetilde{\alpha}}{}_ {1}F_{1}\left(\widetilde{\alpha},\gamma;-\frac{2}{\sigma^{2}y(0,1/v_{0})} \right)<\infty,\]
where
\[\widetilde{\alpha} =-\left(\frac{\kappa\bar{v}}{\sigma^{2}}-\frac{1}{2}\right)+\sqrt{\left(\frac{\kappa\bar{v}}{\sigma^{2}}-\frac{1}{2}\right)^{2}-2\frac{c}{\sigma^{2}}},\] \[\gamma =2\left(\widetilde{\alpha}+\frac{\kappa\bar{v}}{\sigma^{2}}\right),\] \[y(0,1/v_{0}) =\frac{e^{\kappa T}-1}{v_{0}\kappa},\]
and \({}_{1}F_{1}\) is the confluent hypergeometric function defined in Definition A.3. Note that \(\widetilde{\alpha}\in\mathbb{R}\) because \(c\leqslant\frac{1}{2}\left(\frac{2\kappa\bar{v}-\sigma^{2}}{2\sigma}\right)^{2}\) by hypothesis. Finally, by Proposition 3.2, we conclude that
\[\mathbb{E}\left[\exp\left(c\int_{0}^{T}\frac{1}{v_{u}}du\right) \right]<\infty.\]
See also [21, Footnote 10 on page 136] for another reference about the inverse CIR. Note that [21, Equation (24) on page 137] is the same condition we have on \(c\).
**Lemma A.6**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}\), \(s>1\), \(D:=\sup_{t\in[0,T]}(\mu_{t}-r)^{2}<\infty\) and \(X^{(a)}\) defined in Theorem 3.4. Assume that \(2\kappa\bar{v}>\sigma^{2}\) and \(\frac{1-\rho^{2}}{D(s^{2}-s)}\left(\frac{2\kappa\bar{v}-\sigma^{2}}{2\sigma} \right)^{2}>1\). Consider \(q_{2}\) such that \(1<q_{2}<\frac{1-\rho^{2}}{D(s^{2}-s)}\left(\frac{2\kappa\bar{v}-\sigma^{2}}{2 \sigma}\right)^{2}\) and define \(q_{1}:=\frac{q_{2}}{q_{2}-1}>1\). If_
\[|a|<\min\left\{\frac{1}{q_{1}s}\sqrt{\frac{c_{l}}{2}},\sqrt{ \frac{(1-\rho^{2})c_{l}}{q_{1}s\left[2q_{1}s(1-\rho^{2})+\rho^{2}s-1\right]}} \right\},\] (A.4)
_then_
\[\mathbb{E}\left[\left(X_{T}^{(a)}\right)^{s}\right]<\infty\quad \text{ and }\quad\mathbb{E}\left[\left(X_{t}^{(a)}\right)^{s}\right]\leqslant \left(\frac{s}{s-1}\right)^{s}\mathbb{E}\left[\left(X_{T}^{(a)}\right)^{s} \right]<\infty,\]
_for all \(t\in[0,T]\)._
Proof.: Recall that \(\theta^{(a)},Y^{(a)}\) and \(Z^{(a)}\) are defined in Theorem 3.4 and \(X^{(a)}_{t}=Y^{(a)}_{t}Z^{(a)}_{t}\). By Proposition 3.2, the variance process \(v\) is strictly positive. This implies that \(\int_{0}^{T}(\theta^{(a)}_{u})^{2}du<\infty\), \(\mathbb{P}\)-a.s., and since \(\theta^{(a)}\) is \(\{\mathcal{F}^{W}_{t}\vee\mathcal{F}^{L}_{t}\}_{t\in[0,T]}\)-adapted,
\[Y^{(a)}_{T}|\mathcal{F}^{W}_{T}\vee\mathcal{F}^{L}_{T}\sim\text{ Lognormal}\left(-\frac{1}{2}\int_{0}^{T}(\theta^{(a)}_{u})^{2}du,\int_{0}^{T}( \theta^{(a)}_{u})^{2}du\right).\]
Using that \(Z^{(a)}_{T}\) is \(\mathcal{F}^{W}_{T}\vee\mathcal{F}^{L}_{T}\)-measurable we have
\[\mathbb{E}\left[\left(X^{(a)}_{T}\right)^{s}\right] =\mathbb{E}\left[\left(Z^{(a)}_{T}\right)^{s}\mathbb{E}\left[ \left(Y^{(a)}_{T}\right)^{s}|\mathcal{F}^{W}_{T}\vee\mathcal{F}^{L}_{T} \right]\right]\] \[=\mathbb{E}\left[\left(Z^{(a)}_{T}\right)^{s}\exp\left(\left( \frac{s^{2}-s}{2}\right)\int_{0}^{T}(\theta^{(a)}_{u})^{2}du\right)\right].\]
Using that \((\theta^{(a)}_{u})^{2}=\frac{1}{1-\rho^{2}}\left(\frac{(\mu_{u}-r)^{2}}{v_{u}} +a^{2}\rho^{2}v_{u}-2a\rho(\mu_{u}-r)\right)\) we obtain
\[\mathbb{E}\left[\left(X^{(a)}_{T}\right)^{s}\right]=e^{O^{(a)}\int_{0}^{T}( \mu_{u}-r)du}\mathbb{E}\left[\exp\left(P^{(a)}\int_{0}^{T}\sqrt{v_{u}}dW_{u}+Q ^{(a)}\int_{0}^{T}v_{u}du+R\int_{0}^{T}\frac{(\mu_{u}-r)^{2}}{v_{u}}du\right) \right],\]
where
\[O^{(a)} :=-\frac{(s^{2}-s)a\rho}{1-\rho^{2}}\] \[P^{(a)} :=-as\] \[Q^{(a)} :=\frac{(s^{2}-s)a^{2}\rho^{2}}{2(1-\rho^{2})}-\frac{1}{2}sa^{2}= \frac{sa^{2}(\rho^{2}s-1)}{2(1-\rho^{2})}\] (A.5) \[R :=\frac{s^{2}-s}{2(1-\rho^{2})},\] (A.6)
We apply Holder's inequality with \(q_{1}=\frac{q_{2}}{q_{2}-1}>1\) and \(q_{2}>1\) to get
\[\mathbb{E}\left[\exp\left(P^{(a)}\int_{0}^{T}\sqrt{v_{u}}dW_{u}+Q ^{(a)}\int_{0}^{T}v_{u}du+R\int_{0}^{T}\frac{(\mu_{u}-r)^{2}}{v_{u}}du\right)\right]\] \[\leqslant\mathbb{E}\left[\exp\left(q_{1}P^{(a)}\int_{0}^{T}\sqrt {v_{u}}dW_{u}+q_{1}Q^{(a)}\int_{0}^{T}v_{u}du\right)\right]^{\frac{1}{q_{1}}} \mathbb{E}\left[\exp\left(q_{2}R\int_{0}^{T}\frac{(\mu_{u}-r)^{2}}{v_{u}}du \right)\right]^{\frac{1}{q_{2}}}.\] (A.7)
Now, we focus on the first term in (A.7). We add and subtract the constant \(q_{1}^{2}(P^{(a)})^{2}\) and we apply the Cauchy-Schwarz inequality
\[\mathbb{E}\left[\exp\left(q_{1}P^{(a)}\int_{0}^{T}\sqrt{v_{u}}dW_ {u}+q_{1}Q^{(a)}\int_{0}^{T}v_{u}du\right)\right]\] \[=\mathbb{E}\left[\exp\left(q_{1}P^{(a)}\int_{0}^{T}\sqrt{v_{u}} dW_{u}-q_{1}^{2}(P^{(a)})^{2}\int_{0}^{T}v_{u}du\right)\exp\left((q_{1}Q^{(a)}+q_{1} ^{2}(P^{(a)})^{2})\int_{0}^{T}v_{u}du\right)\right]\] \[\leqslant\mathbb{E}\left[\exp\left(2q_{1}P^{(a)}\int_{0}^{T} \sqrt{v_{u}}dW_{u}-2q_{1}^{2}(P^{(a)})^{2}\int_{0}^{T}v_{u}du\right)\right]^{ \frac{1}{2}}\mathbb{E}\left[\exp\left(2(q_{1}Q^{(a)}+q_{1}^{2}(P^{(a)})^{2}) \int_{0}^{T}v_{u}du\right)\right]^{\frac{1}{2}}.\] (A.8)
Note that the first term in (A.8) is the expectation of a Doleans-Dade exponential. By (A.4), \(|a|<\frac{1}{q_{1}s}\sqrt{\frac{c_{l}}{2}}\) and we have that \(2q_{1}^{2}(P^{(a)})^{2}=2q_{1}^{2}a^{2}s^{2}<c_{l}\). Then, by Proposition 3.3, Novikov's condition is satisfied, that is
\[\mathbb{E}\left[\exp\left(2q_{1}^{2}(P^{(a)})^{2}\int_{0}^{T}v_{u}du\right) \right]<\infty.\]
Therefore,
\[\mathbb{E}\left[\exp\left(2q_{1}P^{(a)}\int_{0}^{T}\sqrt{v_{u}}dW_{u}-2q_{1}^{ 2}(P^{(a)})^{2}\int_{0}^{T}v_{u}du\right)\right]<\infty.\]
For the second term in (A.8) we need to check again that Proposition 3.3 is satisfied. One can check that
\[2(q_{1}Q^{(a)}+q_{1}^{2}(P^{(a)})^{2})=\frac{q_{1}sa^{2}}{1-\rho^{2}}\left[2q_ {1}s(1-\rho^{2})+\rho^{2}s-1\right].\]
By Observation 3.8, \(2q_{1}s(1-\rho^{2})+\rho^{2}s-1>0\) and by (A.4), \(|a|<\sqrt{\frac{(1-\rho^{2})c_{l}}{q_{1}s\left[2q_{1}s(1-\rho^{2})+\rho^{2}s-1\right]}}\). Thus, \(2(q_{1}Q^{(a)}+q_{1}^{2}(P^{(a)})^{2})<c_{l}\) and applying again Proposition 3.3 we obtain
\[\mathbb{E}\left[\exp\left(2(q_{1}Q^{(a)}+q_{1}^{2}(P^{(a)})^{2})\int_{0}^{T}v_{u}du\right)\right]<\infty.\]
We conclude that the two terms in (A.8) are finite and, therefore, the first expectation in (A.7) is finite as well, that is,
\[\mathbb{E}\left[\exp\left(q_{1}P^{(a)}\int_{0}^{T}\sqrt{v_{u}}dW_{u}+q_{1}Q^{ (a)}\int_{0}^{T}v_{u}du\right)\right]<\infty.\]
We check the second term in (A.7). Recall that we have defined \(D=\sup_{u\in[0,T]}(\mu_{u}-r)^{2}<\infty\). Then
\[\mathbb{E}\left[\exp\left(q_{2}R\int_{0}^{T}\frac{(\mu_{u}-r)^{2}}{v_{u}}du \right)\right]\leqslant\mathbb{E}\left[\exp\left(q_{2}RD\int_{0}^{T}\frac{1}{ v_{u}}du\right)\right].\]
Since \(2\kappa\bar{v}>\sigma^{2}\) and \(q_{2}\) is such that \(1<q_{2}<\frac{1-\rho^{2}}{D(s^{2}-s)}\left(\frac{2\kappa\bar{v}-\sigma^{2}}{2 \sigma}\right)^{2}\) and \(q_{2}RD=\frac{q_{2}(s^{2}-s)D}{2(1-\rho^{2})}\), we can apply Lemma A.5 to obtain
\[\mathbb{E}\left[\exp\left(q_{2}R\int_{0}^{T}\frac{(\mu_{u}-r)^{2}}{v_{u}}du \right)\right]<\infty.\]
This proves that the second term in (A.7) is also finite and we can conclude that \(\mathbb{E}\left[\left(X_{T}^{(a)}\right)^{s}\right]<\infty\).
By Theorem 3.4, \(X^{(a)}\) is a positive \((\mathcal{F},\mathbb{P})\)-martingale. We can apply Doob's martingale inequality, see [5, Section 2.1.2, Theorem 2.1.5], to get
\[\mathbb{E}\left[\left(X_{t}^{(a)}\right)^{s}\right]\leqslant\mathbb{E}\left[ \sup_{0\leqslant u\leqslant T}\left(X_{u}^{(a)}\right)^{s}\right]\leqslant \left(\frac{s}{s-1}\right)^{s}\mathbb{E}\left[\left(X_{T}^{(a)}\right)^{s} \right]<\infty\]
for any \(t\in[0,T]\). This finishes the proof of this lemma.
### Compensator of \(N\) under the historic and the risk neutral measures
**Lemma A.7**.: _The following holds_
1. _Define_ \(\Lambda_{t}^{N}:=\int_{0}^{t}\lambda_{u}du\)_, then_ \(N-\Lambda^{N}\) _is a square integrable_ \((\mathcal{F},\mathbb{P})\)_-martingale._
2. _Define_ \(\Lambda_{t}^{L}:=\mathbb{E}[J_{1}]\int_{0}^{t}\lambda_{u}du\)_, then_ \(L-\Lambda^{L}\) _is a square integrable_ \((\mathcal{F},\mathbb{P})\)_-martingale._
Proof.: (1) By [11, Theorem 3] the process \(N-\Lambda^{N}\) is a \((\mathcal{F}^{N},\mathbb{P})\)-martingale.
Let \(0\leqslant s\leqslant t\leqslant T\). Since \(\sigma(N_{t}-\Lambda_{t}^{N})\vee\mathcal{F}_{s}^{L}\) is independent of \(\mathcal{F}_{s}^{(B,W)}\), \(\mathcal{F}_{s}^{L}\subset\mathcal{F}_{s}^{N}\vee\sigma(\{J_{i}\}_{i\geqslant 1})\), \(\sigma(N_{t}-\Lambda_{t}^{N})\vee\mathcal{F}_{s}^{N}\) is independent of \(\sigma(\{J_{i}\}_{i\geqslant 1})\) and \(\mathcal{F}_{s}^{N}\subset\mathcal{F}_{s}^{L}\) we have
\[\mathbb{E}[N_{t}-\Lambda_{t}^{N}|\mathcal{F}_{s}] =\mathbb{E}[N_{t}-\Lambda_{t}^{N}|\mathcal{F}_{s}^{L}]\] \[=\mathbb{E}[\mathbb{E}[N_{t}-\Lambda_{t}^{N}|\mathcal{F}_{s}^{N}\vee\sigma(\{J_{i}\}_{i\geqslant 1})]|\mathcal{F}_{s}^{L}]\] \[=\mathbb{E}[\mathbb{E}[N_{t}-\Lambda_{t}^{N}|\mathcal{F}_{s}^{N}]|\mathcal{F}_{s}^{L}]\] \[=\mathbb{E}[N_{t}-\Lambda_{t}^{N}|\mathcal{F}_{s}^{N}]\] \[=N_{s}-\Lambda_{s}^{N}.\]
This proves that \(N-\Lambda^{N}\) is a \((\mathcal{F},\mathbb{P})\)-martingale. Moreover, by [23, Theorem 1] and [23, Section 3.1] we have that \(\mathbb{E}[N_{t}^{2}]<\infty\), \(\mathbb{E}[\lambda_{t}^{2}]<\infty\) and \(t\to\mathbb{E}[\lambda_{t}^{2}]\) is a continuous function for \(t\in[0,T]\). Then, by applying Jensen's inequality we see
\[\mathbb{E}\left[\left(N_{t}-\Lambda_{t}^{N}\right)^{2}\right] \leqslant 2\mathbb{E}[N_{T}^{2}]+2\mathbb{E}\left[\left(\int_{0}^{T} \lambda_{u}du\right)^{2}\right]\] \[=2\mathbb{E}[N_{T}^{2}]+2\mathbb{E}\left[T^{2}\left(\frac{1}{T} \int_{0}^{T}\lambda_{u}du\right)^{2}\right]\] \[\leqslant 2\mathbb{E}[N_{T}^{2}]+2T\mathbb{E}\left[\int_{0}^{T} \lambda_{u}^{2}du\right]\] \[=2\mathbb{E}[N_{T}^{2}]+2T\int_{0}^{T}\mathbb{E}[\lambda_{u}^{2}] du<\infty,\]
for all \(t\in[0,T]\). This proves that \(N-\Lambda^{N}\) is a square integrable \((\mathcal{F},\mathbb{P})\)-martingale. Actually, one can prove that all moments exist, but for our purposes the second moment suffices.
(2) Let \(0\leqslant s\leqslant t\leqslant T\). Since \(J_{N_{s}+1},...,J_{N_{t}}\) are independent of \(\mathcal{F}_{s}\vee\mathcal{F}_{t}^{N}\) we have
\[\mathbb{E}\left[L_{t}|\mathcal{F}_{s}\right] =\mathbb{E}\left[L_{s}+(L_{t}-L_{s})|\mathcal{F}_{s}\right]\] \[=L_{s}+\mathbb{E}\left[\sum_{i=N_{s}+1}^{N_{t}}J_{i}\Big{|}\mathcal{F}_{s}\right]\] \[=L_{s}+\mathbb{E}\left[\mathbb{E}\left[\sum_{i=N_{s}+1}^{N_{t}}J_{i}\Big{|}\mathcal{F}_{s}\vee\mathcal{F}_{t}^{N}\right]\Big{|}\mathcal{F}_{s}\right]\] \[=L_{s}+\mathbb{E}\left[\sum_{i=N_{s}+1}^{N_{t}}\mathbb{E}\left[J_{i}\right]\Big{|}\mathcal{F}_{s}\right]=L_{s}+\mathbb{E}\left[J_{1}\right]\mathbb{E}\left[N_{t}-N_{s}|\mathcal{F}_{s}\right].\]
Thus,
\[\mathbb{E}[L_{t}-\Lambda_{t}^{L}|\mathcal{F}_{s}]=L_{s}+\mathbb{E}[J_{1}] \mathbb{E}[N_{t}-\Lambda_{t}^{N}-N_{s}|\mathcal{F}_{s}]\]
\[=L_{s}-\mathbb{E}[J_{1}]\Lambda_{s}^{N}\] \[=L_{s}-\Lambda_{s}^{L}.\]
This proves that \(L-\Lambda^{L}\) is a \((\mathcal{F},\mathbb{P})\)-martingale. Moreover, by Lemma A.1 and the same argument as in the previous part one can prove that \(L-\Lambda^{L}\) is a square integrable \((\mathcal{F},\mathbb{P})\)-martingale. Actually, one can prove that all moments exist, but for our purposes the second moment suffices.
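The compensator identities above are easy to illustrate numerically. The following Python sketch is not part of the proof; it assumes an exponential excitation kernel with illustrative parameter values (MU0, ALPHA, BETA, T below), simulates the Hawkes process by Ogata's thinning algorithm, and checks that the sample average of \(N_{T}-\Lambda_{T}^{N}\) is close to zero, as expected for a martingale started at zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters of an exponential Hawkes kernel;
# stability requires ALPHA / BETA < 1.
MU0, ALPHA, BETA, T = 1.0, 0.5, 1.2, 10.0

def simulate_hawkes():
    """Event times of a Hawkes process on [0, T] via Ogata's thinning."""
    t, events = 0.0, []
    while True:
        # Between events the intensity decays, so its current value is an upper bound.
        lam_bar = MU0 + sum(ALPHA * np.exp(-BETA * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            return np.array(events)
        lam_t = MU0 + sum(ALPHA * np.exp(-BETA * (t - s)) for s in events)
        if rng.uniform() <= lam_t / lam_bar:   # accept with probability lambda(t)/lam_bar
            events.append(t)

def compensator(events, t):
    """Closed form of Lambda_t = int_0^t lambda_u du for the exponential kernel."""
    past = events[events < t]
    return MU0 * t + (ALPHA / BETA) * np.sum(1.0 - np.exp(-BETA * (t - past)))

diffs = []
for _ in range(2000):
    ev = simulate_hawkes()
    diffs.append(len(ev) - compensator(ev, T))

print("mean of N_T - Lambda_T:", np.mean(diffs),
      "+/-", np.std(diffs) / np.sqrt(len(diffs)))
```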
**Lemma A.8**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\), the following holds_
1. _The process_ \(t\to\int_{0}^{t}X_{u}^{(a)}d(L-\Lambda^{L})_{u}\) _is a square integrable_ \((\mathcal{F},\mathbb{P})\)_-martingale._
2. _The process_ \(t\to\int_{0}^{t}L_{u-}dX_{u}^{(a)}\) _is a square integrable_ \((\mathcal{F},\mathbb{P})\)_-martingale._
3. _Let_ \(0\leqslant s\leqslant t\leqslant T\)_, then_ \(\mathbb{E}[L_{t}X_{t}^{(a)}|\mathcal{F}_{s}]=L_{s}X_{s}^{(a)}+\mathbb{E}[J_{1 }]\int_{s}^{t}\mathbb{E}[\lambda_{u}X_{u}^{(a)}|\mathcal{F}_{s}]du\)_._
Proof.: (1) Recall that \(X^{(a)}\) is defined in Theorem 3.4. By Lemma A.7, \(L-\Lambda^{L}\) is a square integrable \((\mathcal{F},\mathbb{P})\)-martingale. To prove that \(t\to\int_{0}^{t}X_{u}^{(a)}d(L-\Lambda^{L})_{u}\) is a square integrable \((\mathcal{F},\mathbb{P})\)-martingale we check that
\[\mathbb{E}\left[\int_{0}^{T}\left(X_{u}^{(a)}\right)^{2}d[L- \Lambda^{L}]_{u}\right]<\infty.\]
The quadratic variation of the compound Hawkes process is given by \([L]_{t}=\sum_{i=1}^{N_{t}}J_{i}^{2}\) and \([L-\Lambda^{L}]_{t}=[L]_{t}\). Using Hölder's inequality with \(p=1+\frac{\varepsilon_{1}}{2}>1\) and \(q=\frac{p}{p-1}>1\) (recall Assumption 2 and Observation 3.10) and Doob's martingale inequality, see [5, Section 2.1.2, Theorem 2.1.5], in the last step we have
\[\mathbb{E}\left[\int_{0}^{T}\left(X_{u}^{(a)}\right)^{2}d[L- \Lambda^{L}]_{u}\right] =\mathbb{E}\left[\int_{0}^{T}\left(X_{u}^{(a)}\right)^{2}d[L]_{u}\right]\] \[\leqslant\mathbb{E}\left[\sup_{t\in[0,T]}\left(X_{t}^{(a)}\right) ^{2}[L]_{T}\right]\] \[\leqslant\mathbb{E}\left[\sup_{t\in[0,T]}\left(X_{t}^{(a)}\right) ^{2p}\right]^{1/p}\mathbb{E}\left[[L]_{T}^{q}\right]^{1/q}\] \[\leqslant\mathbb{E}\left[\sup_{t\in[0,T]}\left(X_{t}^{(a)}\right) ^{2+\varepsilon_{1}}\right]^{1/p}\mathbb{E}\left[[L]_{T}^{q}\right]^{1/q}\] \[\leqslant q\mathbb{E}\left[\left(X_{T}^{(a)}\right)^{2+ \varepsilon_{1}}\right]^{1/p}\mathbb{E}\left[[L]_{T}^{q}\right]^{1/q}<\infty,\]
where \(\mathbb{E}\left[\left(X_{T}^{(a)}\right)^{2+\varepsilon_{1}}\right]<\infty\) by Observation 3.10 and \(\mathbb{E}\left[[L]_{T}^{q}\right]<\infty\) by Lemma A.1. We conclude that the process \(t\to\int_{0}^{t}X_{u}^{(a)}d(L-\Lambda^{L})_{u}\) is a square integrable \((\mathcal{F},\mathbb{P})\)-martingale.
(2) By Theorem 3.4 and Observation 3.10, \(X^{(a)}\) is a square integrable martingale. To prove that \(t\to\int_{0}^{t}L_{u-}dX_{u}^{(a)}\) is a square integrable \((\mathcal{F},\mathbb{P})\)-martingale we check that
\[\mathbb{E}\left[\int_{0}^{T}L_{u-}^{2}d[X^{(a)}]_{u}\right]<\infty.\]
The quadratic variation of \(X^{(a)}\) is given by \(d[X^{(a)}]_{t}=\left[\left(\theta_{t}^{(a)}\right)^{2}+a^{2}v_{t}\right]\left( X_{t}^{(a)}\right)^{2}dt\), where \(\theta^{(a)}\) is defined in Theorem 3.4. Then,
\[\mathbb{E}\left[\int_{0}^{T}L_{u-}^{2}d[X^{(a)}]_{u}\right]= \mathbb{E}\left[\int_{0}^{T}L_{u-}^{2}\left[\left(\theta_{u}^{(a)}\right)^{2} +a^{2}v_{u}\right]\left(X_{u}^{(a)}\right)^{2}du\right]\]
\[=\int_{0}^{T}\mathbb{E}\left[L_{u-}^{2}\left(\theta_{u}^{(a)}\right)^{2 }\left(X_{u}^{(a)}\right)^{2}\right]du+a^{2}\int_{0}^{T}\mathbb{E}\left[L_{u-}^{ 2}v_{u}\left(X_{u}^{(a)}\right)^{2}\right]du.\] (A.9)
We focus on the first term in (A.9), applying Hölder's inequality with \(p_{1}=\frac{p_{2}p_{3}}{p_{2}p_{3}-p_{2}-p_{3}}>1\), \(p_{2}=1+\varepsilon_{2}>1\), \(p_{3}=1+\frac{\varepsilon_{1}}{2}>1\), and then Doob's martingale inequality, see [5, Section 2.1.2, Theorem 2.1.5], to the last expectation we get
\[\mathbb{E}\left[L_{u-}^{2}\left(\theta_{u}^{(a)}\right)^{2}\left( X_{u}^{(a)}\right)^{2}\right]\leqslant \mathbb{E}[L_{u}^{2p_{1}}]^{\frac{1}{p_{1}}}\mathbb{E}\left[\left( \theta_{u}^{(a)}\right)^{2p_{2}}\right]^{\frac{1}{p_{2}}}\mathbb{E}\left[ \left(X_{u}^{(a)}\right)^{2p_{3}}\right]^{\frac{1}{p_{3}}}\] \[= \mathbb{E}[L_{u}^{2p_{1}}]^{\frac{1}{p_{1}}}\mathbb{E}\left[ \left(\theta_{u}^{(a)}\right)^{2+2\varepsilon_{2}}\right]^{\frac{1}{p_{2}}} \mathbb{E}\left[\left(X_{u}^{(a)}\right)^{2+\varepsilon_{1}}\right]^{\frac{1} {p_{3}}}\] \[\leqslant \left(\frac{2+\varepsilon_{1}}{1+\varepsilon_{1}}\right)^{2} \mathbb{E}[L_{T}^{2p_{1}}]^{\frac{1}{p_{1}}}\mathbb{E}\left[\left(\theta_{u} ^{(a)}\right)^{2+2\varepsilon_{2}}\right]^{\frac{1}{p_{2}}}\mathbb{E}\left[ \left(X_{T}^{(a)}\right)^{2+\varepsilon_{1}}\right]^{\frac{1}{p_{3}}}.\] (A.10)
Note that \(\mathbb{E}[L_{T}^{2p_{1}}]<\infty\) by Lemma A.1 and \(\mathbb{E}\left[\left(X_{T}^{(a)}\right)^{2+\varepsilon_{1}}\right]<\infty\) by Observation 3.10. Applying Hölder's inequality for sums we get
\[\mathbb{E}\left[\left(\theta_{u}^{(a)}\right)^{2+2\varepsilon_{2}}\right] \leqslant\frac{2^{1+2\varepsilon_{2}}}{(1-\rho^{2})^{1+\varepsilon_{2}}} \left[D^{1+\varepsilon_{2}}\mathbb{E}\left[\left(\frac{1}{v_{u}}\right)^{1+ \varepsilon_{2}}\right]+(a\rho)^{2+2\varepsilon_{2}}\mathbb{E}\left[v_{u}^{1+ \varepsilon_{2}}\right]\right]\]
where \(D=\sup_{t\in[0,T]}(\mu_{t}-r)^{2}\) is defined in Lemma 3.7. By Observation 3.10 and Lemma 3.5 we have that
\[\int_{0}^{T}\mathbb{E}\left[\left(\theta_{u}^{(a)}\right)^{2+2 \varepsilon_{2}}\right]<\infty.\]
Applying Hölder's inequality with \(p_{2}>1\) and \(q_{2}=\frac{p_{2}}{p_{2}-1}>1\) we obtain
\[\int_{0}^{T}\mathbb{E}\left[\left(\theta_{u}^{(a)}\right)^{2+2 \varepsilon_{2}}\right]^{\frac{1}{p_{2}}}du\leqslant T^{\frac{p_{2}-1}{p_{2}} }\left(\int_{0}^{T}\mathbb{E}\left[\left(\theta_{u}^{(a)}\right)^{2+2 \varepsilon_{2}}\right]du\right)^{\frac{1}{p_{2}}}<\infty.\]
By (A.10) this implies that
\[\int_{0}^{T}\mathbb{E}\left[L_{u-}^{2}\left(\theta_{u}^{(a)}\right)^{2}\left( X_{u}^{(a)}\right)^{2}\right]du<\infty.\]
Similarly, using Lemma A.1, Lemma A.2 and Observation 3.10 we can show that the second term in (A.9) is finite. We conclude that the process \(t\rightarrow\int_{0}^{t}L_{u-}dX_{u}^{(a)}\) is a square integrable \((\mathcal{F},\mathbb{P})\)-martingale.
(3) Applying Itô's formula, using that \(L_{0}=0\), that \(L\) is of finite variation and that \(X^{(a)}\) is continuous, we have
\[L_{t}X_{t}^{(a)}=\int_{0}^{t}L_{u-}dX_{u}^{(a)}+\int_{0}^{t}X_{u}^{(a)}dL_{u}.\]
Using that the processes \(t\rightarrow\int_{0}^{t}L_{u-}dX_{u}^{(a)}\) and \(t\rightarrow\int_{0}^{t}X_{u}^{(a)}d(L-\Lambda^{L})_{u}\) are square integrable \((\mathcal{F},\mathbb{P})\)-martingales and the expression of \(\Lambda^{L}\) given in Lemma A.7 we obtain
\[\mathbb{E}[L_{t}X_{t}^{(a)}|\mathcal{F}_{s}] =\mathbb{E}\left[\int_{0}^{t}L_{u-}dX_{u}^{(a)}\Big{|}\mathcal{F} _{s}\right]+\mathbb{E}\left[\int_{0}^{t}X_{u}^{(a)}dL_{u}\Big{|}\mathcal{F}_{s}\right]\] \[=\int_{0}^{s}L_{u-}dX_{u}^{(a)}+\int_{0}^{s}X_{u}^{(a)}dL_{u}+ \mathbb{E}\left[\int_{s}^{t}X_{u}^{(a)}dL_{u}\Big{|}\mathcal{F}_{s}\right]\]
\[=L_{s}X_{s}^{(a)}+\mathbb{E}\left[\int_{s}^{t}X_{u}^{(a)}d(L-\Lambda^{L })_{u}\Big{|}\mathcal{F}_{s}\right]+\mathbb{E}\left[\int_{s}^{t}X_{u}^{(a)}d \Lambda_{u}^{L}\Big{|}\mathcal{F}_{s}\right]\] \[=L_{s}X_{s}^{(a)}+\mathbb{E}[J_{1}]\int_{s}^{t}\mathbb{E}[\lambda _{u}X_{u}^{(a)}|\mathcal{F}_{s}]du.\]
**Proposition A.9**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\), then_
1. \(N-\Lambda^{N}\) _is a_ \((\mathcal{F},\mathbb{Q}(a))\)_-martingale._
2. \(L-\Lambda^{L}\) _is a_ \((\mathcal{F},\mathbb{Q}(a))\)_-martingale._
Proof.: First of all, note that for \(0\leqslant s\leqslant t\leqslant T\),
\[\mathbb{E}\left[\int_{0}^{t}\lambda_{u}X_{t}^{(a)}du\Big{|}\mathcal{F}_{s}\right] =\int_{0}^{s}\mathbb{E}[\lambda_{u}X_{t}^{(a)}|\mathcal{F}_{s}]du+\int_{s}^{t}\mathbb{E}[\lambda_{u}X_{t}^{(a)}|\mathcal{F}_{s}]du\] \[=X_{s}^{(a)}\int_{0}^{s}\lambda_{u}du+\int_{s}^{t}\mathbb{E}[\mathbb{E}[\lambda_{u}X_{t}^{(a)}|\mathcal{F}_{u}]|\mathcal{F}_{s}]du\] \[=X_{s}^{(a)}\int_{0}^{s}\lambda_{u}du+\int_{s}^{t}\mathbb{E}[\lambda_{u}X_{u}^{(a)}|\mathcal{F}_{s}]du.\]
By Lemma A.8 we know that \(\mathbb{E}[L_{t}X_{t}^{(a)}|\mathcal{F}_{s}]=L_{s}X_{s}^{(a)}+\mathbb{E}[J_{1} ]\int_{s}^{t}\mathbb{E}[\lambda_{u}X_{u}^{(a)}|\mathcal{F}_{s}]du\). Using the previous equalities we obtain,
\[\mathbb{E}^{\mathbb{Q}(a)}\left[L_{t}-\mathbb{E}[J_{1}]\int_{0}^{ t}\lambda_{u}du\Big{|}\mathcal{F}_{s}\right] =\frac{1}{X_{s}^{(a)}}\mathbb{E}\left[L_{t}X_{t}^{(a)}-\mathbb{E} [J_{1}]\int_{0}^{t}\lambda_{u}X_{t}^{(a)}du\Big{|}\mathcal{F}_{s}\right]\] \[=\frac{1}{X_{s}^{(a)}}\Bigg{[}L_{s}X_{s}^{(a)}+\mathbb{E}[J_{1}] \int_{s}^{t}\mathbb{E}[\lambda_{u}X_{u}^{(a)}|\mathcal{F}_{s}]du\] \[\quad\quad-\mathbb{E}[J_{1}]X_{s}^{(a)}\int_{0}^{s}\lambda_{u}du- \mathbb{E}[J_{1}]\int_{s}^{t}\mathbb{E}[\lambda_{u}X_{u}^{(a)}|\mathcal{F}_{s}] du\Bigg{]}\] \[=L_{s}-\mathbb{E}[J_{1}]\int_{0}^{s}\lambda_{u}du.\]
This finishes the proof.
### Derivation of Thiele's PIDE for unit-linked policies
**Lemma A.10**.: _Let \(\mathbb{Q}(a)\in\mathcal{E}_{m}(Q_{1},2+\varepsilon_{1})\), \(\varphi\colon[0,T]\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that \(\mathbb{E}^{\mathbb{Q}(a)}[|\varphi(s,S_{s})|]<\infty\) for all \(s\in[0,T]\). Then, there exists a function \(U^{\varphi,a}\colon[0,T]^{2}\times\mathcal{D}\to\mathbb{R}_{+}\) such that_
\[e^{-r(s-t)}\mathbb{E}^{\mathbb{Q}(a)}[\varphi(s,S_{s})|\mathcal{F}_{t}]=U^{ \varphi,a}_{s}(t,S_{t},v_{t},\lambda_{t})\] (A.11)
_where \(s,t\in[0,T]\). Note that \(U^{\varphi,a}_{s}(t,x,y,z)=e^{-r(s-t)}\varphi(s,x)\) for \(t\in[s,T]\). Furthermore, fix \(s\in[0,T]\), if \(U^{\varphi,a}_{s}\in\mathcal{C}^{1,2}\), it satisfies the following PIDE_
\[\partial_{t}U^{\varphi,a}_{s}(t,x,y,z)+\mathcal{L}^{a}U^{\varphi,a}_{s}(t,x,y, z)=rU^{\varphi,a}_{s}(t,x,y,z),\] (A.12)
_where \(\mathcal{L}^{a}\) is defined in Definition 4.3, \((t,x,y,z)\in[0,s]\times\mathcal{D}\) and final condition \(U^{\varphi,a}_{s}(s,x,y,z)=\varphi(s,x)\)._
Proof.: By Lemma 4.2 there exists a function \(Z^{\varphi,a}\colon[0,T]^{2}\times\mathcal{D}\to\mathbb{R}_{+}\) such that
\[\mathbb{E}^{\mathbb{Q}(a)}[\varphi(s,S_{s})|\mathcal{F}_{t}]=Z^{\varphi,a}_{s }(t,S_{t},v_{t},\lambda_{t}),\]
where \(s,t\in[0,T]\). Define the function \(U^{\varphi,a}\colon[0,T]^{2}\times\mathcal{D}\to\mathbb{R}_{+}\) by \(U^{\varphi,a}_{s}(t,x,y,z):=e^{-r(s-t)}Z^{\varphi,a}_{s}(t,x,y,z)\). Then, (A.11) is satisfied.
Moreover, fix \(s\in[0,T]\), if \(U^{\varphi,a}_{s}\in\mathcal{C}^{1,2}\), \(Z^{\varphi,a}_{s}\in\mathcal{C}^{1,2}\) and it satisfies the following PIDE
\[\partial_{t}Z^{\varphi,a}_{s}(t,x,y,z)+\mathcal{L}^{a}Z^{\varphi,a}_{s}(t,x,y,z)=0.\] (A.13)
for \((t,x,y,z)\in[0,T]\times\mathcal{D}\) and final condition \(Z^{\varphi,a}_{s}(s,x,y,z)=\varphi(s,x)\). Note that
\[\partial_{t}Z^{\varphi,a}_{s}(t,x,y,z)=e^{r(s-t)}\left(-rU^{ \varphi,a}_{s}(t,x,y,z)+\partial_{t}U^{\varphi,a}_{s}(t,x,y,z)\right).\]
and
\[\mathcal{L}^{a}Z^{\varphi,a}_{s}(t,x,y,z)=e^{r(s-t)}\mathcal{L}^ {a}U^{\varphi,a}_{s}(t,x,y,z).\]
Replacing that in (A.13) we get that \(U^{\varphi,a}_{s}\) satisfies the following PIDE
\[\partial_{t}U^{\varphi,a}_{s}(t,x,y,z)+\mathcal{L}^{a}U^{\varphi,a}_{s}(t,x,y,z)=rU^{\varphi,a}_{s}(t,x,y,z).\]
for \((t,x,y,z)\in[0,s]\times\mathcal{D}\) and final condition \(U^{\varphi,a}_{s}(s,x,y,z)=\varphi(s,x)\). |
2309.13034 | Tuples of homological invariants of edge ideals | Let $G$ be a graph and $I(G)$ its edge ideal. In this paper, we completely
determine the tuples $(\dim R/I(G), \depth (R/I(G)), \reg (R/I(G)))$ when the
number of vertices is fixed for any graph $G$. | Akane Kanno | 2023-09-22T17:48:49Z | http://arxiv.org/abs/2309.13034v1 | # Tuples of homological invariants of edge ideals
###### Abstract.
Let \(G\) be a graph and \(I(G)\) its edge ideal. In this paper, we completely determine the tuples \((\dim R/I(G),\operatorname{depth}(R/I(G)),\operatorname{reg}(R/I(G)))\) that arise from graphs \(G\) with a fixed number of vertices.
## 1. introduction
In this paper, graphs are always assumed to be finite, simple, undirected and connected unless otherwise noted. Let \(R=K[x_{1},\ldots,x_{n}]\) denote a standard graded polynomial ring over a field \(K\) and \(G\) a graph on vertex set \([n]=\{1,\ldots,n\}\) with edge set \(E(G)\). The edge ideal of \(G\) is the ideal of \(R\) generated by monomials \(x_{i}x_{j}\) where \(\{i,j\}\in E(G)\). Denote the edge ideal as \(I(G)\), and denote dimension, depth, regularity and \(h\)-polynomial of \(R/I(G)\) as \(\dim G,\operatorname{depth}G,\operatorname{reg}G\) and \(h_{G}\), respectively.
In recent years, a major trend in the study of edge ideals has been the investigation of not only the relationship between invariants of graphs (e.g., matching numbers, induced matching numbers) and invariants of edge ideals, but also the relationships among the invariants of edge ideals themselves. The following inequality is well known as one of the most basic such relationships ([21, Corollary B.4.1]).
\[\deg h_{G}-\operatorname{reg}G\leq\dim G-\operatorname{depth}G\]
In the above inequality, equality holds if and only if the last Betti number \(\beta_{p,p+r}(R/I(G))\) is nonvanishing, where \(p=\operatorname{proj}\dim(R/I(G))\) and \(r=\operatorname{reg}G\). In particular, if \(R/I(G)\) is Cohen-Macaulay, then both sides are zero and equality holds. Based on this formula, studies have been carried out to determine the range that the above invariants can take when the number of vertices of the graph is fixed, i.e., to describe the sets defined as follows.
**Definition 1.1**.: Given a positive integer \(n\), we define the following three sets:
\[\operatorname{Graph}_{\dim,\operatorname{depth}}(n) =\left\{(d,p)\in\mathbb{N}^{2}\ \middle|\ \text{there is a graph }G\text{ with }n\text{ vertices such that }\dim G=d,\ \operatorname{depth}G=p\right\},\] \[\operatorname{Graph}_{\dim,\operatorname{depth},\operatorname{reg}}(n) =\left\{(d,p,r)\in\mathbb{N}^{3}\ \middle|\ \text{there is a graph }G\text{ with }n\text{ vertices such that }\dim G=d,\ \operatorname{depth}G=p,\ \operatorname{reg}G=r\right\},\] \[\operatorname{Graph}_{\dim,\operatorname{depth},\operatorname{reg},\operatorname{deg}}(n) =\left\{(d,p,r,g)\in\mathbb{N}^{4}\ \middle|\ \text{there is a graph }G\text{ with }n\text{ vertices such that }\dim G=d,\ \operatorname{depth}G=p,\ \operatorname{reg}G=r,\ \deg h_{G}=g\right\}.\]
In [11], the following two partial results on this issue are obtained.
**Theorem 1.2**.: _[_11_, Theorem 2.8]_
_Define \(C^{*}(n)\) as follows:_
\[C^{*}(n)=\{(d,p)\in\mathbb{N}^{2}\ |\ 1\leq p\leq d\leq n-1,\ d\leq(n-d)(d-p+1)\}.\]
_Then \(C^{*}(n)\subset\operatorname{Graph}_{\dim,\operatorname{depth}}(n)\) for any \(n\geq 2\)._
**Theorem 1.3**.: _[_11_, Theorem 4.4]_
_Let \(n\geq 5\) be an integer. Then_
\[\operatorname{Graph}_{\operatorname{depth},\operatorname{reg}, \dim,\operatorname{deg}}^{CW}(n) =\operatorname{CW}_{2,\operatorname{reg},\dim,\operatorname{deg} }(n)\] \[\cup\left\{(a,d,d,d)\in\mathbb{N}^{4}\ \middle|\ 3\leq a\leq d\leq \left\lfloor\frac{n-1}{2}\right\rfloor,\ n<a+2d\right\}\] \[\cup\left\{(a,a,d,d)\in\mathbb{N}^{4}\ \middle|\ 3\leq a<d\leq n-a,\ n\leq 2a+d-1\right\}\] \[\cup\left\{(a,r,d,d)\in\mathbb{N}^{4}\ \middle|\ \begin{array}{c}3\leq a<r<d<n-r \text{,}\\ n+2\leq a+r+d\end{array}\ \right\},\]
_where_
\[\operatorname{CW}_{2,\operatorname{reg},\dim,\operatorname{deg} }(n)\] \[=\left\{\begin{aligned} &\{(2,2,n-2,n-2),(2,2,n-3,n-3)\},& \text{if $n$ is even},\\ &\left\{(2,2,n-2,n-2),(2,2,n-3,n-3),\left(2,\frac{n-1}{2},\frac{n-1}{2}, \frac{n-1}{2}\right)\right\},&\text{if $n$ is odd},\end{aligned}\right.\]
_and_
\[\operatorname{Graph}_{\operatorname{depth},\operatorname{reg},\dim, \operatorname{deg}}^{CW}(n)\]
_is the restriction of \(\operatorname{Graph}_{\dim,\operatorname{depth},\operatorname{reg},\operatorname{deg}}(n)\) to Cameron-Walker graphs._
Furthermore, the following results were obtained in [13].
**Theorem 1.4**.: _[_13_, Corollary 1.4]_
_The equality \(C^{*}(n)=\operatorname{Graph}_{\dim,\operatorname{depth}}(n)\) holds if \(n\leq 12\)._
**Theorem 1.5**.: _[_13_, Theorem 1.5]_
_Let \(n\geq 2\). Then we have \(C^{*}(n)=\operatorname{Graph}_{\dim,\operatorname{depth}}^{chordal}(n)\), where \(\operatorname{Graph}_{\dim,\operatorname{depth}}^{chordal}(n)\) is the restriction of \(\operatorname{Graph}_{\dim,\operatorname{depth}}(n)\) to chordal graphs._
Generalizing these results, \(\operatorname{Graph}_{\dim,\operatorname{depth},\operatorname{reg}}(n)\) and \(\operatorname{Graph}_{\dim,\operatorname{depth}}(n)\) are completely determined in this paper.
**Theorem 1.6**.: _For all \(n\geq 3\), the following holds:_
\[\operatorname{Graph}_{\dim,\operatorname{depth},\operatorname{reg}}(n)=C^{ **}(n),\]
_where_
\[C^{**}(n)= \left\{(n-1,1,1)\right\}\cup\] \[\left\{(d,p,r)\in\mathbb{N}^{3}\ \middle|\ \begin{array}{c}1\leq p \leq d\leq n-2,2\leq r+d\leq n-1\text{,}\\ 1\leq r\leq d\leq(n-d-(r-1))(d-p+1)+(r-1)\end{array}\right\}\]
**Corollary 1.7**.: _For all \(n\geq 3\), the following holds:_
\[\operatorname{Graph}_{\dim,\operatorname{depth}}(n)=C^{*}(n).\]
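For small \(n\), both \(C^{*}(n)\) and \(C^{**}(n)\) are finite sets that can be enumerated directly from their defining inequalities. The following Python sketch is purely illustrative (the function names are ours); it lists both sets and checks, for one value of \(n\), that the projection of \(C^{**}(n)\) onto the first two coordinates recovers \(C^{*}(n)\), in line with Corollary 1.7.

```python
def C_star(n):
    return {(d, p) for d in range(1, n) for p in range(1, d + 1)
            if d <= (n - d) * (d - p + 1)}

def C_star_star(n):
    out = {(n - 1, 1, 1)}
    for d in range(1, n - 1):                      # 1 <= d <= n - 2
        for p in range(1, d + 1):                  # 1 <= p <= d
            for r in range(1, d + 1):              # 1 <= r <= d
                if (2 <= r + d <= n - 1
                        and d <= (n - d - (r - 1)) * (d - p + 1) + (r - 1)):
                    out.add((d, p, r))
    return out

n = 8
print(sorted(C_star(n)))
print(sorted(C_star_star(n)))
# Projecting C**(n) onto (dim, depth) should recover C*(n), cf. Corollary 1.7.
print({(d, p) for (d, p, r) in C_star_star(n)} == C_star(n))
```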
## 2. preliminaries
First we prepare material from graph theory and discuss its properties. Moreover, we introduce the relationship between these notions and ring-theoretical invariants of edge ideals.
**Definition 2.1**.: Let \(G=(V(G),E(G))\) be a graph and \(v\) a vertex. We call \(S\subset V(G)\) an _independent set_ of \(G\) if \(\{v,v^{\prime}\}\not\in E(G)\) for any \(v,v^{\prime}\in S\). Moreover we define \(\operatorname{m}(G)\), \(\operatorname{im}(G)\), \(N_{G}(v)\), \(N_{G}[v]\), \(\operatorname{d}(G)\) and \(\operatorname{p}(G)\) as follows:
\[\operatorname{m}(G) =\max\{|M|\ |\ M\subset E(G),e\cap e^{\prime}=\emptyset\text{ for any }e,e^{\prime}\in M\},\] \[\operatorname{im}(G) =\max\left\{|M|\ \left|\begin{array}{l}M\subset E(G), \text{there is no }e^{\prime\prime}\in E(G)\text{ with }\\ e\cap e^{\prime\prime}\neq\emptyset,e^{\prime}\cap e^{\prime\prime}\neq \emptyset\text{ for any }e,e^{\prime}\in M\end{array}\right.\right\},\] \[N_{G}[v] =N_{G}(v)\cup\{v\},\] \[\operatorname{d}(G) =\max\{|S|\ |\ S\text{ is a maximal independent set of }G\},\] \[\operatorname{p}(G) =\min\{|S|\ |\ S\text{ is a maximal independent set of }G\}.\]
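For small graphs, all of these invariants can be computed by brute force directly from the definitions. The following Python sketch is illustrative only (exponential-time, with helper names of our choosing) and uses the 5-cycle as a test case; by Lemma 2.2 below, the value \(\operatorname{d}(G)\) it returns equals \(\dim G\).

```python
from itertools import combinations

def independent(S, E):
    return not any(u in S and v in S for (u, v) in E)

def maximal_independent_sets(V, E):
    for k in range(1, len(V) + 1):
        for C in combinations(V, k):
            S = set(C)
            if independent(S, E) and all(not independent(S | {v}, E)
                                         for v in V if v not in S):
                yield S

def d_and_p(V, E):
    sizes = [len(S) for S in maximal_independent_sets(V, E)]
    return max(sizes), min(sizes)                         # (d(G), p(G))

def matching_number(E):
    def disjoint(M):                                      # pairwise disjoint edges
        vs = [v for e in M for v in e]
        return len(vs) == len(set(vs))
    return max(k for k in range(len(E) + 1)
               for M in combinations(E, k) if disjoint(M))

def induced_matching_number(E):
    def induced(M):                                       # no edge meets two edges of M
        return all(not (set(e1) & set(f) and set(e2) & set(f))
                   for e1, e2 in combinations(M, 2) for f in E)
    return max(k for k in range(len(E) + 1)
               for M in combinations(E, k) if induced(M))

# Example: the 5-cycle C_5.
V = [1, 2, 3, 4, 5]
E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(d_and_p(V, E))                 # (2, 2): all maximal independent sets have size 2
print(matching_number(E))            # 2
print(induced_matching_number(E))    # 1
```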
**Lemma 2.2**.: _For any graph \(G\) and its vertex \(v\), the followings hold._
1. \(\dim G=\operatorname{d}(G)\)_._
2. \(\operatorname{depth}G\leq\operatorname{p}(G)\)_._
3. \(\operatorname{p}(G)\leq\operatorname{p}(G-v)+1\)_._
4. \(\operatorname{p}(G)\leq\operatorname{p}(G-N[v])+1\)_._
Proof.: (1), (2) Those are well known facts.
(3) Suppose \(p=\operatorname{p}(G)>\operatorname{p}(G-v)\). Then there is a maximal independent set \(S\) of \(G\) such that \(|S|=p\), \(v\in S\). Thus \(\operatorname{p}(G-N[v])=p-1\) because \(S\setminus\{v\}\) is a maximal independent set of \(G-N[v]\) with minimum cardinality. Therefore \(p-1\geq\operatorname{p}(G-v)\). If \(p-1>\operatorname{p}(G-v)\) then there exists a maximal independent set \(S^{\prime}\subset V(G-v)\) such that \(S^{\prime}\cap N[v]\neq\emptyset\) because \(S^{\prime}\) is not a subset of \(V(G-N[v])\). Thus \(\operatorname{p}(G-v)=p-1=\operatorname{p}(G)-1\).
(4) Let \(S\) be a maximal independent set of \(G-N[v]\) with \(|S|=\operatorname{p}(G-N[v])\). Then \(S\cup\{v\}\) is also a maximal independent set of \(G\).
Second we mention some properties of regularity and relation with other invariants.
**Proposition 2.3**.: _[_16_, Corollary 18.6]_ _Suppose that \(0\to U\to U^{\prime}\to U^{\prime\prime}\to 0\) is a short exact sequence of graded finitely generated \(K[x_{1},\dots,x_{n}]\)-modules with homomorphisms of degree \(0\). Then_
1. _If_ \(\operatorname{reg}(U^{\prime})>\operatorname{reg}(U^{\prime\prime})\)_, then_ \(\operatorname{reg}(U)=\operatorname{reg}(U^{\prime})\)_._
2. _If_ \(\operatorname{reg}(U^{\prime})<\operatorname{reg}(U^{\prime\prime})\)_, then_ \(\operatorname{reg}(U)=\operatorname{reg}(U^{\prime\prime})+1\)_._
3. _If_ \(\operatorname{reg}(U^{\prime})=\operatorname{reg}(U^{\prime\prime})\)_, then_ \(\operatorname{reg}(U)\leq\operatorname{reg}(U^{\prime\prime})+1\)_._
**Proposition 2.4**.: _[_3_, Lemma 2.10]_ _For any graph \(G\) and any vertex \(x\in V(G)\), the following holds._
\[\operatorname{reg}G\in\left\{\operatorname{reg}\frac{R}{(I(G),x)},\operatorname {reg}\frac{R}{(I(G):x)}+1\right\}.\]
**Theorem 2.5**.: _[_7_, Theorem 6.7]__[_14_, Lemma 2.2]_ _For any graph \(G\), the following inequality holds._
\[\operatorname{im}(G)\leq\operatorname{reg}(G)\leq\operatorname{m}(G).\]
**Theorem 2.6**.: _[_19_, Theorem 11]_ _Let \(G\) be a graph. Then, \(\operatorname{reg}(I(G))=\operatorname{m}(G)+1\) if and only if each connected component of \(G\) is either a pentagon or a Cameron-Walker graph._
**Theorem 2.7**.: _[_12_, Theorem 4.1]_ _Let G be a graph on n vertices. Then \(\deg h_{R/I(G)}(t)+\operatorname{reg}(R/I(G))\leq n\), and \(\dim(R/I(G))+\operatorname{reg}(R/I(G))\leq n\) hold._
Next we introduce the operation on graphs called \(S\)-suspension. This notion was introduced in [9].
**Definition 2.8**.: Let \(G=(V(G),E(G))\) be a graph and \(S\subset V(G)\) an independent set. The \(S\)_-suspension_ of \(G\), denoted by \(G^{S}\), is defined as follows.
* \(V(G^{S})=V(G)\cup\{v\}\),
* \(E(G^{S})=E(G)\cup\{\{v,w\}:w\in V(G)\setminus S\}\).
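The \(S\)-suspension is a purely combinatorial operation and is straightforward to script. The sketch below is illustrative only; it reuses the helper functions `independent` and `d_and_p` and the 5-cycle `V`, `E` from the snippet after Definition 2.1, and checks the behavior of the maximal independent set sizes predicted for the dimension by Lemma 2.9 below.

```python
def s_suspension(V, E, S, new_vertex="w"):
    """V(G^S) = V(G) + {w},  E(G^S) = E(G) + {{w, x} : x in V(G) \\ S}."""
    assert independent(set(S), E), "S must be an independent set of G"
    return list(V) + [new_vertex], list(E) + [(new_vertex, x) for x in V if x not in S]

# Suspend the 5-cycle over S = {1}; here |S| = 1 = d(C_5) - 1, so Lemma 2.9
# predicts that the dimension (largest maximal independent set) stays equal to 2.
V2, E2 = s_suspension(V, E, {1})
print(d_and_p(V2, E2))   # (2, 2)
```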
The following properties of \(S\)-suspension are known.
**Lemma 2.9**.: _[_11_, Lemma 1.2]_ _Let \(G\) be a graph on \(V(G)=\{x_{1},\ldots,x_{n}\}\) and \(G^{S}\) the \(S\)-suspension of \(G\) for some independent set \(S\) of \(G\). If \(I(G)\subseteq R=K[x_{1},\ldots,x_{n}]\) and \(I(G^{s})\subseteq R^{\prime}=K[x_{1},\ldots,x_{n},x_{n+1}]\) are the respective edge ideals, then_
* \(\dim(R^{\prime}/I(G^{S}))=\dim R/I(G)\) _if_ \(|S|\leq\dim R/I(G)-1\)_._
* \(\operatorname{depth}(R^{\prime}/I(G^{S}))=\operatorname{depth}(R/I(G))\) _if_ \(|S|=\operatorname{depth}(R/I(G))-1\)_._
* \(\operatorname{depth}(R^{\prime}/I(G^{S}))=1\) _if_ \(S=\emptyset\)_._
Finally we introduce Betti splitting and prove a lemma of regularities of edge ideals by using. Denote \(\Gamma(I)\) as the minimal system of monomial generators of \(I\).
**Definition 2.10**.: _[_5_, Definition 1.1]_ _Let \(I,I^{\prime},\) and \(I^{\prime\prime}\) be monomial ideals such that \(\Gamma(I)\) is the disjoint union of \(\Gamma(I^{\prime})\) and \(\Gamma(I^{\prime\prime})\). Then \(I=I^{\prime}+I^{\prime\prime}\) is a Betti splitting if_
\[\beta_{i,j}(I)=\beta_{i,j}(I^{\prime})+\beta_{i,j}(I^{\prime\prime})+\beta_{i -1,j}(I^{\prime}\cap I^{\prime\prime})\text{ for all }i\in\mathbb{N}\text{ and degrees }j,\]
_where \(\beta_{i,j}(I)\) denotes the \(\{i,j\}\)-th graded Betti number of \(I\)._
**Proposition 2.11**.: _[_5_, Corollary 2.2]_ _Let \(I=I^{\prime}+I^{\prime\prime}\) be a Betti splitting. Then_
* \(\operatorname{reg}(I)=\max\{\operatorname{reg}I^{\prime},\operatorname{reg}I ^{\prime\prime},\operatorname{reg}I^{\prime}\cap I^{\prime\prime}-1\}\)_, and_
* \(\operatorname{pd}(I)=\max\{\operatorname{pd}(I^{\prime}),\operatorname{pd}(I ^{\prime\prime}),\operatorname{pd}(I^{\prime}\cap I^{\prime\prime})+1\}\)_,_
**Definition 2.12**.: _[_5_, Definition 2.6]_ _Let \(I\) be a monomial ideal in \(R=k[x_{1},\ldots,x_{n}]\). Let \(I^{\prime}\) be the ideal generated by all elements of \(\Gamma(I)\) divisible by \(x_{i}\), and let \(I^{\prime\prime}\) be the ideal generated by all other elements of \(\Gamma(I)\). We call \(I=I^{\prime}+I^{\prime\prime}\) an \(x_{i}\)-partition of \(I\). If \(I=I^{\prime}+I^{\prime\prime}\) is also a Betti splitting, we call \(I=I^{\prime}+I^{\prime\prime}\) an \(x_{i}\)-splitting._
**Proposition 2.13**.: _[_5_, Corollary 2.7]_ _Let \(I=I^{\prime}+I^{\prime\prime}\) be an \(x_{i}\)-partition of \(I\) in which all elements of \(I^{\prime}\) are divisible by \(x_{i}\). If \(\beta_{i,j}(I^{\prime}\cap I^{\prime\prime})>0\) implies that \(\beta_{i,j}(I^{\prime})=0\) for all \(i\) and multidegrees \(j\), then \(I=I^{\prime}+I^{\prime\prime}\) is a Betti splitting. In particular, if the minimal graded free resolution of \(I^{\prime}\) is linear, then \(I=I^{\prime}+I^{\prime\prime}\) is a Betti splitting._
The following is a key for the proof of Theorem 1.6.
**Lemma 2.14**.: _Let \(G\) be a graph and \(v\) a vertex. Define \(G^{\prime}\) as follows._
* \(V(G^{\prime})=V(G)\cup\{v^{\prime}\}.\)__
* \(E(G^{\prime})=E(G)\cup\{\{v^{\prime},w\}\ |\ w\in N(v)\}.\)__
_Then \(\operatorname{reg}(G)=\operatorname{reg}(G^{\prime}).\)_
Proof.: There exist the following exact sequences:
\[0\to I(G)\cap(v^{\prime}w\ |\ w\in N(v^{\prime}))\to I(G)\oplus(v^{\prime}w\ |\ w\in N(v^{\prime}))\to I(G^{\prime})\to 0,\]
\[0\to I(G-v)\cap(vw\ |\ w\in N(v))\to I(G-v)\oplus(vw\ |\ w\in N(v))\to I(G)\to 0.\]
Denote \(T=I(G-v)\cap(vw\ |\ w\in N(v))\), \(T^{\prime}=I(G)\cap(v^{\prime}w\ |\ w\in N(v^{\prime}))\). By virtue of Proposition 2.13, the above \(I(G)=I(G-v)+(vw\ |\ w\in N(v))\) and \(I(G^{\prime})=I(G)+(v^{\prime}w\ |\ w\in N(v^{\prime}))\) are Betti splittings since \((vw\ |\ w\in N(v))\cong(v^{\prime}w\ |\ w\in N(v^{\prime}))\) has a linear resolution. Therefore \(\operatorname{reg}(I(G^{\prime}))=\max\{\operatorname{reg}(T^{\prime})-1,\operatorname{reg}(I(G))\}\) and \(\operatorname{reg}(I(G))=\max\{\operatorname{reg}(T)-1,\operatorname{reg}(I(G-v))\}\) by Proposition 2.11.
Moreover the following sequence is exact.
\[0\to(T^{\prime}:v)(-1)\xrightarrow{\cdot v}T^{\prime}\to(T^{\prime},v)\to 0.\]
Here \((T^{\prime}:v)=(v^{\prime}w\ |\ w\in N(v^{\prime}))\) and \((T^{\prime},v)=(I(G),v)\cap(v,v^{\prime}w\ |\ w\in N(v^{\prime}))\cong(T,v^{\prime})\). Thus \(\operatorname{reg}(T^{\prime})=\operatorname{reg}(T)\) by Proposition 2.3.
Hence \(\operatorname{reg}(I(G^{\prime}))=\max\{\operatorname{reg}(I(G)),\operatorname {reg}(T)-1\}=\operatorname{reg}(I(G)).\)
## 3. proof of main results
In this section, we prove Theorem 1.6 and Corollary 1.7. Prior to the proof, one lemma is introduced.
**Lemma 3.1**.: _[_11_]__[_13_]_ _For any \(n\geq 3\) and \((d,p)\in C^{*}(n)\) there is a graph \(G\) with \(n\) vertices satisfying \(\dim G=d,\operatorname{depth}G=p,\operatorname{reg}G=1.\)_
Proof of Theorem 1.6.: First, we prove \(C^{**}(n)\subset\operatorname{Graph}_{\dim,\operatorname{depth},\operatorname {reg}}(n).\)
It is sufficient to construct a graph \(G\) with \(n\) vertices such that \(\dim G=d\), \(\operatorname{depth}G=p\) and \(\operatorname{reg}G=r\). By Lemma 3.1, it suffices to treat the case \(r\geq 2\).
1. The case \(r\leq p\). We denote \(r=1+r^{\prime}\), \(p=1+r^{\prime}+a,d=1+r^{\prime}+a+b\) where \(a,b\) are some nonnegative integers. The inequality \(n-d-r\geq 1\) implies \(n-2r^{\prime}-1\geq 2+a+b\). Therefore by Lemma 3.1, there is a graph \(G^{\prime}\) with \(n-2r^{\prime}-1\) vertices such that \(\dim G^{\prime}=1+a+b,\operatorname{depth}G^{\prime}=1+a,\operatorname{reg}G^ {\prime}=1\). Then \(G^{\prime}\) has an independent set \(S\) such that \(|S|=a<\operatorname{depth}G^{\prime}\).
We define \(G\) as follows. \[V(G)=V(G^{\prime}) \cup\{x_{1},y_{1},\cdots,x_{r^{\prime}},y_{r^{\prime}}\}\cup\{v\},\] \[E(G)=E(G^{\prime}) \cup\{\{x_{1},y_{1}\},\cdots,\{x_{r^{\prime}},y_{r^{\prime}}\}\}\] \[\cup\{\{v,x_{1}\},\cdots,\{v,x_{r^{\prime}}\}\}\] \[\cup\{\{v,w\}:w\in V(G^{\prime})\setminus S\}.\] By virtue of Lemma 2.9, it can be ensured \(|V(G)|=n,\dim G=1+r^{\prime}+a+b=d\), \(\operatorname{depth}G=1+r^{\prime}+a=p\) and \(\operatorname{reg}G=1+r^{\prime}=r\).
2. The case \(r>p\). Similarly denote \(r=1+r^{\prime}\), \(p=1+a,d=1+r^{\prime}+b\). Repeat the above discussion. Since \(n-2r^{\prime}-1\geq 2+b\), there is a graph \(G^{\prime}\) with \(n-2r^{\prime}-1\) vertices such that \(\dim G^{\prime}=1+b,\operatorname{depth}G^{\prime}=1,\operatorname{reg}G^{ \prime}=1\) and \(G^{\prime}\) has an independent set \(S\) with \(|S|=a\). We define \(G\) as follows: \[V(G)=V(G^{\prime}) \cup\{x_{1},y_{1},\cdots,x_{r^{\prime}},y_{r^{\prime}}\}\cup\{v\},\] \[E(G)=E(G^{\prime}) \cup\{\{x_{1},y_{1}\},\cdots,\{x_{r^{\prime}},y_{r^{\prime}}\}\}\] \[\cup\{\{v,x_{1}\},\cdots,\{v,x_{r^{\prime}}\}\}\] \[\cup\{\{v,y_{1}\},\cdots,\{v,y_{r^{\prime}-a}\}\}\] \[\cup\{\{v,w\}:w\in V(G^{\prime})\}.\] Then \(|V(G)|=n,\dim G=1+r^{\prime}+a=d\), \(\operatorname{depth}G=1+b=p\) and \(\operatorname{reg}G=1+r^{\prime}=r\).
Second we prove \(\operatorname{Graph}_{\dim,\operatorname{depth},\operatorname{reg}}(n)\subset C ^{**}(n)\). If \(\operatorname{reg}(G)+\dim(G)=n\) then \(\operatorname{reg}(G)=\operatorname{m}(G)\) because \(n-\operatorname{d}(G)\geq\operatorname{m}(G)\geq\operatorname{reg}(G)\). Theorem 2.6 says that if the above condition holds then \(G\) is a pentagon or a star graph or a star triangle or a Cameron-Walker graph. Hence we obtain that if \(\operatorname{reg}(G)+\dim(G)=n\) then \(G\) is a star graph by virtue of Theorem 1.3 and easy calculation. Moreover if \(G\) is a star graph with \(n\) vertices, then \(\dim G=n-1,\operatorname{reg}G=\operatorname{depth}G=1\).
Thus we only need to prove that \(\dim G\), \(\operatorname{depth}G\), \(\operatorname{reg}G\) and \(n=|V(G)|\) satisfy the following inequality in the case \(\dim(G)+\operatorname{reg}(G)\leq n-1\):
\[\dim G\leq(n-\dim G-\operatorname{reg}G+1)(\dim G-\operatorname{depth}G+1)+( \operatorname{reg}G-1).\]
By virtue of Lemma 2.2, it is sufficient to prove this inequality by induction on \(|V(G)|\). We denote \(\operatorname{r}(G)\) as \(\operatorname{reg}G\).
Define \(U=\{v\ |\ v\in V(G),\text{ there is a maximal independent set }S\text{ such that }|S|\ = \operatorname{d}(G),\text{ and }v\in S\}\) and \(t=\min\{|N(v)|\ \ |\ v\in U\}\).
Suppose \(v\in U\) and \(|N(v)|=t\). If \(v^{\prime}\) is an isolated vertex of \(G-v\), then \(v^{\prime}\) is adjacent only to \(v\), and \(\{v^{\prime}\}\cup(S\setminus\{v\})\) is also an independent set for any independent set \(S\) with \(v\in S\). Then \(|N(v)|=|N(v^{\prime})|=1\), thus \(E(G)=\{\{v,v^{\prime}\}\}\), a contradiction to \(n\geq 3\). Thus we may assume that there is no isolated vertex of \(G-v\).
In addition, if \(v^{\prime}\) is an isolated vertex of \(G-N[v]\), then \(v^{\prime}\) has \(t\) adjacent vertices, since \(N(v^{\prime})\subset N(v)\) and \(\{v^{\prime}\}\cup(S\setminus\{v\})\) is also an independent set. Thus \(N(v)=N(v^{\prime})\).
Denote by \(m\) the number of isolated vertices of \(G-N[v]\), by \(G^{\prime}\) the graph \(G-N[v]\) without its isolated vertices, and set \(a=\dim(G)-\dim(G-v)\in\{0,1\}\).
1. The case \(\mathrm{r}(G)=\mathrm{r}(G-v)+1\). Assume \(G-N[v]\) has an isolated vertex \(v^{\prime}\). Then \(N(v)=N(v^{\prime})\) by the above discussion, \(\mathrm{r}(G)=\mathrm{r}(G-v)\) by virtue of Lemma 2.14, a contradiction. Therefore \(G-N[v]\) has no isolated vertex. Then the following holds from \(\dim(G)=\dim(G-N[v])+1=\mathrm{d}(G^{\prime})+1\), \(\mathrm{r}(G)=\mathrm{r}(G^{\prime})+1\) and \(\mathrm{p}(G)\leq\mathrm{p}(G^{\prime})+1\) by Lemma 2.2 and Proposition 2.4. \[\dim(G)=\mathrm{d}(G)=\mathrm{d}(G^{\prime})+1\] \[\leq((n-t-1)-\mathrm{d}(G^{\prime})-(\mathrm{r}(G^{\prime})-1))( \mathrm{d}(G^{\prime})-\mathrm{p}(G^{\prime})+1)+\mathrm{r}(G^{\prime})-1+1\] \[\leq(n-\mathrm{d}(G)-\mathrm{r}(G)+1+(1-t))(\mathrm{d}(G)- \mathrm{p}(G)+1)+\mathrm{r}(G)-1-1+1\] \[\leq(n-\mathrm{d}(G)-\mathrm{r}(G)+1)(\mathrm{d}(G)-\mathrm{p}(G )+1)+\mathrm{r}(G)-1.\]
2. The case \(\mathrm{r}(G)=\mathrm{r}(G-v)\) and \(\mathrm{p}(G)\leq\mathrm{p}(G-v)\). The followings holds. \[\dim(G)=\mathrm{d}(G)=\mathrm{d}(G-v)+a\] \[\leq((n-1)-\mathrm{d}(G-v)-\mathrm{r}(G-v)+1)(\mathrm{d}(G-v)- \mathrm{p}(G-v)+1)+(\mathrm{r}(G-v)-1)+a\] \[=(n-\mathrm{d}(G)-\mathrm{r}(G)+1+(a-1))(\mathrm{d}(G)-\mathrm{p }(G-v)+1-a)+(\mathrm{r}(G)-1)+a\] \[\leq(n-\mathrm{d}(G)-\mathrm{r}(G)+1)(\mathrm{d}(G)-\mathrm{p}(G )+1)+(\mathrm{r}(G)-1).\]
3. The case \(\mathrm{r}(G)=\mathrm{r}(G-v)\) and \(\mathrm{p}(G)=\mathrm{p}(G-v)+1\). Let \(S\subset V(G)\) be an independent set of \(G\) such that \(|S|=\mathrm{p}(G)\) and \(v_{1},\ldots,v_{m}\) be isolated vertices of \(G-N[v]\). The condition \(\mathrm{p}(G)=\mathrm{p}(G-v)+1\) implies that \(S\setminus\{v\}\) is a maximal independent set of \(G-N[v]\) such that \(|S\setminus\{v\}|=\mathrm{p}(G-v)=\mathrm{p}(G-N[v])\) and \(v_{i}\in S\) for any \(1\leq i\leq m\) since \(S\) is maximal. Therefore \(S\setminus\{v,v_{1},\ldots,v_{m}\}=S^{\prime}\) is a maximal independent set of \(G^{\prime}\) and \(|S^{\prime}|=\mathrm{p}(G^{\prime})=\mathrm{p}(G)-m-1\). By Lemma 2.2, \(\mathrm{p}(G-\{v,v_{1},\ldots,v_{m}\})\leq\mathrm{p}(G^{\prime})+t\) holds. Therefore \(\mathrm{p}(G)=\mathrm{p}(G-v)+1=\mathrm{p}(G^{\prime})+m+1\leq\mathrm{p}(G- \{v,v_{1},\ldots,v_{m}\})\leq\mathrm{p}(G^{\prime})+t\). Moreover by Proposition 2.4, \(\mathrm{r}(G-\{v,v_{1},\ldots,v_{m}\})\leq\mathrm{r}(G^{\prime})+t\) and \(\mathrm{r}(G)=\mathrm{r}(G-\{v_{1},\ldots,v_{m}\})\in\{\mathrm{r}(G^{\prime})+ 1,\mathrm{r}(G-\{v,v_{1},\ldots,v_{m}\})\}\). Thus \(\mathrm{r}(G)\leq\mathrm{r}(G^{\prime})+t\). Then the following holds. \[\dim(G)=\mathrm{d}(G)=\mathrm{d}(G^{\prime})+m+1\] \[\leq((n-t-1-m)-(\mathrm{d}(G^{\prime}))-(\mathrm{r}(G^{\prime}) -1))(\mathrm{d}(G^{\prime})-\mathrm{p}(G^{\prime})+1)+(\mathrm{r}(G^{\prime} )-1)+m+1\] \[=(n-\mathrm{d}(G)-(\mathrm{r}(G^{\prime})+t)+1)(\mathrm{d}(G)- \mathrm{p}(G)+1)+(\mathrm{r}(G^{\prime})+t)-1+(m+1-t)\] \[\leq(n-\mathrm{d}(G)-\mathrm{r}(G)+1)(\mathrm{d}(G)-\mathrm{p}(G )+1)+\mathrm{r}(G)-1+(m+1-t)\] \[\leq(n-\mathrm{d}(G)-\mathrm{r}(G)+1)(\mathrm{d}(G)-\mathrm{p}(G )+1)+\mathrm{r}(G)-1.\]
Proof of Corollary 1.7.: Define \(C^{**}(n,c)=\{(d,p)\in\mathbb{N}^{2}:1\leq p\leq d\leq n-1,d\leq(n-d-(c-1))(d-p+1)+(c-1)\}\).
For any \(n\geq 3\) and \(\lfloor\frac{n-1}{2}\rfloor\geq i\geq 2\), the inclusion \(C^{**}(n,i)\subset C^{**}(n,i-1)\) is straightforward. Thus:
\[C^{**}(n,\lfloor\frac{n-1}{2}\rfloor)\subset C^{**}(n,\lfloor\frac{n-1}{2} \rfloor-1)\subset\cdots\subset C^{**}(n,2)\subset C^{**}(n,1)=C^{*}(n).\]
By definition of \(\operatorname{Graph}_{\dim,\operatorname{depth},\operatorname{reg}}(n)\), the following is obtained:
\[\operatorname{Graph}_{\dim,\operatorname{depth}}(n)=\bigcup_{i=1}^{\lfloor \frac{n-1}{2}\rfloor}C^{**}(n,i)=C^{**}(n,1)=C^{*}(n).\]
|
2309.15445 | Goodenough-Kanamori-Anderson high-temperature ferromagnetism in
tetragonal transition-metal xenes | Seminal Goodenough-Kanamori-Anderson (GKA) rules provide the inceptive
understanding of the superexchange interaction of two magnetic metal ions
bridged with an anion, and suggest fostered ferromagnetic interaction for
orthogonal bridging bonds. However, there are no examples of two-dimensional
(2D) materials with structure that optimizes the GKA arguments towards enhanced
ferromagnetism and its critical temperature. Here we reveal that an ideally
planar GKA ferromagnetism is indeed stable in selected tetragonal
transition-metal xenes (tTMXs), with Curie temperature above 300~K found in CrC
and MnC. We provide the general orbitally-resolved analysis of magnetic
interactions that supports the claims and sheds light at the mechanisms
dominating the magnetic exchange process in these structures. With recent
advent of epitaxially-grown tetragonal 2D materials, our findings earmark tTMXs
for facilitated spintronic and magnonic applications, or as a desirable
magnetic constituent of functional 2D heterostructures. | U. Yorulmaz, D. Šabani, C. Sevik, M. V. Milošević | 2023-09-27T07:16:36Z | http://arxiv.org/abs/2309.15445v1 | # Goodenough-Kanamori-Anderson high-temperature ferromagnetism in tetragonal transition-metal xenes
###### Abstract
Seminal Goodenough-Kanamori-Anderson (GKA) rules provide the inceptive understanding of the superexchange interaction of two magnetic metal ions bridged with an anion, and suggest fostered ferromagnetic interaction for orthogonal bridging bonds. However, there are no examples of two-dimensional (2D) materials with structure that optimizes the GKA arguments towards enhanced ferromagnetism and its critical temperature. Here we reveal that an ideally planar GKA ferromagnetism is indeed stable in selected tetragonal transition-metal xenes (tTMXs), with Curie temperature above 300 K found in CrC and MnC. We provide the general orbitally-resolved analysis of magnetic interactions that supports the claims and sheds light at the mechanisms dominating the magnetic exchange process in these structures. With recent advent of epitaxially-grown tetragonal 2D materials, our findings earmark tTMXs for facilitated spintronic and magnonic applications, or as a desirable magnetic constituent of functional 2D heterostructures.
## I Introduction
The experimental discovery of the premiere magnetic two-dimensional materials (M2DMs) - CrI\({}_{3}\)[1] and CrGeTe\({}_{3}\)[2; 3] - opened the floodgates to many emergent 2D materials of this class. Numerous theoretical [4; 5; 6; 7] and experimental studies [8; 9; 10; 11] followed, explaining the origins and possible manipulations of the long-range magnetic order in the monolayer limit. It is needless to emphasize that intrinsically room-temperature M2DMs would be highly beneficial for applications in sensing, spintronics, and otherwise, and bear promise towards high tunability by diverse mechanical, chemical, and electronic means. However, it quickly became clear that critical temperatures (T\({}_{\rm c}\)) for magnetic order to vanish are by rule always smaller in 2D materials compared to their bulk counterparts [1; 12; 13]. Namely, in order to host sizable regions with magnetic order (and circumvent limitations imposed by the Mermin-Wagner theorem [14]), M2DMs require anisotropy in magnetic exchange. That needed anisotropy is known to originate from the spin-orbit coupling (SOC), which is in general much weaker compared to e.g. Coulomb attraction or repulsion of charged particles, hence attains very low magnitudes (of typically 0.01-0.1 meV). Another reason for T\({}_{\rm c}\) to decrease as one thins the magnetic material from bulk to a monolayer stems from the correspondingly diminishing magnetic exchange along the third, out-of-plane direction. One may therefore expect that magnetic order in 2D materials is strictly limited to the very low T\({}_{\rm c}\), but that is not necessarily the case. For example, Fe\({}_{3}\)GeTe\({}_{2}\) hosts the ferromagnetic (FM) order up to 130 K in the monolayer (ML) limit [15]. In addition, the FM order and T\({}_{\rm c}\) of 213 K were measured in few-layer thick 1T-CrTe\({}_{2}\)[16], with the unusual trend that T\({}_{\rm c}\) increases as one goes from bulk to few-layers. Furthermore, in a few-layer FePS\({}_{3}\), the antiferromagnetic (AFM) order was observed up to the T\({}_{\rm c}\) of 120 K [17]. Finally, FM order was measured even at room temperature in thicker films (\(\approx 10\) nm, see e.g. [18; 19]), but also in monolayer MnSe\({}_{x}\)[20; 21], VSe\({}_{2}\)[11; 22], and Cr\({}_{3}\)Te\({}_{4}\)[23]. The latter samples were deposited epitaxially, which in general involves structural defects [22] and strong interfacial effects with the substrate [22; 23] into the origins of the observed robust magnetic interactions, which complicates theoretical interpretations. Otherwise, the isotropic magnetic interactions in crystalline monolayers are relatively straightforward to extract theoretically in all available first-principles codes. Such studies, on predominantly _in silico_ created 2D materials, have yielded many predictions of high- or even room-temperature intrinsic magnetism [24; 25; 26; 27; 21; 28]. However, those predictions typically failed to quantify the underlying microscopic mechanisms for such a large predicted magnetic exchange.
In this paper we therefore take a step back, and explore the route towards room-temperature 2D ferromagnetism starting from the well-established set of Goodenough-Kanamori-Anderson (GKA) empirical theoretical rules from the late 1950s [29; 30; 31; 32]. We accordingly aim at monolayers with a 90\({}^{\circ}\) angle between the nearest magnetic atoms (A) connected by a ligand (X), thus ideally a _Lieb-lattice material_ of A\({}_{2}\)X type. However, a magnetic 2D material of such specific planar structure has not been reported to date, although some Lieb-lattice 2D materials have been considered computationally for other purposes (see e.g. Ref. [33]). As the best available choice, for not only geometry but also sizable SOC, we instead focus on the family of tetragonal transition-metal xenes (tTMXs; see Fig. 1), seeking a square-lattice planar material among them - still with 90\({}^{\circ}\) TM-X-TM nearest-neighbor bonds. Using advanced methodology on top of the standard first-principles approaches based on Density Functional Theory (DFT), we computationally validate the structural stability and strong intrinsic magnetic interactions in these materials, detail the microscopic (orbital-resolved) origin of enhanced magnetic exchange, and identify CrC and MnC as premier square-lattice monolayer ferromagnets with a particularly high T\({}_{\rm c}\).
## II Results
We commence our analysis with a computationally crude throughput screening of dynamical stability and magnetic interactions in tTMXs (where TM = V, Cr, Mn, Fe, Co, Ni, Cu, Zn, and X = C, N, O, S, Se, Te). For each stable material we perform total energy mapping between the density functional theory corrected with on-site Coulomb repulsion (DFT+U) and the Heisenberg model Hamiltonian, for six particular magnetic orders - namely FM and AFM orders along three Cartesian directions (see Appendix B in \(\dagger\) Supplementary Information), in order to extract the governing magnetic interactions in the system. For the sake of screening, Hubbard parameter U in the calculations is taken from the online database, based on bulk oxides of transition metals [34]. In order to decrease computational cost, we consider only the first nearest-neighbor (NN) magnetic interactions and the single-ion anisotropy (SIA) in the Heisenberg Hamiltonian.
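As a simplified illustration of this total-energy mapping, the Python sketch below extracts a single isotropic nearest-neighbor exchange constant from FM and checkerboard-AFM total energies. It assumes four symmetry-equivalent first-NN TM-TM bonds per two-TM primitive cell (`N_BONDS`) and the convention \(H=\sum_{<i,j>}\mathbf{S}_{i}\mathbf{J}_{i,j}\mathbf{S}_{j}\) over unit spin vectors, so that negative J favors FM order; the energy values are hypothetical placeholders rather than DFT+U results of this work, and the actual screening additionally resolves the exchange anisotropy and SIA from six magnetic configurations.

```python
# Simplified energy-mapping sketch: one isotropic nearest-neighbor J only.
# Per two-TM primitive cell we assume N_BONDS symmetry-equivalent first-NN TM-TM
# bonds, so with H = sum_<i,j> S_i.J.S_j over unit spins (no double counting):
#   E_FM = E_0 + N_BONDS * J   and   E_AFM = E_0 - N_BONDS * J
#   =>  J = (E_FM - E_AFM) / (2 * N_BONDS).
N_BONDS = 4

# Hypothetical placeholder total energies per primitive cell (eV), not real data.
E_FM = -100.000
E_AFM = -99.600

J_meV = (E_FM - E_AFM) / (2 * N_BONDS) * 1000.0
print(f"isotropic NN exchange J = {J_meV:.1f} meV "
      "(negative favors FM in this convention)")
```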
The considered primitive unit cell of these materials consists of two TM, and two X atoms - such that each TM atom has four X atoms as the nearest neighbors, and vice versa (see Fig. 1), where one can apply symmetry rules for the exchange matrix (see Appendix A in \(\dagger\) Supplementary Information) and SIA (cf. Ref. [35]). In Fig. 1 each TM atom is sketched with one \(d\) orbital, and each X atom with one \(p\) orbital, since these atomic orbitals, and their mutual hybridization, are essential for the interactions between magnetic moments on TM atoms. Though the positions of the atoms in the structure are uniquely determined with respect to the in-plane primitive lattice vectors due to the symmetry of tetragonal structures, both atomic species are allowed to relax out-of-plane (along \(\vec{a}_{3}\equiv z\) axis).
Our throughput computational screening revealed that out of 48 materials in total, only the 21 listed in Table 1 possess dynamical stability. Out of those 21, we identified five materials with FM order as lowest in energy (out of six possibilities considered), seven materials with AFM order as the lowest-energy one, and nine materials not exhibiting magnetic order, i.e. with magnetic moments on individual atoms below 0.5\(\mu_{B}\) in each of the six considered magnetic configurations, and the
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & Magnetic & & & & & \\ & order & J\({}^{xx}\)=J\({}^{yy}\) & J\({}^{zz}\) & \(\Delta\) & A\({}_{ii}^{zz}\) & T\({}_{\rm c}\)(K) \\ \hline CrC & FM & -52.68 & -52.66 & 0.03 & -0.35 & 515.6 \\ MnC & FM & -105.98 & -106.01 & -0.03 & -0.67 & 1065.8 \\ VN & FM & -3.73 & -10.54 & -6.81 & -13.94 & 152.4 \\ CoN & FM & -12.57 & -12.94 & -0.38 & 1.28 & 132.2 \\ NiTe & FM & -7.42 & -5.15 & 2.27 & 4.53 & 31.2 \\ FeC & AFM & 53.00 & 53.28 & 0.28 & 0.21 & 5.3 \\ FeO & AFM & 5.47 & 5.46 & -0.01 & 2.48 & 20.1 \\ MnS & AFM & 55.07 & 55.95 & 0.88 & 1.69 & 15.2 \\ FeS & AFM & 16.13 & 16.38 & 0.25 & -0.56 & 5.7 \\ MnSe & AFM & 50.66 & 50.65 & -0.01 & -0.20 & 10.3 \\ FeSe & AFM & 15.30 & 15.24 & -0.06 & -1.20 & 5.8 \\ FeTe & AFM & 31.80 & -28.89 & -60.69 & 119.63 & 35.6 \\ CuC & NM & - & - & - & - & - \\ NiN & NM & - & - & - & - & - \\ CuN & NM & - & - & - & - & - \\ NiS & NM & - & - & - & - & - \\ CuS & NM & - & - & - & - & - \\ ZnS & NM & - & - & - & - & - \\ CuSe & NM & - & - & - & - & - \\ ZnSe & NM & - & - & - & - & - \\ ZnTe & NM & - & - & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Magnetic properties of the stable monolayer tTMX structures.** FM, AFM, and NM stand for ferromagnetic, antiferromagnetic, and non-magnetic order, respectively. J\({}^{xx}\) and J\({}^{yy}\) mark in-plane, and J\({}^{zz}\) out-of-plane exchange interactions. The out-of-plane exchange anisotropy \(\Delta\) stands for the difference J\({}^{zz}\)-J\({}^{xx}\). A\({}_{ii}^{zz}\) is the single-ion anisotropy (SIA). If the signs of SIA and exchange interactions are the same, SIA favors out-of-plane anisotropy, otherwise the in-plane one. J\({}^{xx}\), J\({}^{yy}\), J\({}^{zz}\), \(\Delta\), and A\({}_{ii}^{zz}\) are all given in meV. T\({}_{\rm c}\) stands for the critical temperature of the magnetic order, i.e. the Curie temperature for FM and the Néel temperature for AFM monolayers.
Figure 1: **Crystal structure of a monolayer tTMX.** TM (cyan) atoms are sketched with one \(d\) orbital, and X (yellow) atoms with one \(p\) orbital, since these orbitals and their hybridization are essential for magnetic interactions in this system.
configuration with magnetic moments exactly 0 as the lowest in energy. Out of the five FM materials, two appear to have a completely flat, square lattice when the on-site Coulomb repulsion between electrons (U) is properly included - CrC and MnC. Such a structure ensures that the first-nearest-neighbor TM-X-TM bond forms a 90\({}^{\circ}\) angle and the second-nearest-neighbor one a 180\({}^{\circ}\) angle, making these structures optimal - according to the GKA rules - for "maximization" of the magnetic exchange in the system. This fact served as motivation to thoroughly analyze the structural, electronic, and magnetic properties of these two materials, and to discuss the subtle interplay of these properties that optimizes the ferromagnetic order.
The first important finding concerns the structure. Namely, as indicated above, the relative atomic arrangement along the \(z\) axis strongly depends on the on-site Coulomb repulsion (U) between electrons on TM atoms. In Fig. 2(a), we present the relaxed structures for CrC and MnC with no U included, and for the U value calculated self-consistently - using the linear response theory as introduced by Timrov _et al._[36]. The latter U value is found to be 3.2949 eV for CrC, and 3.8137 eV for MnC. In the case of U=0, the structure buckles, and the two X atoms relax above and below the plane of TM atoms. Furthermore, the X atoms form a tetrahedral arrangement around each TM atom. On the contrary, when using a realistic value for U, both CrC and MnC exhibit an ideally planar structure, i.e. a 2D square lattice as desired in the GKA argumentation towards the enhanced ferromagnetic order. To confirm this, we tested the dynamical stability of both planar structures - and the phonon dispersions shown in Fig. 2(b) exhibited no imaginary phonon frequencies.
After determining the on-site Coulomb repulsion and the planar structural stability, we move on to the magnetic properties of CrC and MnC. The necessary, yet insufficient condition for long-range magnetism is the non-zero magnetic moment per unit space. The TM atoms are expected to provide the latter, due to the localized unpaired electrons in their \(3d\) shell, each electron carrying spin \(\frac{1}{2}\), and spin magnetic moment of 1 \(\mu_{B}\). In most of the structures based on TMs, the contribution of the orbital magnetic moment is negligible compared to the spin magnetic moment, due to the quenching of orbital momentum, hence the magnetic moment can be assumed to originate purely from the spin of the electron. Consequently, the total magnetic moment on each TM atom is \(N\times 1\)\(\mu_{B}\), where \(N\) is the total number of unpaired electrons in the \(3d\) shell of each TM atom.
The basic ionic theory suggests 4+ oxidation state of Cr and Mn cations in our monolayers, since C atom receives 4 electrons to reach stable octet configuration. Furthermore, Cr\({}^{4+}\) ion has 20 electrons and the \(1s^{2}2s^{2}2p^{6}3s^{2}3p^{6}3d^{2}\) electronic configuration, while Mn\({}^{4+}\) ion 21 electrons and the \(1s^{2}2s^{2}2p^{6}3s^{2}3p^{6}3d^{3}\) electronic configuration. This means that one expects 2 \(\mu_{B}\) per Cr atom and 4 \(\mu_{B}\) per primitive unit cell in CrC, and 3 \(\mu_{B}\) per Mn atom and 6 \(\mu_{B}\) per primitive unit cell in MnC. Ionic theory predicts no moment per C atom in either cases, due to the mentioned stable octet configuration.
For a more precise account of magnetic moments per atom we resort to DFT calculations, and find that: (1) in case of CrC, the magnetization per Cr atom is 2.8588 \(\mu_{B}\) and per C atom \(-0.7236\)\(\mu_{B}\), resulting in 4.2703 \(\mu_{B}\) per primitive unit cell; (2) in case of MnC, the magnetization per Mn atom is 3.8825 \(\mu_{B}\) and per C atom \(-0.7718\)\(\mu_{B}\), resulting in 6.2214 \(\mu_{B}\) per primitive unit cell. The DFT results do corroborate the crude ionic theory regarding the total magnetization of the unit cell, but also reveal the significant hybridization between (\(d\)) orbitals of TM atoms and (\(p\)) orbitals of C atoms - causing a rather significant magnetization on otherwise non-magnetic C atoms.
However, in order for a system to host measurable magnetic order, next to the non-zero magnetic moments on TM ions, it must also host significant interaction between them. Unlike the initial estimates using the method based on mapping between total energies of the DFT and the Heisenberg Hamiltonians, we now calculate the magnetic exchange by mapping the energy variations due to the infinitesimal rotation of the magnetic moment on TM atoms from the reference FM state, between the DFT Hamiltonian in the localized-orbital basis set, and the Heisenberg Hamiltonian \(H=\sum_{<i,j>}\mathbf{S}_{i}\mathbf{J}_{i,j}\mathbf{S}_{j}\), as implemented in the TB2J code [37]. In the latter Hamiltonian, \(\mathbf{S}_{i}\) denotes the unit 3D vector of the magnetic moment on
Figure 2: **Stability of CrC and MnC ferromagnetic monolayers.** (a) The effect of Hubbard parameter U on monolayer structures of CrC and MnC, and (b) the phonon dispersions of two materials for optimal U shown in (a), proving dynamical stability of the flat structures.
the \(i^{th}\) TM atom; \(3\times 3\) matrix \(\mathbf{J}_{i,j}\) stands for interaction between magnetic moments on \(i\)-th and \(j\)-th TM atoms; and \(<i,j>\) denotes \(i\neq j\), with avoided double counting. The SIA matrix cannot be calculated using this formalism (for details see [37]), therefore it is not explicitly written in the used Heisenberg Hamiltonian. However, SIA is not negligible, and correspondingly must be taken into account during e.g. the calculation of T\({}_{\mathrm{c}}\) for the FM order. Therefore, we combine SIA reported in Table 1, together with the \(\mathbf{J}_{i,j}\) calculated using TB2J to construct the total model Hamiltonian for eventual \(2^{nd}\)-principles calculations:
\[H=\sum_{<i,j>}\mathbf{S}_{i}\mathbf{J}_{i,j}\mathbf{S}_{j}+\sum_{i}\mathbf{S} _{i}\mathbf{A}_{i,i}\mathbf{S}_{i}. \tag{1}\]
The main advantage of TB2J and the Green's-function-based methodology over the total energy mapping is the ability to orbitally disentangle the origins of magnetic interactions [38], and also the ability to calculate interactions between _all_ different neighbors within a large supercell upon a _single_ DFT calculation on the primitive unit cell. In particular, we have calculated the matrices \(\mathbf{J}_{i,j}\) up to the \(284^{th}\) NN for both materials.
Our results obtained using TB2J obey the symmetry-imposed constraints - i.e. all off-diagonal elements of all \(\mathbf{J}_{i,j}\) matrices are exactly 0. The diagonal part of each matrix can further be split - as presented in Table 1 - into the isotropic (J\({}_{i,j}=\) J\({}_{i,j}^{xx}=\) J\({}_{i,j}^{yy}\)) and the anisotropic part (\(\Delta=\)J\({}_{i,j}^{zz}-\)J\({}_{i,j}^{xx}\)). The anisotropic part of the exchange, \(\Delta\), in either system does not exceed a few (1-6) \(\mu\)eV, and is comparable with the rounding error in our calculations. Therefore, we consider \(\Delta\) as effectively 0, and we ascribe the stabilization of the magnetic order in these 2D materials to just J\({}_{i,j}\) and SIA [39].
In Fig. 3 we present our TB2J results for J\({}_{i,j}\) up to the \(6^{th}\) NN. One notices in Fig. 3 that the \(1^{st}\) and \(2^{nd}\) NN interactions are strong (a few tens of meV), an order of magnitude larger compared to the usually encountered isotropic exchange values (a few meV) in 2D materials. Another observation is that even pairs over 1 nm apart have a small but non-zero exchange interaction, of a few hundred \(\mu\)eV. However, since these are much smaller than the \(1^{st}\) and \(2^{nd}\) NN interactions, in what follows we focus on the first two NN pairs with giant J\({}_{i,j}\), being essential for the high-T\({}_{\mathrm{c}}\) GKA ferromagnetism. We use the exchange interaction parameters calculated with TB2J, and SIA from Table 1, to build the Heisenberg Hamiltonian as explained above, and then employ Monte Carlo simulations to evaluate the stability of the FM order with respect to temperature. In Fig. 4, we present the thereby obtained evolution of the (normalized) magnetization (M\({}_{st}\)/M\({}_{s}\)), magnetic susceptibility (\(\chi\)), and specific heat (C\({}_{v}\)), as a function of temperature. One clearly sees that the estimated critical temperatures of the FM order in both CrC and MnC _exceed room temperature_, being 307 K and 428 K respectively. Even though anisotropy is in general required to stabilize magnetic order in 2D above 0 K - in our case that is SIA - the main reason for such a large T\({}_{\mathrm{c}}\) lies in the particularly large isotropic exchange between the \(1^{st}\) as well as the \(2^{nd}\) NN pairs of TM atoms. Table 1 provides the Curie and Néel temperature values for the tTMX structures. Notably, VN, CoN, and NiTe also exhibit ferromagnetic order; however, their respective Curie temperatures are significantly lower compared to those of CrC and MnC. This can be primarily attributed to two key factors: weaker exchange interactions and strong single-ion anisotropy (SIA), which together limit their ability to maintain ferromagnetic order at elevated temperatures, thus resulting in lower Curie temperatures when contrasted with CrC and MnC. Conversely, the antiferromagnetic materials exhibit Néel temperatures close to absolute zero due to the significant impact of SIA; the Néel temperature marks the point at which the antiferromagnetic order is lost to the paramagnetic (magnetically disordered) state. The details of the calculations of the Curie and Néel temperatures can be found in the \(\dagger\) Supplementary Information.
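A minimal version of such a Monte Carlo calculation is sketched below in Python: Metropolis sampling of classical unit spins on an \(L\times L\) square lattice with the Hamiltonian of Eq. (1), truncated to a single first-NN isotropic exchange plus the single-ion anisotropy. The values of J and A are placeholders patterned after the CrC row of Table 1, and the lattice size and sweep counts are arbitrary choices; the production calculations behind Fig. 4 include the full set of longer-range couplings, so this sketch conveys the procedure rather than reproducing the quoted T\({}_{\rm c}\) values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder parameters in meV, patterned after the CrC row of Table 1.
# Convention of Eq. (1): H = sum_<ij> J S_i.S_j + sum_i A (S_i^z)^2 with unit spins,
# so J < 0 favors FM order and A < 0 an out-of-plane easy axis.
J, A = -52.68, -0.35
KB = 0.08617          # Boltzmann constant in meV/K
L, SWEEPS, THERM = 12, 2000, 1000

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def neighbor_sum(spins, i, j):
    """Vector sum of the four NN spins of site (i, j), periodic boundaries."""
    return (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
            + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])

def local_energy(s, h):
    return J * np.dot(s, h) + A * s[2] ** 2

def mean_magnetization(T):
    spins = np.tile(np.array([0.0, 0.0, 1.0]), (L, L, 1))   # start fully ordered along z
    beta, mags = 1.0 / (KB * T), []
    for sweep in range(SWEEPS):
        trials = random_unit_vectors(L * L)
        for k in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            h = neighbor_sum(spins, i, j)
            dE = local_energy(trials[k], h) - local_energy(spins[i, j], h)
            if dE <= 0 or rng.uniform() < np.exp(-beta * dE):
                spins[i, j] = trials[k]
        if sweep >= THERM:
            mags.append(np.linalg.norm(spins.mean(axis=(0, 1))))
    return np.mean(mags)

for T in (100, 200, 300, 400, 500):
    print(f"T = {T:3d} K   <|M|>/M_s = {mean_magnetization(T):.3f}")
```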
## III Discussion
Having presented the core results, we next detail the origin of the magnetic interactions behind the observed high Curie temperature in monolayer tTMXs, bearing in mind the original assumptions following from the GKA rules. To shed light on the source of the large exchange J\({}_{i,j}\), we look into the orbitally-resolved contributions.
Our initial assumption was that the bulk of the exchange interactions originates from the exchange between the \(d\) orbitals of the interacting TM atoms (\(d\)-\(d\) exchange). In order to properly describe the physics behind our observations, we treat \(3s\), \(3p\), \(3d\), \(4s\), and \(4p\) as valence orbitals on TM atoms. Further, we quantify the contributions of each orbital-to-orbital interaction between the neighboring TM atoms (e.g. \(4s\) on TM1 and \(3d\) on TM2), to the total exchange between them. For facilitated interpretation, we consider the contributed interactions between each type of orbital on TM1 (\(a=3s\), \(4s\), \(3p\), \(3d\), and \(4p\)) and only \(3d\) orbitals on TM2 (\(b=3d\)), for the first three NN magnetic interactions \(J_{\psi NN}^{a,b}\), as shown in Fig. 5. It is rather obvious from Fig. 5 that TM1(\(3d\))-TM2(\(3d\)) interactions dominate (being several tens of meV strong), and determine the magnetic order in the system - in this case the FM one. Interactions between other types of orbitals on one TM and \(3d\) orbitals on the other are generally at least an order of magnitude smaller. Therefore, after \(d\)-\(d\) exchange is proven to be crucial for the large exchange and room-temperature magnetism in tTMXs, we next decompose it into the exchanges between individual \(d\)-orbitals of the interacting pair of TM atoms, to disentangle the key contributors. The results per \(d\)-orbital and per material are shown in Fig. 6.

Figure 3: **The isotropic magnetic exchange, per neighboring pair.** The isotropic exchange interactions in CrC (blue) and MnC (red), calculated using Green’s function method as implemented in TB2J. The inset depicts the numerical labeling of the nearest-neighbor sites within the structure.
### The nearest-neighbor interaction
Owing to the square lattice symmetry of these monolayers, our original premise holds, and the empirical GKA rules are validated in the case of the \(1^{st}\) NN exchange - the ideally \(90^{\circ}\) TM-X-TM bond arrangement fosters a particularly strong FM interaction, stemming mainly from the \(d\)-\(d\) exchange.
As seen in Fig. 6, the largest contribution to the \(1^{st}\) NN magnetic interaction in both considered materials comes from the interaction between \(d_{xz}\) on one TM atom, and \(d_{yz}\) on the other, together with its symmetric twin - TM1(\(d_{yz}\))-TM2(\(d_{xz}\)). We ascribe this precisely to the square-lattice geometry of the structure and the fact that the dumbbells of the \(d_{xz}\) on TM1 and \(d_{yz}\) on TM2 point along the bonds to the (X1) ligand atom between them in the structure (analogously for \(d_{yz}\) on TM1 and \(d_{xz}\) on TM2, interacting via the adjacent ligand X2). Moreover, after having a closer look at the DFT Hamiltonian in the localized basis set, we noticed that both \(d_{xz}\) on TM1 and \(d_{yz}\) on TM2 interact only with \(p_{z}\) on X1, while the other hopping matrix elements (with \(p_{x}\) and \(p_{y}\) on X1, and \(p_{x}\), \(p_{y}\), and \(p_{z}\) on X2) are 0, because of the symmetry of the system.
Another significant contribution to the total exchange between the \(1^{st}\) NN pair originates from TM1(\(d_{xy}\))-TM2(\(d_{xy}\)) interactions. In that case, the \(d_{xy}\) orbitals on TM1 and TM2 point towards each other directly and have significant direct overlap, however, there is also significant hopping between \(d_{xy}\) on both TM1 and TM2 and \(p_{x}\) and \(p_{y}\) orbitals on both nearest X atoms (X1 and X2). This increased complexity of the physical picture causes differences in the sign and the strength of those particular contributions in the two considered compounds: in case of CrC this contribution is AFM, while in case of MnC it is FM. The main reason for the difference in this orbital contribution lies in the different _atomic environment_ - the ordering and occupation of the atomic \(d(p)\) orbitals on TM(X) atoms - and different behavior of direct exchange in the particular environment. In case of the atomic environment in CrC, the superexchange solely determines the orbital contribution to the total magnetic exchange and it is AFM, while in case of MnC, direct- and superexchange compete in such atomic environment that the result is the FM coupling. Details of these findings are made available in the \(\dagger\) Supplementary Information.
Both compounds host several other non-zero interactions between different \(d\) orbitals, e.g. \(d_{xy}\) on TM1 and \(d_{z^{2}}\) on TM2. Even though these terms are much smaller than the dominating ones discussed above, they are still sizable (several meV) and they do affect the total exchange, albeit at a tertiary level. In particular, the TM1(\(d_{xy}\))-TM2(\(d_{z^{2}}\)) contribution is very sensitive to the alteration of the superexchange hopping TM(\(d_{xy}\))-X(\(p_{x/y}\)); hence we note that the superexchange mechanism plays an important role here. For more discussion on their origin and behavior, we refer the reader to the \(\dagger\) Supplementary Information.

Figure 4: **Thermal stability of the long-range magnetic order.** Magnetization, specific heat and magnetic susceptibility of (a) CrC and (b) MnC, as a function of temperature.

Figure 5: **Orbitally-decomposed magnetic interactions.** The magnetic exchange interactions in CrC and MnC, for the three nearest-neighbor pairs. As illustrated in the inset, for every pair the contributions are discerned per orbital \(a\) of TM1 (\(a=3s,4s,3p,3d,4p\)), each interacting with just one orbital (\(b=3d\)) of TM2.
### The next-nearest and further-neighbor interactions
Although our initial premise of strong FM interactions between the first nearest neighbors was validated, we point out at this stage that plain GKA rules are not sufficient to interpret the magnetic behavior of a 2D material, even in the case of an ideally planar square-lattice structure. Namely, the \(2^{nd}\) NN interaction is expected to be AFM according to the GKA rules, due to the \(180^{\circ}\) TM-X-TM bond alignment - but we have observed the (strong) opposite in both materials of interest.
In what follows, we present the results for the TM1-X-TM2 bonds being aligned with the global \(x\) coordinate (as the case of TM1-X-TM2 along \(y\) is completely analogous). As was the case with the \(1^{st}\) NN interaction, the orbitals aligned with the direction of TM-X-TM bonds are mainly responsible for the large total exchange of the \(2^{nd}\) NN pair as well - i.e. TM1(\(d_{xz}\))-TM2(\(d_{xz}\)) interaction is the dominant one in this case. The dumbbells of these two \(d\) orbitals both point towards the common X atom, and only interact with its \(p_{z}\) orbital.
The fact that the dominating contributions to the \(1^{st}\) and \(2^{nd}\) NN interactions are originating from the same physical process - i.e. from two \(d\) orbitals on two TM atoms that are oriented towards the common ligand atom, and interact only with its \(p_{z}\) orbital - leads towards the conclusion that they should be of the same sign and comparable strength, as they indeed are in our results (strongly FM). That said, the GKA rules assume the dominant contribution to the AFM superexchange in case of \(180^{\circ}\) TM-X-TM bonds to be via the \(p\) orbital of X, whose dumbbell is aligned with TM-X-TM direction [30; 40] - which would be the \(p_{x}\) orbital in the above discussion. However, even though we find these contributions to be AFM as GKA rules would suggest, we also find that they are an order of magnitude smaller than the dominant contributions, involving the \(p_{z}\) orbital. Therefore, the disagreement between our results and GKA rules originates in the fact that mechanism considered dominant by GKA (superexchange involving the \(p_{x}\), or \(p_{\sigma}\)) is secondary in our case, and vice versa - the mechanism considered secondary by GKA (superexchange involving the \(p_{z}\), or \(p_{\pi}\)) appears to be dominant in the two materials of our interest. For interested readers, this is discussed in more detail within the \(\dagger\) Supplementary Information.
In case of the \(3^{rd}\) NN exchange, the geometry is very similar to the \(1^{st}\) NN exchange, hence one expects to have the same dominant contributions. We find that the three main contributions in both materials are indeed the same ones as in the case of the \(1^{st}\) NN, i.e. \(d_{xz}\)-\(d_{yz}\), \(d_{yz}\)-\(d_{xz}\), and \(d_{xy}\)-\(d_{xy}\). However, their observed behavior is more complicated. In case of CrC the TM1(\(d_{xz}\))-TM2(\(d_{yz}\)) and TM1(\(d_{yz}\))-TM2(\(d_{xz}\)) are contributing the most, however to AFM order (positive exchange parameter), while the second largest contribution comes from TM1(\(d_{xy}\))-TM2(\(d_{xy}\)) and it is FM. In case of MnC the main contribution comes from TM1(\(d_{xy}\))-TM2(\(d_{xy}\)) and it is AFM. The interaction between TM1(\(d_{xz}\))-TM2(\(d_{yz}\)) and TM1(\(d_{yz}\))-TM2(\(d_{xz}\)) is again FM, however their magnitude affects the total 3\({}^{rd}\) NN exchange significantly less. Since the 3\({}^{rd}\) NN exchange is negligible compared to the 1\({}^{st}\) and 2\({}^{nd}\) NN exchange, we will not detail these interactions. We see however that the sign and strength of the orbital contributions to the 3\({}^{rd}\) NN exchange are mainly determined by the orbital ordering and occupation of Cr and Mn atoms - i.e. their atomic environment - and the fact that in a different environment, different mechanisms may dominate. In case of the TM1(\(d_{xy}\))-TM2(\(d_{xy}\)) contribution, our results suggest that in case of CrC there is a competition between direct and superexchange, while in MnC it is the usual superexchange through the X ligand that dominates. By comparing the results for the first- and the third-nearest-neighbor exchange in the two materials, one could argue that in the case that the atomic environment stimulates the superexchange alone, the result will be an AFM interaction (the first nearest neighbor in CrC, and the third nearest neighbor in MnC). On the other hand, when the atomic environment stimulates the competition between different exchange mechanisms (the first nearest neighbor in MnC, and the third nearest neighbor in CrC), the resulting interaction between TM1(\(d_{xy}\))-TM2(\(d_{xy}\)) will be FM. For interested readers, we provide brief additional discussion of the effect of the atomic environment on the third-nearest-neighbor magnetic exchange in the \(\dagger\) Supplementary Information.

Figure 6: **Sub-orbitally-decomposed \(3d\)-\(3d\) interactions.** The magnetic exchange between five different 3d (\(d_{xy}\), \(d_{yz}\), \(d_{z^{2}}\), \(d_{xz}\) and \(d_{x^{2}-y^{2}}\)) orbitals on the first three NN pairs, for both CrC and MnC monolayers. CrC and MnC are indicated using dark and light shading, respectively.
## IV Conclusions
In summary, after a computational high-throughput screening of the whole family of tTMX materials, we have identified two dynamically stable and ideally 2D flat Lieb-like magnetic crystals, CrC and MnC. According to the seminal Goodenough-Kanamori-Anderson rules, materials of such symmetry are prone to host pronouncedly high ferromagnetic exchange interaction. Our detailed analysis of the magnetic properties has shown that both Cr and Mn ions indeed have sizable magnetic moments, that the materials host large isotropic magnetic interactions (order of 10 meV), that exchange anisotropy is negligible in these systems (order of 1 \(\mu\)eV), and that stabilization of the long-range magnetic order in these materials should be ascribed to single-ion anisotropy (order of 0.1 meV). As a result of this large isotropic exchange and non-zero single-ion anisotropy, the Curie temperature for the ferromagnetic transition of these materials exceeds room temperature - 307 K in CrC and 428 K in MnC.
In our detailed analysis, we showed how the symmetry/geometry of the system selects the dominant contributions to the total magnetic exchange. In the case of the 1\({}^{st}\) NN and the 90-degree TM-C-TM lattice direction, the interaction between \(d_{xz}\) on TM1 and \(d_{yz}\) on TM2 - TM1(\(d_{xz}\))-TM2(\(d_{yz}\)) - dominates together with its twin counterpart TM1(\(d_{yz}\))-TM2(\(d_{xz}\)). We presented the hypothesis that this interaction is mediated by the \(p_{z}\) orbital of the intervening ligand C. In the case of the 2\({}^{nd}\) NN and the 180-degree TM-C-TM lattice direction, the dominant contributions are again geometry-selected, being either TM1(\(d_{xz}\))-TM2(\(d_{xz}\)) or TM1(\(d_{yz}\))-TM2(\(d_{yz}\)), depending on whether the TM-C-TM direction is along the global Cartesian \(x\) or \(y\) coordinate. In this case we also hypothesized that the interaction is mediated by the \(p_{z}\) orbital of the C atom. In the case of the 3\({}^{rd}\) NN, which was of secondary importance in our systems, we could not establish a geometry-based criterion for a single dominant contribution; however, even there two contributions were identified as the leading ones. Finally, we have shown that interactions beyond the 3\({}^{rd}\) NN are far smaller and may be neglected in any prediction of experimentally measurable physical properties of these systems.
In conclusion, we provided convincing evidence that CrC and MnC in their two-dimensional tetragonal phase are room-temperature Lieb-like-lattice 2D ferromagnets. Furthermore, we highlighted the effect of geometry on the magnetic properties - both the magnetic moments and the exchange interactions between them. That said, we expect any emergent material with a flat square-lattice structure and a magnetic atom from the first half of the first row of transition metals (V, Cr, Mn, Fe) to: (i) have sizable magnetic moments (mainly from unpaired \(d_{xz}\) and \(d_{yz}\) orbitals), and (ii) host large geometry-selected magnetic interactions between these two types of orbitals. Consequently, in any such system with sufficient anisotropy in magnetic interactions, the latter features will foster large isotropic exchange and a high critical temperature of the long-range magnetic order in two dimensions, as already shown for the 2D metallic MnB monolayer [41].
## V Acknowledgement
This work was supported by the Research Foundation-Flanders (FWO) and the Technological Research Council of Turkey (TUBITAK) under Contract No. 118F512. D.S. is a doctoral fellow of FWO under Contract No. 11J4322N. The computational resources and services for this work were provided by the VSC (Flemish Supercomputer Center), funded by the FWO and the Flemish Government - department EWI.
|
2309.11623 | Leveraging Negative Signals with Self-Attention for Sequential Music
Recommendation | Music streaming services heavily rely on their recommendation engines to
continuously provide content to their consumers. Sequential recommendation
consequently has seen considerable attention in current literature, where state
of the art approaches focus on self-attentive models leveraging contextual
information such as long and short-term user history and item features;
however, most of these studies focus on long-form content domains (retail,
movie, etc.) rather than short-form, such as music. Additionally, many do not
explore incorporating negative session-level feedback during training. In this
study, we investigate the use of transformer-based self-attentive architectures
to learn implicit session-level information for sequential music
recommendation. We additionally propose a contrastive learning task to
incorporate negative feedback (e.g skipped tracks) to promote positive hits and
penalize negative hits. This task is formulated as a simple loss term that can
be incorporated into a variety of deep learning architectures for sequential
recommendation. Our experiments show that this results in consistent
performance gains over the baseline architectures ignoring negative user
feedback. | Pavan Seshadri, Peter Knees | 2023-09-20T20:21:13Z | http://arxiv.org/abs/2309.11623v2 | # Leveraging Negative Signals with Self-Attention for Sequential Music Recommendation
###### Abstract.
Music streaming services heavily rely on their recommendation engines to continuously provide content to their consumers. Sequential recommendation consequently has seen considerable attention in current literature, where state of the art approaches focus on self-attentive models leveraging contextual information such as long and short-term user history and item features; however, most of these studies focus on long-form content domains (retail, movie, etc.) rather than short-form, such as music. Additionally, many do not explore incorporating negative session-level feedback during training. In this study, we investigate the use of transformer-based self-attentive architectures to learn implicit session-level information for sequential music recommendation. We additionally propose a contrastive-learning task to incorporate negative feedback (e.g. skipped tracks) to promote positive hits and penalize negative hits. This task is formulated as a simple loss term that can be incorporated into a variety of deep-learning architectures for sequential recommendation. Our experiments show that this results in consistent performance gains over the baseline architectures ignoring negative user feedback.
Sequential Recommendation, Music Recommendation, Self-attention, Contrastive Learning
## 1. Introduction
Recommendation systems have become integral to streaming services such as Spotify, Apple Music, Deezer, etc., and by proxy, the music industry as a whole. As the music streaming business model relies on continual user engagement and activity, consistent music discovery is an essential service. Sequential music recommendation is one such task in this domain, where, given a current user session (i.e., a current sequence of tracks listened to by a user), a system extends the session by recommending the user the next track. Within the music domain, sequential recommendation is generally split into two categories, _next song recommendation (NSR)_ and _automatic playlist continuation (APC)_. These two tasks can be learned in a similar manner from playlist and listening history information, but they differ in output length: APC aims to extend the session or playlist by an arbitrary length, while NSR only aims to provide the next relevant song in sequence (Knees et al., 2019). For this study we focus specifically on NSR.
Music recommendation differs from other well-studied domains of recommendation (retail, movies, games, etc.) in a number of important ways. Singular music tracks generally are short and easily consumed, necessitating a thorough understanding of a user's preferences in order to provide both breadth and depth over a large quantity of relevant recommendations (Knees et al., 2019). Robust music recommendation systems often leverage previous consumer history to learn user preferences through methods such as collaborative filtering (Knees et al., 2019); however, these approaches fall victim to the cold start problem (Knees et al., 2019): for new users or new tracks, the recommendation model does not have any usable information and must guess preferences until the user and/or track has interacted with the system enough to learn a profile (Knees et al., 2019).
Sequential recommendation in general can alleviate this issue by learning session-level relationships instead of, or in tandem with, user-level relationships. By learning session-item relationships from sequential interaction, item profiles can be rapidly built as items interact with the system, since the recommendation engine can compare user sessions directly rather than using aggregate statistics via collaborative filtering, which takes much more data to build robust representations (Knees et al., 2019).
This study aims to leverage implicit and explicit signals present within listening sessions to learn robust profiles for sequential recommendations. Prior work has considered direct incorporation of user feedback for ad-hoc adjustments based on content and context similarity, e.g. (Han et al., 2018; Knees et al., 2019). In this work, we investigate learning session-level information via transformer-based architectures, influenced by SoTA methods for sequential retail recommendation, as well as incorporating user feedback through a learned contrastive task. To the best of our knowledge, learning from negative signals/user feedback has not been explored thoroughly for sequential music recommendation due to a lack of public data containing thorough user feedback. Many public music recommendation datasets, such as Lastfm-1K (Knees et al., 2019), were collected before the streaming boom, where logged listening history would primarily be sourced from user curation, leading to few negative signals. For this study, we employ the Music Streaming Sessions Dataset from Spotify (Bogorst et al., 2019). Since many of the interactions present are from programmatic or expert curation, rather than user curation, they can be considered as exploration events where the user reacts positively (listens to track in entirety) or negatively (skips track). This provides a rich amount of negative samples to learn effective session-level representations from.
## 2. Related Work
Sequential recommendation systems can generally be divided into two types: _session-aware_ systems leverage session-level history from
identifiable users, while _session-based_ systems ignore user labels and aim to build user-agnostic representations using solely discrete sessions (Han et al., 2017). In this study, we investigate a _session-based_ system that implicitly learns a user profile through anonymous listening sessions.
Several session-based approaches have been proposed for retail recommendation tasks. CASER (Seshadri et al., 2017) and NextItNet (Seshadri et al., 2018) leverage convolutional filters to learn sequential representations. BERT4Rec (Liu et al., 2019) leverages the bidirectional attention mechanism from BERT (Chen et al., 2019) to learn a robust vocabulary of items for sequential recommendation.
Several sequential-based approaches have been proposed for music recommendation tasks incorporating a variety of information to drive recommendation (Han et al., 2017). Most of such approaches leverage contextual and/or content features, largely via extensive user profiles and music tags. Relevant work for these respective approaches includes CoSERNN (Chen et al., 2019) and Online Learning to Rank for Sequential Music Recommendation (Han et al., 2017). The former leverages contextual information such as the device used, time of day during recommendation, etc. to drive contextual user-sequential embeddings for sequential recommendation, while the latter leverages content features via music tags for an online learning-to-rank scheme. In a study closest to our task, Wen et al. investigate leveraging implicit user feedback immediately after click for video and music recommendation, and find performance gains incorporating this information into a variety of recommendation approaches (Seshadri et al., 2018). Most state-of-the-art sequential music recommendation approaches leverage several types of information that are often not present in public datasets (e.g. lyrics, user contextual/demographic information, music tags, etc.). It would be increasingly difficult to re-implement and test these systems in a cold-start or academic setting due to the amount and variety of data required. Our approach aims to alleviate this data issue by taking advantage of implicit relationships from data present solely in listening sessions of songs, namely item labels and timestamps of user events. We additionally do not take into account long-term user history due to a lack of user labels; thus, we focus on creating a _session-based_ system.
## 3. Method
### Problem Statement
In our scenario, we define a session \(S\) of length \(K\) and set of possible tracks \(t\in T\) for user \(u\). Track \(t_{i}\), where \(t_{1,2,...K}\in S\) represents the track at each time step \(i\) in session \(S\), where \(i\in[1\dots K]\). Generally, the task of a sequential recommendation system is to predict the desired next item \(h_{i}\) at time step \(i+1\) for each \(t_{i}\in S\), given an interaction history \(S_{i}\), where \(S_{i}=\{t_{a}\in S\mid a\leq i\}\).
For negative feedback-agnostic sequential recommendation systems (i.e where the user has not explicitly responded negatively to any item), we define \(h_{i}\) for track \(t_{i}\) as the next track in the sequence, \(t_{i+1}\).
For our feedback-aware system, we define the set of positive examples (no-skip) as \(P\) and negative examples (skipped tracks) as \(N\) per sequence \(S\), such that:
\[p_{j}\in P,n_{k}\in N,\text{ and all }p_{j},n_{k}\in S\]
where \(j,k\) correspond to the time step of each example in session \(S\). Additionally for clarity, we define \(l_{P}\) and \(l_{N}\) as the set of time steps for all positive and negative examples, respectively, where \(j\in l_{P}\) and \(k\in l_{N}\). For any track \(t_{i}\), we define the desired next track \(h_{i}\) as the next positive example in the session, \(p_{m}\), such that:
\[m=\min_{j}\{j\in l_{P}\mid j>i\}\]
Where the difference \(m-i\) represents the number of skipped tracks between track \(t_{i}\) and its next positive sample.
To predict the desired next track at time step \(i\), we model a probability distribution \(p(h_{i}=t\mid S_{i})\) over all possible tracks. Sorting this distribution provides a ranking of the most-relevant items. By learning from negative feedback, we aim to both raise the ranking of \(p_{m}\), as well as lower the rankings of items in \(N\) in predicting each \(h_{i}\).
### Model Architecture
We investigate unidirectional and bidirectional transformer-based architectures in this study, inspired by the SASRec (Seshadri et al., 2018) and BERT4Rec (Liu et al., 2019) architectures, respectively. For both approaches we use the same base architecture described below, with the sole differences being the training procedure, learning objective, and the use of a causal attention mask in the case of the unidirectional model. We keep the implementation analogous to that of the aforementioned authors for better comparison.
#### 3.2.1. Track Embeddings
We store learned track embeddings in a lookup-table \(e_{t}\in E\) of size \(T\times\mathbb{R}^{|d|}\), where \(T\) is the number of tracks and \(\mathsf{d}\) is the embedding dimensionality. \(E(\cdot)\) denotes the function retrieving the embeddings of a track or set of tracks from table \(E\).
#### 3.2.2. Positional Embeddings
To inject information about the position of each track in the sequence, we add a learnable positional embedding \(PE\) of size \(K\times\mathbb{R}^{|d|}\) to each track embedding in the sequence, where \(\mathsf{K}\) corresponds to the size of the sequence.
#### 3.2.3. Encoder
We employ a standard transformer encoder to learn contextual session-level information. This is a fully attention based model employing multiple multi-head self-attention layers and position-wise feedforward layers to learn contextual information from sequential inputs.
#### 3.2.4. Prediction Layer
After obtaining hidden vectors from the encoder with contextual information, we project them through a fully connected layer with GELU activation (Chen et al., 2019) to obtain predicted embeddings \(\hat{y}_{i}\) for each \(t_{i}\in S\). We then compute an inner product with the embedding table and apply a sampled softmax to get a probability distribution over each track.
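As a concrete illustration of Sections 3.2.1-3.2.4, the following is a minimal PyTorch-style sketch of the architecture; the class and layer names, the reuse of `nn.TransformerEncoder`, and the padding/[MSK] token handling are our assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class SessionTransformer(nn.Module):
    """Sketch: track + positional embeddings -> transformer encoder -> GELU head -> logits."""
    def __init__(self, num_tracks, max_len=21, d=128, heads=8, layers=2, causal=False):
        super().__init__()
        self.track_emb = nn.Embedding(num_tracks + 2, d, padding_idx=0)  # ids reserved for pad/[MSK] (assumed)
        self.pos_emb = nn.Embedding(max_len, d)                          # 20 tracks + appended [MSK] query (assumed)
        enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Sequential(nn.Linear(d, d), nn.GELU())
        self.causal = causal                              # True for the SASRec-style unidirectional variant

    def forward(self, tracks):                            # tracks: (B, K) integer track ids
        B, K = tracks.shape
        pos = torch.arange(K, device=tracks.device).unsqueeze(0).expand(B, K)
        h = self.track_emb(tracks) + self.pos_emb(pos)
        attn_mask = None
        if self.causal:                                   # upper-triangular -inf mask preserves autoregression
            attn_mask = torch.triu(torch.full((K, K), float("-inf"), device=tracks.device), diagonal=1)
        h = self.encoder(h, mask=attn_mask)
        y_hat = self.head(h)                              # predicted embeddings, (B, K, d)
        logits = y_hat @ self.track_emb.weight.T          # inner product with the embedding table
        return y_hat, logits
```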
#### 3.2.5. Sampled Softmax
Additionally, for training stability with such a large number of classes (\(\sim 1\)M tracks in this study), we employ a sampled softmax function during training. For each mini-batch for each session, we uniformly sample 1000 unseen tracks and rank the target tracks alongside these. These 1000 tracks are re-sampled each epoch, such that as training continues, the model continually learns to "rank" the target items against an increasing subset of the total tracks, as the number of unique tracks sampled for comparison increases.
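A sketch of the sampled-softmax step under the same assumptions (uniform re-sampling of 1000 unseen tracks every epoch; helper and variable names are hypothetical):

```python
import torch
import torch.nn.functional as F

def sampled_softmax_loss(y_hat, emb_table, target_ids, negative_ids):
    """y_hat: (B, d) predicted embeddings; emb_table: (V, d) track embedding weights;
    target_ids: (B,) desired next tracks; negative_ids: (B, 1000) uniformly sampled unseen tracks."""
    cand_ids = torch.cat([target_ids.unsqueeze(1), negative_ids], dim=1)  # (B, 1001), target in column 0
    cand_emb = emb_table[cand_ids]                                        # (B, 1001, d)
    logits = torch.einsum("bd,bnd->bn", y_hat, cand_emb)                  # inner products with candidates
    labels = torch.zeros(y_hat.size(0), dtype=torch.long, device=y_hat.device)
    return F.cross_entropy(logits, labels)                                # NLL of the sampled softmax
```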
### Sequential Recommendation Task
For both approaches, we employ the same learning objective, the negative log likelihood (NLL), for training; however they differ in how this learning objective is used.
#### 3.3.1. Unidirectional
We employ the _next-item prediction_ task for this approach. For each \(t_{i}\in S\), we task the model with predicting the next item in the sequence, \(t_{i+1}\). We then compute log-probabilities and pass this to the NLL Loss. Additionally, attention maps are computed using a causal mask, preserving the auto-regressive nature of unidirectional transformers.
#### 3.3.2. Bidirectional
We employ the _cloze_, or _masked language modelling (MLM)_, task for this approach. We randomly mask a proportion \(p\) of each sequence with a special token [MSK] and task the model with predicting the correct track at these indices with a bidirectional attention map. For the sequential recommendation task, we also append the [MSK] token to the end of the sequence and set its target to the last track in the session targets, to ensure that this target does not appear in the bidirectional attention map.
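A minimal sketch of this masking step, assuming integer-id sequences with dedicated padding and [MSK] ids (both assumptions):

```python
import torch

def cloze_mask(tracks, msk_id, pad_id=0, p=0.2):
    """Mask a proportion p of non-padding positions and append a [MSK] query for the final target."""
    masked = tracks.clone()
    mask_pos = (torch.rand(tracks.shape, device=tracks.device) < p) & (tracks != pad_id)
    masked[mask_pos] = msk_id
    query = torch.full((tracks.size(0), 1), msk_id, dtype=tracks.dtype, device=tracks.device)
    return torch.cat([masked, query], dim=1), mask_pos    # loss is computed at mask_pos and at the query slot
```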
### Skip-informed Contrastive Task
To learn negative sequential track relationships, we employ a contrastive learning task using the skipped tracks in each listening session. We employ noise contrastive estimation with InfoNCE (Krishnan et al., 2017) shown below:
\[\mathcal{L}_{NCE}=-\mathbb{E}_{X}\left[\log\frac{f_{k}(\mathbf{p},\mathbf{c})}{\sum_{x_{j}\in X}f_{k}(x_{j},\mathbf{c})}\right]\]

Given a context vector \(c\), a positive anchor \(p\), and a set of noise samples \(x\in X\), this loss term uses a categorical cross-entropy to classify the positive anchor against the set of noise samples, given a scoring function \(f_{k}(\mathbf{x},\mathbf{c})\).
For each track \(t_{i}\in S\), we adapt this to our task of promoting the next true positive sample \(p_{m}\) and penalizing all negative samples \(n_{j}\in N\) by defining the following:
1. \(c=e_{t_{i}}\) or \(\hat{y_{i}}\)
2. \(X=E(N)\)
3. \(p=E(p_{m})\)
4. \(f_{k}(\mathbf{x},\mathbf{c})=\frac{\mathbf{x}\cdot\mathbf{c}}{\|\mathbf{x}\|_ {2}\|\mathbf{c}\|_{2}}\)
This maximizes the cosine similarity between the embedding \(e_{t_{i}}\) of track \(t_{i}\) and that of the next positive sample \(p_{m}\), while minimizing the similarity between \(e_{t_{i}}\) and all \(e_{n}\in E(N)\). Since, during prediction, logits are computed by the inner product of \(\hat{y_{i}}\) and \(E\), this directly affects the rankings of \(p_{m}\) and all \(n\in N\), by drawing \(t_{i}\) and \(p_{m}\) closer together in the learned embedding space, and consequently pushing \(t_{i}\) and all \(n\in N\) farther away in the embedding space. We experiment with setting the context vector \(c\) as both \(\hat{y_{i}}\) and \(e_{t_{i}}\). Setting \(c=\hat{y_{i}}\) includes the current session context, while setting \(c=e_{t_{i}}\) ignores current session context and instead relies solely on the overall learned representation of the track. We explore both to examine the extent to which immediate context and contextual history affect the learning of negative preference, respectively.
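A minimal sketch of this skip-informed InfoNCE term, assuming cosine-similarity scoring over pre-gathered embeddings (variable names and the handling of a fixed-size negative set are ours):

```python
import torch
import torch.nn.functional as F

def skip_infonce(context, positive, negatives):
    """context: (B, d), either e_{t_i} or y_hat_i; positive: (B, d), embedding of p_m;
    negatives: (B, N, d), embeddings of the skipped tracks in the session."""
    c = F.normalize(context, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos_score = (c * p).sum(-1, keepdim=True)               # (B, 1) cosine similarity with p_m
    neg_scores = torch.einsum("bd,bnd->bn", c, n)           # (B, N) cosine similarities with skipped tracks
    logits = torch.cat([pos_score, neg_scores], dim=1)
    labels = torch.zeros(c.size(0), dtype=torch.long, device=c.device)   # the positive sits at index 0
    return F.cross_entropy(logits, labels)                  # categorical cross-entropy over {p_m} and N
```

During training, this term would simply be added to the sequential NLL as \(\alpha\mathcal{L}_{NCE}+\beta\mathcal{L}_{NLL}\), as described in the Training Procedure subsection.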
### Dataset
For this study we use the Music Streaming Sessions Dataset (MSSD) (Bahdan et al., 2017) for training and evaluation, which contains 160 Million user sessions of 10 to 20 consecutively listened songs (<60 seconds between listens). These listening sessions are uniformly sampled from a variety of contexts, such as the user's personally curated collections, expertly curated playlists, contextual non-personalized recommendations, and personalized recommendations.
Notably, this dataset is pseudonymized, meaning all included sessions lack a user label. Consequently, we treat each session as a new user, ignoring long term history.
Skip labels are provided for each track in each session with strength 1-3, defined per the authors as the track "played very briefly", "played briefly", and "played mostly (but not completely)", respectively. For this study, we are primarily interested in strong negative interactions and therefore only consider tracks with skip strength 1 and 2 as negative examples in each session.
Due to time and computational restraints, we uniformly sample \(\sim\)450K discrete sessions containing \(\sim\)2 million item interactions with \(\sim\)1 million total unique tracks to train and evaluate our models. We note that our subset of sessions contains roughly 15% skipped tracks.
### Training Procedure
As with other contrastive recommendation systems (Krishnan et al., 2017; Krizhevsky et al., 2014), we simply aggregate the sequential task loss and the contrastive loss, shown below, within a single training pass
\[\mathcal{L}=\alpha\mathcal{L}_{NCE}+\beta\mathcal{L}_{NLL}\]
where \(\alpha\) and \(\beta\) are scalar terms. We empirically tune these parameters through the validation set.
### Hyperparameters and Implementation
As our data contains variable length sessions between 10 and 20 interactions, we pad all sessions to length 20. We stack 2 encoder blocks with 8 attention heads. The embedding and hidden dimensions are both set to 128. Masking for the bidirectional model is applied per batch with proportion \(p=0.2\). We initialize all parameters via truncated normal sampling with \(\mu=0,\sigma=1\) in range \([-0.02,0.02]\). We tune the optimal \(\alpha,\beta\in[.25,.5,.75,1]\) using the validation set and select \(\alpha=0.5,\beta=0.5\). We use the ADAM optimizer (Kingmaa et al., 2014) with a learning rate of 0.005, selected after tuning through the validation set with \(lr\in[0.0001,0.0005,0.001,0.005]\). All models were implemented in python using pytorch-lightning and trained using an NVIDIA RTX 2070 GPU.
## 4. Results and Discussion
### Evaluation
We employ the next-item recommendation task used by (Krishnan et al., 2017; Krizhevsky et al., 2014) for our evaluation. For each sequence, we leave out the final and penultimate items as the testing and validation targets, respectively and reserve the rest of the sequence for training. For each target, we uniformly sample 1000 unobserved tracks, where the task becomes to rank the target among these tracks. We employ the Hit Rate@K
(equivalent to recall) as our evaluation metric, with \(k\in[1,5,10,20]\). The results are shown in Table 1.
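For illustration, a sketch of this sampled hit-rate computation for a single held-out target (helper names are hypothetical; `target_id` is assumed to be a scalar tensor):

```python
import torch

def hit_rate_at_k(y_hat, emb_table, target_id, sampled_neg_ids, k=10):
    """Rank the held-out target among 1000 uniformly sampled unobserved tracks."""
    cand_ids = torch.cat([target_id.view(1), sampled_neg_ids])   # target placed at index 0
    scores = emb_table[cand_ids] @ y_hat                          # (1001,) inner-product scores
    topk = scores.topk(k).indices
    return float((topk == 0).any())                               # 1.0 if the target is ranked in the top-k
```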
### Discussion
We note a number of observations from our experiments. Namely:
1. The skip-informed contrastive task consistently outperforms the feedback-agnostic models, indicating that learning from negative feedback is beneficial for sequential music recommendation
2. The unidirectional models consistently outperform the bidirectional models, with a waning performance gap as the top-K for the hit rate increases.
3. Using the final hidden state \(\hat{y}_{i}\) with immediate contextual information as the context vector for the contrastive task performs similarly but consistently slightly worse than using the item embeddings.
Overall, we observe that our contrastive task reliably increases the hit rate in a next-item recommendation scenario, with the exception of the HR@20 for the bidirectional model using only track embeddings. Interestingly, even though we create a mismatch between the targets for the sequential recommendation task and the contrastive task, the hit rate for the sequential recommendation task increases, implying that optimizing for the next positive example (\(p_{m}\)) and the next track (\(t_{i+1}\)) in tandem raises the performance in selecting the next track during inference.
We also observe waning performance gains as the number of tracks in the ranking window increases, likely due to the fact that the contrastive task only relates observed tracks with each other. As the number of unobserved tracks in the comparison increases (i.e., HR@1 to HR@20), the effect of the contrastive task weakens. Our experiments imply that the effect of learning from negative feedback in this fashion mostly affects the top-ranked recommendations.
The relatively weak performance of the BERT-like architecture may be due to the relatively high density of our dataset and our short sequence lengths, so training in an autoregressive manner with each sample in the training sequence per epoch may be better for learning latent sequential track relationships. More work is likely needed to find an optimal setup using bidirectional attention with the MLM task.
The slight performance improvement when using the track embeddings as the contextual vector for the contrastive task may imply that while immediate session-level contextual information is useful in learning from negative feedback, reducing this emphasis may provide a slightly stronger signal for preference of a user's next desired track.
## 5. Conclusion and Future Work
Overall, we have presented both a study on the use of transformer-based architectures for sequential music recommendation, as well as a contrastive-based task to learn from negative feedback. We show through our experiments that the contrastive task results in a greater hit rate on both unidirectional and bidirectional architectures. Multiple avenues for future work arise, namely the inclusion of long-term user profiles for better modelling of long-term and changing user taste. Additionally, contextual and content information can be injected into the embeddings to learn more powerful contextual representations. An analysis of the performance on different session types and streaming behaviors (playlist, auto-generated, user-curated, etc.) (Krizhevsky et al., 2015) would also provide better insight into the performance in different listening contexts.
###### Acknowledgements.
This research was funded in whole, or in part, by the Austrian Science Fund (FWF) [P33526]. For the purpose of open access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.
|
2301.13616 | Anti-Exploration by Random Network Distillation | Despite the success of Random Network Distillation (RND) in various domains,
it was shown as not discriminative enough to be used as an uncertainty
estimator for penalizing out-of-distribution actions in offline reinforcement
learning. In this paper, we revisit these results and show that, with a naive
choice of conditioning for the RND prior, it becomes infeasible for the actor
to effectively minimize the anti-exploration bonus and discriminativity is not
an issue. We show that this limitation can be avoided with conditioning based
on Feature-wise Linear Modulation (FiLM), resulting in a simple and efficient
ensemble-free algorithm based on Soft Actor-Critic. We evaluate it on the D4RL
benchmark, showing that it is capable of achieving performance comparable to
ensemble-based methods and outperforming ensemble-free approaches by a wide
margin. | Alexander Nikulin, Vladislav Kurenkov, Denis Tarasov, Sergey Kolesnikov | 2023-01-31T13:18:33Z | http://arxiv.org/abs/2301.13616v2 | # Anti-Exploration by Random Network Distillation
###### Abstract
Despite the success of Random Network Distillation (RND) in various domains, it was shown as not discriminative enough to be used as an uncertainty estimator for penalizing out-of-distribution actions in offline reinforcement learning. In this paper, we revisit these results and show that, with a naive choice of conditioning for the RND prior, it becomes infeasible for the actor to effectively minimize the anti-exploration bonus and discriminativity is not an issue. We show that this limitation can be avoided with conditioning based on Feature-wise Linear Modulation (FiLM), resulting in a simple and efficient ensemble-free algorithm based on Soft Actor-Critic. We evaluate it on the D4RL benchmark, showing that it is capable of achieving performance comparable to ensemble-based methods and outperforming ensemble-free approaches by a wide margin. 1
Footnote 1: Our implementation is available at [https://github.com/tinkoff-ai/sac-rnd](https://github.com/tinkoff-ai/sac-rnd)
## 1 Introduction
In recent years, significant success has been achieved in applying Reinforcement Learning (RL) to challenging and large-scale tasks such as Atari (Badia et al., 2020), Go (Schrittwieser et al., 2020), Dota 2 (Berner et al., 2019), and Minecraft (Baker et al., 2022). However, the online nature of such RL algorithms makes it difficult to apply them in the real world, where online collection of large amounts of exploratory data may not be feasible for safety or financial reasons. Offline Reinforcement Learning (Levine et al., 2020) promises a more controllable and data-driven approach, focusing on algorithms that can learn from a fixed, pre-recorded dataset without requiring additional environment interactions.
The use of ensembles for uncertainty-based penalization has proven to be one of the most effective approaches for offline RL. Ensemble-based algorithms, such as SAC-N, EDAC (An et al., 2021), and MSG (Ghasemipour et al., 2022) currently achieve state-of-the-art results on most D4RL (Fu et al., 2020) datasets, outperforming ensemble-free methods by a wide margin. Unfortunately, in order to achieve the best performance, these algorithms may require tens or hundreds of ensemble members, leading to significant computational and memory overhead, as well as extended training duration (Nikulin et al., 2022).
Recent research (Yang et al., 2022) has successfully reduced the ensemble size to tens of Q-networks in the worst-case scenarios. However, given the general trend for model scaling in offline RL (Kumar et al., 2022; Reed et al., 2022; Lee et al., 2022), efficiently training even ten Q-networks with 80 million parameters each is not feasible. Furthermore, Ghasemipour et al. (2022) showed that methods for efficient ensemble training found in supervised learning literature do not deliver performance comparable to naive ensembles and can even worsen the results. Thus, further research on efficient uncertainty estimation for offline RL is needed, with the goal of reducing the size of the ensemble as much as possible or even fully removing it.
In this work, we move away from ensembles and take an alternative approach to uncertainty estimation, proposing an
Figure 1: Mean performance of SAC-RND variants on walker and hopper medium-* datasets, each averaged over 3 seeds. We plot performance for the naive version, which uses concatenation conditioning, and our final version, which is described in Section 5. We also plot the final scores for the ensemble-free CQL (Kumar et al., 2020) and the ensemble-based SAC-N (An et al., 2021). It can be seen that our version is a significant improvement over the naive version, achieving performance comparable to ensembles.
efficient offline RL method with ensemble-free uncertainty estimation via Random Network Distillation (RND) (Burda et al., 2018). RND, a simple and fast ensemble competitor for epistemic uncertainty estimation (Ciosek et al., 2019), is an attractive choice for offline RL. However, previous research (Rezaeifar et al., 2022) found RND to be insufficiently discriminative for good results.
In our preliminary experiment (Section 3), we show that RND is discriminative enough to detect OOD actions, which contradicts the previous study (Rezaeifar et al., 2022). Nevertheless, our results show that the naive application of RND does indeed not lead to good results (see Figure 1). Building upon these findings, we further simplify the problem and analyze the reasons for this issue (Section 4). We discover that a naive choice of conditioning for the RND prior can hinder the minimization of the anti-exploration bonus by the actor, and that conditioning based on Feature-wise Linear Modulation (FiLM) (Perez et al., 2018) is particularly effective in solving this problem.
Based on our findings, we propose a new ensemble-free offline RL algorithm called **SAC-RND** (Section 5). We evaluate our method on the D4RL (Fu et al., 2020) benchmark (Section 6), and show that SAC-RND achieves performance comparable to ensemble-based methods while outperforming ensemble-free approaches.
## 2 Background
**Offline Reinforcement Learning**. The reinforcement learning problem can be described as a Markov Decision Process (MDP) defined by the \(\{\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma\}\) tuple with state space \(\mathcal{S}\subset\mathbb{R}^{N}\), action space \(\mathcal{A}\subset\mathbb{R}^{M}\), transition dynamics \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\), reward function \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\), and a discount factor \(\gamma\). The goal of reinforcement learning in an infinite horizon setting is to produce a policy \(\pi(a|s)\) that maximizes the expected cumulative discounted return \(\mathbb{E}_{\pi}[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})]\).
In offline reinforcement learning, a policy must be learned from a fixed dataset \(\mathcal{D}\) collected under a different policy or mixture of policies, without any environment interaction. This setting poses unique fundamental challenges (Levine et al., 2020), since the learning policy is unable to explore and has to deal with distributional shift and extrapolation errors (Fujimoto et al., 2019) for actions not represented in the training dataset.
**Offline RL as Anti-Exploration**. There are numerous approaches for offline RL, a substantial part of which constrain the learned policy to stay within the support of the training dataset, thus reducing (Kumar et al., 2020) or avoiding (Kostrikov et al., 2021) extrapolation errors. For our work, it is essential to understand how such a constraint can be framed as _anti-exploration_(Rezaeifar et al., 2022).
Similarly to online RL, where novelty bonuses are used as additive intrinsic rewards for effective exploration, in offline RL, novelty bonuses can induce conservatism, reducing the reward in unseen state-action pairs. Hence the name _anti-exploration_, since the same approaches from exploration can be used, but a bonus is subtracted from the extrinsic reward instead of being added to it.
However, unlike online RL, subtracting a bonus from the raw reward would not be as useful, since the novelty bonus is, by design, close to zero for in-dataset state-action pairs. Therefore, it is more effective to apply it where the overestimation for OOD actions emerges -- the temporal difference learning target:
\[r+\gamma\mathbb{E}_{a^{\prime}\sim\pi(\cdot|s^{\prime})}[Q(s^{\prime},a^{ \prime})-b(s^{\prime},a^{\prime})] \tag{1}\]
where the actor is trained to maximize the expected Q-value, as is usually done in off-policy actor-critic algorithms (Lillicrap et al., 2015; Haarnoja et al., 2018). It can be shown that, theoretically, these approaches are equivalent, but the latter is more suited for use in offline RL (Rezaeifar et al., 2022).
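For illustration, a minimal sketch of how the penalized target of Equation (1) could be formed in an off-policy actor-critic update; the function names, the omission of the entropy term, and the single target critic are simplifying assumptions, not a prescribed implementation.

```python
import torch

@torch.no_grad()
def penalized_td_target(reward, done, next_state, actor, target_critic, bonus_fn, gamma=0.99):
    """TD target with the anti-exploration bonus subtracted inside, as in Eq. (1)."""
    next_action = actor(next_state)                           # a' ~ pi(.|s')
    q_next = target_critic(next_state, next_action)           # in practice, e.g. a min over two target critics
    q_next = q_next - bonus_fn(next_state, next_action)       # conservatism: downweight novel (s', a') pairs
    return reward + gamma * (1.0 - done) * q_next
```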
An illustrative example of how such framing can be effective are ensemble-based approaches such as SAC-N & EDAC (An et al., 2021) and MSG (Ghasemipour et al., 2022), which currently outperform their ensemble-free counterparts by a large margin on most D4RL (Fu et al., 2020) benchmark datasets. For the anti-exploration bonus, these methods use ensemble disagreement as a proxy for epistemic uncertainty. However, a large number of ensemble members is usually required for a competitive result.
**Random Network Distillation**. Random network distillation (RND) was first proposed in online RL (Burda et al., 2018) as a simple and effective exploration bonus. To this day, RND is still considered a strong baseline for exploration that can work well even in stochastic environments, contrary to some more modern approaches (Jarrett et al., 2022).
RND consists of two neural networks: a fixed and randomly initialized _prior_ network \(\bar{f}_{\bar{\psi}}\), and a _predictor_ network \(f_{\psi}\) which learns to predict the prior outputs on the training data:
\[\|f_{\psi}(s)-\bar{f}_{\bar{\psi}}(s)\|_{2}^{2} \tag{2}\]
Both networks map states to embeddings in \(\mathbb{R}^{K}\), and the gradient through _prior_ is disabled. The interpretation of the novelty is straightforward: with the sufficiently diverse _prior_, the _predictor_ must learn to match embeddings on data points similar to the training dataset, while failing to predict on new examples. A bonus in such a case may simply be a prediction error, as in Equation (2).
In a subsequent work, Ciosek et al. (2019) analyses the success of RND in a supervised setting, and shows that fitting random priors can be a competitive alternative to ensembles for estimating epistemic uncertainty.
Note that in practice, the choice of predictor and prior having the same architecture and the estimation of novelty from states only are _very common, but arbitrary_. Moreover, for offline RL, we are interested in estimating the novelty of an action conditioned on the state, which is why in our work RND depends on both: \(f_{\psi}(s,a)\).
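A minimal sketch of such a state-action conditioned RND with plain concatenation (for the background formulation only; depths and widths are assumptions):

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=4):
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    return nn.Sequential(*layers, nn.Linear(d, out_dim))

class ConcatRND(nn.Module):
    def __init__(self, state_dim, action_dim, emb_dim=32):
        super().__init__()
        self.prior = mlp(state_dim + action_dim, emb_dim)      # fixed, randomly initialized
        self.predictor = mlp(state_dim + action_dim, emb_dim)  # trained to match the prior on the dataset
        for p in self.prior.parameters():                      # gradient through the prior is disabled
            p.requires_grad_(False)

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        return ((self.predictor(x) - self.prior(x)) ** 2).sum(-1)  # Eq. (2); also usable as the bonus
```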
**Multiplicative Interactions**. The most common way to fuse two different streams of information is feature concatenation, which is straightforward but can be suboptimal (Dumoulin et al., 2018). Jayakumar et al. (2020) shows that multiplicative interactions provide a powerful inductive bias for fusing or conditioning from multiple streams and are superior in practice. We provide a brief review of those used in our work (excluding concatenation): gating, bilinear, and feature-wise linear modulation (FiLM).
**Gating**. Simple conditioning with two linear layers and pointwise multiplication of the resulting features (Srivastava et al., 2019).
\[f(a,s)=tanh(W_{1}a+b_{1})\odot\sigma(W_{2}s+b_{2})\]
**Bilinear**. Bilinear layer in its most general form, as proposed by Jayakumar et al. (2020).
\[f(a,s)=s^{T}\mathbb{W}a+s^{T}\mathbb{U}+\mathbb{V}a+b\]
where \(\mathbb{W}\) is a 3D tensor, \(\mathbb{U}\), \(\mathbb{V}\) are regular matrices and \(b\) is a vector. However, in our work, we also use the implementation as in PyTorch, which does not learn \(\mathbb{U}\), \(\mathbb{V}\) by default.
**FiLM**. Special case of a bilinear layer with low-rank weight matrices (Perez et al., 2018).
\[f(h,s)=\gamma(s)\odot h+\beta(s)\]
Usually, FiLM operates on hidden activations \(h\) before non-linearity between layers. Thus, the main network takes \(a\) as an input.
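As a sketch, the three conditioning schemes above could be written as follows; the dimensions and the placement of FiLM inside the prior MLP are assumptions for illustration:

```python
import torch
import torch.nn as nn

class Gating(nn.Module):
    """f(a, s) = tanh(W1 a + b1) * sigmoid(W2 s + b2)."""
    def __init__(self, state_dim, action_dim, out_dim):
        super().__init__()
        self.wa = nn.Linear(action_dim, out_dim)
        self.ws = nn.Linear(state_dim, out_dim)
    def forward(self, s, a):
        return torch.tanh(self.wa(a)) * torch.sigmoid(self.ws(s))

class BilinearFusion(nn.Module):
    """s^T W a + b, matching the default torch.nn.Bilinear implementation."""
    def __init__(self, state_dim, action_dim, out_dim):
        super().__init__()
        self.bilinear = nn.Bilinear(state_dim, action_dim, out_dim)
    def forward(self, s, a):
        return self.bilinear(s, a)

class FiLM(nn.Module):
    """h -> gamma(s) * h + beta(s), applied to a hidden activation h of an action-conditioned MLP."""
    def __init__(self, state_dim, hidden_dim):
        super().__init__()
        self.gamma = nn.Linear(state_dim, hidden_dim)
        self.beta = nn.Linear(state_dim, hidden_dim)
    def forward(self, h, s):
        return self.gamma(s) * h + self.beta(s)
```

For the prior described in Section 5, such a FiLM modulation would act on the penultimate hidden activation of an MLP that takes the action as input, before the nonlinearity.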
## 3 Random Network Distillation is Discriminative Enough
To better understand the possible difficulties of applying RND to offline RL, we first reproduce the main experiment from Rezaeifar et al. (2022), which showed that RND is not discriminative enough to be used as a novelty bonus. For convenience, we provide the original figure from Rezaeifar et al. (2022) in Appendix A. We also compare RND with a trained Q-ensemble (N = 25) from the SAC-N algorithm (An et al., 2021). Similarly to Rezaeifar et al. (2022), we use simple state-action concatenation. Predictor and prior share the identical architecture of 4-layer MLPs.
The goal of the experiment (see Figure 2) is to visually plot the anti-exploration bonus for ID state-action pairs and different perturbations of actions to model OOD data: random actions sampled from a uniform distribution and dataset actions to which Gaussian noise with different scales is added.
To our surprise, the result on Figure 2 is strikingly different from previous work. It shows that RND is able to discriminate between ID and OOD actions with varying degrees of distributional shift and is comparable to a trained Q-ensemble. In contrast, Rezaeifar et al. (2022) hypothesizes that RND can only work well out of the box for discrete action spaces and visual features, and concludes that extending it to continuous action spaces is not straightforward.
After further investigation of the open-sourced codebase2 in search of discrepancies with our implementation, we found that the only difference is that, contrary to the advice of Ciosek et al. (2019), Rezaeifar et al. (2022) set the predictor smaller than the prior by two layers during RND pretraining. It is important to make the predictor larger or comparable in capacity to the prior so that it can minimize the loss to zero on the training dataset (Ciosek et al., 2019). However, the actual RND hyperparameters used in the final publication were not listed, so we cannot draw a definitive conclusion about the reason for such different results.
Figure 2: Anti-exploration bonus (Rezaeifar et al., 2022) on the walker2d-medium dataset for trained SAC-N (An et al., 2021), Q-ensemble (N = 25) and RND. Bonus is computed for state-action pairs from the original dataset and different perturbations of actions: random actions, dataset actions to which Gaussian noise is added with different scales. Both RND networks use simple state-action concatenation. The result is strikingly different from a similar figure in the Rezaeifar et al. (2022) (we provide the original figure in the Appendix A for convenience). Contrary to previous research, it can be seen that RND is capable of distinguishing ID from OOD actions and is comparable to a trained Q-ensemble.
## 4 Concatenation Prior Hinders Bonus Minimization
A well-behaved anti-exploration bonus for continuous action spaces, be it RND or any other, should satisfy at least two criteria. First, it should be discriminative enough to detect novel actions and downweight their value estimates (see Equation (1)). Ideally, the bonus should be close to zero for ID data so that we do not bias the Q-function, as this can be detrimental to training. Second, it should allow the actor to easily minimize the bonus with gradient descent during training.
In Section 3, we showed that RND can detect OOD actions. Nevertheless, naive use of RND as an anti-exploration bonus on top of the Soft Actor-Critic algorithm (Haarnoja et al., 2018) still does not provide satisfactory performance (see Figure 1), with scores lower than CQL (Kumar et al., 2020) and SAC-N (An et al., 2021). This gives us a hint that the problem may not be the discriminative power of RND, but that the actor cannot effectively minimize the anti-exploration bonus during training.
To test our hypothesis that the actor cannot effectively minimize the anti-exploration bonus, we further simplify the problem by removing the critic from the SAC algorithm but keeping the entropy bonus (see Algorithm 2 in the Appendix). We expect that, in such a setting, the actor will be able to successfully minimize the anti-exploration bonus to the possible minimum, i.e. comparable to the bonus for the ground truth data at the end of the RND pretraining. As a consequence, since dataset actions provide the minimum bonus by design, we also expect that the distance from the agent to dataset actions should be small.
We set predictor architecture to state-action concatenation. Additionally, we explore different conditioning schemes for the prior. We use the halfcheetah, walker2d and hopper medium datasets, with 3 seeds each. Figure 3 compares the anti-exploration bonus for dataset actions during RND pretraining (see Figure 2(a)) and for agent actions during training (see Figure 2(b)).
As one can see, for all prior architectures except one, the anti-exploration bonus during actor training is much higher than it should be according to its values on the dataset actions. These results confirm our hypothesis. Furthermore, we can note from Figure 3(c) that the actor cannot clone the behavioral policy, since the distance to the dataset actions can even increase during training.
However, RND with the FiLM prior architecture allows the actor to effectively minimize the anti-exploration bonus and successfully clone the behavioral policy. This suggests that, with the right inductive bias for the prior, we can solve the problems of naive RND and possibly achieve better results.
## 5 Anti-Exploration by Random Network Distillation
We are now ready to present **SAC-RND**: a new offline RL method for continuous action spaces, based on our findings in Section 3 and Section 4. It is simple, ensemble-free and achieves state-of-the-art results comparable to ensemble-based methods. We have chosen the Soft Actor-Critic (Haarnoja et al., 2018) algorithm as the backbone of the method. In this section, we will explain how the RND is trained and how we define the anti-exploration bonus.
**Random Network Distillation**. We pretrain RND with an MSE loss between prior and predictor embeddings, stopping the gradient through the prior and freezing both networks afterwards during SAC training. We keep both networks similar in size to the agent and critic, which are 4-layer MLPs. Contrary to Burda et al. (2018); Ciosek et al. (2019), we do not add additional layers to the predictor, in order to prevent undesirable results. This is because, when the predictor is bigger than the prior on state-based tasks (not image-based as in the original work by Burda et al. (2018)), we observe that it can sometimes overgeneralize to OOD prior embeddings.
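As a rough illustration of this pretraining stage, a sketch is given below. It assumes a concatenation-based pair of 4-layer MLPs and a generic `dataset.sample` helper; all sizes, the optimizer settings and the way the loss statistics are tracked are our assumptions rather than the published configuration.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=4):
    """Simple MLP with `depth` linear layers."""
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

def pretrain_rnd(dataset, state_dim, action_dim, emb_dim=32, steps=100_000, batch_size=256):
    """Train the predictor to match a frozen random prior on (s, a) pairs from the dataset."""
    prior = mlp(state_dim + action_dim, emb_dim)      # random target, frozen at initialization
    predictor = mlp(state_dim + action_dim, emb_dim)  # same capacity as the prior
    for p in prior.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(predictor.parameters(), lr=3e-4)
    s1 = s2 = n = 0.0                                  # running stats of the per-sample loss
    for _ in range(steps):
        s, a = dataset.sample(batch_size)              # assumed offline-dataset API
        x = torch.cat([s, a], dim=-1)
        per_sample = (predictor(x) - prior(x)).pow(2).sum(-1)
        loss = per_sample.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        v = per_sample.detach()
        s1, s2, n = s1 + v.sum().item(), s2 + (v * v).sum().item(), n + v.numel()
    loss_std = (s2 / n - (s1 / n) ** 2) ** 0.5         # used later to rescale the bonus
    return prior, predictor, loss_std                  # both stay frozen during SAC training
```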
According to Section 4, for the prior, we use FiLM conditioning on the penultimate layer, before the nonlinearity. In principle, the predictor can be arbitrary (Ciosek et al., 2019), but in practice, its architecture and conditioning type can also affect performance. We conduct a preliminary study on a small subset of the D4RL Gym tasks to select the best-performing conditioning. Based on the results in Table 1, we chose a predictor with bilinear conditioning in the first layer, as it showed the best performance.
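To make the FiLM prior concrete, a sketch is shown below. Which input modulates which, and all layer sizes, are our assumptions: here the state produces per-feature scales and shifts that are applied to the action features on the penultimate layer, before the final nonlinearity.

```python
import torch
import torch.nn as nn

class FiLMPrior(nn.Module):
    """Random (frozen) prior f_bar(s, a) with FiLM conditioning on the penultimate layer."""
    def __init__(self, state_dim, action_dim, emb_dim=32, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(                   # processes the action
            nn.Linear(action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        self.film = nn.Linear(state_dim, 2 * hidden)  # state -> (scale, shift)
        self.head = nn.Linear(hidden, emb_dim)

    def forward(self, state, action):
        h = self.trunk(action)
        scale, shift = self.film(state).chunk(2, dim=-1)
        h = torch.relu(scale * h + shift)             # FiLM applied before the nonlinearity
        return self.head(h)

# Usage: freeze at initialization and use as the RND target network.
prior = FiLMPrior(state_dim=17, action_dim=6)
for p in prior.parameters():
    p.requires_grad_(False)
```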
**Anti-Exploration Bonus**. We define the anti-exploration bonus similarly to RND loss as
\[b(s,a)=\|f_{\psi}(s,a)-\bar{f}_{\tilde{\psi}}(s,a)\|_{2}^{2} \tag{3}\]
and additionally divide it by the running standard deviation of the RND loss (which is tracked during the pretraining phase) to increase its scale uniformly across environments. Such
\begin{table}
\begin{tabular}{l|r r r r} \hline \hline Task Name & Concat & Gating & Bilinear & FiLM \\ \hline hopper-medium-v2 & 94.8 & 39.7 & 98.4 & 86.3 \\ hopper-medium-expert-v2 & 71.5 & 59.3 & 110.3 & 102.7 \\ hopper-medium-replay-v2 & 100.3 & 51.3 & 100.8 & 100.3 \\ \hline walker2d-medium-v2 & 94.8 & 82.3 & 92.8 & 95.1 \\ walker2d-medium-expert-v2 & 86.1 & 84.2 & 108.9 & 110.0 \\ walker2d-medium-replay-v2 & 90.3 & 87.5 & 88.3 & 75.7 \\ \hline Average & 89.6 & 67.3 & **99.9** & 95.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of different RND predictors. Prior uses FiLM conditioning. Predictor uses conditioning in the first layer. All scores are averaged over 3 random seeds. Halfcheetah tasks are omitted, as we found them non-representative of the final performance on harder tasks.
scaling simplifies hyperparameter search, shrinking the possible range of useful \(\alpha\) coefficients that control the level of conservatism during training.
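Putting Equation (3) and the scaling together, a hedged sketch is given below. Here `predictor` and `prior` are assumed to be callables taking the state and action, `loss_std` is the running standard deviation tracked during pretraining, and the way the bonus enters the critic target is our reading of the anti-exploration recipe rather than the verbatim implementation.

```python
import torch

def anti_exploration_bonus(predictor, prior, loss_std, state, action):
    """b(s, a) = ||f_psi(s, a) - f_bar(s, a)||_2^2, divided by the RND-loss running std."""
    diff = predictor(state, action) - prior(state, action)
    return diff.pow(2).sum(-1) / loss_std

def conservative_td_target(reward, done, next_state, next_action, next_log_prob,
                           target_q1, target_q2, predictor, prior, loss_std,
                           gamma=0.99, alpha=1.0, ent_coef=0.2):
    """Assumed form of the penalized SAC target: clipped double-Q minus alpha * bonus."""
    with torch.no_grad():
        q_next = torch.min(target_q1(next_state, next_action),
                           target_q2(next_state, next_action))
        bonus = anti_exploration_bonus(predictor, prior, loss_std, next_state, next_action)
        soft_value = q_next - ent_coef * next_log_prob - alpha * bonus
        return reward + gamma * (1.0 - done) * soft_value
```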
For detailed training procedure and full SAC losses, we refer to Algorithm 1 in the Appendix (differences with the original SAC algorithm are highlighted in blue).
## 6 Experiments
In this section, we present an empirical evaluation of our method using the D4RL benchmark on the Gym domain (Section 6.1) and the more challenging AntMaze domain (Section 6.2). Next, we provide additional analysis and visual insight into why FiLM conditioning in the prior might be beneficial (Section 6.3). Finally, we present an ablation that compares more variations of conditioning for predictor and prior (Section 6.4). For each experiment, we also list the exact hyperparameters in Appendix D and implementation details in Appendix C. Additionally, we analyse sensitivity to hyperparameters in Appendix E.
### Evaluation on the Gym Domain
**Setup**. We evaluate our method on all available datasets for the HalfCheetah, Walker2d and Hopper tasks in the Gym domain of the D4RL benchmark. For ensemble-free baselines, we chose CQL (Kumar et al., 2020), IQL (Kostrikov et al., 2021) and TD3+BC (Fujimoto and Gu, 2021), which show good results and are widely used in practice. For ensemble-based baselines, we chose SAC-N & EDAC (An et al., 2021) and the more recent RORL (Yang et al., 2022), which currently achieve state-of-the-art scores in this domain. We follow An et al. (2021) and train for 3M gradient steps, evaluating on 10 episodes.
**Results**. The resulting scores are presented in Table 2. We see that SAC-RND stands out from the ensemble-free methods and outperforms them by a wide margin, achieving a mean score comparable to EDAC and only slightly behind RORL. Note that we do not use ensembles, whereas SAC-N can require up to 500 critics, EDAC up to 50 and RORL up to 20. In addition, we compare our proposed changes with the naive predictor and prior, confirming that our modifications are essential for achieving good performance (see Figure 1).
### Evaluation on the AntMaze Domain
**Setup**. We evaluate our method on all datasets available for the AntMaze domain of the D4RL benchmark. Ensemble-free baselines are the same as in Section 6.1. For ensemble-based baselines, we chose RORL (Yang et al., 2022) and MSG (Ghasemipour et al., 2022), the latter of which, to our knowledge, currently has the best mean score for this domain. We do not include SAC-N and EDAC, as there are no public results for them on this domain, and we were also unable to obtain a non-zero result. We follow An et al. (2021) and train for 3M gradient steps, evaluating on 100 episodes.
**Results**. The resulting scores are presented in Table 3. Kostrikov et al. (2021) has shown that many offline RL methods that perform well on the Gym domain fail on the AntMaze domain. It can be seen that, on the AntMaze domain, SAC-RND shows good results that are on par with ensembles, and outperforms ensemble-free methods. This also shows that our choice of predictor and prior generalises well to new domains. Note that, in addition to ensembles, both MSG and RORL require pre-training or supervision with behavioural cloning in order to achieve reported results,
Figure 3: Effect of different state-action conditioning in the prior of RND on actor training. We use the halfcheetah, walker2d and hopper medium datasets, with 3 seeds each. For training procedure, see Algorithm 2 in the Appendix. **(a)** Anti-exploration bonus for in-dataset actions during RND pretraining. We additionally divide the bonus by the RND loss running standard deviation to increase its scale (see Section 5) so the anti-exploration bonus increases slightly over time as standard deviation decreases. However, this does not affect minimization by the actor and is needed to highlight the differences. **(b)** Anti-exploration bonus for actor actions during training. Ideally, it should converge to values close to the final values in (a). **(c)** Distance of actor actions to true in-dataset actions during training. Ideally, it should decrease, as actions closer to the behavioral policy have the lowest bonus by design.
while our method does not require any additional modifications.
### Why is FiLM Conditioning Beneficial for Bonus Minimization?
In Section 4, we showed that FiLM conditioning in the RND prior significantly improved the actors' ability to minimize the anti-exploration bonus. Since the issue occurred during actor training, we hypothesize that this may be related to the anti-exploration bonus optimization landscape. In this section, we analyze the anti-gradient fields for conditioning with concatenation or FiLM for the prior network.
For the purpose of analysis, we design a toy dataset with only four categorical states and two-dimensional actions sampled uniformly in each corner of the grid (see Appendix B for dataset visualization and generation details).
We fix the hyperparameters and pretrain two RNDs that differ only in the type of prior conditioning. The predictor uses simple concatenation. Next, in Figure 4, we plot the two-dimensional anti-gradient field for the anti-exploration bonus conditioned on each state. The effect of FiLM becomes more apparent in these plots. While the resulting anti-gradients for concatenation are noisy and only point in the direction of the minimum in a small neighbourhood, the directions for FiLM are smooth over the entire available action space and point to the correct global minimum for each state. While we cannot draw general conclusions from such a demonstration, based on the results of Section 4, we hypothesize that a similar phenomenon might exist in high-dimensional problems as well.
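The field itself can be computed with automatic differentiation; a sketch is given below, where the `bonus` callable, the grid resolution and the action limits are assumptions.

```python
import torch

def anti_gradient_field(bonus, state, grid_size=21, lim=1.0):
    """Return grid coordinates and -grad_a b(s, a) for a single state over a 2-D action grid."""
    xs = torch.linspace(-lim, lim, grid_size)
    ax, ay = torch.meshgrid(xs, xs, indexing="ij")
    actions = torch.stack([ax.reshape(-1), ay.reshape(-1)], dim=-1).requires_grad_(True)
    states = state.expand(actions.shape[0], -1)     # broadcast the same state to every grid point
    total = bonus(states, actions).sum()            # one backward pass yields all action gradients
    (grad,) = torch.autograd.grad(total, actions)
    return ax, ay, (-grad).reshape(grid_size, grid_size, 2)

# Example usage (quiver plot of the field for one state):
#   gx, gy, field = anti_gradient_field(rnd_bonus, state0)
#   plt.quiver(gx, gy, field[..., 0], field[..., 1])
```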
### Exploring More Conditioning Pairs
One might wonder (1) how different types of conditioning for predictor and prior interact with each other and (2) where to introduce conditioning in terms of depth for it to be most beneficial.
To answer these questions, we return to the experiment from Section 4 and generate more variations for each type (where it is possible): conditioning on the first layer, on the last layer, and on all layers. We also look at two variations of the bilinear layer: full, as presented in Jayakumar et al. (2020), and simplified, which is used by default in PyTorch. In Figure 5 we plot the final MSE between the resulting policy and the behavioural one on the training data. Two interesting observations can be made from these findings.
\begin{table}
\begin{tabular}{l|r r r|r r r|r} & \multicolumn{3}{c|}{Ensemble-free} & \multicolumn{3}{c|}{Ensemble-based} & \\ \hline
**Task Name** & **TD3+BC** & **IQL** & **CQL** & **SAC-N** & **EDAC** & **RORL** & **SAC-RND** \\ \hline halfcheetah-random & 11.0 \(\pm\) 1.1 & 13.1 \(\pm\) 1.3 & 31.1 \(\pm\) 3.5 & 28.0 \(\pm\) 0.9 & 28.4 \(\pm\) 1.0 & 28.5 \(\pm\) 0.8 & 29.0 \(\pm\) 1.5 \\ halfcheetah-medium & 48.3 \(\pm\) 0.3 & 47.4 \(\pm\) 0.2 & 46.9 \(\pm\) 0.4 & 67.5 \(\pm\) 1.2 & 65.9 \(\pm\) 0.6 & 66.8 \(\pm\) 0.7 & 66.6 \(\pm\) 1.6 \\ halfcheetah-expert & 96.7 \(\pm\) 1.1 & 95.0 \(\pm\) 0.5 & 97.3 \(\pm\) 1.1 & 105.2 \(\pm\) 2.6 & 106.8 \(\pm\) 3.4 & 105.2 \(\pm\) 0.7 & 105.8 \(\pm\) 1.9 \\ halfcheetah-medium-expert & 90.7 \(\pm\) 4.3 & 86.7 \(\pm\) 5.3 & 95.0 \(\pm\) 1.4 & 107.1 \(\pm\) 2.0 & 106.3 \(\pm\) 1.9 & 107.8 \(\pm\) 1.1 & 107.6 \(\pm\) 2.8 \\ halfcheetah-medium-replay & 44.6 \(\pm\) 0.5 & 44.2 \(\pm\) 1.2 & 45.3 \(\pm\) 0.3 & 63.9 \(\pm\) 0.8 & 61.3 \(\pm\) 1.9 & 61.9 \(\pm\) 1.5 & 54.9 \(\pm\) 0.6 \\ halfcheetah-full-replay & - & - & 76.9 \(\pm\) 0.9 & 84.5 \(\pm\) 1.2 & 84.6 \(\pm\) 0.9 & - & 82.7 \(\pm\) 0.9 \\ \hline hopper-random & 8.5 \(\pm\) 0.6 & 7.9 \(\pm\) 0.2 & 5.3 \(\pm\) 0.6 & 31.3 \(\pm\) 0.0 & 25.3 \(\pm\) 10.4 & 31.4 \(\pm\) 0.1 & 31.3 \(\pm\) 0.1 \\ hopper-medium & 59.3 \(\pm\) 4.2 & 66.2 \(\pm\) 5.7 & 61.9 \(\pm\) 6.4 & 100.3 \(\pm\) 0.3 & 101.6 \(\pm\) 0.6 & 104.8 \(\pm\) 0.1 & 97.8 \(\pm\) 2.3 \\ hopper-expert & 107.8 \(\pm\) 7.0 & 109.4 \(\pm\) 0.5 & 106.5 \(\pm\) 9.1 & 110.3 \(\pm\) 0.3 & 110.1 \(\pm\) 0.1 & 112.8 \(\pm\) 0.2 & 109.7 \(\pm\) 0.3 \\ hopper-medium-expert & 98.0 \(\pm\) 9.4 & 91.5 \(\pm\) 14.3 & 96.9 \(\pm\) 15.1 & 110.1 \(\pm\) 0.3 & 110.7 \(\pm\) 0.1 & 112.7 \(\pm\) 0.2 & 109.8 \(\pm\) 0.6 \\ hopper-medium-replay & 60.9 \(\pm\) 18.8 & 94.7 \(\pm\) 8.6 & 86.3 \(\pm\) 7.3 & 101.8 \(\pm\) 0.5 & 101.0 \(\pm\) 0.5 & 102.8 \(\pm\) 0.5 & 100.5 \(\pm\) 1.0 \\ hopper-full-replay & - & - & 101.9 \(\pm\) 0.6 & 102.9 \(\pm\) 0.3 & 105.4 \(\pm\) 0.7 & - & 107.3 \(\pm\) 0.1 \\ \hline walker2d-random & 1.6 \(\pm\) 1.7 & 5.4 \(\pm\) 1.2 & 5.1 \(\pm\) 1.7 & 21.7 \(\pm\) 0.0 & 16.6 \(\pm\) 7.0 & 21.4 \(\pm\) 0.2 & 21.5 \(\pm\) 0.1 \\ walker2d-medium & 83.7 \(\pm\) 2.1 & 78.3 \(\pm\) 8.7 & 79.5 \(\pm\) 3.2 & 87.9 \(\pm\) 0.2 & 92.5 \(\pm\) 0.8 & 102.4 \(\pm\) 1.4 & 91.6 \(\pm\) 2.8 \\ walker2d-expert & 110.2 \(\pm\) 0.3 & 109.9 \(\pm\) 1.2 & 109.3 \(\pm\) 0.1 & 107.4 \(\pm\) 2.4 & 115.1 \(\pm\) 1.9 & 115.4 \(\pm\) 0.5 & 114.3 \(\pm\) 0.6 \\ walker2d-medium-expert & 110.1 \(\pm\) 0.5 & 109.6 \(\pm\) 1.0 & 109.1 \(\pm\) 0.2 & 116.7 \(\pm\) 0.4 & 114.7 \(\pm\) 0.9 & 121.2 \(\pm\) 1.5 & 105.0 \(\pm\) 7.9 \\ walker2d-medium-replay & 81.8 \(\pm\) 5.5 & 73.8 \(\pm\) 7.1 & 76.8 \(\pm\) 10.0 & 78.7 \(\pm\) 0.7 & 87.1 \(\pm\) 2.4 & 90.4 \(\pm\) 0.5 & 88.7 \(\pm\) 7.7 \\ walker2d-full-replay & - & - & 94.2 \(\pm\) 1.9 & 94.6 \(\pm\) 0.5 & 99.8 \(\pm\) 0.7 & - & 109.2 \(\pm\) 1.8 \\ \hline Average & 67.5 & 68.9 & 73.6 & 84.4 & **85.2** & **85.7** & **85.2** \\ \hline \end{tabular}
\end{table}
Table 2: SAC-RND evaluation on the Gym domain. We report the final normalized score averaged over 4 random seeds on v2 datasets. IQL, CQL, MSG scores are taken from Ghasemipour et al. (2022). TD3+BC, RORL scores are taken from Yang et al. (2022).
\begin{table}
\begin{tabular}{l|r r r|r r|r} & \multicolumn{3}{c|}{Ensemble-free} & \multicolumn{2}{c|}{Ensemble-based} & \\ \hline
**Task Name** & **TD3+BC** & **IQL** & **CQL** & **RORL** & **MSG** & **SAC-RND** \\ \hline antmaze-umaze & 78.6 & 87.5 & 74.0 & 97.7 \(\pm\) 1.9 & 87.8 \(\pm\) 1.2 & 97.2 \(\pm\) 1.2 \\ antmaze-medium-play & 106.6 & 71.2 & 61.2 & 76.3 \(\pm\) 2.5 & 89.6 \(\pm\) 2.2 & 68.5 \(\pm\) 3.7 \\ antmaze-medium-diverse & 38.7 & 70.0 & 55.7 & 69.3 \(\pm\) 1.3 & 85.6 \(\pm\) 2.6 & 85.5 \(\pm\) 9.2 \\ antmaze-large-play & 0.2 & 39.6 & 15.3 & 66.3 \(\pm\) 11.1 & 72.6 \(\pm\) 6.7 & 67.2 \(\pm\) \\ \hline \end{tabular}
\end{table}
Table 3: SAC-RND evaluation on the AntMaze domain.
First, FiLM may not be the only architecture with the right inductive biases for the prior, and both bilinear types with conditioning on all layers can also achieve similar results. However, compared to FiLM, inner bilinear layers are much more computationally expensive, as they involve at least one 3D weight tensor and two additional 2D weight tensors, and the hidden dimensions are usually much higher than the input dimensions.
Second, it appears that conditioning on the last layer is most beneficial for the predictor, while conditioning on all layers is beneficial for the prior. In spite of that, it is difficult to draw broad conclusions, as different types may work well for new problems and domains.
## 7 Related Work
**Model-free offline RL.** Most offline RL approaches focus on the distribution shift problem and overestimation bias of Q-values for OOD actions. Some researchers address this by imposing strict constraints for policy updates, penalizing the divergence from the behavioral policy with KL divergence, maximum mean discrepancy (MMD) distance (Kumar et al., 2019; Wu et al., 2019), simple mean squared error (MSE) (Fujimoto and Gu, 2021), or by re-weighting behavioral policy actions with the estimated advantages (Nair et al., 2020). Others directly regularize Q-values by lowering return estimates for OOD actions, preventing overestimation for unseen actions. For instance, Kumar et al. (2020), Ghasemipour et al. (2022) and Rezaeifar et al. (2022) explicitly introduce an optimization term that lowers Q-values for OOD actions, while An et al. (2021) penalizes implicitly by utilizing the lower-confidence bound (LCB) of Q-values. Alternatively, the evaluation of OOD actions can be avoided altogether by using the upper expectile value
Figure 4: Actions’ anti-gradient field for the anti-exploration bonus conditioned on four categorical states at each corner for the toy problem introduced in Section 6.3. We visualize the dataset in Figure 7 in the appendix. The top row corresponds to RND with concatenation conditioning in the prior, while the bottom row corresponds to FiLM conditioning. As can be seen, the resulting anti-gradients for concatenation are noisy, while the directions for FiLM are smooth over the entire available action space.
Figure 5: Mean squared error between actions of the actor trained with different conditioning for the predictor & prior and actions of the behavioral policy. We use the halfcheetah, walker2d and hopper medium datasets, with 3 seeds each. It can be seen that conditioning on each layer is beneficial for the priors, while for the predictors, it is better to condition on the last layer. Note that this experiment is in the setting of Section 4, that is, without a critic.
function (Kostrikov et al., 2021) or by policy optimization within a latent action space (Chen et al., 2022; Zhou et al., 2021; Akimov et al., 2022).
In our work, we follow the anti-exploration approach (Rezaeifar et al., 2022). In contrast to An et al. (2021); Ghasemipour et al. (2022); Yang et al. (2022), we completely eliminate ensembles for uncertainty estimation, thus reducing computational overhead without sacrificing performance. Moreover, unlike Rezaeifar et al. (2022), we have succeeded in using RND for novelty detection in offline RL with continuous action spaces.
**Estimation bias in Q-learning**. In both offline and online reinforcement learning, off-policy Q-learning methods suffer from an overestimation bias in the temporal difference learning target (Van Hasselt et al., 2016; Fujimoto et al., 2018). This phenomenon is orthogonal to overestimation due to unseen actions in offline RL, as it occurs even in the presence of strong conservatism constraints. It is mainly caused by target prediction errors for seen transitions and their propagation due to the maximum operation \(\max_{a^{\prime}\in A}Q(s^{\prime},a^{\prime})\). To address this problem, Fujimoto et al. (2018) introduced clipped double Q-learning (Van Hasselt et al., 2016) in TD3, which uses a minimum of two critics. This approach was later used by Haarnoja et al. (2018) in SAC to improve stability and accelerate convergence.
In our work, we use clipped double Q-learning (Fujimoto et al., 2018), since SAC-RND is based on SAC (Haarnoja et al., 2018), and found it beneficial for stability. However, to ensure that it does not introduce additional conservatism, which can be a confounding factor for the impact of RND, we always set the number of critics to two.
**Uncertainty estimation in offline RL**. Uncertainty estimation is a popular technique in reinforcement learning and is used for a variety of purposes such as exploration, planning, and robustness. In offline RL, its use is mostly limited to modeling epistemic uncertainty (Clements et al., 2019), including measuring the prediction confidence of dynamics models (Yu et al., 2020; Kidambi et al., 2020) or critics (An et al., 2021; Rezaeifar et al., 2022). This approach can be further used to induce uncertainty-aware penalization during training.
Alternatively, uncertainty can help overcome suboptimal conservatism by designing more flexible offline approaches, e.g., conditioning on different levels of confidence to dynamically adjust the level of conservatism during evaluation (Hong et al., 2022) or using Bayesian perspective to design an optimal adaptive offline RL policy (Ghosh et al., 2022).
In our work, we estimate epistemic uncertainty and use it as an anti-exploration bonus to induce conservatism. Unlike previous approaches, we do not use ensembles to estimate epistemic uncertainty.
**Efficient ensembles** Ensembles are a powerful and simple non-Bayesian baseline for uncertainty estimation that outperform Bayesian neural networks in practice (Lakshminarayanan et al., 2017). However, training deep ensembles can be both memory intensive and computationally demanding, making the design of efficient ensembles an attractive research direction for which numerous methods have been developed. For example, Gal & Ghahramani (2016) proposed to use dropout to approximate Bayesian inference in deep Gaussian processes, and Durasov et al. (2021) derived a method to interpolate between dropout and full ensembles with fixed masks and controllable overlap between them. Meanwhile, Wen et al. (2020) significantly reduced the cost by defining each weight matrix as the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member.
Recently, Ghasemipour et al. (2022) showed that, in offline RL, none of the most popular approaches for efficient ensembles are capable of delivering performance that is comparable to naive ensembles, and that more work is needed in this research direction. In our work, we chose an alternative path for uncertainty estimation with RND, which was shown to be a fast and competitive counterpart to ensembles (Ciosek et al., 2019).
## 8 Conclusion
In this work, we revisited the results from previous research (Rezaeifar et al., 2022), showing that with a naive choice of conditioning for the RND prior, it becomes infeasible for the actor to effectively minimize the anti-exploration bonus, while discriminative power is not an issue. To solve this, we proposed conditioning based on FiLM, which led us to a new ensemble-free method called SAC-RND. We empirically validated that it achieves results comparable to ensemble-based methods and outperforms its ensemble-free counterparts. As such, we believe that our work is a valuable contribution to anti-exploration and uncertainty estimation in offline RL.
|
2309.17252 | Forest Mixing: investigating the impact of multiple search trees and a
shared refinements pool on ontology learning | We aim at developing white-box machine learning algorithms. We focus here on
algorithms for learning axioms in description logic. We extend the Class
Expression Learning for Ontology Engineering (CELOE) algorithm contained in the
DL-Learner tool. The approach uses multiple search trees and a shared pool of
refinements in order to split the search space in smaller subspaces. We
introduce the conjunction operation of best class expressions from each tree,
keeping the results which give the most information. The aim is to foster
exploration from a diverse set of starting classes and to streamline the
process of finding class expressions in ontologies. The current implementation and settings indicated that the
Forest Mixing approach did not outperform the traditional CELOE. Despite these
results, the conceptual proposal brought forward by this approach may stimulate
future improvements in class expression finding in ontologies. | Marco Pop-Mihali, Adrian Groza | 2023-09-29T14:02:34Z | http://arxiv.org/abs/2309.17252v1 | Forest Mixing: investigating the impact of multiple search trees and a shared refinements pool on ontology learning
###### Abstract
We aim at developing white-box machine learning algorithms. We focus here on algorithms for learning axioms in description logic. We extend the Class Expression Learning for Ontology Engineering (CELOE) algorithm contained in the DL-Learner tool. The approach uses multiple search trees and a shared pool of refinements in order to split the search space into smaller subspaces. We introduce the conjunction operation on the best class expressions from each tree, keeping the results which give the most information. The aim is to foster exploration from a diverse set of starting classes and to streamline the process of finding class expressions in ontologies. The current implementation and settings indicated that the Forest Mixing approach did not outperform the traditional CELOE. Despite these results, the conceptual proposal brought forward by this approach may stimulate future improvements in class expression finding in ontologies.
Ontology Learning, DL-Learner, Inductive Logic Programming (ILP), Description Logic (DL), White-box Machine Learning
## I **Introduction**
Machine learning models are being deployed across diverse sectors, from predicting outcomes in business, guiding decision-making in finance, to advancing diagnostics and treatment planning in medicine. However, a significant challenge of these models is their "black box" nature. Complex models built with deep learning networks are not easily interpretable, offering little insight into how they derive their predictions or decisions. This lack of transparency can pose serious issues, especially when such models are applied to critical areas where interpretability and explainability are needed.
In contrast, "white box" models offer insights into the decision-making process, indicating the influence each feature has on the output. They present a more transparent approach for predicting outcomes, but these advances often come with a performance trade-off. Such models may not deliver performance on par with Large Language Models (LLMs), or they might require more time and resources to offer similar outputs.
As building blocks for the Semantic Web, ontologies can be used for data storage, relations among these data, reasoning, or as a background knowledge source for machine learning algorithms. A specific task is finding class expressions from ontologies and examples, an area that might be approached by inductive logic programming (ILP).
We present a novel approach to inductive logic programming, in which we modify the state-of-the-art algorithm CELOE [2]. Our Forest Mixing approach aims to improve
the process of finding class expressions from ontologies and traversing large search spaces, offering a potentially more efficient solution to this type of problems.
## II **Related Work**
The learning algorithm proposed here belongs to the larger field of Inductive Logic Programming (ILP). ILP represents a fusion between inductive learning and logic programming, aiming to derive hypotheses from observations and to create new knowledge from experience.
ALEPH (A Learning Engine for Proposing Hypotheses) is a tool that operates within the domain of Inductive Logic Programming (ILP) [3]. ALEPH formulates hypotheses based on a given set of positive and negative examples and a body of background knowledge. It utilizes a 'set covering' loop and applies a hill-climbing search strategy within the hypothesis space. This approach is governed by a refinement operator, facilitating the exploration of the hypothesis space. The versatility of ALEPH, demonstrated by its adaptable parameters and settings, enables it to handle a wide array of logic programming tasks, making it a significant tool in the field of ILP. Notably, ALEPH has found successful applications across various sectors, such as bioinformatics and natural language processing [3].
DL-Learner (Description Logic Learner) is a framework for supervised machine learning in the Semantic Web and other structured knowledge domains. Using refinement operators, the tool is designed for learning concepts within Description Logics (DLs), including other related formalisms such as OWL and RDF(S). Among its multiple learning algorithms, CELOE begins with a broad concept (e.g., "owl:Thing" in OWL) and incrementally refines it, aiming to discover the most specific concepts that satisfy predefined quality criteria based on given positive and negative examples. The algorithm leverages a heuristic search, which enables efficient handling of large knowledge bases by removing the need for exhaustive searches [1, 2]. We rely on the modular design of the DL-Learner tool, which allows easy extension of the CELOE algorithm and easy reuse of its components like the Refinement Operators.
Learning ontologies has also been explored with Relational Concept Analysis [4], semantic role labelling [6], or Large Language Models (LLMs) [5]. Role labeling has been used to fill the gap between natural language expressions and ontology concepts or properties [6]. The LLMs are fine-tuned to translate from natural language to OWL functional syntax. The generated translations can be manually validated by the human agent through a plugin for the Protégé ontology editor.
## III **Theoretical instrumentation**
We briefly introduce here some theoretical notions like: ontologies, description logics and refinement operators.
Ontologies are a key component in semantic web technologies and knowledge representation systems. They provide a structured framework of concepts and their relationships, facilitating more effective information retrieval, data integration, and reasoning. They contain classes (i.e. concepts, sets), relations among these classes (that can have some properties like reflexivity, transitivity, symmetry), and individuals (instances of concepts).
Description Logic (DL) is a formal language utilized for knowledge representation, often deployed in Semantic Web and ontologies for class expression and querying. DL exhibits a balance between expressivity and computational efficiency. The expressivity
of DL stems from operators used for creating complex classes, as outlined in Table I [8], [9]. In DL, ontologies are formalised using a collection of concepts (classes), roles (relationships), and individuals. By reasoning in DL, one can perform automatic consistency checking, or maintain the integrity of the knowledge base when introducing new facts [9].
**Definition 1**: _A refinement operator \(\rho\) is a mapping from a concept \(C\) to a set of concepts, such that: \(\rho:\mathcal{C}\rightarrow\mathcal{C}_{1},\mathcal{C}_{2},...,\mathcal{C}_{n}\), where each \(\mathcal{C}_{i}\) represents a hypothesis._
Refinement operators can be classified into two main types: downward refinement operators and upward refinement operators.
**Definition 2**: _A downward refinement operator, denoted as \(\rho^{\downarrow}(\mathcal{C})\), transforms a concept \(C\) into a set of more specific concepts \(\mathcal{C}_{1},\mathcal{C}_{2},...,\mathcal{C}_{n}\), where each \(\mathcal{C}_{i}\subseteq\mathcal{C}\) for all \(i=1,2,...,n\)._
**Example 1**: _Let the current class expression \(C=Bird\). Applying a downward refinement operator, a more specific class expression is obtained, as \(Bird\sqcap\exists hasFeature.Fly\), describing birds that fly. The new expression describe a smaller set of individuals. Similarly, when the \(\neg\) operator is applied, one can obtain the expression \(Bird\sqcap\neg Aquatic\), describing birds that are not aquatic._
**Definition 3**: _An upward refinement operator, denoted as \(\rho^{\uparrow}(\mathcal{C})\), transforms a concept \(\mathcal{C}\) into a set of more general concepts \(\mathcal{C}_{1},\mathcal{C}_{2},...,\mathcal{C}_{n}\), where each \(\mathcal{C}_{i}\supseteq\mathcal{C}\) for all \(i=1,2,...,n\)._
**Example 2**: _Let the initial expression \(C=Birds\sqcap Carnivore\). Applying an upward refinement operator on \(C\), a more general expression is obtained, that is \(Bird\), corresponding to a larger set of individuals._
Refinement operators are used to generate and test hypotheses during learning. By applying these operators to concept learning, they facilitate navigation through the large space of possible hypotheses [10].
## IV **Forest Mixing Approach**
We start by analysing aspects of the state-of-the-art CELOE approach that can be improved. Building on these observations, we formalise the novel Forest Mixing approach (FM) for ontology learning.
### _Potential Advantages of FMA_
In both Forest Mixing approach and Random Forest algorithms, the search space is divided among several smaller trees. However, this division does not amount to a strict partition in either of the methods. Random Forests train each tree on subsets of
overlapping data and features. In FM, each tree navigates a subset of the search space, but these subsets are not mutually exclusive. Trees might delve into similar or even the same parts of the search space. A crucial difference emerges in the way overlaps are addressed in these algorithms. For Random Forests, overlapping can be beneficial, while for FM, redundancies arising from multiple trees generating identical class expressions can increase computational costs. Despite these contrasts, the central concept of FM draws inspiration from the Random Forest's mechanism.
Though CELOE [2] stands as the state-of-the-art in ontology-based hypothesis search, there exist scenarios where its performance might be improved. Within the scope of CELOE, and hypothesis searching in general, the most computationally demanding operation is the refinement process. This operation can induce an exponential growth in the number of nodes (concepts or class expressions) to be examined. Therefore, an efficient algorithm in our context should ideally minimize the number of refinements required to find the best hypothesis. Note that both CELOE and FM approach provide the functionality to set initial concepts. Setting the initial concepts with the help of user's knowledge triggers a reduction in the search space. We hypothesize two cases where the Forest Mixing approach could offer more efficiency than CELOE.
First, FM can exhibit higher efficiency compared to CELOE, particularly when users have prior knowledge of the data and can suggest starting classes within relevant subspaces. For example, consider non-disjoint classes such as \(Employee\) and \(Student\), where individuals could be part of both classes. If the target concept is for instance
\[Student\sqcap Employee\sqcap\exists attendsCourse.EveningCourse \tag{1}\]
representing individuals enrolled in a university and working there who also attend evening courses, there can be useful hypotheses within both \(Employee\) and \(Student\) subspaces. In this case, FM's parallel exploration can expedite the process by simultaneously investigating both paths, potentially discovering a suitable hypothesis faster than sequentially exploring one subspace after the other as CELOE would do.
Second, FM potentially outperforms CELOE in cases involving disjunctions in the target concept. Disjunctions pose a challenge for most Inductive Logic Programming (ILP) techniques, including CELOE, due to the prevalent use of downward refinement operators. These operators primarily generate conjunctions, not disjunctions. For instance, consider the target concept
\[(Student\sqcup Employee)\sqcap\exists attendsAICourse \tag{2}\]
representing individuals who are either students or employees and attend an \(AICourse\). CELOE, in this case, might need to explore a vast search space exhaustively. FM can address this efficiently by assigning different starting classes or computing them, thus facilitating parallel exploration in different relevant subspaces. While FM approach may not directly find the exact class expression, it can swiftly uncover simpler, separate class expressions such as:
\[Student\sqcap\exists attends.AICourse \tag{3}\] \[Employee\sqcap\exists attends.AICourse \tag{4}\]
### _Designing the FM algorithm_
The FMA commences by selecting an initial class or classes as the starting point. This selection is a strategic move aimed at reducing the search space, consequently
increasing the efficiency of the algorithm. The criterion for choosing a class is its ability to contain all the positive examples, symbolized by \(\mathcal{C}\). Such a class can then be refined or specialized without any loss of positive examples, as it ensures a full positive coverage \(PosCov\):
\[PosCov(ce)=\frac{|ce_{pos}(E)|}{|E_{pos}|} \tag{5}\]
Here, \(PosCov(ce)\) is the positive coverage of the class expression \(ce\). \(ce_{pos}(E)\) represents the set of positive examples covered by \(ce\), and \(E_{pos}\) is the set of all positive examples. Therefore, a class with a \(PosCov\) value of 1.0 signifies that all positive examples are encapsulated within that class. This ensures that the search space is efficiently minimized from the outset, providing an optimized starting point for further refinement and specialization. The selection of best nodes is described by Algorithm 1.
```
1:\(classSet\gets initialClassSet\)
2:\(startClassSet\leftarrow\emptyset\)
3:for all\(class\in classSet\)do
4:\(posCov\gets PosCov(class)\)
5:\(subClassSet\leftarrow\emptyset\)
6:for all\(subClass\in children(class)\)do
7:if\(PosCov(subClass)=posCov\)then
8:\(subClassSet\gets subClassSet\cup\{subClass\}\)
9:endif
10:endfor
11:if\(subClassSet=\emptyset\)then
12:\(startClassSet\gets startClassSet\cup\{class\}\)
13:else
14:\(classSet\gets classSet\cup subClassSet\)
15:endif
16:endfor
```
**Algorithm 1** Finding the Starting Classes
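A small Python sketch of this selection step is given below; the `children` and `instances` helpers stand in for the ontology interface (returning direct subclasses and the set of individuals of a class) and are illustrative, not DL-Learner's actual API.

```python
def pos_cov(cls, positives, instances):
    """Fraction of positive examples covered by a class (Equation (5))."""
    return len(positives & instances(cls)) / len(positives)

def find_starting_classes(initial_classes, positives, children, instances):
    """Specialize each class as far as possible without losing positive coverage (Algorithm 1)."""
    queue = list(initial_classes)
    start_classes = []
    while queue:
        cls = queue.pop()
        cov = pos_cov(cls, positives, instances)
        same_cov_children = [c for c in children(cls)
                             if pos_cov(c, positives, instances) == cov]
        if same_cov_children:
            queue.extend(same_cov_children)   # keep specializing along these subclasses
        else:
            start_classes.append(cls)         # cannot specialize further without losing positives
    return start_classes
```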
Algorithm 1 starts by applying a strategy similar to that of CELOE, where a search tree is generated, the best nodes are identified, and they are refined. However, the FM approach introduces the following enhancement: instead of maintaining a single search tree, it manages multiple trees, and all the refined expressions derived are kept in a shared pool. Each tree is permitted to draw a maximum number of expressions from the shared refinements, consequently promoting an efficient and diversified exploration of the search space.
The process of adding refinements to the shared pool is governed by specific conditions. First, the algorithm maintains a record of the best nodes from previous trees and of the refinements already added by the current node. Second, the current expression is checked against this list. If the current expression and any of the previous best expressions do not share a class in common, a conjunction of them is computed. This conjunction is only added to the shared pool when all the classes are distinct, aiming to maximize the class expressions which bring new information and which do not contain multiple identical classes.
The complexity of a node (i.e. a class expression) is measured as the length of the expression, so a conjunction of two expressions might be overly complex. To avoid this case, the resulting expression is added to the shared pool only when its length does not exceed a threshold set by an FM parameter. The conjunction selection process is shown in Algorithm 2. The rest of the algorithm (refining nodes, selecting the best nodes) is very similar to CELOE [2].
```
1:for all CE_previous in bestNodesFromEachTree do
2:\(C_{\text{previous}}\leftarrow\text{classes}(CE_{\text{previous}})\)
3:\(C_{\text{current}}\leftarrow\text{classes}(CE_{\text{current}})\)
4:if\(C_{\text{previous}}\cap C_{\text{current}}=\emptyset\)then
5:\(CE_{\text{conjunction}}\leftarrow\text{conjunction}(CE_{\text{current}},CE_{ \text{previous}})\)
6: complexity \(\leftarrow|\text{classes}(CE_{\text{conjunction}})|\)
7:if\(complexity<maxLength\)then
8: Add \(CE_{\text{conjunction}}\) to refinementSharedPool
9:endif
10:endif
11:endfor
```
**Algorithm 2** Conjunction for Shared Pool
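A sketch of this shared-pool rule in Python is shown below, with assumed `classes` and `conjunction` helpers for manipulating class expressions (the names mirror Algorithm 2 and are not DL-Learner's API).

```python
def add_conjunctions_to_pool(current_ce, best_nodes_from_each_tree, shared_pool,
                             classes, conjunction, max_length):
    """Add conjunctions of the current expression with best nodes of other trees,
    but only when they mention disjoint sets of classes and stay short enough."""
    current_classes = classes(current_ce)
    for previous_ce in best_nodes_from_each_tree:
        if classes(previous_ce) & current_classes:
            continue                                 # a shared class brings no new information
        candidate = conjunction(current_ce, previous_ce)
        if len(classes(candidate)) < max_length:     # complexity threshold (an FM parameter)
            shared_pool.append(candidate)
```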
FM represents an extension of the CELOE approach (see Figure 1). To evaluate the proposed FM, we rely on the University Ontology Benchmark (UOBM) generator. The UOBM generator outputs scalable and realistic ontologies, tailored for benchmarking ontology-based systems [11].
Ontologies alone do not suffice for generating a complete test. In the case of a specific ontology generated with the UOBM generator, we require a class expression (also called ground truth or target) along with two sets of individuals: one belonging to the class and the other not. To handle this, we designed Algorithm 3, which finds a suitable class expression and individuals in the given ontology. The corresponding flow diagram appears in Figure 2. To implement the proposed algorithm we rely on GPT-4. Human implementation was used in a few selected places where the generated code did not capture the correct logic steps. This is why in Figure 1 GPT-4 is represented as an external system (i.e. colored red). The test generation modules are colored purple, which means they are the result of computer generation.
Figure 1: System Architecture
Figure 2: Generating testing ontologies
```
1:procedure OntologyReasoning
2:\(ontology\gets LoadOntology(filePath)\)
3:\(classes\gets ontology.GetClasses()\)
4:\(reasoner\gets InitReasoner(ontology)\)
5:\(int\leftarrow\emptyset\)
6:repeat
7:\(class\gets classes.ChooseRandom()\)
8:\(property\gets GetPropertyBasedOnClass(reasoner,class)\)
9:\(int\gets Intersection(class,property)\)
10:until\(int.size()\geq|posExamples|\)
11:\(posExamples\gets int.ChooseRandom(|posExamples|)\)
12:\(individuals\gets ontology.GetIndividuals().Remove(posExamples)\)
13:\(negExamples\gets individuals.ChooseRandom(|negExamples|)\)
14:\(ApplyNoise(posExamples,negExamples,noiseRatio)\)
15:\(classExpression\gets class\sqcap(\text{Some}(property,Thing))\)
16:\(accuracy\gets CalculateAccuracy()\)
17:endprocedure
18:procedure GetPropertyBasedOnClass(\(reasoner,class\))
19:\(individuals\gets GetIndividuals(class)\)
20:\(suitableProperty\leftarrow\emptyset\)
21:for all\(i\in individuals\)do
22:\(properties\gets GetProperties(i)\)
23:for all\(p\in properties\)do
24:if\(p.GetIndividuals().size>|posExamples|\)then
25:\(suitableProperty\gets p\)
26:return\(suitableProperty\)
27:endif
28:endfor
29:endfor
30:endprocedure
```
**Algorithm 3** Generating Example Test
### _Heuristics designed for FM approach_
The proposed \(HT_{1}\) heuristic differs from the standard CELOE heuristic by considering the parent node's refinement count, which is essentially the number of its child nodes. The premise behind this heuristic is the potential value of less-branching paths in the search tree, which might prove beneficial in later steps.
In the standard CELOE heuristic, branches with more child nodes (or refinements) are prioritized, as they are often seen as more promising. However, \(HT_{1}\) posits that less-branching paths (those with fewer child nodes) could also be of value. To encourage the exploration of these less-branching paths, \(HT_{1}\) integrates an additional term into the final score calculation - the inverse of the parent's refinement count, multiplied by a weight factor. This effectively gives a score boost to nodes that have fewer siblings.
\[HT_{1}=\begin{cases}start_{bonus}-(horiz-1)\cdot\beta-refin\cdot\gamma,&\text{if node = root}\\ (acc-acc_{parent})\cdot\delta+\frac{1}{refin_{parent}}\cdot\epsilon-(horiz-1)\cdot\beta-refin\cdot\gamma,&\text{otherwise}\end{cases}\]
Here, \(start_{bonus}\) is the score assigned to the root of the search tree, \(acc\) is the accuracy of the evaluated node, and \(acc_{parent}\) is the accuracy of its parent. The \(horiz\) parameter counts the number of horizontal expansions needed to reach the current node, i.e. the length of the class expression. The number of refinements, i.e. the number of children in the search tree, is represented by \(refin\), while \(refin_{parent}\) is the number of refinements of the parent, i.e. the number of nodes on the same level that came from the same parent. The values \(\beta\), \(\gamma\), \(\delta\) and \(\epsilon\) are weights chosen based on the problem domain.
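A direct transcription of \(HT_{1}\) into Python is sketched below; the node attributes mirror the quantities defined above, and the weight values are placeholders rather than tuned settings.

```python
def ht1_score(node, start_bonus=0.1, beta=0.02, gamma=0.01, delta=0.5, epsilon=0.1):
    """HT1 heuristic: rewards accuracy gains and low-branching parents,
    penalizes long expressions and heavily refined nodes."""
    penalty = (node.horizontal_expansion - 1) * beta + node.refinement_count * gamma
    if node.parent is None:                              # root of the search tree
        return start_bonus - penalty
    acc_gain = node.accuracy - node.parent.accuracy
    branching_bonus = epsilon / max(node.parent.refinement_count, 1)  # guard against division by zero
    return acc_gain * delta + branching_bonus - penalty
```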
The \(FH_{1}\) heuristic factors in both the depth of the current node within the search tree and its F1 score. The depth of a node in the search tree is the number of steps from the root to the node. Nodes deeper in the search tree often signify more "complex" solutions. Hence, we introduce a depth-based penalty to encourage simpler solutions, in line with Occam's razor (the simplest solution is often the best), as shown in equation (6):
\[FH_{1}=-horiz\cdot\alpha+\begin{cases}f1\cdot\beta,&\text{if }f1\geq 0.8\\ -f1\cdot\gamma,&\text{if }f1\leq 0.3\\ 0,&\text{otherwise}\end{cases} \tag{6}\]
Here, \(horiz\) is the depth of the node in the search tree, \(\alpha\) is the penalty factor for that depth, \(f1\) is the F1 score of the node, \(\beta\) is the bonus factor for a high F1 score (when \(f1\geq 0.8\)), and \(\gamma\) is the penalty factor for a very low F1 score (when \(f1\leq 0.3\)).
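The corresponding sketch for \(FH_{1}\), again with placeholder weights:

```python
def fh1_score(node, alpha=0.05, beta=1.0, gamma=1.0, high_f1=0.8, low_f1=0.3):
    """FH1 heuristic: depth penalty, a bonus for high F1 and a penalty for very low F1."""
    score = -node.horizontal_expansion * alpha
    if node.f1 >= high_f1:
        score += node.f1 * beta
    elif node.f1 <= low_f1:
        score -= node.f1 * gamma
    return score
```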
## V **Learning ontologies with the forest mixing approach**
We illustrate the functionality of FM on a small ontology. The trace of FMA displays console outputs in small clusters, after which a brief explanation is provided. The ontology and examples were created manually so as to contain non-disjoint classes; in this case, the classes \(Student\) and \(UniversityEmployee\) have common individuals. In the configuration file, the individuals chosen as positive examples are all students who are also university employees and work in a research program. The goal of these examples is to better explain how the selection of nodes works and how refinements are added to the search tree.
The algorithm tries to find a class expression that best differentiates between the positive and negative training examples.
```
Running algorithm instance "alg" (FM)
FMA starting
Nb of tree roots to find: 2
Thing cov 1.0, ResearchProgram cov 0.0, Student cov 1.0, University cov 0.0, UniversityEmployee cov 1.0
2 trees found with roots: [Student, UniversityEmployee]
```
Listing 1: Step 1: Identifying starting classes for search trees
Within the DL-Learner framework, the FM algorithm is initialized with a specific configuration. The user selects the number of trees to be used, which in this case is set to 2 (Listing 1). FM then proceeds to generate and explore multiple classes, beginning with the top concept (\(\top\)), and iteratively specializes them until they cannot be further specialized without compromising the coverage of positive examples. The first two classes obtained through this process are identified as the best starting classes and used as the roots of our search trees.
```
Student acc: 0.66
Best description so far: Student acc: 0.6 f-score: 0.66666666666666666 ref: 0 time: 9
UniversityEmployee acc: 0.6
```
Listing 2: Step 2: Picking the most promising class for refinement
\(Student\), being the first node, is selected as the best one so far. The accuracy, f-score, number of refinements required to reach the expression, and time in ms are also displayed (Listing 2).
```
Node Student score calculation:
Horizontal expansion: 1.0
Start node: 1.0
Acc gain: -1.0
Parent Refinements: 0.0
Refinements: 0.0
score: 0.7
CURRENT TREE WITH ROOT: Student
Current node: Student, accuracy: 0.6
Horizontal Expansion: 1
REF added from conj:
Refinements for node Student: []
```
Listing 3: Step 3: Refining the current class
The best node of the tree is selected and its score calculation is displayed. Since the horizontal expansion is 1, no new class expression of length 1 can be produced by refinement (Listing 3).
```
Node UniversityEmployee score calculation:
Horizontal expansion: 1.0
Start node: 1.0
Acc gain: -1.0
Parent Refinements: 0.0
Refinements: 0.0
score: 0.7
REF added from conj:
Refinements for node UniversityEmployee: []
```
Listing 4: Step 4: Selecting another node for expansion
After the first tree either has no refinements or has added the maximum number of nodes, the best node of the second tree is selected (Listing 4). Again, the horizontal expansion is 1 and no refinements are found.
```
Node Student score calculation:
Horizontal expansion: 3.0
...
score: 0.49999999999999994
CURRENT TREE WITH ROOT: Student
Current node: Student, accuracy: 0.6
Horizontal Expansion: 3
Refinements for node Student: [Student and Student, Student and UniversityEmployee]
```
Selected refinement: Student and Student acc: 0.6
Node Added
Selected refinement: Student and UniversityEmployee acc: 0.6
Best description so far: Student and UniversityEmployee acc: 0.8
Node Added
```
Listing 6: Step 6: Using conjunction as refinement
We select the node \(Student\) again; we are back in the first search tree, but this time the horizontal expansion is 3 and we find refinements (Listing 5). The best current expression for the target class is \(Student\sqcap UniversityEmployee\).
```
Node Student and UniversityEmployee score calculation:
Horizontal expansion: 3.0
...
score: 0.660000000000001
REF added from conj: (Student and UniversityEmployee)
Refinements for node Student and UniversityEmployee: []
Selected refinement: (Student and UniversityEmployee)
(Student and UniversityEmployee) acc: 0.8
Added node: (Student and UniversityEmployee)
```
Listing 6 shows that nodes are created from the conjunction (\(\sqcap\)) of the best nodes from different trees. This node was already added in the tree with root \(Student\), but here it is first added as the conjunction before being added as a normal refinement.
```
Node Student and UniversityEmployee and (not (ResearchProgram)) score calculation:
Horizontal expansion: 6.0
...
CURRENT TREE WITH ROOT: UniversityEmployee
...
Student and UniversityEmployee and (inProgram some Thing) acc: 1.0
Best description so far: Student and UniversityEmployee and (inProgram some Thing) acc: 1.0 f-score: 1.0 ref: 40 time: 71
Added node: Student and UniversityEmployee and (inProgram some Thing)
```
Listing 7: Step 7: Using quantified relations as refinement
In Listing 7 the best class expression is found
\[Student\sqcap UniversityEmployee\sqcap(\exists inProgram.\top). \tag{7}\]
We can see the accuracy, f-score, the number of refinements needed to get there and the time in milliseconds displayed. From this point onward the algorithm will search for better class expressions but will not find any.
## VI **Running Experiments**
Before presenting the results, we briefly introduce what constitutes a test, the setup and algorithms employed.
### _Experiments Setup_
A test in the context of ILP and hypothesis search can be defined as a triplet denoted by \((\mathcal{K},E,\mathcal{C})\), where \(\mathcal{K}\) represents the knowledge base, in our case an ontology in the OWL format, \(E\) represents the set of examples, and \(\mathcal{C}\) represents the target class expression. For testing the performance of FM, (1) we used datasets from the DL-Learner and additionally (2) we created our own synthetic datasets tailored to specific testing scenarios. For the knowledge base \(\mathcal{K}\), we used the UOBM generator.
For \(E\) and \(\mathcal{C}\) in the test triplet, we designed a Java algorithm that finds class expressions of the form \(classA\sqcap\exists hasRelationR.Thing\). This class expression is found in the previously generated ontology \(\mathcal{K}\). We chose this simple structure as the majority of relations we seek are simple. While this approach uses brute force, and may not be the most efficient, it serves our requirements due to the simplicity of the expressions.
The user can specify a minimum number of positive examples. After class expressions are found and positive and negative examples are determined, we add an additional layer of noise to our testing. We randomly remove 5% of the examples from both the positive and negative sets, followed by a swapping of examples between the two sets. The swapping guarantees that the accuracy is not 1.0, because such perfect coverage is rarely seen in real data. The deletion ensures that the examples do not cover all individuals of the class, which again would not be a realistic scenario.
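A sketch of this noise step in Python, assuming the example sets are plain Python sets; the 5% removal ratio follows the description above, while the number of swapped pairs is an assumption.

```python
import random

def apply_noise(pos_examples, neg_examples, drop_ratio=0.05, swap_count=1, seed=0):
    """Drop a fraction of each example set, then swap a few examples between the sets."""
    rng = random.Random(seed)
    pos, neg = list(pos_examples), list(neg_examples)
    rng.shuffle(pos)
    rng.shuffle(neg)
    pos = pos[int(len(pos) * drop_ratio):]        # remove ~5% of the positives
    neg = neg[int(len(neg) * drop_ratio):]        # remove ~5% of the negatives
    for _ in range(min(swap_count, len(pos), len(neg))):
        p, n = pos.pop(), neg.pop()
        pos.append(n)                             # swapping guarantees accuracy below 1.0
        neg.append(p)
    return set(pos), set(neg)
```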
### _Results_
To test the Forest Mixing approach, we used two datasets: (i) one real-world dataset known as the Carcinogenesis dataset from the DL-Learner, and (ii) a synthetic dataset tailored to our specific testing scenarios. The Carcinogenesis dataset revolves around compounds and cancer-related data and contains 142 classes, 4 object properties, 15 data properties and 22,372 individuals. The synthetic dataset which we created consists of 40 classes, 6 object properties, 15 data properties and 26,766 individuals. These datasets were chosen for their ability to represent general cases, rather than specific ones where FMA is theoretically optimized.
The testing employed different heuristics as part of the FM approach. The use of these various heuristics aids in exploring the impact on performance and results under a broad array of scenarios, thereby providing a well-rounded understanding of FM's capabilities. Table II presents the metric results and the corresponding class expressions learned from the carcinogenesis dataset, while Table III does so for the synthetic dataset. Here, FM1 represents the FM algorithm with a single search tree and FM2 represents the FM algorithm with two search trees.
Furthermore, we evaluated FM's performance in comparison to CELOE in scenarios involving non-disjoint classes. For this experimental context, we created a compact
ontology encapsulating such non-disjoint classes. This model represents a small segment of a university ecosystem, comprising students and university employees, each of whom can be associated either with a research program, a university, or both. This ontology consists of 4 classes, 2 object properties, 0 data properties and 11 individuals.
Our underlying assumption here was that FM, due to its inherent design advantages when dealing with non-disjoint classes, should outperform CELOE in terms of finding the correct class expression in a more efficient manner. The outcomes of these tests with corresponding class expressions are listed in Table IV. The first nine tests have learned the same class expression:
\[Student\sqcap UniversityEmployee\sqcap\exists inProgram.ResearchProgram \tag{8}\]
The tenth approach, i.e. FMA2, learned the distinct expression:
\[Student\sqcap\exists inProgram.ResearchProgram\sqcap\] \[UniversityEmployee\sqcap\exists inProgram.\top \tag{9}\]
One additional test was conducted. FMA requires a parameter limiting the number of nodes a search tree can add to itself at once. In the previous testing we noticed that the more trees we have, the more refinements we need to find our class expression. In order to isolate that effect, we used FMA with two search trees and \(Student\) as the starting class, and we varied the number of nodes a tree is allowed to add. In Figure 3 the x-axis denotes the parameter maxNodesAddedPerTree, which represents the maximum number of nodes a tree can incorporate into itself during a single cycle. The y-axis simultaneously tracks two different metrics, distinguished by color. The red line illustrates the changes in the number of refinement iterations required, while the blue line maps out the time consumed.
Our observations from the graph suggest that allowing more than one node to be added leads to a near-constant number of refinements. This suggests that, in a small ontology, the system behaves like a single tree once two or more nodes can be added per cycle. The time consumption, depicted by the blue line, increases at the extreme values of maxNodesAddedPerTree, yet this parameter affects the system's overall performance only negligibly, given the minor temporal differences.
## VII **Conclusion**
We examined the potential benefits of using multiple search trees and a shared pool of refinements as an enhancement to CELOE in the context of DL-Learner. Our initial hypothesis suggested that the Forest Mixing approach would outperform CELOE when handling non-disjoint classes and specific target class expressions. The results from our experiments indicate that, contrary to our research hypothesis, FM is less efficient than CELOE. Furthermore, in the context of FM alone, an increase in the number of trees surprisingly appears to negatively impact performance. These findings prompt further investigation to fully understand the factors influencing the performance of FM and how it could potentially be optimized for the task at hand.
It is also conceivable that the current FM algorithm may not be fully optimized with regard to the number of refinements it uses in order to generate the target class expressions. Given that the number of refinements can substantially increase the complexity of the search space, excessive refinement operations may result in performance degradation. Using different heuristics and refinement operators could potentially enhance the algorithm's performance, as the ones used in CELOE might not be the most suitable for this algorithm.
Additionally, a deeper investigation into the core logic of FM, particularly the management of the shared pool, could yield valuable insights. Experimenting with various
Fig. 3: Comparison of Time and Number of Refinements Relative to Maximum Number of Nodes Added to a Search Tree
types of pool management techniques and strategies for combining the most promising nodes from each tree could further refine the efficacy of the algorithm.
## Acknowledgement
A. Groza is supported by the project number PN-III-P2-2.1-PED-2021-2709, within PNCDI III.
|
2308.00188 | Quantum simulation of Pauli channels and dynamical maps: algorithm and
implementation | Pauli channels are fundamental in the context of quantum computing as they
model the simplest kind of noise in quantum devices. We propose a quantum
algorithm for simulating Pauli channels and extend it to encompass Pauli
dynamical maps (parametrized Pauli channels). A parametrized quantum circuit is
employed to accommodate for dynamical maps. We also establish the mathematical
conditions for an N-qubit transformation to be achievable using a parametrized
circuit where only one single-qubit operation depends on the parameter. The
implementation of the proposed circuit is demonstrated using IBM's quantum
computers for the case of one qubit, and the fidelity of this implementation is
reported. | Tomas Basile, Carlos Pineda | 2023-07-31T22:57:29Z | http://arxiv.org/abs/2308.00188v1 | Quantum simulation of Pauli channels and dynamical maps: algorithm and implementation
## Abstract
Pauli channels are fundamental in the context of quantum computing as they model the simplest kind of noise in quantum devices. We propose a quantum algorithm for simulating Pauli channels and extend it to encompass Pauli dynamical maps (parametrized Pauli channels). A parametrized quantum circuit is employed to accommodate for dynamical maps. We also establish the mathematical conditions for an \(N\)-qubit transformation to be achievable using a parametrized circuit where only one single-qubit operation depends on the parameter. The implementation of the proposed circuit is demonstrated using IBM's quantum computers for the case of one qubit, and the fidelity of this implementation is reported.
## 1 Introduction
Since their inception, quantum computers have been proposed as powerful tools for the simulation of quantum systems [1]. Open quantum systems are of fundamental [2, 3] and practical [4] interest, and there have been efforts towards simulating their evolution [5, 6, 7] and, specifically, quantum channels [8, 9, 10].
Such systems have been simulated because of their many applications, such as studying the emergence of multipartite entanglement [11, 12], studying dissipative processes [13] and modeling non-Markovian dynamics [14]. Among quantum systems, the simplest case is that of a qubit [15], and within them, the simplest class of channels that produce decoherence are Pauli channels [16, 17, 18]. Indeed, they serve as effective models for the noise affecting quantum devices [19].
To represent the algorithms implemented in quantum computers, either to simulate a physical system or for some other purpose, one often uses a quantum circuit [15]. In this work, we shall also work with parametrized quantum circuits, that is, quantum circuits in which some of the operations depend on variable parameters [20]. These circuits play an important part in applications such as quantum machine learning [21] and describing general quantum transformations dependent on parameters. Substantial research has been devoted to enhancing the efficiency of these circuits [22].
We start by providing the definition of quantum channels, the general framework used here, and multi-qubit Pauli channels in section 2. Our first objective is to present a quantum algorithm capable of simulating Pauli channels on quantum computers; we
do this in section 3, where we also demonstrate its implementation using IBM's quantum computers for the particular case of single-qubit Pauli channels. Expanding beyond discrete Pauli channels, we introduce the concept of Pauli dynamical maps, defined as a continuous parametrization of multi-qubit Pauli channels. Therefore, in section 4 we shift our focus to study parametrized quantum channels, aiming to adapt the algorithm developed for channels to dynamical maps. Furthermore, we contribute to the body of work related to parametrized quantum circuits by establishing a theorem, which sets the mathematical conditions for the transformations that can be done using a parametrized circuit with the condition that only a controlled single-qubit rotation in the circuit may depend on the parameter. Finally, in section 5, we conclude about the Pauli dynamical maps that fulfill the conditions of theorem 2.
## 2 Pauli channels and dynamical maps
In this section we introduce the concept of quantum channels, focusing on a specific type called Pauli channels. Furthermore, we define Pauli dynamical maps, which are curves of Pauli channels parametrized by a variable.
### Quantum channels
In quantum mechanics, a closed system's state is represented by a vector in a Hilbert space \(\mathcal{H}\). The state's evolution is unitary and given by Schrodinger's equation [23]. However, in real-world situations, quantum systems are usually open, which means that they interact with their environment [4]. For instance, the system's state may become entangled with the environment, leading to a loss of information about the system's state over time.
To describe open systems, instead of state vectors, we use matrices \(\rho\) that act on \(\mathcal{H}\). These matrices are called density matrices, and they include information about the system's interaction with its environment. For a density matrix \(\rho\) to be physically valid, it must satisfy two conditions: \(\mathrm{tr}(\rho)=1\) and it must be positive semi-definite, which is denoted as \(\rho\geq 0\)[15].
Knowing this, we can now define quantum channels. Quantum channels are operators \(\mathcal{E}\) that can describe the evolution of open quantum systems, such that \(\rho\rightarrow\mathcal{E}(\rho)\). Quantum channels are the most general linear operations that a quantum system can undergo independently of its past [24, 25]. These channels are constructed based on three fundamental properties: linearity, trace preservation, and complete positivity.
Linearity ensures that a quantum channel \(\mathcal{E}\) maps any ensemble of density matrices into the corresponding ensemble of their evolution. The trace preserving property is given by \(\mathrm{tr}(\mathcal{E}[\rho])=\mathrm{tr}(\rho)=1\) and guarantees that the quantum channel does not change the condition that \(\mathrm{tr}(\rho)=1\). Finally, the channel should also preserve the condition \(\rho\geq 0\), and a map that does this is called a positive map. However, positivity of \(\mathcal{E}\) is not enough, and we actually require the more restrictive condition of complete positivity. Complete positivity means that \(\mathcal{E}\otimes\mathbb{I}_{n}\) is positive for any positive integer \(n\) (where \(\mathbb{I}_{n}\) is the \(n\times n\) identity matrix). This ensures that even if the principal system is entangled with another system, applying \(\mathcal{E}\) to the principal system while doing nothing to the other one still results in a positive semidefinite state for the principal system [15].
Given a quantum channel \(\mathcal{E}\), the condition of trace preservation is straightforward to verify but complete positivity is not as simple. To test complete positivity of a quantum channel, Jamiolkowski and Choi [26, 27] developed a simple algorithm that exploits the isomorphism between a channel \(\mathcal{E}\) and the state \(\mathcal{D}=(\mathbb{I}\otimes\mathcal{E})[|\Omega\rangle\langle\Omega|]\), where \(|\Omega\rangle=\frac{1}{\sqrt{\mathrm{dim}(\mathcal{H})}}\sum_{i=1}^{\mathrm{dim}(\mathcal{H})}|i\rangle|i\rangle\) is a maximally entangled state between the original
system and an ancilla. Remarkably, the map \(\mathcal{E}\) is completely positive if and only if \(\mathcal{D}\) (also known as the Choi or dynamical matrix of \(\mathcal{E}\)) is positive semidefinite.
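As an aside, this Choi–Jamiolkowski test is straightforward to carry out numerically. The following NumPy sketch (ours, not part of the cited references; the two example maps are only illustrative) builds the Choi matrix of a single-qubit map column by column and checks whether it is positive semidefinite:

```python
import numpy as np

def choi_matrix(channel, dim=2):
    """Build D = (I ⊗ E)[|Ω><Ω|] from the channel's action on the basis matrices |i><j|."""
    D = np.zeros((dim * dim, dim * dim), dtype=complex)
    for i in range(dim):
        for j in range(dim):
            E_ij = np.zeros((dim, dim), dtype=complex)
            E_ij[i, j] = 1.0
            D += np.kron(E_ij, channel(E_ij)) / dim
    return D

def is_completely_positive(channel, dim=2, tol=1e-9):
    """E is completely positive iff its Choi matrix is positive semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(choi_matrix(channel, dim)) > -tol))

# Two illustrative single-qubit maps
depolarizing = lambda rho: 0.5 * rho + 0.5 * np.trace(rho) * np.eye(2) / 2
transpose = lambda rho: rho.T   # positive, but famously not completely positive

print(is_completely_positive(depolarizing))  # True
print(is_completely_positive(transpose))     # False
```

The transpose map is the textbook example of a positive but not completely positive map, so the check correctly rejects it.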
### Pauli channels
We have discussed the main features of quantum channels and now we turn our attention to a specific type of channels for \(N\)-qubit systems called Pauli channels. First we will define these channels for single-qubit systems, whose most general density matrix can be written as [15]:
\[\rho=\frac{1}{2}\sum_{\alpha=0}^{3}r_{\alpha}\sigma_{\alpha}, \tag{1}\]
with \(\sigma_{0}=\mathbb{I}\), and \(\sigma_{1,2,3}\) the usual Pauli matrices. The condition \(\text{tr}(\rho)=1\) requires that \(r_{0}=1\) while \(\rho\geq 0\) implies that the remaining \(r_{1,2,3}\) form a vector \(\vec{r}=(r_{1},r_{2},r_{3})\) inside a unit sphere known as the Bloch sphere [28]. That is, every possible density matrix for a one-qubit system is uniquely associated with a point in a unit sphere.
Given a one-qubit system described by \(\rho\), a Pauli channel is defined as an operation that with probability \(k_{\gamma}\) applies the Pauli matrix \(\sigma_{\gamma}\) to the system, for \(\gamma=0,1,2,3\)[16]. Mathematically, the Pauli channel is written in the following way:
\[\mathcal{E}(\rho)=\sum_{\gamma=0}^{3}k_{\gamma}\sigma_{\gamma}\rho\sigma_{ \gamma}, \tag{2}\]
where the probabilities \(k_{\gamma}\) of applying \(\sigma_{\gamma}\) are non-negative real numbers such that \(\sum_{\gamma}k_{\gamma}=1\) (these conditions also ensure that the channel is trace preserving and completely positive).
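For concreteness, a minimal NumPy sketch of Eq (2) for one qubit (our own illustration; the bit-flip probability below is arbitrary) is:

```python
import numpy as np

# sigma_0 .. sigma_3
PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli_channel(rho, k):
    """E(rho) = sum_gamma k_gamma sigma_gamma rho sigma_gamma, as in Eq (2)."""
    return sum(kg * P @ rho @ P for kg, P in zip(k, PAULIS))

# Example: a bit flip channel with p = 0.3 acting on |+><+|
p = 0.3
k = [1 - p, p, 0.0, 0.0]
rho_plus = np.full((2, 2), 0.5, dtype=complex)
out = pauli_channel(rho_plus, k)
print(np.trace(out).real)          # 1.0 -> trace is preserved
print(np.linalg.eigvalsh(out))     # non-negative -> still a valid state
```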
Pauli channels are some of the most fundamental noise models in quantum information science [29]. Some notable examples of Pauli channels are the following:
* **Bit Flip Channel:** This is a channel that with probability \(1-p\) leaves the qubit as it is and with probability \(p\) applies the \(\sigma_{1}\) matrix (which flips the basis states \(|0\rangle\) and \(|1\rangle\) of the qubit), and so it is given by: \[\mathcal{E}(\rho)=(1-p)\rho+p\sigma_{1}\rho\sigma_{1}.\] Analogous channels exist using \(\sigma_{3}\) (called the phase flip channel, which has a probability \(p\) of adding a relative phase \(\pi\) to the state) or using \(\sigma_{2}\) (called the bit-phase flip channel, which has a probability \(p\) of flipping the basis states and also adding a relative phase \(\pi\)).
* **Depolarizing channel:** This channel has a probability \(1-p\) of doing nothing to the qubit and a probability \(p\) of converting it into the maximally mixed state \(\frac{1}{2}\mathbb{I}\), and it can be written as: \[\mathcal{E}(\rho)=(1-p)\rho+p\frac{1}{2}\mathbb{I}=\left(1-\frac{3p}{4}\right)\rho+\frac{p}{4}\sigma_{1}\rho\sigma_{1}+\frac{p}{4}\sigma_{2}\rho\sigma_{2}+\frac{p}{4}\sigma_{3}\rho\sigma_{3}.\] (3) We can also see how an arbitrary Pauli channel acts on an arbitrary density matrix. To do it, we substitute Eq (1) in Eq (2): \[\mathcal{E}(\rho)=\frac{1}{2}\sum_{\gamma,\alpha=0}^{3}k_{\gamma}r_{\alpha}\sigma_{\gamma}\sigma_{\alpha}\sigma_{\gamma}.\] (4)
This can be simplified by using the following property of Pauli matrices:
\[\sigma_{\gamma}\sigma_{\alpha}\sigma_{\gamma}=A_{\alpha,\gamma}\sigma_{\alpha}, \quad\text{ with }A_{\alpha,\gamma}=\begin{pmatrix}1&1&1&1\\ 1&1&-1&-1\\ 1&-1&1&-1\\ 1&-1&-1&1\end{pmatrix}, \tag{5}\]
which leads to
\[\mathcal{E}(\rho)=\frac{1}{2}\sum_{\alpha}\left(\sum_{\gamma}A_{\alpha,\gamma }k_{\gamma}\right)r_{\alpha}\sigma_{\alpha}. \tag{6}\]
Eq (6) once again has the form of Eq (1) but with components \(\left(\sum_{\gamma}A_{\alpha,\gamma}k_{\gamma}\right)r_{\alpha}\). This gives us another way of understanding Pauli channels as operations that take each component \(r_{\alpha}\) of the density matrix and multiplies them by \(\sum_{\gamma}A_{\alpha,\gamma}k_{\gamma}\), that is:
\[r_{\alpha}\xrightarrow[\text{Channel}]{\text{Pauli}}\tau_{\alpha}r_{\alpha}, \ \ \tau_{\alpha}:=\sum_{\gamma}A_{\alpha,\gamma}k_{\gamma}. \tag{7}\]
Notice that \(\tau_{0}=1\), which is a consequence of \(\sum_{\gamma}k_{\gamma}=1\) and ensures that after the channel, the resulting density matrix still has trace one. Furthermore, inverting the definition of \(\tau_{\alpha}\) by using that \(A^{-1}=\frac{1}{4}A\), we get that \(k_{\gamma}=\frac{1}{4}\sum_{\alpha}A_{\alpha,\gamma}\tau_{\alpha}\). Then, using that \(k_{\gamma}\geq 0\) we get the following conditions on the multipliers \(\tau_{\alpha}\):
\[1+\tau_{i}-\tau_{j}-\tau_{k}\geq 0, \text{ for }i,j,k\text{ different numbers in }\{1,2,3\}, \tag{8}\] \[1+\tau_{1}+\tau_{2}+\tau_{3}\geq 0. \tag{9}\]
These conditions imply that \((\tau_{1},\tau_{2},\tau_{3})\) has to be inside a tetrahedron with vertices \((1,1,1),(1,-1,-1),(-1,1,-1)\) and \((-1,-1,1)\). Therefore, the \(\tau_{1,2,3}\) are numbers between \(-1\) and \(1\), which means that the components \(r_{\alpha}\) of the density matrix are always attenuated and possibly sign flipped.
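The relations in Eqs (7)-(9) can be checked numerically. The sketch below (ours, with an arbitrary choice of \(k_{\gamma}\)) converts probabilities into multipliers, inverts the relation with \(A^{-1}=A/4\), and tests the tetrahedron conditions:

```python
import numpy as np

# The matrix A of Eq (5)
A = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]], dtype=float)

def multipliers(k):
    """tau_alpha = sum_gamma A[alpha, gamma] k_gamma, Eq (7)."""
    return A @ np.asarray(k, dtype=float)

def probabilities(tau):
    """Inverse relation, using A^{-1} = A / 4."""
    return A @ np.asarray(tau, dtype=float) / 4.0

def inside_tetrahedron(t1, t2, t3):
    """Conditions (8)-(9) on (tau_1, tau_2, tau_3)."""
    return (1 + t1 - t2 - t3 >= 0 and 1 - t1 + t2 - t3 >= 0 and
            1 - t1 - t2 + t3 >= 0 and 1 + t1 + t2 + t3 >= 0)

k = [0.7, 0.1, 0.1, 0.1]                   # arbitrary Pauli channel
tau = multipliers(k)
print(tau)                                 # [1.0, 0.6, 0.6, 0.6]; tau_0 = 1
print(inside_tetrahedron(*tau[1:]))        # True
print(np.allclose(probabilities(tau), k))  # the relation inverts correctly
```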
Having defined the one qubit case, we can now generalize to \(N\) qubits. In order to do it, we need to introduce the so-called _Pauli strings_, defined as
\[\sigma_{\vec{\alpha}}=\sigma_{\alpha_{1}}\otimes\sigma_{\alpha_{2}}\otimes \cdots\otimes\sigma_{\alpha_{N}}, \tag{10}\]
where \(\vec{\alpha}\) denotes a multi-index \((\alpha_{1},\cdots,\alpha_{N})\) and \(\alpha_{i}\in\{0,1,2,3\}\). These operators form an orthogonal basis in the space of operators acting on \(N\) qubits. Similarly to the single-qubit case, the density matrix \(\rho\) of a system of \(N\) qubits can be written using Pauli strings as:
\[\rho=\frac{1}{2^{N}}\sum_{\vec{\alpha}}r_{\vec{\alpha}}\sigma_{\vec{\alpha}}. \tag{11}\]
Then, just as before, we define a Pauli channel as a transformation that applies the operator \(\sigma_{\vec{\gamma}}\) to \(\rho\) with probability \(k_{\vec{\gamma}}\) and is therefore described mathematically by:
\[\mathcal{E}(\rho)=\sum_{\vec{\gamma}}k_{\vec{\gamma}}\sigma_{\vec{\gamma}}\rho \sigma_{\vec{\gamma}}, \tag{12}\]
where just as before, \(k_{\vec{\gamma}}\) are non-negative real numbers such that \(\sum_{\vec{\gamma}}k_{\vec{\gamma}}=1\).
As in the one qubit case, Pauli channels for \(N\) qubits attenuate the components \(r_{\vec{\alpha}}\) of the density matrix. This can be seen by substituting Eq (11) in Eq (12) and using
the property of Eq (5):
\[\mathcal{E}(\rho) =\frac{1}{2^{N}}\sum_{\vec{\gamma},\vec{\alpha}}k_{\vec{\gamma}}r_{ \vec{\alpha}}\sigma_{\vec{\gamma}}\sigma_{\vec{\alpha}}\sigma_{\vec{\gamma}}\] \[=\frac{1}{2^{N}}\sum_{\vec{\alpha}}\left(\sum_{\vec{\gamma}} \left(A^{\otimes^{N}}\right)_{\vec{\alpha},\vec{\gamma}}k_{\vec{\gamma}} \right)r_{\vec{\alpha}}\sigma_{\vec{\alpha}},\]
which means that applying the Pauli channel multiplies the components \(r_{\vec{\alpha}}\) by \(\tau_{\vec{\alpha}}:=\sum_{\vec{\gamma}}\left(A^{\otimes^{N}}\right)_{\vec{ \alpha},\vec{\gamma}}k_{\vec{\gamma}}\).
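For small \(N\), Eqs (10)-(12) can be implemented directly with Kronecker products. The following sketch (ours; the two-qubit channel is an arbitrary example) builds Pauli strings and applies an \(N\)-qubit Pauli channel:

```python
import numpy as np
from functools import reduce

PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli_string(alpha):
    """sigma_alpha = sigma_{alpha_1} ⊗ ... ⊗ sigma_{alpha_N}, Eq (10)."""
    return reduce(np.kron, (PAULIS[a] for a in alpha))

def n_qubit_pauli_channel(rho, k):
    """Eq (12); k maps multi-indices (tuples) to probabilities."""
    out = np.zeros_like(rho)
    for gamma, kg in k.items():
        P = pauli_string(gamma)
        out += kg * P @ rho @ P
    return out

# Two-qubit example: apply sigma_1 ⊗ sigma_3 with probability 0.2
k = {(0, 0): 0.8, (1, 3): 0.2}
rho = np.eye(4, dtype=complex) / 4       # maximally mixed two-qubit state
print(np.allclose(n_qubit_pauli_channel(rho, k), rho))  # invariant -> True
```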
### Pauli dynamical maps
As seen in the last section, Pauli channels and in general quantum channels are discrete maps that transform a density matrix \(\rho\) into \(\mathcal{E}(\rho)\). However, we could also define a continuous set of channels \(\varepsilon_{p}\) with \(p\) a real parameter.
For the special case of Pauli channels, we define a Pauli dynamical map as a continuous parametrized curve drawn inside the set of Pauli channels and starting at the identity channel. Therefore, a Pauli dynamical map can be written as
\[\mathcal{E}_{p}(\rho)=\sum_{\vec{\gamma}}k_{\vec{\gamma}}(p)\sigma_{\vec{ \gamma}}\rho\sigma_{\vec{\gamma}}, \tag{13}\]
where \(p\) is a parameter in an interval \([a,b]\) and \(\mathcal{E}_{p}\) is a Pauli channel for every \(p\), with \(\mathcal{E}_{a}\) being the identity channel.
## 3 Circuit for a Pauli channel
In this section we propose a quantum circuit that simulates \(N\)-qubit Pauli channels. Moreover, we implement the circuit for \(N=1\) on a quantum computer and analyze the results using the diamond norm. We find that close to the depolarizing channel, the general circuit simulator can implement such channels with the highest fidelity.
### Description of the circuit for a Pauli channel
To design the circuit that implements Eq (12), we construct a state that includes the probabilities \(k_{\gamma}\) on the ancilla qubits and subsequently apply controlled Pauli operations on the main qubits. The circuit that does this is presented in Fig 1.
The first part of the circuit involves the creation of the state
\[\text{Ancilla state}=\sum_{\vec{\gamma}}b_{\vec{\gamma}}|\vec{\gamma}\rangle \tag{14}\]
on the ancilla qubits, where \(b_{\vec{\gamma}}\) are numbers such that \(|b_{\vec{\gamma}}|^{2}=k_{\vec{\gamma}}\) and the \(2N\)-qubit state \(|\vec{\gamma}\rangle\) is defined as \(|\gamma_{1}\rangle\cdots|\gamma_{N}\rangle\). When measured in the computational basis, the state given in Eq (14) collapses to \(|\vec{\gamma}\rangle\) with a probability \(|b_{\vec{\gamma}}|^{2}=k_{\vec{\gamma}}\). The circuit in Fig 1 uses this fact to apply \(\sigma_{\vec{\gamma}}\) on the main qubits with a probability \(k_{\vec{\gamma}}\) by using controlled operations conditioned on the state of the system being \(|\vec{\gamma}\rangle\), just as the Pauli channel is supposed to do.
### Simulation for one-qubit Pauli channels
For the particular case of a Pauli channel on one qubit, the circuit that simulates it can be constructed as in Fig 2, which is a special case of Fig 1 but with all details explicitly shown. In said figure, the ancilla state of Eq (14) can be taken to be
\(\sqrt{k_{0}}|00\rangle+\sqrt{k_{1}}|01\rangle+\sqrt{k_{2}}|10\rangle+\sqrt{k _{3}}|11\rangle\) and it is created on the ancilla qubits with the help of three rotations of angles defined by the following equations:
\[\cos\left(\frac{\theta_{0}}{2}\right) =\sqrt{k_{0}+k_{1}},\] \[\tan\left(\frac{\theta_{1}+\theta_{2}}{2}\right) =\sqrt{k_{1}/k_{0}}, \tag{15}\] \[\tan\left(\frac{\theta_{2}-\theta_{1}}{2}\right) =\sqrt{k_{3}/k_{2}}.\]
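For a given set of probabilities, the angles in Eq (15) can be obtained numerically. The sketch below (ours) solves Eq (15) for \(\theta_{0},\theta_{1},\theta_{2}\), assuming \(k_{0},k_{2}>0\) so the arctangents are well defined:

```python
import numpy as np

def ancilla_angles(k):
    """Solve Eq (15) for (theta_0, theta_1, theta_2) given (k_0, k_1, k_2, k_3).
    Assumes k_0 > 0 and k_2 > 0."""
    k0, k1, k2, k3 = k
    theta0 = 2 * np.arccos(np.sqrt(k0 + k1))
    plus = 2 * np.arctan(np.sqrt(k1 / k0))    # theta_1 + theta_2
    minus = 2 * np.arctan(np.sqrt(k3 / k2))   # theta_2 - theta_1
    return theta0, (plus - minus) / 2, (plus + minus) / 2

print(ancilla_angles([0.7, 0.1, 0.1, 0.1]))
```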
We took a sample of one-qubit Pauli channels and evaluated their implementation on IBM's ibmq-lima quantum computer [30], as shown in Fig 3. For each of the channels sampled, we used quantum process tomography [30, 31] to obtain the operator \(\xi_{I}\) corresponding to the implementation of the circuit in the quantum computer. Then, we compared \(\xi_{I}\) with the theoretical operator \(\xi_{T}\) of the Pauli channel we wanted to implement. To see how close the operators \(\xi_{I}\) and \(\xi_{T}\) are, we shall use the diamond distance [32], which is defined by
\[||\xi_{I}-\xi_{T}||_{\diamond}=\max_{\rho}||(\xi_{I}\otimes I)\rho-(\xi_{T} \otimes I)\rho||_{1}, \tag{16}\]
with \(I\) the identity map, \(||\cdot||_{1}\) the trace norm and the maximization done over all density matrices \(\rho\). The calculation of this norm is done using the semi-definite program from reference [33]. When the two channels are the same, the diamond distance has a value of \(0\), while in the case that the channels are completely distinguishable, the distance reaches its maximum value of \(2\)[34]. For the analysis done in Fig 3, we define a sort of "diamond fidelity" as:
\[f=1-\frac{1}{2}||\xi_{I}-\xi_{T}||_{\diamond}, \tag{17}\]

which ranges from 0, when the channels have a maximum distance, to 1, when they are exactly equal.

Fig 2: **One-qubit Pauli channel circuit.** Circuit for a one-qubit Pauli channel, which is a particular case of Fig 1. Here we have two ancilla qubits and we use three rotations of angles given by Eq (15) to create the ancilla state of Eq (14) on the two ancilla qubits.
Finally, using the representation of Pauli channels in a tetrahedron as in Eq (8), we show in Fig 3 the diamond fidelity defined by Eq (17) for the channels analyzed. We can see that channels close to the completely depolarizing channel (that is close to the center of the tetrahedron) have a high \(f\), while those close to unitary channels have much lower \(f\). This is reasonable because quantum computers are prone to errors that depolarize qubits, which isn't very problematic when trying to simulate depolarization but it is when simulating unitary processes. Moreover, the algorithm of Fig 2 is not optimal for unitary channels (that is, the channels corresponding to the vertices of the tetrahedron). These straightforward channels could be accomplished more efficiently by simply applying the corresponding Pauli operation directly. Nevertheless, due to its general design to accommodate any Pauli channel, the algorithm employs numerous quantum gates even in such scenarios.
Fig 3: **Results of the diamond fidelities as defined in Eq (17) for a sample of Pauli channels in the tetrahedron.** Notice that channels close to the center of the tetrahedron have high fidelities, while those close to the borders do not. Moreover, we show the results for cuts of the tetrahedron at different values of \(\tau_{3}\).
## 4 One parameter circuits
Just as Pauli channels, Pauli dynamical maps can be implemented using the circuit of Fig 1. However, there is one difference: the state to be created on the ancilla qubits now depends on a parameter \(p\), and it is represented by the expression:
\[\sum_{\overline{\gamma}}b_{\overline{\gamma}}(p)|\overline{\gamma}\rangle. \tag{18}\]
Thus, we temporarily shift our focus from Pauli channels and dynamical maps to the general problem of creating a circuit to generate a curve of states like the one described in Eq (18).
In general, producing this curve of states for \(N\) qubits will require many rotations parametrized by \(p\), such as the three rotations used for the ancilla qubits in Fig 2. However, it would be preferable to achieve the same effect using only one parametrized rotation. This would allow us to interpret said rotation as a knob that smoothly traverses the curve of states. Consequently, we are faced with the question of which curves of states, such as the one described in Eq (18), can be produced using just a single parametrized rotation. To clarify this, we provide the following definition for a circuit with one parametrized rotation.
**Definition 1**: _1-Parameter Rotation Circuit:_ A 1-Parameter Rotation (1PR) circuit is a parametrized quantum circuit that includes only one gate dependent on a parameter \(p\). Moreover, the parametrized gate is a one-qubit rotation about any axis, whether controlled or not.
Based on this definition, we aim to determine which curves of states can be generated using 1PR circuits. To accomplish this, we begin by proving that all 1PR circuits have the form depicted in Fig 4, where the parametrized rotation is around \(\sigma_{3}\) and is applied to the last qubit.
**Theorem 1**: _An \(N-\)qubit 1PR circuit can always be transformed into the form shown in Fig 4._
**Proof:** First, we observe that according to the definition, a 1PR circuit always consists of an operation \(B\) followed by the parametrized rotation and then another operation \(A\), where \(A\) and \(B\) are not parametrized.
Fig 4: **General form of a 1PR circuit**. Any 1PR circuit can be transformed into this form, where the rotation on the last qubit can be controlled or not by any of the other qubits. \(A\) and \(B\) are \(N\)-qubit gates that do not depend on the parameter \(p\) and \(s=s(p)\) is a function of the parameter.

Next, we note that it is not necessary to consider rotations about an arbitrary axis, as a rotation about any axis \(\hat{n}\) parameterized by \(p\) can be transformed into a rotation about \(\sigma_{3}\) without introducing gates that depend on \(p\). To see this, consider the rotation \(R_{\hat{n}}(2s)\), where \(2s\) is a function of \(p\) (the factor of \(2\) is for convenience later on) and \(\hat{n}=(n_{1},n_{2},n_{3})\) represents the rotation axis. We can express \(\hat{n}\) as
\((\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\), where \(\theta\) and \(\phi\) are fixed angles dependent on \(\hat{n}\). The rotation can then be rewritten as follows:
\[R_{\hat{n}}(2s)=R_{\sigma_{3}}(\phi)R_{\sigma_{2}}(\theta)R_{\sigma_{3}}(2s)R_ {\sigma_{2}}(-\theta)R_{\sigma_{3}}(-\phi). \tag{19}\]
Since the angles \(\theta\) and \(\phi\) do not depend on the parameter \(p\), any 1PR circuit can be transformed into a circuit where the parametrized rotation is around \(\sigma_{3}\) instead of an arbitrary axis. Moreover, without loss of generality, we can choose the last qubit as the target qubit for the rotation, since if it weren't, we could use swap gates to move the rotation to the last qubit without adding gates that depend on \(p\).
Therefore, a 1PR circuit can be transformed such that the rotation is around \(\sigma_{3}\) and is applied to the last qubit (possibly controlled by other qubits), resulting in the form depicted in Fig 4. \(\blacksquare\)
With the aid of this theorem, we can now determine the curves of states of \(N\) qubits that can be generated using a 1PR circuit. This result is stated in the following theorem.
**Theorem 2**: _Consider a 1PR circuit of \(N\) qubits parametrized by \(p\) and denote by \(U\) the operator it implements on this system. Then, for every \(j\in\{0,1,\cdots,2^{N}-1\}\), we have that:_
\[U|j\rangle=e^{is(p)}|a^{j}\rangle+e^{-is(p)}|b^{j}\rangle+|c^{j}\rangle\]
_with \(s(p)\) some function of \(p\), \(|a^{j}\rangle,|b^{j}\rangle,|c^{j}\rangle\) orthogonal states and \(\langle a^{j}|a^{j}\rangle+\langle b^{j}|b^{j}\rangle+\langle c^{j}|c^{j} \rangle=1\)._
**Proof:** We can conclude from theorem 1 that \(U=ARB\), where \(A\) and \(B\) are unitary matrices and \(R\) is a \(\sigma_{3}\) rotation of angle \(2s\) applied to the last qubit and controlled by some of the other ones.
First, applying \(B\) to \(|j\rangle\) results in \(B|j\rangle=B_{0,j}|0\rangle+B_{1,j}|1\rangle+\cdots+B_{2^{n}-1,j}|2^{n}-1\rangle\), with \(B_{i,j}\) the entries of matrix \(B\). This can be rewritten by separating last qubit from the other \(N-1\):
\[B|j\rangle=\sum_{k=0}^{2^{N-1}-1}\left(B_{2k,j}|k\rangle|0\rangle+B_{2k+1,j}| k\rangle|1\rangle\right). \tag{20}\]
After the operator \(B\), the circuit applies the controlled rotation \(R\). To simplify the analysis, we separate the states of the first \(N-1\) qubits into those that fulfill the control conditions of the rotation (which we denote as the set \(\mathcal{C}\)) and those that do not, and write it as
\[B|j\rangle=\sum_{k\in\mathcal{C}}\left(B_{2k,j}|k\rangle|0\rangle+B_{2k+1,j}| k\rangle|1\rangle\right)+\sum_{k\not\in\mathcal{C}}\left(B_{2k,j}|k\rangle|0 \rangle+B_{2k+1,j}|k\rangle|1\rangle\right). \tag{21}\]
Then, the rotation \(R\) will only affect the states on the first sum (since they fulfill the control conditions) and not the others. Therefore, remembering that a \(\sigma_{3}\) rotation acts by adding a phase \(e^{-is(p)}\) to \(|0\rangle\) and a phase \(e^{is(p)}\) to \(|1\rangle\), we have that,
\[RB|j\rangle =e^{-is(p)}\sum_{k\in\mathcal{C}}B_{2k,j}|k\rangle|0\rangle+e^{is (p)}\sum_{k\in\mathcal{C}}B_{2k+1,j}|k\rangle|1\rangle+\sum_{k\not\in \mathcal{C}}\left(B_{2k,j}|k\rangle|0\rangle+B_{2k+1,j}|k\rangle|1\rangle\right)\] \[=e^{-is(p)}|\tilde{b}^{j}\rangle+e^{is(p)}|\tilde{a}^{j}\rangle+| \tilde{c}^{j}\rangle, \tag{22}\]
where we defined
\[|\tilde{a}^{j}\rangle =\sum_{k\in\mathcal{C}}B_{2k+1,j}|k\rangle|1\rangle,\] \[|\tilde{b}^{j}\rangle =\sum_{k\in\mathcal{C}}B_{2k,j}|k\rangle|0\rangle,\] \[|\tilde{c}^{j}\rangle =\sum_{k\notin\mathcal{C}}\left(B_{2k,j}|k\rangle|0\rangle+B_{2k+1,j}|k\rangle|1\rangle\right).\]
These states are clearly orthogonal because they are each linear combinations of different orthogonal states of the computational basis. Moreover, they satisfy \(\langle a^{j}|a^{j}\rangle+\langle b^{j}|b^{j}\rangle+\langle c^{j}|c^{j} \rangle=1\) because this quantity is the squared norm of the \(j\)th column of \(B\), which is unitary.
Finally, after having applied the rotation, the circuit applies gate \(A\), so that the result is given by:
\[U|j\rangle=ARB|j\rangle=e^{is(p)}A|\tilde{a}^{j}\rangle+e^{-is(p)}A|\tilde{b}^{j}\rangle+A|\tilde{c}^{j}\rangle=e^{is(p)}|a^{j}\rangle+e^{-is(p)}|b^{j}\rangle+|c^{j}\rangle, \tag{23}\]
where \(|a^{j}\rangle=A|\tilde{a}^{j}\rangle,|b^{j}\rangle=A|\tilde{b}^{j}\rangle,|c^ {j}\rangle=A|\tilde{c}^{j}\rangle\) are still orthogonal states that satisfy \(\langle a|a\rangle+\langle b|b\rangle+\langle c|c\rangle=1\) because \(A\) is unitary. \(\blacksquare\)
This theorem implies that when starting from the state \(|0\rangle\) or any other initial state, the only possible curves of states that can be created using a 1PR circuit are of the following form:
\[|\eta(p)\rangle=|c\rangle+e^{is(p)}|a\rangle+e^{-is(p)}|b\rangle, \tag{24}\]
with conditions defined by the equations:
\[\langle a|a\rangle+\langle b|b\rangle+\langle c|c\rangle=1,\quad\langle a|b \rangle=\langle a|c\rangle=\langle b|c\rangle=0. \tag{25}\]
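As a quick numerical illustration (ours, not from the original text), any choice of \(|a\rangle,|b\rangle,|c\rangle\) satisfying Eq (25) yields a curve of states, Eq (24), that remains normalised for every value of the parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random |a>, |b>, |c> satisfying Eq (25): mutually orthogonal directions
# (columns of a QR factor) with squared norms that sum to one.
Q, _ = np.linalg.qr(rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3)))
weights = rng.dirichlet([1.0, 1.0, 1.0])
a, b, c = (np.sqrt(w) * Q[:, i] for i, w in enumerate(weights))

for s in np.linspace(0.0, 2 * np.pi, 9):
    eta = c + np.exp(1j * s) * a + np.exp(-1j * s) * b   # Eq (24)
    assert np.isclose(np.vdot(eta, eta).real, 1.0)
print("the curve of states stays normalised for every s")
```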
Moreover, it is possible to construct a 1PR circuit to generate any given curve of states described by Eq (24). One approach to achieve this is by utilizing the circuit depicted in Fig 4, with the parametrized rotation \(R\) applied to the last qubit controlled by all the other qubits. The operators \(A\) and \(B\) can be defined as follows:
\[B|0\rangle=\sqrt{\langle a|a\rangle}|2^{N}-1\rangle+\sqrt{\langle b|b\rangle}|2^{N}-2\rangle+\sqrt{\langle c|c\rangle}|2^{N}-3\rangle,\] \[A|2^{N}-1\rangle=\frac{1}{\sqrt{\langle a|a\rangle}}|a\rangle,\;A|2^{N}-2\rangle=\frac{1}{\sqrt{\langle b|b\rangle}}|b\rangle,\;A|2^{N}-3\rangle=\frac{1}{\sqrt{\langle c|c\rangle}}|c\rangle.\]
The remaining part of the operators \(A\) and \(B\) can be defined in any arbitrary manner as long as they are unitary. By starting from the initial state \(|0\rangle\) and applying the circuit shown in Fig 4, straightforward calculations lead us to obtain the resulting curve of states described in Eq (24).
To see it, we can rewrite the expression of \(B|0\rangle\) by separating the first \(N-1\) qubits from the last one:
\[B|0\rangle=\sqrt{\langle a|a\rangle}|2^{N-1}-1\rangle|1\rangle+\sqrt{\langle b|b\rangle}|2^{N-1}-1\rangle|0\rangle+\sqrt{\langle c|c\rangle}|2^{N-1}-2\rangle|1\rangle.\]
Since the parametrized rotation \(R\) is controlled by all the first \(N-1\) qubits, it only applies to the first two terms of \(B|0\rangle\). As a result, we obtain:
\[RB|0\rangle=\sqrt{\langle a|a\rangle}e^{is(p)}|2^{N-1}-1\rangle|1\rangle+\sqrt{\langle b|b\rangle}e^{-is(p)}|2^{N-1}-1\rangle|0\rangle+\sqrt{\langle c|c\rangle}|2^{N-1}-2\rangle|1\rangle.\]
Finally, applying the defined operator \(A\) to this state yields the desired result.
## 5 1PR circuit for a Pauli map
We can now use the previous results to conclude directly which Pauli dynamical maps can be implemented with a 1PR circuit. For this, the curve of states of Eq (18) has to be constructed with only one parametrized rotation, so it has to satisfy the conditions of theorem 2. Therefore, this implies that the map
\[\varepsilon_{p}(\rho)=\sum_{\vec{\gamma}}k_{\vec{\gamma}}(p)\sigma_{\vec{\gamma }}\rho\sigma_{\vec{\gamma}}, \tag{26}\]
can be implemented if there are numbers \(\beta_{\vec{\gamma}}(p)\) such that \(|\beta_{\vec{\gamma}}(p)|^{2}=k_{\vec{\gamma}}(p)\) and
\[\sum_{\vec{\gamma}}\beta_{\vec{\gamma}}(p)|\vec{\gamma})=|c\rangle+e^{is(p)}| a\rangle+e^{-is(p)}|b\rangle, \tag{27}\]
where \(|a\rangle,|b\rangle,|c\rangle\) fulfill the conditions of Eq (25).
For the particular case of one qubit, we can show some examples of Pauli dynamical maps implementable with a 1PR circuit, which are plotted in Fig 5. The examples we show include some of the most common maps: the bit flip, phase flip, bit-phase flip and depolarizing. However, we also include the parabolic dynamical map, defined in Eq (28) and shown in Fig 5. This map traces a parabola inside the tetrahedron connecting two of its vertices and it describes a frontier in the tetrahedron between Pauli channels that are reachable by Lindbladian dynamics and those that are not [18].
* **Depolarizing:** This dynamical map is given by \[\varepsilon_{p}(\rho)=(1-3p/4)\rho+(p/4)\sigma_{1}\rho\sigma_{1}+(p/4)\sigma_ {2}\rho\sigma_{2}+(p/4)\sigma_{3}\rho\sigma_{3},\] with \(p\in[0,1]\). Therefore, the curve of states \(|\beta(p)\rangle\) needed on the ancilla qubits is such that \(|\beta_{0}(p)|^{2}=(1-3p/4)\), \(|\beta_{1}(p)|^{2}=|\beta_{2}(p)|^{2}=|\beta_{3}(p)|^{2}=p/4\). Then, taking the \(\beta_{j}\) to be real, the curve of states can be \[|\beta(p)\rangle=\sqrt{1-3p/4}|00\rangle+\sqrt{p/4}|01\rangle+\sqrt{p/4}|10 \rangle+\sqrt{p/4}|11\rangle.\]
Fig 5: **Some Pauli dynamical maps that can be implemented with a 1PR circuit.** The curves painted in these tetrahedrons represent Pauli dynamical maps that can be implemented with 1PR circuits. (a) shows the dynamical maps mentioned in the main text, which are: depolarizing (purple), bit flip (blue), phase flip (green), bit-phase flip (red) and parabolic (orange). (b) shows dynamical maps selected at random that can be implemented with 1PR circuits.
This state can be rewritten as:
\[|\beta(p)\rangle=e^{is}\left(\frac{1}{2}|00\rangle-\frac{i}{2\sqrt{3 }}|01\rangle-\frac{i}{2\sqrt{3}}|10\rangle-\frac{i}{2\sqrt{3}}|11\rangle\right)\] \[\qquad\qquad+e^{-is}\left(\frac{1}{2}|00\rangle+\frac{i}{2\sqrt{3 }}|01\rangle+\frac{i}{2\sqrt{3}}|10\rangle+\frac{i}{2\sqrt{3}}|11\rangle\right),\]
with \(\sin s=\sqrt{3p/4}\). We can see that this curve satisfies the conditions of Eq (25), meaning that it can be created with a 1PR circuit.
* **Parabolic dynamical map:** We define the parabolic dynamical map as: \[\epsilon(\rho)=\frac{1}{4}(1-p)^{2}\rho+\frac{1}{4}(1-p^{2})\sigma_{1}\rho \sigma_{1}+\frac{1}{4}(1-p^{2})\sigma_{2}\rho\sigma_{2}+\frac{1}{4}(1+p)^{2} \sigma_{3}\rho\sigma_{3},\] (28) with \(p\in[-1,1]\). If we take the \(\beta_{j}\) to be real, the curve of states needed on the ancilla qubits can be: \[|\beta(p)\rangle=\frac{1}{2}(1-p)|00\rangle+\frac{1}{2}\sqrt{1-p^{2}}|01 \rangle+\frac{1}{2}\sqrt{1-p^{2}}|10\rangle+\frac{1}{2}(1+p)|11\rangle.\] (29) This can be rewritten as \[|\beta(p)\rangle= \left(\frac{1}{2}|00\rangle+\frac{1}{2}|11\rangle\right)+e^{is} \left(\frac{i}{4}|00\rangle+\frac{1}{4}|01\rangle+\frac{1}{4}|10\rangle-\frac{ i}{4}|11\rangle\right)\] \[+e^{-is}\left(\frac{-i}{4}|00\rangle+\frac{1}{4}|01\rangle+\frac{ 1}{4}|10\rangle+\frac{i}{4}|11\rangle\right),\] with \(\sin s=p\), so that this map fulfills the conditions of Eq (25).
* **Bit flip map:** This dynamical map is defined as \[\varepsilon_{p}(\rho)=(1-p)\rho+p\sigma_{1}\rho\sigma_{1},\] for \(p\in[0,1]\). In particular, if we take the \(\beta_{j}\) to be real, we need to create the curve of states: \[|\beta(p)\rangle=\sqrt{1-p}|00\rangle+\sqrt{p}|01\rangle.\] This can be rewritten as \[|\beta(p)\rangle=e^{is}\left(\frac{1}{2}|00\rangle-\frac{i}{2}|01\rangle\right) +e^{-is}\left(\frac{1}{2}|00\rangle+\frac{i}{2}|01\rangle\right),\] with \(\sin s=\sqrt{p}\). Therefore, we can see that the curve satisfies the conditions of Eq (25) and it can be created with a 1PR circuit. Note that in this case we actually only need one ancilla qubit since the state \(|\beta(p)\rangle\) is only two dimensional. The exact same thing can be done for the phase flip and bit phase flip dynamical maps by changing \(\sigma_{1}\) to \(\sigma_{3}\) and \(\sigma_{2}\) respectively. For example, the bit phase flip map was implemented in [11, 35] using an optical arrangement, and it was indeed done by varying only one angle that depends on the parameter \(p\) (the angle of a half waveplate).
Furthermore, we can construct other examples of Pauli dynamical maps such that they can be implemented with a 1PR circuit. To do it, we only need to choose the three states \(|a\rangle\), \(|b\rangle\), \(|c\rangle\) that satisfy the conditions of Eq (25). For example, this can be done systematically for the case of curves of states of two qubits (that is, for Pauli dynamical maps of one qubit) with the following procedure:
1. We first choose the norms \(|a|\), \(|b|\), \(|c|\) such that \(\langle a|a\rangle+\langle b|b\rangle+\langle c|c\rangle=1\). This can be done by selecting two angles \(\mu\in[0,\pi/2]\), \(\nu\in[0,\pi/2]\) and defining: \[|a|=\sin\nu\cos\mu,\ |b|=\sin\nu\sin\mu,\ |c|=\cos\nu.\]
2. We define \(|a^{\prime}\rangle=|a||0\rangle\), \(|b^{\prime}\rangle=|b||1\rangle\), \(|c^{\prime}\rangle=|c||2\rangle\).
3. Finally, we choose a unitary matrix \(V\) with the condition that its first row is equal to \(e^{i\theta}(|a|,|b|,|c|,0)\) with \(\theta\) a uniform random phase. That way, we can define \(|a\rangle=V|a^{\prime}\rangle\), \(|b\rangle=V|b^{\prime}\rangle\), \(|c\rangle=V|c^{\prime}\rangle\) and since \(V\) is unitary, these unprimed vectors will fulfill the conditions of Eq (25). Furthermore, the form of the first row ensures that the dynamical map begins at the identity, since it implies that when \(s=0\), the state created in Eq (27) is \(|a\rangle+|b\rangle+|c\rangle=e^{i\theta}|0\rangle\), which corresponds to applying the identity channel. Such a matrix \(V\) can be randomly constructed by first finding three vectors \(\vec{w}_{1},\vec{w}_{2},\vec{w}_{3}\) orthogonal to the first row using the Gram-Schmidt process, then selecting random complex numbers \(r_{1},r_{2},r_{3}\) such that \(|r_{1}|^{2}+|r_{2}|^{2}+|r_{3}|^{2}=1\) and defining the second row of \(V\) to be \(r_{1}\vec{w}_{1}+r_{2}\vec{w}_{2}+r_{3}\vec{w}_{3}\). Once the first two rows are chosen, use Gram-Schmidt to find two vectors \(\vec{v}_{1},\vec{v}_{2}\) orthogonal to them and similarly define the third row as \(q_{1}\vec{v}_{1}+q_{2}\vec{v}_{2}\) with \(|q_{1}|^{2}+|q_{2}|^{2}=1\) and \(q_{1},q_{2}\) selected at random. Finally, there is only one choice for the fourth row so that it is orthonormal to the first three, and a random phase can be given to it.
Following this procedure for random angles and unitary matrices \(V\), we plot four Pauli dynamical maps selected at random that can be implemented with a 1PR circuit in Fig 5.
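The sampling procedure above is easy to reproduce numerically. The sketch below (our own illustration; the random completion of \(V\) is one of many possible choices) draws \(|a\rangle,|b\rangle,|c\rangle\) for the one-qubit case and verifies the conditions of Eq (25) as well as the identity channel at \(s=0\):

```python
import numpy as np

rng = np.random.default_rng(1)

def complete_to_unitary(first_row):
    """Gram-Schmidt completion of a unit-norm first row into a unitary matrix."""
    rows = [first_row]
    dim = len(first_row)
    while len(rows) < dim:
        v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        for r in rows:
            v = v - np.vdot(r, v) * r          # remove components along existing rows
        norm = np.linalg.norm(v)
        if norm > 1e-8:
            rows.append(v / norm)
    return np.array(rows)

def random_1pr_states():
    """Draw |a>, |b>, |c> following steps 1-3 above (one-qubit maps, dimension 4)."""
    mu, nu = rng.uniform(0, np.pi / 2, size=2)                     # step 1
    norms = np.array([np.sin(nu) * np.cos(mu),
                      np.sin(nu) * np.sin(mu),
                      np.cos(nu)])
    theta = rng.uniform(0, 2 * np.pi)
    V = complete_to_unitary(np.exp(1j * theta) * np.append(norms, 0.0))  # step 3
    basis = np.eye(4)
    return tuple(norms[i] * V @ basis[i] for i in range(3))        # steps 2 and 3

a, b, c = random_1pr_states()
print(np.allclose([np.vdot(a, b), np.vdot(a, c), np.vdot(b, c)], 0))        # orthogonal
print(np.isclose((np.vdot(a, a) + np.vdot(b, b) + np.vdot(c, c)).real, 1))  # norms sum to 1
print(np.round(np.abs(a + b + c), 6))  # only the first entry is non-zero: identity at s = 0
```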
## 6 Conclusion
In this work, we presented a quantum algorithm for simulating Pauli channels in \(N\)-qubit systems and generalized it to Pauli dynamical maps by using parametrized quantum circuits. Furthermore, we implemented single-qubit Pauli channels on one of IBM's quantum computers and obtained their fidelities. Finally, when working with Pauli dynamical maps, we searched for a way of simplifying the parametrized circuit by requiring that only one single-qubit rotation depends on the parameter. In theorem 2 we found the general mathematical conditions for this, applicable to any parametrized circuit. Therefore, this work presents yet another example of the current exploration into simulating open quantum systems on quantum computers, and we observe the significant effect that the errors of quantum computers have on these simulations. On the other hand, the result of theorem 2 shows what can be done under the condition of using only one parametrized rotation and can be applied to any quantum algorithm that requires parametrized circuits, such as those used for quantum machine learning.
## 7 Acknowledgments
Support by projects CONACyT 285754, and UNAM-PAPIIT IG101421 is acknowledged.
|
2307.16609 | Noisy Self-Training with Data Augmentations for Offensive and Hate
Speech Detection Tasks | Online social media is rife with offensive and hateful comments, prompting
the need for their automatic detection given the sheer amount of posts created
every second. Creating high-quality human-labelled datasets for this task is
difficult and costly, especially because non-offensive posts are significantly
more frequent than offensive ones. However, unlabelled data is abundant,
easier, and cheaper to obtain. In this scenario, self-training methods, using
weakly-labelled examples to increase the amount of training data, can be
employed. Recent "noisy" self-training approaches incorporate data augmentation
techniques to ensure prediction consistency and increase robustness against
noisy data and adversarial attacks. In this paper, we experiment with default
and noisy self-training using three different textual data augmentation
techniques across five different pre-trained BERT architectures varying in
size. We evaluate our experiments on two offensive/hate-speech datasets and
demonstrate that (i) self-training consistently improves performance regardless
of model size, resulting in up to +1.5% F1-macro on both datasets, and (ii)
noisy self-training with textual data augmentations, despite being successfully
applied in similar settings, decreases performance on offensive and hate-speech
domains when compared to the default method, even with state-of-the-art
augmentations such as backtranslation. | João A. Leite, Carolina Scarton, Diego F. Silva | 2023-07-31T12:35:54Z | http://arxiv.org/abs/2307.16609v1 | # Noisy Self-Training with Data Augmentations for Offensive and Hate Speech Detection Tasks
###### Abstract
Online social media is rife with offensive and hateful comments, prompting the need for their automatic detection given the sheer amount of posts created every second. Creating high-quality human-labelled datasets for this task is difficult and costly, especially because non-offensive posts are significantly more frequent than offensive ones. However, unlabelled data is abundant, easier, and cheaper to obtain. In this scenario, self-training methods, using weakly-labelled examples to increase the amount of training data, can be employed. Recent "noisy" self-training approaches incorporate data augmentation techniques to ensure prediction consistency and increase robustness against noisy data and adversarial attacks. In this paper, we experiment with default and noisy self-training using three different textual data augmentation techniques across five different pre-trained BERT architectures varying in size. We evaluate our experiments on two offensive/hate-speech datasets and demonstrate that (i) self-training consistently improves performance regardless of model size, resulting in up to +1.5% F1-macro on both datasets, and (ii) noisy self-training with textual data augmentations, despite being successfully applied in similar settings, decreases performance on offensive and hate-speech domains when compared to the default method, even with state-of-the-art augmentations such as backtranslation.
## 1 Introduction
Online social media platforms are widely used by modern society for many productive purposes. However, they are also known for intensifying offensive and hateful comments, attributed in part to factors such as user anonymity Mondal et al. (2017). Manual identification of hate speech is impractical at scale due to the massive number of posts generated every second and the potential harm to the mental health of moderators. Therefore, there is a need for automatic approaches to detect offensive and hateful speech.
In recent years, research on this topic has increased, resulting in new models and datasets published in various languages and sources Fortuna and Nunes (2018). A common characteristic among available datasets is label skewness towards the negative class (non-offensive/hateful), which is usually more frequent than the positive class (offensive/hateful). Apart from traditional ways of dealing with imbalanced classes (e.g. under or oversampling or applying class weighting), semi-supervised techniques such as self-training can be used to extend the training set with unseen examples that introduce new learning signals without the costly burden of manual data labeling.
Self-training is a technique that involves iteratively training models using both labelled and unlabelled data. The process begins by training a model using human-labelled data only, which is then used to infer labels for a set of unlabelled data, creating a weakly-labelled dataset. The weakly-labelled dataset and the human-labelled dataset are then aggregated and used to retrain the model. This iterative process is repeated for a fixed number of steps or until no performance improvement is observed. Self-training can be particularly useful when labelled data is scarce or expensive to obtain, and was successfully applied in a variety of domains such as computer vision Schiappa et al. (2022), audio and speech processing Liu et al. (2022), and natural language processing He et al. (2019).
Several variants of self-training have been proposed over the years Amini et al. (2022). One common approach is to use a teacher-student framework, in which the "student" model learns from the output generated by the "teacher" model Blum and Mitchell (1998); Xie et al. (2020); Chen et al. (2021); Karamanolakis et al. (2021). Additionally, a confidence threshold filter may be applied to remove examples that are too ambiguous or non-informative. This process is summarised in Figure 1.
Recent research on self-training has reported further improvements in performance by introducing perturbations directly into the raw input or to its latent representation, improving generalisation and convergence (Rasmus et al., 2015; Laine and Aila, 2017; Miyato et al., 2018; He et al., 2019; Xie et al., 2020). These perturbations are often introduced in the form of data augmentations, which are widely applied in Computer Vision tasks but are less commonly explored in Natural Language Processing tasks, especially in the context of self-training. These "noisy self-training" methods can be particularly useful in settings where the input data is noisy or subject to a high degree of variation, improving prediction consistency and adversarial robustness (Carmon et al., 2019; Alayrac et al., 2019; Najafi et al., 2019).
Bayer et al. (2022) argue that data augmentation depends on the underlying classification task, thus it cannot be effectively applied in all circumstances. Previous work focusing solely on data augmentation methods, not coupled with self-training, has shown mixed results for the domain of offensive/hate speech classification (Section 2.1). This indicates that there may not be a best method, while some may even negatively impact performance.
An open question is whether noisy self-training with text data augmentations can contribute to text classification tasks using state-of-the-art transfer-learning BERT models that have been shown to be invariant to various data transformations (Longpre et al., 2020). The task of offensive/abusive speech detection poses a difficult challenge for generating high-quality semantic invariant augmented examples, since it is a domain that is intrinsically associated with specific keywords that, if modified, can completely change the semantics of the text. In this paper, we innovate by providing an extensive experimentation setup using three different data augmentation techniques - backtranslation, random word swap, and random synonym substitution - in a self-training framework, with five different pre-trained BERT architectures varying in size, on two different datasets.
We demonstrate that self-training, either with or without data noising, outperforms default fine-tuning regardless of model size, on both datasets. However, when comparing self-training without data noising vs 'noisy' self-training, we find that data augmentations decrease performance, despite the literature reporting the superiority of noisy self-training in other domains. We further investigate how the augmentation methods fail to create label-invariant examples for the offensive/hate speech domain. Finally, we discuss future research ideas to address the limitations found in this work.
## 2 Related Work
### Data Augmentation
Bayer et al. (2022) present a survey on data augmentation methods for NLP applications, reporting performance gains on various tasks.
In the domain of offensive/hate speech classification, Ibrahim et al. (2018) experiment with three different text augmentation techniques to expand and balance their Wikipedia dataset by augmenting negative (non-offensive) examples. From a binary view of the dataset, more than 85% of their examples are labelled as non-offensive, and from a multi-label view of the dataset, three of the six offensive classes are represented by less than 7% of the dataset. They report F1-score increases of +1.4% with unique words augmentation, +2.9% with unique words and random mask, and +3.6% with unique words, random mask, and synonym replacement.
Figure 1: Teacher-student self-training loop

Mosolova et al. (2018) use a custom synonym replacement augmentation method to experiment with a 'toxic' dataset with 6 classes from a Kaggle competition1. They experiment with character and word embeddings with a CNN architecture, and report a +3.7% and +5.1% ROC-AUC increase when applying their augmentation method with character embeddings on the public and private scores2, respectively. However, when coupled with word embeddings, they find that their augmentations result in a decrease of -0.09% and -0.21% ROC-AUC on the public and private scores, respectively.
Footnote 2: Public scores are computed over a smaller portion of the test set. At the end of the competition, private scores are computed with the remainder of the test set.
Rizos et al. (2019) propose three text-based data augmentation techniques to address the class imbalance in datasets, and apply them on three English hate speech datasets named HON Davidson et al. (2017), RSN-1 Waseem and Hovy (2016) and RSN-2 Waseem (2016). Their augmentation methods include (i) synonym replacement based on word embedding, (ii) warping of the token words along the padded sequence, and (iii) class-conditional RNN language generation. They compare the three methods on different architectures combining word embeddings, CNNs, GRUs, and LSTMs, and they report an average across four different architecture configurations of -6.3% F1-Macro using (i), +5% F1-Macro using (ii), and -4% F1-Macro using (iii).
Marivate and Sefara (2020) experiment with four different data augmentation techniques: WordNet synonym substitution, backtranslation between German and English, word embedding substitution according to cosine similarity, and mixup Zhang et al. (2018). Authors experiment with three datasets from different domains: Sentiment 140 Go et al. (2009), AG News Zhang et al. (2015) and a Hate Speech dataset Davidson et al. (2017). They observe performance increases on both Sentiment 140 and AG News across different augmentation methods, up to +0.4% and +0.5% accuracy score on AG News and Sentiment 140, respectively. However, they report performance decreases with all methods on the Hate Speech dataset, with decreases of 0.0% with mixup, -0.3% with embedding similarity, -0.8% with synonym substitution, and -2.3% with backtranslation.
### Self-Training
Xie et al. (2020) present a method called _noisy student_, which achieves state-of-the-art results on the ImageNet dataset Deng et al. (2009) by performing self-training with a teacher-student approach, using student models that are equal in size to or larger than the teacher models, and adding noise both to the input data through random image augmentations and to the model via dropout.
He et al. (2019) apply a similar idea using textual data augmentation methods such as backtranslation Edunov et al. (2018) and token modifications to a self-training LSTM architecture for the tasks of machine translation and text summarization. They find that both model noise, in the form of dropout, and data noise, in the form of data augmentations, are crucial to their observed increase in performance on both tasks.
Xie et al. (2020) use six text classification and two image classification benchmark datasets to experiment with different types of noise-inducing techniques for self-training. They argue that state-of-the-art augmentations like backtranslation for text classification and RandAugment Cubuk et al. (2020) for image classification, outperform simple noise inducing techniques, such as additive Gaussian noise.
The use of noisy self-training approaches in the domain of offensive/hate speech classification is still limited, but default 'non-noisy' self-training has been successfully applied in some recent works. Alsafari and Sadaoui (2021) collect unlabelled Arabic tweets and perform semi-supervised classification with self-training for the domain of Offensive and Hate Speech detection using multiple text representations such as N-grams, Word2Vec, AraBert and Distilbert, and multiple model architectures such as SVM, CNN and BiLSTM. They report up to 7% performance increase in low resource settings where only a few labelled examples are available.
Leonardelli et al. (2020) apply self-training in their submission to the HaSpeeDe shared task on Italian hate speech detection (task A). They fine-tune an AlBERTo model with the human-labelled dataset provided by the task organisers and extend it with a weakly-labelled dataset using self-training. Additionally, they oversample the human-labelled set in an attempt to make the model more robust to inconsistencies in the weakly-labelled set. Their submission achieve an F1-macro score of 75.3% on tweets, placing 11th out of 29 teams, and 70.2% on news headlines, placing 5th out of 29 teams.
Pham-Hong and Chokshi (2020) report experiments with the noisy student method from Xie et al. (2020) in the OffensEval 2020 shared task, achieving 2nd place at subtask B (Automatic categorization of offense types). In their setup, although dropout is applied to a BERT-large model, no noise is injected into the data, which is a crucial component of the noisy student method. Because of
this, we argue that this work is actually applying a default self-training method instead of a noisy self-training method. Also, OffensEval 2020's training data does not contain human-labelled data3, thus both their weakly-labelled dataset and ground-truth dataset consist of inferred examples.
Footnote 3: In OffensEval 2020, the labels in the training data are the average confidence score and confidence standard deviation aggregated from an ensemble of models.
Richardson et al. (2022) detect hate speech on Twitter in the context of the Covid-19 pandemic. They employ a simple approach, utilizing a bag-of-words representation combined with an SVM classifier. Authors demonstrate that by employing self-training with only 20% of the training data, they manage to improve accuracy by +1.55% compared to default training using 80% of the training data.
To the best of our knowledge, Santos et al. (2022) is the only previous work in which a **noisy** self-training approach was attempted on an offensive/hate speech classification task. They propose an ensemble of two semi-supervised models to create FIGHT, a Portuguese hate speech corpus. Authors combine GANs, a BERT-based model, and a label propagation model, achieving 66.4% F1-score. They attempt to increase performance using backtranslation as data augmentation, but ultimately observe no performance gains, thus their best model is obtained with default self-training, not with noisy self-training.
## 3 Materials and Methods
This section presents the description of the datasets, data augmentation methods and self-training architectures used throughout our experiments. Our code is available at GitHub4.
Footnote 4: [https://github.com/JAugusto97/Offense-Self-Training](https://github.com/JAugusto97/Offense-Self-Training)
### Data Description
We use two English binary offensive/hate speech detection datasets in our experiments. Table 1 presents their target class distributions.
**Offensive Language Identification Dataset (OLID)** (Zampieri et al., 2019) contains a collection of annotated tweets following three levels: Offensive Language Detection, Categorization of Offensive Language, and Offensive Language Target Identification. This work only uses the first level - Offensive Language Detection. The dataset was normalised by replacing URLs and user mentions with placeholders. The best model in (Zampieri et al., 2019) achieves 80% macro-\(F1\) using convolutional neural networks, with 70% and 90% of \(F1\)-Score for the positive and negative classes, respectively.
**ConvAbuse** (Cercas Curry et al., 2021) is a dataset on abusive language towards three conversational AI systems: an open-domain social bot, a rule-based chatbot, and a task-based system. Authors find that the distribution of abuse towards conversational systems differs from other commonly used datasets, with more than 50% of the instances containing sexism or sexual harassment. To normalise the data, web addresses were replaced with a placeholder. Authors provide standard train, development, and test sets and achieve up to 88.92% macro-\(F1\) using a fine-tuned BERT model. In our experiments, we concatenate the interactions between the user and the chatbot into a single text document divided by new line separators, and we use majority voting between the annotations to consolidate the binary abusive vs. non-abusive label.
**Unlabelled data.** We collected 365,456 tweets in English with the Twitter API using an unbiased query rule: random tweets mentioning stop-words like "in", "on", "a", "is", "not", "or" and so on. We also preprocess the data by removing user mentions, URLs, punctuation, extra whitespace and accents.
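A rough sketch of this normalisation step is shown below; the exact rules and regular expressions used by the authors may differ, so this should be read as an approximation:

```python
import re
import unicodedata

def normalise_tweet(text: str) -> str:
    """Approximate normalisation: strip URLs, mentions, accents,
    punctuation and extra whitespace."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # URLs
    text = re.sub(r"@\w+", " ", text)                    # user mentions
    text = unicodedata.normalize("NFKD", text)           # decompose accented chars
    text = text.encode("ascii", "ignore").decode("ascii")
    text = re.sub(r"[^\w\s]", " ", text)                 # punctuation
    return re.sub(r"\s+", " ", text).strip()             # extra whitespace

print(normalise_tweet("@user check https://t.co/xyz it's naïve!!"))
# -> "check it s naive"
```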
### Self-Training Architecture
Our noisy self-training system is similar to that introduced by Xie et al. (2020) and Xie et al. (2020), and works as follows:
1. A teacher model is trained to minimise the cross-entropy loss on the human-labelled training set exclusively.
| Dataset | Class | Train | Dev | Test |
| --- | --- | --- | --- | --- |
| OLID | Not-Offensive | 8,840 | 0 | 620 |
| OLID | Offensive | 4,400 | 0 | 240 |
| ConvAbuse | Not-Offensive | 2,163 | 719 | 725 |
| ConvAbuse | Offensive | 338 | 112 | 128 |

Table 1: Target class distribution for OLID and ConvAbuse.
2. The teacher model infers weak labels from the unlabelled dataset.
    * A confidence threshold filter is applied, and examples that fall below this threshold are removed.
    * _Downsampling_ is applied to the inferred examples, ending up with a perfectly balanced weakly-labelled dataset.
3. All the examples selected from the previous step are augmented once with one of the data augmentation methods, doubling the amount of weakly-labelled examples. The labels obtained with the 'clean/without noise' text in step 2 are replicated for the augmented texts.
4. An equal-sized student model minimises the combined cross-entropy loss on human-labelled and weakly-labelled datasets: \[L=\frac{1}{n}\sum_{i=1}^{n}L_{\mathrm{labelled}}+\frac{1}{m}\sum_{i=1}^{m}L_{\mathrm{inferred}}\] (1)
5. Repeat from step 2 using the current student model as the teacher model.
In our experiments, we compare this noisy self-training framework against the default 'non-noisy' self-training method, which simply skips step 3, meaning we do not apply any form of data augmentation.
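To make the loop concrete, the sketch below implements steps 1-5 in Python. The callables `fit`, `predict_proba` and `augment`, as well as the helper names, are illustrative placeholders rather than part of our released code; any classifier exposing these two operations can be plugged in. Note that, for simplicity, the sketch trains the student on the concatenation of both sets, whereas Eq. (1) averages the two loss terms separately.

```python
import random
import numpy as np

def downsample_balanced(texts, labels, seed=0):
    """Randomly undersample so that every class keeps the same number of examples."""
    rng = random.Random(seed)
    by_class = {}
    for t, y in zip(texts, labels):
        by_class.setdefault(int(y), []).append(t)
    n = min(len(v) for v in by_class.values())
    out_t, out_y = [], []
    for y, ts in by_class.items():
        for t in rng.sample(ts, n):
            out_t.append(t)
            out_y.append(y)
    return out_t, np.array(out_y)

def self_train(labelled, unlabelled, fit, predict_proba,
               augment=None, iterations=4, threshold=0.8):
    """fit(texts, labels) -> model; predict_proba(model, texts) -> (examples x classes) array."""
    texts, labels = labelled
    labels = np.asarray(labels)
    teacher = fit(texts, labels)                         # 1. teacher on human labels only
    for _ in range(iterations - 1):
        probs = predict_proba(teacher, unlabelled)       # 2. infer weak labels
        keep = probs.max(axis=1) >= threshold            #    confidence threshold filter
        weak_t = [t for t, k in zip(unlabelled, keep) if k]
        weak_y = probs.argmax(axis=1)[keep]
        weak_t, weak_y = downsample_balanced(weak_t, weak_y)
        if augment is not None:                          # 3. noisy self-training only
            weak_t = weak_t + [augment(t) for t in weak_t]
            weak_y = np.concatenate([weak_y, weak_y])    #    replicate the inferred labels
        student = fit(list(texts) + weak_t,              # 4. student on human + weak labels
                      np.concatenate([labels, weak_y]))
        teacher = student                                # 5. the student becomes the teacher
    return teacher
```

Passing `augment=None` reproduces the default self-training baseline, while passing one of the augmenters from Section 3.3 yields the noisy variant.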
### Data Augmentation Methods
In each noisy self-training experiment we use nlpaug5 to apply one of the three following data augmentation methods for textual data:
Footnote 5: [https://github.com/makcedward/nlpaug](https://github.com/makcedward/nlpaug)
**Random Synonym Substitution.** Uses WordNet (Miller, 1995) to randomly replace tokens with one of their synonyms. For each sentence, 30% of its tokens are replaced.
**Random Word Swap.** Randomly swaps adjacent tokens in a sentence. For each sentence, 30% of its tokens are swapped.
**Backtranslation.** First translates the original texts into a second language, then translates them back from the second language to the original language. We use the backtranslation model from nlpaug, which uses the two different transformer models from Ng et al. (2019) to translate the data from English to German, then from German back to English.
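For reference, the three augmenters can be instantiated with nlpaug roughly as follows; the parameter names follow the nlpaug word-augmenter API, but exact defaults and return types vary between versions, so this should be read as a sketch rather than the exact configuration used.

```python
import nlpaug.augmenter.word as naw

synonym_aug = naw.SynonymAug(aug_src='wordnet', aug_p=0.3)   # WordNet synonym substitution
swap_aug = naw.RandomWordAug(action='swap', aug_p=0.3)       # random adjacent word swap
backtranslation_aug = naw.BackTranslationAug(
    from_model_name='facebook/wmt19-en-de',                  # English -> German
    to_model_name='facebook/wmt19-de-en')                    # German  -> English

text = "Send me the link and Ill love you forever"           # example from Table 5
for aug in (synonym_aug, swap_aug, backtranslation_aug):
    print(aug.augment(text))
```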
## 4 Experimental Setup
Firstly, we experiment with each dataset to estimate the hyperparameters for the base models, which are the first teacher models in the self-training loop. We use a batch size of 128, a maximum sequence length of 128, a learning rate of 0.00001, 15% of the training set as warm-up batches, a weight decay of 0.001 and 20 training epochs. We apply a dropout rate of 10% for both the attention and classification layers. The model with the highest validation F1-macro score6 obtained during training is loaded at the end of the last epoch. For the hyperparameters associated with the self-training method, we set the number of teacher-student iterations to 4 (including the first teacher model) and a confidence threshold filter of 80%, similarly to Xie et al. (2020). Also, we experiment with five different pre-trained BERT models: DistilBERT, BERT-base-cased, BERT-large-cased, RoBERTa-base and RoBERTa-large, aiming to investigate the impact of model size on the performance gains associated with self-training.
Footnote 6: Lowest training loss in the case of OLID, since no development set is provided.
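One possible way to realise this configuration with the Hugging Face transformers library is sketched below. The argument names are those of `TrainingArguments`, the metric key assumes a user-supplied `compute_metrics` function, and details may differ from the exact training scripts we used.

```python
from transformers import (AutoConfig, AutoModelForSequenceClassification,
                          AutoTokenizer, TrainingArguments)

model_name = "roberta-base"                       # one of the five architectures compared
config = AutoConfig.from_pretrained(
    model_name, num_labels=2,
    attention_probs_dropout_prob=0.1,             # 10% dropout on the attention layers
    classifier_dropout=0.1)                       # 10% dropout on the classification head
tokenizer = AutoTokenizer.from_pretrained(model_name)   # texts truncated/padded to 128 tokens
model = AutoModelForSequenceClassification.from_pretrained(model_name, config=config)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=128,
    learning_rate=1e-5,
    weight_decay=0.001,
    num_train_epochs=20,
    warmup_ratio=0.15,                            # 15% of training used for warm-up
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1_macro")             # highest validation F1-macro (Footnote 6)
```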
From the above-listed configurations, we designed two main classification scenarios. The first scenario accounts for a regular self-training loop without data noise injection through augmentations, while the second scenario uses the noisy self-training approach, introducing data noise with one of the three augmentation methods described in Section 3.3.
Finally, we conduct a deeper analysis of each augmentation method. We use the first teacher model, trained exclusively with the human-labelled data of each dataset, to infer both the 'clean/without augmentation' and the 'noisy/augmented' versions of the unlabelled dataset and verify the following: (i) Does the augmentation method create new tokens that are not present in the vocabulary of the 'clean/without augmentation' unlabelled dataset? and (ii) Are the augmentations semantically invariant, meaning both the 'clean' and 'noisy' pairs of examples are assigned the same label?
## 5 Results
### Default Fine-Tuning vs. Self-Training
Table 2 displays the mean and standard deviation \(F1\)-macro scores computed over three different random seed initializations for each experiment. Note
that self-training, regardless of whether coupled with data augmentation methods or not, improves over default fine-tuning for every model architecture, increasing the F1-macro score from +0.7% up to +1.5% on OLID and +0.8% up to +1.5% on ConvAbuse depending on the pre-trained model architecture.
Also, we highlight how self-training can make smaller models, which require fewer resources to maintain in practical applications, achieve the same performance as larger and more costly models that are trained with default fine-tuning. Self-training on a DistilBERT (66M parameters) outperforms a BERT-large-cased (340M parameters) with default fine-tuning on both OLID and ConvAbuse. On OLID, a RoBERTa-base architecture (125M parameters) with self-training outperforms a RoBERTa-large (354M parameters) architecture with default fine-tuning, although this does not hold true for ConvAbuse.
Furthermore, we point out that OLID and ConvAbuse's data come from different sources, the first being Twitter, and the second one representing conversations between humans and chatbots, thus their structure differs significantly. Since our unlabelled dataset is composed of Twitter data, it would be fair to assume that the benefits of self-training in our experiments would be more prominent for the OLID dataset, but our results do not show this, since models trained with ConvAbuse benefited from self-training with our Twitter-originated unlabelled dataset just as much as models trained with OLID.
### Default Self-Training vs. Noisy Self-Training
After verifying that self-training is beneficial to both datasets on all model architectures, we compare default self-training with noisy self-training, and the impacts of adding data noise in the form of data augmentations. We find that introducing data augmentations to the self-training pipeline increases performance against default self-training only for RoBERTa-large on both OLID and ConvAbuse, with DistilBERT also showing improvements for ConvAbuse, but not for OLID. On all other architectures, for both datasets, default self-training without data augmentations achieves the highest scores.
In our results for offensive/hate speech classification, backtranslation does not achieve the highest score in any setup, while synonym substitution and word swap tie for highest score in three scenarios: ConvAbuse with DistilBERT, ConvAbuse with BERT-large-cased, and OLID with RoBERTa-large. Synonym substitution outperforms all the remaining methods on ConvAbuse with RoBERTa-large.
An important remark is that our results diverge from He et al. (2019), which finds that state-of-the-art data augmentation methods such as backtranslation outperform simpler methods on self-training for machine translation and text summarization. However, our results align with Marivate and Serfara (2020), although their work is not focused on
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{4}{c}{OLID} \\ Architecture & DF & ST & ST + BT & ST + SS & ST + WS \\ \hline DistilBERT & 78.4 \(\pm\) 0.1 & **79.2 \(\pm\) 0.2** & 79.0 \(\pm\) 0.3 & 79.0 \(\pm\) 0.3 & 79.0 \(\pm\) 0.3 \\ BERT-base-cased & 77.2 \(\pm\) 0.3 & **78.7 \(\pm\) 0.1** & 78.1 \(\pm\) 0.1 & 78.3 \(\pm\) 0.3 & 78.3 \(\pm\) 0.3 \\ BERT-large-cased & 79.2 \(\pm\) 0.2 & **80.0 \(\pm\) 0.3** & 79.4 \(\pm\) 0.1 & 79.3 \(\pm\) 0.3 & 79.3 \(\pm\) 0.3 \\ RoBERTa-base & 79.4 \(\pm\) 0.7 & **80.1 \(\pm\) 0.3** & 80.0 \(\pm\) 0.4 & 80.0 \(\pm\) 0.4 & 80.0 \(\pm\) 0.4 \\ RoBERTa-large & 79.8 \(\pm\) 0.3 & 80.4 \(\pm\) 0.4 & 80.3 \(\pm\) 0.4 & **80.7 \(\pm\) 0.7** & **80.7 \(\pm\) 0.7** \\ \hline \multicolumn{6}{c}{ConvAbuse} \\ Architecture & DF & ST & ST + BT & ST + SS & ST + WS \\ \hline DistilBERT & 85.7 \(\pm\) 0.5 & 86.8 \(\pm\) 0.3 & 87.1 \(\pm\) 0.3 & **87.2 \(\pm\) 0.3** & **87.2 \(\pm\) 0.3** \\ BERT-base-cased & 86.8 \(\pm\) 0.8 & **87.6 \(\pm\) 0.1** & 87.2 \(\pm\) 0.5 & 87.2 \(\pm\) 0.5 & 87.2 \(\pm\) 0.5 \\ BERT-large-cased & 87.1 \(\pm\) 0.6 & **87.9 \(\pm\) 0.5** & 87.4 \(\pm\) 0.2 & **87.9 \(\pm\) 0.5** & **87.9 \(\pm\) 0.5** \\ RoBERTa-base & 84.5 \(\pm\) 0.3 & **85.5 \(\pm\) 0.4** & 85.3 \(\pm\) 0.8 & 85.4 \(\pm\) 0.5 & 85.4 \(\pm\) 0.5 \\ RoBERTa-large & 86.0 \(\pm\) 0.1 & 86.2 \(\pm\) 0.3 & 86.6 \(\pm\) 0.3 & **86.9 \(\pm\) 0.1** & 86.8 \(\pm\) 0.1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean \(\pm\) 1 std F1-Macro scores obtained over three random seed initializations.
DF=Default Fine-Tuning, ST=Self-Training, BT=Backtranslation, SS=Synonym Substitution, WS=Word Swap
self-training, but instead on how different data augmentation techniques impact their models on three datasets from different domains. They report backtranslation as their worst augmentation method on a hate speech dataset, decreasing accuracy by 2.3%. Our findings bridge this gap and reveal that backtranslation has significant limitations in the domain of offensive/hate speech detection, even when used in a noisy self-training approach.
### Data Augmentation Analysis
Our first data augmentation analysis is to understand if the augmented text introduces new unseen tokens to the vocabulary of the 'clean' unlabelled set when both are combined. We find a vocabulary size increase of 39.5%, 9.0% and 4.7% averaging across all different pre-trained architectures for backtranslation, synonym substitution and word swap7 respectively. This indicates that backtranslation is heavily superior in terms of introducing new unseen tokens, but this is not correlated with performance increase, as backtranslation appears as the worst augmentation method for noisy self-training in our classification experiments.
Footnote 7: Word swap is unintuitively capable of creating new tokens depending on how a sentence is split into tokens and then merged back after swapping the tokens.
Next, in order to verify the performance of the data augmentation methods in generating semantically invariant examples, we use the base models trained exclusively with the human-labelled data from each dataset, on each pre-trained architecture, and use them to perform inference on both the 'clean' and the noisy/augmented unlabelled set. We then compare both predictions and analyse how augmentations may shift the underlying target class. We will refer to **positive shift** when a non-offensive example is classified as offensive after being augmented, and **negative shift** when an offensive example is classified as non-offensive after being augmented.
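The shift analysis itself reduces to comparing a model's predictions on paired clean/augmented texts. A minimal sketch is shown below, with an assumed `predict` callable returning 0/1 labels (1 = offensive); the function name and interface are illustrative.

```python
def class_shift(model, clean_texts, augmented_texts, predict):
    """Return (total, positive, negative) shift rates between paired predictions."""
    clean = predict(model, clean_texts)
    noisy = predict(model, augmented_texts)
    shifted = [(c, n) for c, n in zip(clean, noisy) if c != n]
    total = len(shifted) / len(clean)
    positive = sum(1 for c, n in shifted if c == 0 and n == 1) / max(len(shifted), 1)
    negative = sum(1 for c, n in shifted if c == 1 and n == 0) / max(len(shifted), 1)
    return total, positive, negative
```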
Table 3 presents the total class shift percentage for each augmentation method, averaged across both datasets and all model architectures, which we further divide into positive and negative label shift percentages. Notice that backtranslation is the method that produces the highest amount of label shifting at 23.8%, of which 54.7% are negative shifts, a 6.6% increase over synonym substitution and a 4.8% increase over word swap.
It is fair to assume that not all of the class shifting occurs because the augmentation changes the semantics that determine whether an example is offensive or not-offensive. In most cases, class shifting may occur because of small perturbations that are semantically invariant, meaning the true underlying classes of both the 'clean' and the augmented text are still the same, even if the classifier predicted them as different classes. In these cases, when we set the label of the augmented text to be the same as the one obtained when inferring the 'clean' version of the text, as presented in Section 3.2, we reinforce the model to be more robust against these small perturbations, which is one of the main benefits of noisy self-training. However, when augmentation methods create semantically different versions of the original texts, replicating the inferred label from the original text to the augmented text results in the addition of incorrect ground-truth labels to the training set, which may degrade performance.
Currently, to the best of our knowledge, there is no dataset annotated for offensive/hate speech both before and after applying data augmentation, which would enable a more accurate estimation of the semantic variations produced by these methods. In Tables 4 and 5 we show two examples for each augmentation method that suffered from negative shift (offensive to not-offensive) and positive shift (not-offensive to offensive), respectively.
A recurrent theme among the target-shifted examples is the substitution of the keyword 'fuck' with 'damn' or 'hell', indicating that although these keywords are semantically similar, they are not always interchangeable with respect to the target class, and the mere replacement of one for another is enough to shift it. This could be expected, as offense detection is highly impacted by the mere presence or absence of offensive keywords.
## 6 Conclusion
In this work, we analysed the impact of self-training on offensive and hate speech classification tasks using five different pre-trained BERT models
\begin{table}
\begin{tabular}{l c|c c} \hline
Augmentation & Total Shift & Positive Shift & Negative Shift \\ \hline
BT & 23.8\% & 46.7\% & 54.7\% \\
SS & 23.5\% & 48.7\% & 51.3\% \\
WS & 23.3\% & 47.8\% & 52.2\% \\ \hline
\end{tabular}
\end{table}
Table 3: Average target class shift percentage on the weakly-labelled set. BT=Backtranslation, SS=Synonym Substitution, WS=Word Swap
of varying sizes and two different datasets. We also experimented with noisy self-training using three different data augmentation techniques for textual data. We found that self-training improves classification performance for all model architectures on both datasets, with an increase in F1-Macro of up to +1.5%. However, our experiments comparing default self-training versus noisy self-training showed that noisy self-training does not improve performance, despite its success in other domains. Finally, we investigated the three data augmentation methods and showed that the domain of offensive/hate speech classification is highly sensitive to semantic variances produced by them, and we discussed future research ideas to mitigate these problems.
## 7 Future Work
We understand that some of the semantic variations discussed in this work could be mitigated by data augmentation methods that both preserve existing offensive keywords, and do not introduce new offensive keywords randomly, as these are often conditional on the underlying ground-truth class. For some languages, most of these keywords are extensively documented8, thus they can be known a priori by these methods, and be treated differently, such as only substituting an offensive keyword by another offensive keyword, or not allowing a non-offensive keyword to be substituted by an offensive keyword. This custom approach can theoretically help mitigate semantic variations in this domain, but offensive/hateful comments can still be made without making use of a single offensive/hateful keyword. In these more subtle cases, a system would have to detect the offensive/hateful context without relying solely on keywords, and modify the example while still maintaining this context. We see potential benefits of using recent instruction-tuned large language models (Ouyang et al., 2022) as specialised data augmentation methods that are task-specific, and are able to preserve the semantics associated with the task when modifying a given text. In this scenario, an instruction prompt can be designed to inform the system of the context of the task, and make it aware that these semantics must be preserved when modifying the given text. In the future, we aim to extend this work with the above-mentioned research ideas.
Footnote 8: [https://hatebase.org/](https://hatebase.org/)
## Acknowledgments
We thank Olesya Razuvayevskaya and Freddy Heppell for their valuable feedback. This research has been funded by "SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics" (EU H2020, Grant Agreement n.871042 ([http://www.sobigdata.eu](http://www.sobigdata.eu))).
\begin{table}
\begin{tabular}{l l l} \hline
Text & Augmented Text & Method \\ \hline
I HATE ALL OF YOU & ALL I HATE OF YOU & WS \\
Maybe I dont respect all women & Maybe I respect dont women all & WS \\
Bitches and sports & Females and Sport & BT \\
Wooooow what the fuck & Wooooow, what the hell? & BT \\
Bitch you better be joking & Gripe you good be joking & SS \\
The NYT has been showing its whole ass [...] & The NYT has follow showing its whole butt [...] & SS \\ \hline
\end{tabular}
\end{table}
Table 4: Examples of Offensive to Not-Offensive semantic shift created by data augmentation.
BT=Backtranslation, SS=Synonym Substitution, WS=Word Swap
\begin{table}
\begin{tabular}{l l l} \hline
Text & Augmented Text & Method \\ \hline
Is that Fat Albert & That Fat is Albert & WS \\
Man that is terrible & That man is terrible & WS \\
damn white people oppressing the blacks & fucking white people who oppress the blacks & BT \\
That damn staircase be beating my ass [...] & That fucking staircase will bang my ass [...] & BT \\
i will not get over this & i will not fuck off ended this & SS \\
Send me the link and Ill love you forever & Send pine tree state the link and Ill fuck you forever & SS \\ \hline
\end{tabular}
\end{table}
Table 5: Examples of Not-Offensive to Offensive class shift created by data augmentation.
BT=Backtranslation, SS=Synonym Substitution, WS=Word Swap |
2309.09285 | Persistent vibrational structure in $^{110-116}$Cd | The empirical spectra and $E2$ decay rates in $^{110,112,114,116}$Cd are
shown to be consistent with a vibrational interpretation for low-lying normal
states, coexisting with a single deformed $\gamma$-soft band of intruder
states. The observed deviations from this paradigm show up in particular
non-yrast states, which are properly described by a Hamiltonian with U(5)
partial dynamical symmetry. The latter is characterized by a good (broken)
symmetry in most (in selected) normal states, weakly coupled to intruder
states. | N. Gavrielov, J. E. Garcia-Ramos, P. Van Isacker, A. Leviatan | 2023-09-17T14:34:02Z | http://arxiv.org/abs/2309.09285v1 | # Persistent vibrational structure in \({}^{110-116}\)Cd
###### Abstract
The empirical spectra and \(E2\) decay rates in \({}^{110,112,114,116}\)Cd are shown to be consistent with a vibrational interpretation for low-lying normal states, coexisting with a single deformed \(\gamma\)-soft band of intruder states. The observed deviations from this paradigm show up in particular non-yrast states, which are properly described by a Hamiltonian with U(5) partial dynamical symmetry. The latter is characterized by a good (broken) symmetry in most (in selected) normal states, weakly coupled to intruder states.
The concepts of shapes and symmetries play a pivotal role in the quest for understanding the structure and simple patterns in complex many-body systems. A notable example is found in atomic nuclei, where these notions are instrumental for interpreting the collective motion exhibited by a multitude of protons and neutrons subject to the strong interaction. Based on earlier ideas of Bohr and Kalckar [1; 2] and on Rainwater's suggestion [3] that nuclei may be intrinsically deformed, a standard description of the nucleus was proposed in terms of a quantum liquid drop, which can vibrate and, if deformed, also rotate. This is commonly referred to as the (Bohr-Mottelson) collective model of the nucleus [4; 5; 6]. Particular limits of the model provide insightful paradigms for the dynamics of spherical, axially-deformed and non-axial shapes. These geometric benchmarks correspond in the algebraic interacting boson model (IBM) of Arima and Iachello [7] to solvable limits, associated with dynamical symmetries.
Recent advances in high resolution spectroscopy of non-yrast states [8] impart valuable input for testing and challenging the accepted paradigms of collective motion in nuclei. The present work examines the collective model hypothesis of quadrupole oscillations about a spherical shape, in relation to the cadmium isotopes (\(Z\!=\!48\)). The latter have long been considered textbook examples of spherical-vibrator nuclei and U(5) dynamical symmetry [9; 10; 6; 7; 11]. On the other hand, detailed studies, using complementary spectroscopic methods, have provided evidence for marked deviations from such a structural paradigm [12; 13; 14; 15; 16; 17]. Two approaches have been proposed to address these unexpected findings. The first questions the spherical-vibrational character of the \({}^{110,112}\)Cd isotopes, replacing it with multiple coexistence of states with different deformed shapes in the same nucleus, a view qualitatively supported by a beyond-mean-field (BMF) calculation with the Gogny D1S energy density functional [18; 19]. A second approach is based on the recognition that the reported deviations from a spherical-vibrator behavior show up in selected states, while most states retain their vibrational character. In the terminology of symmetry, this implies that the symmetry in question is broken only in a subset of states, hence is partial [20]. Such a U(5) partial dynamical symmetry (PDS) approach was applied in Ref. [21] to describe the properties of \({}^{110}\)Cd.
In this Letter, it is shown that the U(5)-PDS approach of Ref. [21] can be extended to give a coherent description of a _series_ of cadmium isotopes with mass number \(A\)=110-116. Their properties are analyzed based on a vibrational interpretation coupled to the presence of intruder states. It is by now widely accepted that the Cd isotopes exhibit shape coexistence in their low-energy spectrum [22; 23; 24; 25]. However, unlike the multi-shape version of coexistence in Refs. [18; 19], only two coexisting configurations with different shapes are proposed here: a spherical one, exhibiting an anharmonic vibrational spectrum for normal states, and one that is prolate deformed with \(\gamma\)-soft characteristics, that is, an axially-symmetric shape that can easily turn triaxial, for intruder states. The anharmonicity is due to the presence of terms in the Hamiltonian that break the U(5) symmetry in selected normal states, and is essential to reproduce the unexpected observed \(E2\) decay patterns. In this respect, it should be mentioned that previous attempts to explain the experimental \(E2\) matrix elements relied on strong mixing between spherical and intruder states and ultimately proved unsuccessful [12; 13; 14; 15; 16; 26; 27].
Vibrations of spherical nuclei can be described in the U(5) dynamical symmetry (DS) limit of the IBM, associated with the chain U(6) \(\supset\) U(5) \(\supset\) SO(5) \(\supset\) SO(3). The DS basis states \(|[N],n_{d},\tau,n_{\Delta},L\rangle\) have quantum numbers which are the labels of irreducible representations of the algebras in the chain. Here \(N\) is the total number of monopole (\(s\)) and quadrupole (\(d\)) bosons, \(n_{d}\) and \(\tau\) are
the \(d\)-boson number and seniority, respectively, \(L\) is the angular momentum and \(n_{\Delta}\) is a multiplicity label. The U(5)-DS Hamiltonian can be transcribed in the form
\[\hat{H}_{\rm DS}=\rho_{1}\hat{n}_{d}+\rho_{2}\hat{n}_{d}(\hat{n}_{d}-1)+\rho_{3}[-\hat{C}_{\rm SO(5)}+\hat{n}_{d}(\hat{n}_{d}+3)]+\rho_{4}[\hat{C}_{\rm SO(3)}-6\hat{n}_{d}]\, \tag{1}\]
where \(\hat{C}_{\rm G}\) denotes a Casimir operator of G, and \(\hat{n}_{d}=\sum_{m}d_{m}^{\dagger}d_{m}=\hat{C}_{\rm U(5)}\). \(\hat{H}_{\rm DS}\) is completely solvable with eigenstates \(|[N],n_{d},\tau,n_{\Delta},L\rangle\) and energies \(E_{\rm DS}=\rho_{1}n_{d}+\rho_{2}n_{d}(n_{d}-1)+\rho_{3}(n_{d}-\tau)(n_{d}+\tau+3)+\rho_{4}[L(L+1)-6n_{d}]\). The U(5)-DS spectrum is that of a spherical vibrator with states arranged in \(n_{d}\)-multiplets, the lowest ones being \((n_{d}=0,L=0),\ (n_{d}=1,L=2),\ (n_{d}=2,L=4,2,0),\ (n_{d}=3,L=6,4,3,0,2)\) at energies \(E(n_{d})\approx n_{d}\,E(n_{d}\!=\!1)\). The \(E2\) operator in the IBM is proportional to
\[\hat{Q}_{\chi}=d^{\dagger}s+s^{\dagger}\tilde{d}+\chi(d^{\dagger}\tilde{d})^{( 2)}\, \tag{2}\]
where \(\tilde{d}_{m}\!=\!(-1)^{m}d_{-m}\). It is customary in the U(5)-DS limit to set \(\chi=0\), which results in vanishing quadrupole moments and strong \((n_{d}+1\!\to\!n_{d})\)\(E2\) transitions with particular ratios, _e.g._, \(\frac{B(E2;\,n_{d}\!+\!1,L^{\prime}\!=\!2n_{d}\!+\!2\,\to\,n_{d},L\!=\!2n_{d})}{B(E2;\,n_{d}\!=\!1,L\!=\!2\,\to\,n_{d}\!=\!0,L\!=\!0)}=(n_{d}+1)\frac{(N-1)}{N}\).
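For concreteness, inserting \(n_{d}=2\) into the eigenvalue expression above gives, for the two-phonon triplet,
\[E_{\rm DS}(n_{d}\!=\!2,\tau\!=\!2,L\!=\!4)=2\rho_{1}+2\rho_{2}+8\rho_{4},\qquad E_{\rm DS}(n_{d}\!=\!2,\tau\!=\!2,L\!=\!2)=2\rho_{1}+2\rho_{2}-6\rho_{4},\]
\[E_{\rm DS}(n_{d}\!=\!2,\tau\!=\!0,L\!=\!0)=2\rho_{1}+2\rho_{2}+10\rho_{3}-12\rho_{4},\]
so the splitting within a multiplet is governed by \(\rho_{3}\) and \(\rho_{4}\), and the harmonic pattern \(E(n_{d})\approx n_{d}\,E(n_{d}\!=\!1)\) is recovered when \(\rho_{2},\rho_{3},\rho_{4}\) are small compared to \(\rho_{1}\).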
The empirical spectrum of \({}^{112}\)Cd, shown in Fig. 1, consists of both normal and intruder levels, the latter based on 2p-4h proton excitations across the \(Z\!=\!50\) shell gap. At first sight, the normal states seem to follow the expected pattern of spherical-vibrator \(n_{d}\)-multiplets. The measured \(E2\) rates support this view for the majority of normal states, however, selected non-yrast states (shown in red in Fig. 1) reveal marked deviations from this behavior. Specifically, the \(0^{+}_{3}\) and \(2^{+}_{4}\) states in \({}^{112}\)Cd (denoted in Table 1 by \(0^{+}_{\alpha}\) and \(2^{+}_{\alpha}\)) which in the U(5)-DS classification are members of the \(n_{d}=2\) and \(n_{d}=3\) multiplets, respectively, have unusually small \(E2\) rates for the transitions \(0^{+}_{\alpha}\to 2^{+}_{1}\) and \(2^{+}_{\alpha}\to 2^{+}_{2}\), and large rates for \(0^{+}_{\alpha}\to 2^{+}_{2}\), at variance with the U(5)-DS predictions. Absolute \(B(E2)\) values for transitions from the \(0^{+}_{4}\) state are not known, but its branching ratio to the \(2^{+}_{2}\) state is small. As shown in Table 1, the same unexpected decay patterns occur in all \({}^{110-116}\)Cd isotopes and comprise the so called "Cd problem" [25]. We are thus confronted with a situation in which some states in the spectrum obey the predictions of U(5)-DS, while other states do not. These empirical findings suggest the presence of a PDS, as demonstrated for \({}^{110}\)Cd in [21]. In what follows, we show that the same U(5)-PDS approach is relevant also to the other Cd isotopes.
To describe both normal and intruder states, we adopt the interacting boson model with configuration mixing (IBM-CM) [29; 30], widely used to study shape
Figure 1: Experimental energy levels in keV of \({}^{112}\)Cd [28]. Normal states are marked in black or in red if their \(E2\) decays deviate from those of a spherical vibrator. Intruder states are marked in blue.
coexistence in nuclei [31; 32; 33; 34; 35]. The Hamiltonian is written as
\[\hat{H}=\hat{H}^{(N)}_{\rm PDS}+\hat{H}^{(N+2)}_{\rm intrud}+\hat{V}^{(N,N+2)}_{ \rm mix}\, \tag{3}\]
where the superscript \((N)\) denotes a projection onto a space of \(N\) bosons. Here \(\hat{H}^{(N)}_{\rm PDS}\) represents the normal configuration (\(N\) boson space), \(\hat{H}^{(N+2)}_{\rm intrud}\) represents the intruder configuration (\(N\)+2 boson space) and \(\hat{V}^{(N,N+2)}_{\rm mix}\) a mixing term. Explicit forms are given by
\[\hat{H}_{\rm PDS}=\hat{H}_{\rm DS}+r_{0}\,G_{0}^{\dagger}G_{0}+e_{0}\,(G_{0}^{\dagger}K_{0}+K_{0}^{\dagger}G_{0})\, \tag{4a}\]
\[\hat{H}_{\rm intrud}=\kappa\hat{Q}_{\chi}\cdot\hat{Q}_{\chi}+\kappa^{\prime}\hat{L}\cdot\hat{L}+\Delta\, \tag{4b}\]
\[\hat{V}_{\rm mix}=\alpha\left[(s^{\dagger})^{2}+(d^{\dagger}d^{\dagger})^{(0)}\right]+{\rm H.c.}\, \tag{4c}\]
where \(\hat{H}_{\rm DS}\) is the U(5)-DS Hamiltonian of Eq. (1), \(G_{0}^{\dagger}\!\!=\!\![(d^{\dagger}d^{\dagger})^{(2)}d^{\dagger}]^{(0)}\), \(K_{0}^{\dagger}\!\!=\!\!s^{\dagger}(d^{\dagger}d^{\dagger})^{(0)}\), \(\hat{Q}_{\chi}\) is given in Eq. (2) and H.c. means Hermitian conjugate. As shown in [21], \(\hat{H}_{\rm PDS}\) has U(5)-PDS in the sense that it breaks the U(5) symmetry, yet maintains a subset of U(5)-DS basis states \(|n_{d}=\tau,\tau,n_{\Delta}=0,L\rangle\) with \(L\!\!=\!\!\tau,\tau+1,\ldots,2\tau-2,2\tau\), as solvable eigenstates. Henceforth, we refer to this special subset of states as class-A states. The eigenstates \(|\Psi;L\rangle\) of \(\hat{H}\), Eq. (3), involve normal (\(\Psi_{n}\)) and intruder (\(\Psi_{i}\)) components in the \([N]\) and \([N+2]\) boson spaces,
\[|\Psi;L\rangle=a\,|\Psi_{n};[N],L\rangle+b\,|\Psi_{i};[N+2],L\rangle\ \, \tag{5}\]
with \(a^{2}+b^{2}\!=\!1\). The \(E2\) operator in the IBM-CM reads
\[\hat{T}(E2)=e_{B}^{(N)}\,\hat{Q}_{\chi_{n}}^{(N)}+e_{B}^{(N+2)}\,\hat{Q}_{\chi }^{(N+2)}\, \tag{6}\]
with boson effective charges \(e_{B}^{(N)}\) and \(e_{B}^{(N+2)}\).
The parameters of \(\hat{H}\) (3) and \(\hat{T}(E2)\) (6) are determined by a combined fit to the spectra and \(E2\) transitions for the normal states \((2_{1}^{+},4_{1}^{+},2_{2}^{+},6_{1}^{+})\) and \((0_{\alpha}^{+},2_{\alpha}^{+})\), and for the lowest \((0^{+},2^{+})\) intruder states in each isotope. As shown in Fig. 2, the extracted parameters are fairly constant and vary smoothly as a function of neutron number. Notable exceptions are \(\rho_{1}\) whose decrease reflects the lowering of the \(2_{1}^{+}\) state, and \(\Delta\), which together with the \(\kappa\) term in \(\hat{H}_{\rm intrud}\) controls the lowering of the intruder levels towards mid-shell (neutron number 66), where boson particles are replaced by boson holes.
Figure 3: Comparison between selected experimental (left panels) and calculated (right panels) energy levels in MeV and \(E2\) transition rates in W.u.
Figure 2: Parameters of the IBM-CM Hamiltonian, Eq. (3), in MeV and of the \(E2\) operator, Eq. (6), with \(e_{B}^{(N)}\), \(e_{B}^{(N+2)}\) in \(\sqrt{\rm W.u.}\), while \(\chi_{n}\!=\!-0.7\) and \(\chi\!=\!-0.09\) are dimensionless. The boson numbers in the (normal, intruder) configurations are \((N,N\!+\!2)\) with \(N=7,8,9\) and \(\bar{N}=8\) (hole bosons) for neutron numbers 62, 64, 66, and 68, respectively.
An IBM-CM calculation has been performed for spectral properties of states in \({}^{110-116}\)Cd, with energies up to 4 MeV. A detailed account will be given in a forthcoming longer publication [36]. Here we focus on the main features which are relevant to the subject matter of this Letter, namely, the vibrational interpretation and symmetry aspects of these isotopes. The assignment of states as normal or intruder, is based on their measured \(E2\) decays when available, or on their calculated probabilities, \(a^{2}\) and \(b^{2}\), in Eq. (5).
As shown in Fig. 3 and Table 1, the U(5)-PDS calculation of spectra and \(E2\) rates provides a good description of the empirical data in \({}^{110-116}\)Cd. It yields the same \(B(E2)\) values as those of U(5)-DS for class-A states and reproduces correctly the \(E2\) transitions involving the \((0^{+}_{\alpha},2^{+}_{\alpha})\) states which deviate considerably from the U(5)-DS predictions. The origin of these features is revealed from Table 2, which shows for eigenfunctions of \(\hat{H}\) (3), the percentage of the wave function within the normal configuration [the probability \(a^{2}\) of \(\Psi_{n}\) in Eq. (5)] and the dominant \(n_{d}\) component in \(\Psi_{n}\) and its probability.
The class-A states are dominated by the normal component \(\Psi_{n}\) (large \(a^{2}\!\geq\!90\%\)), implying a weak mixing (small \(b^{2}\)) with the intruder states. The \(6^{+}_{1}\) state experiences a larger mixing consistent with its enhanced decay to the lowest \(4^{+}\) intruder state. The \((0^{+}_{\alpha},2^{+}_{\alpha})\) states are more susceptible to such mixing but still retain the dominance of \(\Psi_{n}\) (\(a^{2}\!\sim\!70\%\)). For both types of states the normal-intruder mixing increases with \(L\) for a given isotope, and increases towards mid-shell (\({}^{114}\)Cd), correlated with the decrease in energy of intruder states.
The class-A states possess good U(5) quantum numbers to a good approximation. Their \(\Psi_{n}\) part involves a single \(n_{d}\) component with probability \(P_{n_{d}}\geq 90\%\), as in U(5)-DS. In contrast, the structure of the non-yrast \((0^{+}_{\alpha},2^{+}_{\alpha})\) states changes dramatically. Specifically, the \(\Psi_{n}\) parts of the \(0^{+}_{\alpha}\) and \(2^{+}_{\alpha}\) states, which in the U(5)-DS classification have \(n_{d}=2\) and \(n_{d}=3\), now have dominant components with \(n_{d}=3\) and \(n_{d}=4\), respectively. The change \(n_{d}\mapsto(n_{d}+1)\) ensures weak (\(\Delta n_{d}=2\)) transitions from these states to class-A states, but secures strong \(2^{+}_{\alpha}\to 0^{+}_{\alpha}\) (\(\Delta n_{d}=1\)) transitions, in agreement with the data. While the class-A and \((0^{+}_{\alpha},2^{+}_{\alpha})\) states are predominantly spherical, the intruder states are members of a single deformed band with a characteristic \(\gamma\)-soft spectrum, shown in Fig. 3, and wavefunctions exhibiting a broad \(n_{d}\)-distribution and a pronounced SO(6) symmetry \(\sigma=N\!+\!2\).
The PDS-CM describes the data very well, but there are a few exceptions and remaining concerns. The observed quadrupole moments \(Q(2^{+}_{1})\) and \(B(E2;2^{+}_{2}\to 0^{+}_{1})\), shown in Table 1, are larger than the predicted values which, in turn, depend sensitively on the choice of \(\chi_{n}\) in Eq. (6). Larger values for these observables (which involve class-A states) can be accommodated by adding U(5) symmetry-breaking terms to the Hamiltonian. The \((0^{+}_{\alpha},2^{+}_{\alpha})\) states are predominantly \(n_{d}=(3,4)\). A relevant question [19] is where their partner states with \(n_{d}=(2,3)\), which should have enhanced decays to states with \(n_{d}=(1,2)\), are located. The observed \(0^{+}_{4}\) state, shown in Fig. 3, has a dominant branching to the intruder \(2^{+}_{3}\) state, hence does not match the properties expected for an \(n_{d}=2\) state. This may indicate a different structure for the \(0^{+}_{4}\) state (_e.g._, a 4p-6h proton excitation as speculated in [17]), although fragmentation of \(E2\) strength cannot be ruled out. In \({}^{110}\)Cd, the state \(2^{+}_{8}(2633)\) has a large \(B(E2;2^{+}_{8}\to 4^{+}_{1})=25^{+4}_{-5}\) W.u. [28], as expected for a \((n_{d}=3)\rightarrow(n_{d}=2)\) transition. More data is needed to shed light on this issue.
The vibrational interpretation proposed here is at variance with the microscopic BMF calculation of Refs. [18, 19] advocating multiple shape coexistence in \({}^{110,112}\)Cd, with states arranged in deformed rotational bands. Specifically, the states \(0^{+}_{1},2^{+}_{2},0^{+}_{3},0^{+}_{4}\), of \({}^{112}\)Cd, shown in Fig. 1, serve as bandheads for the ground, \(\gamma\) and two excited \(K\!=\!0\) bands, and \(0^{+}_{2},2^{+}_{5}\) are bandheads for intruder and intruder-\(\gamma\) bands. Similar assignments were suggested for \({}^{110}\)Cd. The BMF-based approach is parameter-free and provides a qualitative description of \({}^{110,112}\)Cd, but with noticeable shortcomings. In particular, the predicted energies are generally overestimated, and in-band \(B(E2)\) values and quadrupole moments are
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline & \multicolumn{2}{c}{\({}^{110}\)Cd} & \multicolumn{2}{c}{\({}^{112}\)Cd} & \multicolumn{2}{c}{\({}^{114}\)Cd} & \multicolumn{2}{c}{\({}^{116}\)Cd} \\ \cline{2-13} \(L^{+}_{k}\) & \(a^{2}\,(\%)\) & \([(n_{d})\) & \(P_{n_{d}}(\%)]\) & \(a^{2}\,(\%)\) & \([(n_{d})\) & \(P_{n_{d}}(\%)]\) & \(a^{2}\,(\%)\) & \([(n_{d})\) & \(P_{n_{d}}(\%)]\) & \(a^{2}\,(\%)\) & \([(n_{d})\) & \(P_{n_{d}}(\%)]\) \\ \hline \(0^{+}_{1}\) & 98.23 & [(0) & 98.22 ] & 97.94 & [(0) & 97.92 ] & 97.98 & [(0) & 97.95 ] & 98.27 & [(0) & 98.25 ] \\ \(2^{+}_{1}\) & 96.38 & [(1) & 96.36 ] & 95.10 & [(1) & 95.05 ] & 95.28 & [(1) & 95.22 ] & 96.84 & [(1) & 96.81 ] \\ \(4^{+}_{1}\) & 90.73 & [(2) & 90.69 ] & 83.19 & [(2) & 83.03 ] & 83.05 & [(2) & 82.87 ] & 92.95 & [(2) & 92.91 ] \\ \(2^{+}_{2}\) & 89.81 & [(2) & 89.74 ] & 81.62 & [(2) & 81.28 ] & 78.77 & [(2) & 78.33 ] & 91.31 & [(2) & 91.25 ] \\ \(6^{+}_{1}\) & 71.18 & [(3) & 71.09 ] & 42.92 & [(3) & 42.53 ] & 39.46 & [(3) & 38.98 ] & 79.34 & [(3) & 79.27 ] \\ \(0^{+}_{\alpha}\) & 70.75 & [(3) & 70.46 ] & 71.13 & [(3) & 69.54 ] & 71.55 & [(3) & 70.79 ] & 74.34 & [(3) & 74.14 ] \\ \(2^{+}_{\alpha}\) & 68.34 & [(4) & 66.07 ] & 65.89 & [(4) & 62.83 ] & 40.78 & [(4) & 40.13 ] & 55.68 & [(4) & 54.73 ] \\ \hline \end{tabular}
\end{table}
Table 2: Normal-intruder mixing and U(5) structure of the wavefunctions \(|\Psi,L\rangle\), Eq. (5), of selected eigenstates of \(\hat{H}\), Eq. (3). Shown are the probability \((a^{2})\) of the normal part \(\Psi_{n}\), the dominant \(n_{d}\) component in \(\Psi_{n}\) and its probability \(P_{n_{d}}\).
greater than observed, reflecting too large deformations in the calculated states. A detailed comparison between the BMF-based approach and the current PDS-based approach will be given in [36], with a view that ultimately, comparison with data should be the basis to accept or refute a model. One possible signature that can distinguish between the two approaches is to measure the value of \(B(E2;4_{2}^{+}\to 3_{1}^{+})\), which is expected to be small (large) in the PDS (BMF) approach, where the indicated states are in the same \(n_{d}\)-multiplet (in the same \(\gamma\) band).
To summarize, consistent with the empirical data, we have shown that a vibrational interpretation and good U(5) symmetry are maintained for the majority of low-lying normal states, coexisting with a single deformed band of intruder states in the \({}^{110,112,114,116}\)Cd isotopes. The observed deviations from this paradigm are properly treated by a Hamiltonian which breaks the U(5) symmetry in selected non-yrast states, while keeping the mixing with intruder states weak. The results demonstrate, for the first time, the relevance of a partial dynamical symmetry (PDS) to a series of isotopes, and set the path for implementing a similar PDS-based approach to other regions of the nuclear chart, where a prescribed collective structure paradigm holds for a segment of the spectrum.
This work was supported, in part, (A.L. and N.G.) by the Israel Science Foundation and (J.E.G.R.) by project PID2019-104002GB-C21 funded by MCIN/AEI/10.13039/50110001103 and "ERDF A way of making Europe".
|
2309.09291 | OSmosis: No more Déjà vu in OS isolation | Operating systems provide an abstraction layer between the hardware and
higher-level software. Many abstractions, such as threads, processes,
containers, and virtual machines, are mechanisms to provide isolation. New
application scenarios frequently introduce new isolation mechanisms.
Implementing each isolation mechanism as an independent abstraction makes it
difficult to reason about the state and resources shared among different tasks,
leading to security vulnerabilities and performance interference. We present
OSmosis, an isolation model that expresses the precise level of resource
sharing, a framework in which to implement isolation mechanisms based on the
model, and an implementation of the framework on seL4. The OSmosis model lets
the user determine the degree of isolation guarantee that they need from the
system. This determination empowers developers to make informed decisions about
isolation and performance trade-offs, and the framework enables them to create
mechanisms with the desired degree of isolation. | Sidhartha Agrawal, Reto Achermann, Margo Seltzer | 2023-09-17T14:58:33Z | http://arxiv.org/abs/2309.09291v1 | # Osmosis: No more Deja vu in OS isolation
###### Abstract.
Operating systems provide an abstraction layer between the hardware and higher-level software. Many abstractions, such as threads, processes, containers, and virtual machines, are mechanisms to provide isolation. New application scenarios frequently introduce new isolation mechanisms. Implementing each isolation mechanism as an independent abstraction makes it difficult to reason about the state and resources shared among different tasks, leading to security vulnerabilities and performance interference.
We present _OSmosis_, an isolation model that expresses the precise level of resource sharing, a framework in which to implement isolation mechanisms based on the model, and an implementation of the framework on seL4. The _OSmosis_ model lets the user determine the degree of isolation guarantee that they need from the system. This determination empowers developers to make informed decisions about isolation and performance trade-offs, and the framework enables them to create mechanisms with the desired degree of isolation.
## 1. Introduction
From the moment that more than one person wanted to use a computer at the same time (some 60 years ago), the systems community has developed myriad of techniques to facilitate safe multiplexing. The community continues to struggle with how to provide the right degree of sharing and isolation for a given application and its users [3, 5, 7, 11, 12, 14, 16, 18, 21, 22, 25]. Even more problematic is that there is no clear understanding of the isolation levels provided by different mechanisms. Perhaps more fundamentally, given an isolation mechanism, it is not immediately clear what the application state consists of and thus, what parts of the application's state are shared with or isolated from other applications. While some application state is known (e.g., heap, code, data, and less obvious the kernel), there exists a significant amount of unknown state that the application is inadvertently sharing with other applications (e.g., system-level services (Section 2.2)). Worse, we lack a common vocabulary to describe an application's resources, including its known and unknown software state. This has led to many problems ranging from performance anomalies due to unintentional sharing and overheads from too much isolation to security vulnerabilities caused by unintentional sharing of known and unknown software state [17, 31]
We claim that there is a need for a principled way to talk about isolation and sharing, and a framework upon which to build implementations. Our hypothesis is that all OS mechanisms can be described as a set of **resources** and the **relationships** describing dependencies among them. Resources can be the virtual memory an application uses, the files to which it has access, the OS state it can query, etc. Resources lie on a sharing spectrum ranging from wholly shared to completely isolated. The metric that defines this spectrum is the distance to the first common resource found in the resource relationships of two entities (i.e., processes, containers, virtual machines). For example, two threads are on the shared end of the spectrum, because they share a virtual address space resource. Two processes running on two different virtual machines are more isolated, because the first resource they share is state maintained by a hypervisor.
We present _OSmosis_, which is composed of two parts. First, the _OSmosis_ model (Section 3.1) describes the types of entities in a system and how their sharing lies on a spectrum (Section 3.3). The model gives us a principled way to express isolation and sharing. Second, the _OSmosis_ framework (Section 4) describes the required OS mechanisms and tools for implementing the isolation model. The _OSmosis_ framework allows us to select a specific point in a high-dimensional space, where the different axes correspond to the different resources (e.g., physical memory, CPUs).
We have built a prototype system on the capabilities-based seL4 microkernel [13] that implements parts of our framework. We have built two existing and two new mechanisms with our framework. In contrast to existing implementations, we use the same set of building blocks for every mechanism.
_OSmosis_ lets us model the levels of isolation for each subsystem independently. For instance, if we are more concerned about attacks in the networking stack, we can give the networking stack stronger isolation than the file system stack. When subsystems are tightly coupled, (e.g., the virtual memory and file systems), increasing the isolation level for one raises the isolation level for the other, but this tight coupling does not exist between all subsystems.
With _OSmosis_, we can model the isolation requirements of a given application and then easily build it using the framework. This is especially exciting with the rise of serverless architectures, where the simple choice between VM and container has become significantly more complicated, and myriad new container/VM hybrids emerge regularly [9, 26, 27, 28, 29, 30]. There is no one-size-fits-all, and for a given application, one might want to pick different degrees of isolation/sharing between applications running on the same machine.
Motivation
New mechanisms are often motivated by one of: the emergence of a new use case, improving the performance of an existing use case, or defending against a security vulnerability. However, the solutions always use isolation as a tool. For example, they reserve resources (memory, storage, CPU time) (Krishnan et al., 2017), restrict access to unneeded state (kernel) (Krishnan et al., 2017), or share underlying state (drivers) (Bogorian et al., 2016; Krishnan et al., 2017) to improve performance. Similarly, they increase the isolation of resources or underlying state to build defenses. Given the importance of varying isolation, it is useful to have a clear understanding of which resources and state are shared among applications.
### New use cases
The systems community has developed many different isolation mechanisms in the last decade (Bogorian et al., 2016; Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2019), and each provides a slightly different degree of isolation for a different resource, such as virtual memory, open files, performance, etc. Shreds (Bogorian et al., 2016), Secure Memory Views (SMV) (Krishnan et al., 2017), and Light Weight Contexts (LwC) (Krishnan et al., 2018) focus on providing compartmentalization within an address space, while LwCs also provide compartmentalization of some kernel state (e.g., file descriptor table) within the same process.
Whenever a new scenario arises, a solution is built to fit it, but there is no principled way to describe and implement these solutions, making it difficult to formally distinguish different solutions from one another. With new paradigms such as Function-as-a-Service, we see many more mechanisms arise (Krishnan et al., 2017; Krishnan et al., 2017). Some restrict access to underlying state (Krishnan et al., 2017), not needed by the function. Others run multiple functions in the same process and use additional mechanisms to create intra-address-space isolation (Krishnan et al., 2017). Although these mechanisms provide incremental isolation levels, their implementations are not incremental. A new implementation is prone to bugs since it cannot take advantage of years of testing on existing mechanisms (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2019).
Furthermore, some organizations do not have the engineering resources to develop a new mechanism from scratch, so applications are retrofitted into existing mechanisms. Application developers might use a mechanism that provides weaker isolation than desired, leaving them vulnerable to exploits. Alternatively, they might use a mechanism with overly strong isolation and pay more for their deployment in a shared cloud environment.
### Unintentional resource sharing
The lack of clarity about the extent of sharing between two applications is also a source of security vulnerabilities. Even when applications appear isolated, such as in the case of a container, they still share kernel state. Since namespaces do not isolate all the visible state in the kernel, some state still leaks across container boundaries (e.g., the open file table) leading to denial-of-service attacks on other applications on the same host (Krishnan et al., 2019). Additionally, since the container infrastructure and the kernel run in the same address space, a simple buffer overflow in one part of the kernel can bring down the shared kernel and both containers. Lightweight VMs such as FireCracker (Bogorian et al., 2016) and KataContainers (Krishnan et al., 2017) are more secure alternatives to containers, providing the security of VMs with the overhead of containers. However, they achieve this performance by having the host OS provide functionality (e.g., drivers) for all VMs instead of the guest OS. Unfortunately, this leads to more shared state in the host kernel. Just as a shared kernel exposed issues with state leakage (for containers), shared drivers in the host kernel can do the same (for VMs).
## 3. _OSmosis_ Isolation Model
We now present the _OSmosis_ isolation model, the types of queries possible on the model, and how they lead to a precise definition of the isolation spectrum.
### Model
Listing 1 shows the _OSmosis_ model. It consists of a _system_ and a _resource relation_. A system consists of a set of _protection domains_ and a set of _resources_. Protection domains (PD) correspond to active entities (e.g., processes, threads, virtual machines). A PD has a set of resources and a _resource directory_. Resources are passive entities and can be either physical (e.g., RAM, CPUs or devices) or virtual (e.g., virtual memory region, file, socket). Physical resources all derive from tangible elements; virtual resources can be created by a PD. Both types of resources can be partitioned into smaller resources. The resource directory is a dictionary, keyed by a resource, that identifies the PD responsible for satisfying a request for a resource that the current PD does not possess. For example, a user-level process wanting to allocate some memory will call mmap() to request more virtual memory resources. This corresponds to a lookup in the PD's resource directory for virtual memory resources and then requesting more virtual memory from the PD to which that resource maps.
```
System={pds:Set<PD>,res:Set<Resource>}ResourceRelation::ResourcexResourcePD={res:Set<Resource>,rdir:ResourceDirectory}Resource=VirtualResource|PhysicalResourcevirtualResource=VirtualMemory,File,... PhysicalResource=RAM,Blocks,NIC,... ResourceDirectoryResource>=PDforaresourceListing 1.OSmosis Isolation Model
```
The _resource relation_ describes dependencies between two resources. Each traversal of the relation is called a hop. There are three types of resource relations. The first is due to the system topology and does not change. For example, the contents of DRAM may be loaded in the processor caches or sent
over the memory bus. The second type is added by system software. For example, the page table keeps track of which virtual memory pages are mapped to a physical page. And finally, a resource depends on the underlying resource from which it was allocated. For example, a virtual page depends on the virtual address space (resource) from which it was allocated.
We show the flexibility of our model by describing five scenarios in Fig. 1: 1) two threads in a process, 2) two threads with isolated stacks, 3) two processes, 4) a unikernel and a process, 5) a virtual machine and a process - all running on the same monolithic OS (e.g., Linux). In this example, we focus only on memory resources, but the concepts apply to all types of resources. The patterned boxes indicate where the sharing begins, and the ovals indicate the number of hops at which the sharing happens. Resources A and B represent the stack resource. Two threads both have each other's stacks in their protection domain ( Fig. 1 (a)). In 'threads with isolated stacks' (Fig. 1 (b)), each thread has access only to its own stack. However, they are still allocated from the same address space. Two processes (Fig. 1 (c)) have separate address spaces, but their virtual address space (VAS) data structures in \(PD_{0}\) depend on the kernel heap. In the case of a unikernel (Fig. 1 (d)), although there are additional levels of abstraction, address space management and the application are in the same PD. In the case of a guestOS (i.e., virtual machine), a process running on the VM is in a separate PD (Fig. 1 (e)).
None of the PDs for threads, threads with isolated stacks, or processes have direct access to physical resources. Instead, when they need physical memory (i.e., on a page fault), the Resource Directory indicates that \(PD_{0}\) (i.e., the operating system) will handle requests for physical memory. In contrast, the guest OS handles such requests from the process running inside the virtual machine, while the host OS handles requests from the hypervisor and its native process (P1). When \(PD_{0}\) maps a physical page to a virtual page, conceptually, it adds an entry to the resource relation, even though in implementation, this information is recorded in a page table.
### Queries
We now define queries on the model to extract information about PDs, their resources, and most importantly the relationship among resources in different PDs. In the next section, we show how to use this information to define the _isolation spectrum_.
**NHopResources:** The _resource relation_ lists the possible "one-hop" dependencies of a resource. However, as we saw in the discussion of virtual addresses and processor caches, there may be multiple levels of dependencies. Thus, to identify all the resources on which a specific resource depends, we compute the _n-hop transitive-reflexive_ closure for n=INFINITY on the _resource relation_.
\[\text{{NHopResources}}::\mathbb{N}\Rightarrow\text{{Set-Resource}} \Rightarrow\text{{Set-Resource}}\] \[\text{{NHopResources}}::n\ R=\bigcup_{r\in R}\text{{ResourceRelation}}^{n}\]
Referring back to Fig. 1(c), consider the stack virtual memory region (\(A\)). The stack's one-hop closure includes the VAS(in \(PD_{0}\)); The VAS depends on the heap virtual memory resource of \(PD_{0}\) from which its metadata was allocated; its two-hop closure includes cache sets (assuming virtually indexed caches) and the physical pages that have been allocated to the virtual memory region.
### Isolation Spectrum
Using the model and its queries, we can now define different forms of isolation as points on a _spectrum_ and thus, we can quantify how isolated two PDs are from each other.
**NHopResourcesOfPD:** We derive the n-hop resources of a PD by computing the NHopResource function on the PD's resources unioned with the \((n-1)\)-hop computation of NHopResourcesOfPD for each PD in the resource directory.
\[\text{{NHopResourcesOfPD}}::\mathbb{N}\Rightarrow\text{{PD}} \Rightarrow\text{{Set-Resource}}\] \[\text{{NHopResourcesOfPD}}::n\ pd=(\text{{NHopResources}}\ n\ pd.res)\ \cup\] \[\bigcup_{p\in pd.\mathit{drir.o.}\mathit{values}}(\text{{NHopResourcesOfPD} }\ (n-1)\ p)\]
Note, this includes both the resources currently accessible as well as those to which it may acquire access in the future.
**NHopShared:** We can now express the degree of sharing between two PDs by examining the intersection of the sets produced by NHopResourcesOfPD for any values of n.
\[\text{{NHopShared}}::\mathbb{N}\Rightarrow\mathbb{N}\Rightarrow\text{{PD}} \Rightarrow\text{{PD}}\Rightarrow\text{{Set-Resource}}\] \[\text{{NHopShared}}::n_{1}\ n_{2}\ pd_{1}\ pd_{2}=\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \
subject to an exclusion set, \(\delta\). The exclusion set \(\delta\) is the set of resources the application does not care about. When two processes do not care about sharing a cache or the file system, we add those resources to \(\gamma\). We say that a PD is _NHopIsolated in the system_ if it is _NHopIsolated_ with every other PD.
\[\begin{split}\text{{NHopIsolated}}:&\exists\mathbf{ \Rightarrow}\mathbb{N}\Rightarrow\\ \text{{Set-Resource}}&\Rightarrow\text{{PD}} \Rightarrow\text{{PD}}\Rightarrow\text{{bool}}\\ \text{{NHopIsolated}}:& n_{1}\;n_{2}\;\delta\;pd_{1}\;pd_{2 }=\\ \text{{NHopShared}}& n_{1}\;n_{2}\;pd_{1}\;pd_{2} \subseteq\delta\end{split}\]
**IsolationLevel:** Given two PDs, we define _Isolation Level_ between them as the number of hops at which sharing begins. We find the minimum value of \(n_{1}\) or \(n_{2}\) for which _NHopIsolated_ is false. Taking the minimum of the tuple ensures proper accounting for asymmetric configurations. In Fig. 1(e) the isolation level is \(2\) derived from \(min(4,2)\), which means that sharing starts at \(2\) hops from at least one of the PDs.
\[\begin{split}\text{{IsolationLevel}}::\text{{PD}}\Rightarrow \text{{PD}}\Rightarrow\text{{Set-Resource}}\Rightarrow\mathbb{N}\\ \text{{IsolationLevel}}::\;pd_{1}\;pd_{2}\;\delta=n\;|\;\forall n _{1}n_{2}\\ \neg\text{{NHopIsolated}}\;n_{1}\;n_{2}\;\delta\;pd_{1}\;pd_{2} \Longrightarrow\;n\;\leq min(n_{1},n_{2})\end{split}\]
One could argue that it's more useful to instead view isolation level asymmetrically, i.e. from the perspective of each PD as opposed to between PDs. Both perspectives have merit and we believe that more experience in implementing various configurations will shed insight into which is more useful. In either case, the model provides all the information necessary to engage in this debate, and in fact, without the model, such debates cannot happen.
## 4. _OSmosis_ Framework
We now map the model described in the previous section to the functionality required to realize it.
### Unified API
Our model enables a unified API that can create any type of PD (inspired by the posix_spawn in Linux (Linux, 2018)). _newPD_ takes a set of _resources_ and a _resource directory_. By default (i.e., if the resource directory is empty), a PD directs requests for resources not given during its creation to the PD that created it (e.g., processes redirect those requests to the OS).
Listing 2 shows the pseudo code to create a normal process in _OSmosis_. Since a process runs in a separate address space, we first create a new virtual address space resource. The load call reads a binary file and allocates from the vas resource to create code, heap, and stack resources.
There are instances where the new PD is a close replica of an existing PD, and the pseudo code in Listing 2 can be cumbersome. Taking inspiration from clone (Krishnan, 2018), _clonePD_ creates a new PD by calling an _isolation function_ that defines how each resource and resource directory entry is shared with its parent (or another PD) before calling _newPD_. It is not fundamental to our model or framework, but it is the syntactic sugar that makes the model easier to use. We have written a few _isolation functions_, for example, one that creates a process or a thread with a slightly different address space and one where the resource directory entries for different types of resources are different PDs. We can also define _isolation functions_ that set up resources for the new PD based on the degrees of isolation \((n_{1},n_{2})\) for each resource and a \(\delta\). We envision having a suite of such template functions in a userspace library.
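To make the shape of this API concrete, here is a minimal sketch of an isolation function plugged into _clonePD_; the helper names (`clone_resource`, `newPD`) and the treatment of a PD's resources as a name-to-resource mapping are illustrative assumptions, not the framework's actual interface.

```python
def copy_on_write_heap(parent_pd):
    """Illustrative isolation function: share every resource with the parent except the
    heap, which is cloned so that heap contents stay private to the new PD."""
    resources = {name: res for name, res in parent_pd.res.items() if name != "heap"}
    resources["heap"] = clone_resource(parent_pd.res["heap"])  # assumed helper
    directory = dict(parent_pd.dir)  # unresolved resource requests still go to the same PDs
    return resources, directory

def clonePD(parent_pd, isolation_fn):
    """clonePD as syntactic sugar: the isolation function decides what is shared, then newPD."""
    resources, directory = isolation_fn(parent_pd)
    return newPD(resources, directory)  # newPD assumed to be provided by the framework

# child = clonePD(parent, copy_on_write_heap)
```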
### Determining resource relations
The resource relation (Section 3.1) makes it possible to determine what underlying resources are shared. The resource relation captures all dependencies between resources and thus can become quite large. Yet, much of the information that is conceptually part of the resource relation is already present in the system. For example, Linux provides the sysfs, procfs, dev file systems, which describe system topology; page tables store virtual address to physical address dependencies. Dependencies between resources and the PDs that allocated them are implied. Designing an API to query the resource relation is straightforward; building an efficient implementation of that query API is a more interesting problem. Fortunately, queries of the resource relation are not on the performance critical path of normal operation.
## 5. Implementation
We have implemented a small portion of _OSmosis_, dealing with memory resources, using the capabilities-based microkernel seL4 (Krishnan, 2018; Krishnan, 2018). We chose this microkernel as it has no existing abstractions for processes, containers, or virtual machines. This lack of existing abstractions allows us to define the building blocks as we see fit. _Capabilities_ and _capability spaces_ (Brands, 2018) map well to _OSmosis_'s resources and protection domains. In seL4, the capability space is modeled as a tree, and, by sharing parts of the subtree with other protection domains, we implement the sharing of resources amongst PDs.
```
directory = currentResourceDirectory();
vas = newVAS();
// Get code, stack, heap, and vCPU resources
code, heap, stack = load(vas, "binary");
vCPU = newVCPU();
resources = code, heap, stack, vCPU;
// Create PD
pdID = newPD(resources, directory);
```

**Listing 2**. Process creation in _OSmosis_
## 6. Discussion and Use cases
_OSmosis_ enables us to explore the space of isolation mechanisms in a principled way. We discuss how the _model_ enables us to reason about isolation and the _framework_ lets us build new abstractions quickly.
### Comparing Isolation level between PDs
Viewing the systems as a collection of resources and relations enables us to define queries on the model state that can be used to precisely compare the level of isolation between two PDs. For instance, if we take the transitive closure of the resource relations starting at a PD, we get a set of all the resources on which a PD depends. Alternatively, we can restrict the number of relations to traverse (i.e., hops) to a small number and see how many resources two PDs share for a given value of hops, e.g., what is the set of resources that are shared in the 3-hop radii of two PDs. If a pair of PDs share fewer resources at a given number of hops than another pair of PDs, we can say that the former pair is more isolated than the latter.
### Existing and New Mechanisms
In Section 3.1, we showed with some examples that _OSmosis_ is rich enough to capture existing mechanisms. For example, unikernels are similar to virtual machines in many respects, but the distinction between them is clearer in _OSmosis_. The application and kernel belong to the same PD, whose resources and resource directory are a subset of the union of a conventional virtual machine and process implementation. Similarly, building slight variations of existing mechanisms is trivial. For instance, to build processes that operate on a separate set of physical pages, _OSmosis_ assigns different resource directory entries (with a disjoint set of pages) to the PDs of those two processes. Lightweight contexts (LwC) are equally straightforward; each LwC is a separate PD, but the various PDs share only the necessary resources, e.g., virtual memory and files.
### Viewing Isolation as spectrum
With _OSmosis_, it is possible to provide different isolation levels for different resources. By varying \(n_{1}\), \(n_{2}\), and \(\delta\) in _NHopIsolated_ (Section 3.3), we show that there exists a vast high-dimensional space of isolation primitives created by assigning different isolation levels to different resources. When deploying a new PD in a shared cloud environment, the operator can vary these three parameters against other trusted and untrusted PDs. For a given deployment, the operator can use _IsolationLevel_ to determine the degree of isolation between two untrusting PDs. For example, if a new threat is discovered in the networking stack, _OSmosis_ enables the deployment engineer to run just the networking stack with an additional isolation level until the vulnerability is patched.
## 7. Conclusion
We identify the problem that there is a lack of understanding about the level of isolation and sharing provided by different isolation mechanisms. Additionally, the plethora of isolation mechanisms do not share an underlying framework, which makes it challenging to build new mechanisms.
We present the _OSmosis_ model, which lets us reason about isolation between applications in a principled way. It lets us model precisely which parts of the known and unknown software state are shared between applications. We then present the _OSmosis_ framework, designed to realize the model by identifying the essential building blocks.
Finally, we show how _OSmosis_ lets us model and build new and existing mechanisms that are precisely tailored to the user's isolation requirements, and view the isolation of resources as a spectrum.
## Acknowledgments
We would like to thank Thomas Pasquier, Sam Leffler, and George Neville-Neil for providing feedback on our earlier draft.
|
2309.10918 | Posterior Contraction Rates for Matérn Gaussian Processes on
Riemannian Manifolds | Gaussian processes are used in many machine learning applications that rely
on uncertainty quantification. Recently, computational tools for working with
these models in geometric settings, such as when inputs lie on a Riemannian
manifold, have been developed. This raises the question: can these intrinsic
models be shown theoretically to lead to better performance, compared to simply
embedding all relevant quantities into $\mathbb{R}^d$ and using the restriction
of an ordinary Euclidean Gaussian process? To study this, we prove optimal
contraction rates for intrinsic Mat\'ern Gaussian processes defined on compact
Riemannian manifolds. We also prove analogous rates for extrinsic processes
using trace and extension theorems between manifold and ambient Sobolev spaces:
somewhat surprisingly, the rates obtained turn out to coincide with those of
the intrinsic processes, provided that their smoothness parameters are matched
appropriately. We illustrate these rates empirically on a number of examples,
which, mirroring prior work, show that intrinsic processes can achieve better
performance in practice. Therefore, our work shows that finer-grained analyses
are needed to distinguish between different levels of data-efficiency of
geometric Gaussian processes, particularly in settings which involve small data
set sizes and non-asymptotic behavior. | Paul Rosa, Viacheslav Borovitskiy, Alexander Terenin, Judith Rousseau | 2023-09-19T20:30:58Z | http://arxiv.org/abs/2309.10918v3 | # Posterior Contraction Rates for Matern Gaussian Processes on Riemannian Manifolds
###### Abstract
Gaussian processes are used in many machine learning applications that rely on uncertainty quantification. Recently, computational tools for working with these models in geometric settings, such as when inputs lie on a Riemannian manifold, have been developed. This raises the question: can these intrinsic models be shown theoretically to lead to better performance, compared to simply embedding all relevant quantities into \(\mathbb{R}^{d}\) and using the restriction of an ordinary Euclidean Gaussian process? To study this, we prove optimal contraction rates for intrinsic Matern Gaussian processes defined on compact Riemannian manifolds. We also prove analogous rates for extrinsic processes using trace and extension theorems between manifold and ambient Sobolev spaces: somewhat surprisingly, the rates obtained turn out to coincide with those of the intrinsic processes, provided that their smoothness parameters are matched appropriately. We illustrate these rates empirically on a number of examples, which, mirroring prior work, show that intrinsic processes can achieve better performance in practice. Therefore, our work shows that finer-grained analyses are needed to distinguish between different levels of data-efficiency of geometric Gaussian processes, particularly in settings which involve small data set sizes and non-asymptotic behavior.
## 1 Introduction
Gaussian processes provide a powerful way to quantify uncertainty about unknown regression functions via the formulation of Bayesian learning. Motivated by applications in the physical and engineering sciences, a number of recent papers [11; 9; 10; 37] have studied how to extend this model class to spaces with geometric structure, in particular Riemannian manifolds including important special cases such as spheres and Grassmannians [4], hyperbolic spaces and spaces of positive definite matrices [5], as well as general manifolds approximated numerically by a mesh [11].
These Riemannian Gaussian process models are starting to be applied for statistical modeling, and decision-making settings such as Bayesian optimization. For example, in a robotics setting, Jaquier et al. [27] has shown that using Gaussian processes with the correct geometric structure allows one to learn quantities such as the orientation of a robotic arm with less data compared to baselines. The same model class has also been used by Coveney et al. [15] to perform Gaussian process regression on a manifold which models the geometry of a human heart for downstream applications in medicine.
Given these promising empirical results, it is important to understand whether these learning algorithms have good theoretical properties, as well as their limitations. Within the Bayesian framework, a natural way to quantify data-efficiency and generalization error is to posit a data-generating mechanism and study whether--and how fast--the posterior distribution concentrates around the true regression function as the number of observations goes to infinity.
Within the Riemannian setting, it is natural to compare _intrinsic_ methods, which are formulated directly on the manifold of interest, with _extrinsic_ ones, which require one to embed the manifold within a higher-dimensional Euclidean space. For example, the two-dimensional sphere can be embedded into the Euclidean space \(\mathbb{R}^{3}\): intrinsic Gaussian processes model functions on the sphere while extrinsic ones model functions on \(\mathbb{R}^{3}\), which are then restricted to the sphere. Are the former more efficient than the latter? Since embeddings--even isometric ones--at best only preserve distances locally, they can induce spurious dependencies, as points can be close in the ambient space but far away with respect to the intrinsic geodesic distance: this is illustrated in Figure 1. In cases where embeddings significantly alter distances, one can expect intrinsic models to perform better, and it is therefore interesting to quantify such differences.
In other settings, the manifold on which the data lies can be unknown, which makes using intrinsic methods directly no longer possible. There, one would like to understand how well extrinsic methods can be expected to perform. According to the _manifold hypothesis_[18], it is common for perceptual data such as text and images to concentrate on a lower-dimensional submanifold within, for instance, pixel space or sequence space. It is therefore also interesting to investigate how Gaussian process models--which, being kernel-based, are simpler than for instance deep neural networks--perform in such scenarios, at least in the asymptotic regime.
In this work, we develop geometric analogs of the Gaussian process posterior contraction theorems of van der Vaart and van Zanten [56]. More specifically, we derive posterior contraction rates for three main geometric model classes: (1) the intrinsic Riemannian Matern Gaussian processes, (2) truncated versions of the intrinsic Riemannian Matern Gaussian processes, which are used in practice to avoid infinite sums, and (3) the extrinsic Euclidean Matern Gaussian processes under the assumption that the data lies on a compact Riemannian manifold. In all cases, we focus on IID randomly-sampled input points--commonly referred to as _random design_ in the literature--and contraction in the sense of the \(L^{2}(p_{0})\) distance, defined in Section 2. We focus on _compact_ Riemannian manifolds: this allows one to define Matern Gaussian processes through their Karhunen-Loeve expansions, which requires a discrete spectrum for the Laplace-Beltrami operator--see for instance Borovitskiy et al. [11] and Chavel [13], Chapter 1--and is a common setting in statistics [39].
**Contributions.** We show that all three classes of Gaussian processes lead to optimal procedures, in the minimax sense, as long as the smoothness parameter of the kernel is aligned with the regularity of the unknown function. While this result is natural--though non-trivial--in the case of intrinsic Matern processes, it is rather remarkable that it also holds for extrinsic ones. This means that in order to understand their differences better, finite-sample considerations are necessary. We therefore present experiments that compute the worst-case errors numerically. These experiments highlight that intrinsic models are capable of achieving better performance in the small-data regime. We conclude with a discussion of why these results--which might at first seem counterintuitive--are very natural
when viewed from an appropriate mathematical perspective: they suggest that optimality is perhaps best seen as a basic property or an important guarantee that any sensible model should satisfy.

Figure 1: Samples from different Matern Gaussian processes on different manifolds, namely a one-dimensional dumbbell-shaped manifold and a two-dimensional sphere. Notice that the values across the dumbbell's bottleneck can be very different for the intrinsic process in (a), despite being very close in the ambient Euclidean distance and in contrast to the situation for the extrinsic model in (b). On the other hand, there is little qualitative difference between (c) and (d), since the embedding produces a reasonably-good global approximation to geodesic distances on the sphere.
## 2 Background
_Gaussian process regression_ is a Bayesian approach to regression where the modeling assumptions are \(y_{i}=f(x_{i})+\varepsilon_{i}\), with \(\varepsilon_{i}\sim\mathrm{N}(0,\sigma_{\varepsilon}^{2})\), \(x_{i}\in X\), and \(f\) is assigned a Gaussian process prior. A _Gaussian process_ is a random function \(f:X\to\mathbb{R}\) for which all finite-dimensional marginal distributions are multivariate Gaussian. The distribution of such a process is uniquely determined by its _mean function_\(m(\cdot)=\mathbb{E}(f(\cdot))\) and _covariance kernel_\(k(\cdot,\cdot^{\prime})=\mathrm{Cov}(f(\cdot),f(\cdot^{\prime}))\), hence we write \(f\sim\mathrm{GP}(m,k)\).
For Gaussian process regression, the posterior distribution given the data is also a Gaussian process with probability kernel \(\Pi(\cdot\mid\mathbf{x},\mathbf{y})=\mathrm{GP}(m_{\Pi(\cdot\mid\mathbf{x},\mathbf{y})},k_{ \Pi(\cdot\mid\mathbf{x},\mathbf{y})})\), see Rasmussen and Williams [41],
\[m_{\Pi(\cdot\mid\mathbf{x},\mathbf{y})}(\cdot)=\mathbf{K}_{(\cdot)\mathbf{x }}(\mathbf{K}_{\mathbf{x}\mathbf{x}}+\sigma_{\varepsilon}^{2}\mathbf{I})^{-1}\mathbf{y}, \tag{1}\] \[k_{\Pi(\cdot\mid\mathbf{x},\mathbf{y})}(\cdot,\cdot^{\prime})=\mathbf{K} _{(\cdot,\cdot^{\prime})}-\mathbf{K}_{(\cdot)\mathbf{x}}(\mathbf{K}_{\mathbf{x}\mathbf{x} }+\sigma_{\varepsilon}^{2}\mathbf{I})^{-1}\mathbf{K}_{\mathbf{x}(\cdot^{\prime})}. \tag{2}\]
These quantities describe how incorporating data updates the information contained within the Gaussian process. We will be interested in studying the case where \(X\) is a Riemannian manifold, but first review the existing theory on the asymptotic behaviour of the posterior when \(X=[0,1]^{d}\).
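For concreteness, (1)-(2) amount to a few lines of linear algebra; the sketch below assumes an arbitrary covariance function `kernel` and makes no attempt at numerical refinements such as Cholesky factorization or jitter.

```python
import numpy as np

def gp_posterior(kernel, X, y, X_test, noise_var):
    """Posterior mean and covariance of a zero-mean GP at test inputs X_test, per (1)-(2)."""
    K_xx = kernel(X, X) + noise_var * np.eye(len(X))   # K_xx + sigma_eps^2 I
    K_tx = kernel(X_test, X)
    K_tt = kernel(X_test, X_test)
    mean = K_tx @ np.linalg.solve(K_xx, y)
    cov = K_tt - K_tx @ np.linalg.solve(K_xx, K_tx.T)
    return mean, cov
```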
### Posterior Contraction Rates
Posterior contraction results describe how the posterior distribution concentrates around the true data generating process, as the number of observations increases, so that it eventually uncovers the true data-generating mechanism. The area of _posterior asymptotics_ is concerned with understanding conditions under which this does or does not occur, with questions of _posterior contraction rates_--how fast such convergence occurs--being of key interest. At present, there is a well-developed literature on posterior contraction rates, see Ghosal and van der Vaart [20] for a review.
In the context of Gaussian process regression with _random design_, which is the focus of this paper, the true data generating process is assumed to be of the form
\[y_{i}\mid x_{i}\sim\mathrm{N}(f_{0}(x_{i}),\sigma_{\varepsilon}^{2}) x_{i}\sim p_{0} \tag{3}\]
where \(f_{0}\in\mathcal{F}\subseteq\mathbb{R}^{X}\), a class of real-valued functions, and \(\mathrm{N}(\mu,\sigma^{2})\) denotes the Gaussian with moments \(\mu,\sigma^{2}\). Note that, in this particular variant, these equations exactly mirror those of the Gaussian process model's likelihood, including the use of the same noise variance \(\sigma_{\varepsilon}^{2}\) in both cases: in this paper, we focus on the particular case where \(\sigma_{\varepsilon}\) is known in advance. This setting is restrictive; one can extend it to an unknown \(\sigma_{\varepsilon}>0\) using techniques that are not specific to our geometric setting: for instance, the approach of [55] allows one to handle an unknown \(\sigma_{\varepsilon}\), provided one assumes upper and lower bounds on it, while keeping the same contraction rates. In practice, more general priors, including ones that do not assume an upper or lower bound on \(\sigma_{\varepsilon}\), can be used, such as a conjugate one like in Banerjee [6]--these can also be analyzed to obtain contraction rates, albeit with additional considerations. The generalization error for prediction in such models is strongly related to the _weighted \(L^{2}\) loss_ given by
\[\left\|f-f_{0}\right\|_{L^{2}(p_{0})}=\left(\int_{X}\lvert f(x)-f_{0}(x) \rvert^{2}\,\mathrm{d}p_{0}(x)\right)^{1/2} \tag{4}\]
which is arguably the natural way of measuring discrepancy between \(f\) and \(f_{0}\), given the fact that the covariates \(x_{i}\) are sampled from \(p_{0}\). The posterior contraction rate is then defined as
\[\mathbb{E}_{\mathbf{x},\mathbf{y}}\,\mathbb{E}_{f\sim\Pi(\cdot\mid\mathbf{x},\mathbf{y})}\|f-f _{0}\|_{L^{2}(p_{0})}^{2} \tag{5}\]
where \(\mathbb{E}_{f\sim\Pi(\cdot\mid\mathbf{x},\mathbf{y})}(\cdot)\) denotes expectation under the posterior distribution while \(\mathbb{E}_{\mathbf{x},\mathbf{y}}(\cdot)\) denotes expectation under the true data generating process.1 In the case of covariates distributed on \([0,1]^{d}\), posterior contraction rates have been derived under Matern Gaussian process priors [47] in van der Vaart and van Zanten [56], who showed the following result.
Footnote 1: Note that other notions of posterior contraction can be found in the literature, see Ghosal and van der Vaart [20] and Rousseau [42] for examples that are slightly weaker than the definition we work with.
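Since draws from \(p_{0}\) are exactly what the random design provides, the weighted \(L^{2}\) distance in (4) is most naturally estimated by Monte Carlo; the small sketch below is only meant to make the loss concrete and assumes a sampler for \(p_{0}\) is available.

```python
import numpy as np

def weighted_l2_distance(f, f0, sample_p0, num_samples=10_000):
    """Monte Carlo estimate of ||f - f0||_{L^2(p0)} from draws of the covariate distribution."""
    xs = sample_p0(num_samples)   # assumed sampler returning covariates x ~ p0
    return np.sqrt(np.mean((f(xs) - f0(xs)) ** 2))
```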
**Result 1** (Theorem 2 of van der Vaart and van Zanten [56]).: _In the Bayesian regression model, let \(f\) be a mean-zero Matern Gaussian process prior on \(\mathbb{R}^{d}\) with amplitude \(\sigma_{f}^{2}\), length scale \(\kappa\), and smoothness \(\nu>d/2\). Assume that the true data generating process is given by (3), where \(p_{0}\) has a Lebesgue density on \(X=[0,1]^{d}\) which is bounded from below and above by \(0<c_{p_{0}}<C_{p_{0}}<\infty\), respectively. Let \(f_{0}\in H^{\beta}\cap\mathcal{CH}^{\beta}\) with \(\beta>d/2\), where \(H^{\beta}\) and \(\mathcal{CH}^{\beta}\) the Sobolev and Holder spaces, respectively. Then there exists a constant \(C>0\), which does not depend on \(n\) but does depend on \(d\), \(\sigma_{f}^{2}\), \(\nu\), \(\kappa\), \(\beta\), \(p_{0}\), \(\sigma_{e}^{2}\), \(\|f_{0}\|_{H^{\beta}(\mathcal{M})}\), and \(\|f_{0}\|_{\mathcal{CH}^{\beta}(\mathcal{M})}\), such that_
\[\mathbb{E}_{\mathbf{x},\mathbf{y}}\,\mathbb{E}_{f\sim\Pi(\cdot|\mathbf{x},\mathbf{y})}\|f-f_{0 }\|_{L^{2}(p_{0})}^{2}\leq Cn^{-\frac{2\min(\beta,\nu)}{2\nu+d}} \tag{6}\]
_and, moreover, the posterior mean satisfies_
\[\mathbb{E}_{\mathbf{x},\mathbf{y}}\,\|m_{\Pi(\cdot|\mathbf{x},\mathbf{y})}-f_{0}\|_{L^{2}(p_{ 0})}^{2}\leq Cn^{-\frac{2\min(\beta,\nu)}{2\nu+d}}. \tag{7}\]
Note that \(m_{\Pi(\cdot|\mathbf{x},\mathbf{y})}\) is the Bayes estimator [52] of \(f\) associated to the weighted \(L^{2}\) loss and that the second inequality above is a direct consequence of the first. Therefore the posterior contraction rate implies the same convergence rate for \(m_{\Pi(\cdot|\mathbf{x},\mathbf{y})}\). The best rate is attained when \(\beta=\nu\): that is, when true smoothness and prior smoothness match--which is known to be minimax optimal in the problem of estimating \(f_{0}\): see Tsybakov [52]. In this paper, we extend this result to the manifold setting.
### Related Work and Current State of Affairs
The formalization of posterior contraction rates of Bayesian procedures dates back to the work of Schwartz [46] and Le Cam [29], but has been extensively developed since the seminal paper of Ghosal et al. [19] for various sampling and prior models, see for instance [20; 42] for reviews. This includes, in particular, work on Gaussian process priors [54; 56; 57; 43; 49]. Most of the results in the literature, however, assume Euclidean data: as a consequence, contraction properties of Bayesian models under manifold assumptions are still poorly understood, with exception of some recent developments in both density estimation [7; 8; 60] and regression [63; 60].
The results closest to ours are those of Yang and Dunson [63] and Castillo et al. [12]. In the former, the authors use an extrinsic length-scale-mixture of squared exponential Gaussian processes to achieve optimal contraction rates with respect to the weighted \(L^{2}\) norm, using a completely different proof technique compared to us, and their results are restricted to \(f_{0}\) having Holder smoothness of order less than or equal to two. On the other hand Castillo et al. [12] consider, as an intrinsic process on the manifold, a hierarchical Gaussian process based on its heat kernel and provide posterior contraction rates. For the Matern class, Li et al. [30] presents results which characterize the asymptotic behavior of kernel hyperparameters: our work complements these results by studying contraction of the Gaussian process itself toward the unknown ground-truth function. One can also study analogous discrete problems: Dunson et al. [17] and Sanz-Alonso and Yang [45] present posterior contraction rates for a specific graph Gaussian process model in a semi-supervised setting. In the next section, we present our results on Matern processes, defined either by restriction of an ambient process or by an intrinsic construction, and discuss their implications.
## 3 Posterior Contraction Rates on Compact Riemannian Manifolds
We now study posterior contraction rates for Matern Gaussian processes on manifolds, which are arguably the most-widely-used Gaussian process priors in both the Euclidean and Riemannian settings. We begin by more precisely describing our geometric setting before stating our key results and discussing their implications. From now on, we write \(X=\mathcal{M}\), to emphasize that the covariate space is a manifold.
**Assumption 2**.: _Assume that \(\mathcal{M}\subset\mathbb{R}^{D}\) is a smooth, compact submanifold (without boundary) of dimension \(d<D\) equipped with the standard Riemannian volume measure \(\mu\)._
We denote \(|\mathcal{M}|=\int_{\mathcal{M}}d\mu(x)\) for volume of \(\mathcal{M}\). With this geometric setting defined, we will need to describe regularity assumptions in terms of functional spaces on the manifold \(\mathcal{M}\). We work with Holder spaces \(\mathcal{CH}^{\gamma}(\mathcal{M})\), defined using charts via the usual Euclidean Holder spaces, the Sobolev spaces \(H^{s}(\mathcal{M})\), and Besov spaces \(B^{s}_{\infty,\infty}(\mathcal{M})\) which are one of the ways of generalizing
the Euclidean Holder spaces of smooth functions to manifolds. We follow Coulhon et al. [14] and Castillo et al. [12], and define these spaces using the Laplace-Beltrami operator on \(\mathcal{M}\) in Appendix A.
Recall that the data-generating process is given by (3), with \(f_{0}\) as the true regression function and \(p_{0}\) as the distribution of the covariates.
**Assumption 3**.: _Assume that \(p_{0}\) is absolutely continuous with respect to \(\mu\), and that its density, denoted by \(p_{0}\), satisfies \(c\leq p_{0}\leq C\) for \(0<c,C<\infty\). Assume the regression function \(f_{0}\) satisfies \(f_{0}\in H^{\beta}(\mathcal{M})\cap B^{\beta}_{\infty,\infty}(\mathcal{M})\) for some \(\beta>d/2\), and that \(\sigma_{\varepsilon}^{2}>0\) is fixed and known._
This setting can be extended to handle unknown variance \(\sigma_{\varepsilon}\) by putting a prior on \(\sigma_{\varepsilon}\), following the strategy of Salomond [44] and Naulet and Barat [36]. Since we are focused primarily on the impact of the manifold, we do not pursue this here. With the setting fully defined, we proceed to develop posterior contraction results for different types of Matern Gaussian process priors: intrinsic, intrinsic truncated and extrinsic.
### Intrinsic Matern Gaussian Processes
We now introduce the first geometric Gaussian process prior under study--the Riemannian Matern kernel of Whittle [62], Lindgren et al. [33], and Borovitskiy et al. [11]. This process was originally defined using stochastic partial differential equations: here, we present it by its Karhunen-Loeve expansion, to facilitate comparisons with its truncated analogs presented in Section 3.2.
**Definition 4** (Intrinsic Matern prior).: _Let \(\nu>0\), and let \((\lambda_{j},f_{j})_{j\geq 0}\) be the eigenvalues and orthonormal eigenfunctions of the Laplace-Beltrami operator on \(\mathcal{M}\), in increasing order. Define the intrinsic Riemannian Matern Gaussian process through its Karhunen-Loeve expansion to be_
\[f(\cdot)=\frac{\sigma_{f}^{2}}{C_{\nu,\kappa}}\sum_{j=1}^{\infty}\biggl(\frac{2\nu}{\kappa^{2}}+\lambda_{j}\biggr)^{-\frac{\nu+d/2}{2}}\xi_{j}f_{j}(\cdot)\qquad\qquad\xi_{j}\sim\mathrm{N}(0,1) \tag{8}\]
_where \(\nu,\kappa,\sigma_{f}^{2}\) are positive parameters and \(C_{\nu,\kappa}\) is the normalization constant, chosen such that \(\frac{1}{|\mathcal{M}|}\int_{M}\mathrm{Var}(f(x))\,\mathrm{d}\mu(x)=\sigma_{f} ^{2}\), where \(\mathrm{Var}\) denotes the variance._
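Given precomputed Laplace-Beltrami eigenpairs (for example, from a mesh), a truncated version of this expansion and the corresponding kernel matrix can be sketched as follows; the rescaling approximates the stated normalization \(\frac{1}{|\mathcal{M}|}\int_{\mathcal{M}}\mathrm{Var}(f(x))\,\mathrm{d}\mu(x)=\sigma_{f}^{2}\) by averaging over the mesh vertices, and the code is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def truncated_matern_kl(eigvals, eigfuns, nu, kappa, sigma_f2, dim, rng):
    """One sample and the kernel matrix of the (truncated) intrinsic Matern GP of Definition 4.

    eigvals: (J,) Laplace-Beltrami eigenvalues; eigfuns: (num_vertices, J) eigenfunctions
    evaluated at mesh vertices, assumed L2(mu)-orthonormal.
    """
    weights = (2.0 * nu / kappa**2 + eigvals) ** (-(nu + dim / 2.0) / 2.0)
    var_at_vertices = (eigfuns**2) @ (weights**2)       # unnormalized Var f(x_v)
    scale2 = sigma_f2 / var_at_vertices.mean()          # plays the role of (sigma_f^2 / C)^2
    xi = rng.standard_normal(len(eigvals))              # xi_j ~ N(0, 1)
    sample = np.sqrt(scale2) * eigfuns @ (weights * xi)
    kernel = scale2 * (eigfuns * weights**2) @ eigfuns.T # k(x_v, x_w) on the mesh vertices
    return sample, kernel
```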
The covariance kernels of these processes are visualized in Figure 2. With this prior, and the setting defined in Section 3, we are ready to present our first result: this model attains the desired optimal posterior contraction rate as soon as the regularity of the ground-truth function matches the regularity of the Gaussian process, as described by the parameter \(\nu\).
**Theorem 5**.: _Let \(f\) be a Riemannian Matern Gaussian process prior of Definition 4 with smoothness parameter \(\nu>d/2\) and let \(f_{0}\) satisfy Assumption 3. Then there is a \(C>0\) such that_
\[\mathbb{E}_{\mathbf{x},\mathbf{y}}\,\mathbb{E}_{f\sim\Pi(\cdot|\mathbf{x}, \mathbf{y})}\|f-f_{0}\|_{L^{2}(\mathrm{p_{0}})}^{2}\leq Cn^{-\frac{2\min(\beta,\nu )}{2\nu+d}}. \tag{9}\]
All proofs are given in Appendix B. Our proof follows the general approach of van der Vaart and van Zanten [56], by first proving a contraction rate with respect to the distance \(n^{-1/2}\|f(\mathbf{x})-f_{0}(\mathbf{x})\|_{\mathbb{R}^{n}}\) at input locations \(\mathbf{x}\), and then extending the result to the true \(L^{2}\)-distance by applying a suitable concentration inequality. The first part is obtained by studying the _concentration function_, which is known to be the key quantity to control in order to derive contraction rates of Gaussian process priors--see Ghosal and van der Vaart [20] and van der Vaart and van Zanten [57] for an overview.
Figure 2: Different Matern kernels \(k(\bullet,x)\) on different manifolds.
Given our regularity assumptions on \(f_{0}\), the most difficult part lies in controlling the small-ball probabilities \(\Pi\big{[}\big{\|}f\big{\|}_{\mathcal{C}(\mathcal{M})}<\varepsilon\big{]}\): we handle this by using results relating this quantity with the entropy of an RKHS unit ball with respect to the uniform norm. Since our process' RKHS is related to the Sobolev space \(H^{\nu+d/2}(\mathcal{M})\) which admits a description in terms of charts, we apply results on the entropy of Sobolev balls in the Euclidean space to conclude the first part. Finally, to extend the rate to the true \(L^{2}(p_{0})\) norm, following van der Vaart and van Zanten [56], we prove a Holder-type property for manifold Matern processes, and apply Bernstein's inequality. Together, this gives the claim.
This result is good news for the intrinsic Matern model: it tells us that asymptotically it incorporates the data as efficiently as possible at least in terms of posterior contraction rates, given that its regularity matches the regularity of \(f_{0}\). An inspection of the proof shows that the constant \(C>0\) can be seen to depend on \(d\),\(\sigma_{f}^{2}\), \(\nu\), \(\kappa\), \(\beta\), \(p_{0}\),\(\sigma_{e}^{2}\), \(\|f_{0}\|_{H^{\beta}(\mathcal{M})}\), \(\|f_{0}\|_{B^{\beta}_{\infty\infty}(\mathcal{M})}\), and \(\|f_{0}\|_{\mathcal{C}\mathcal{H}^{\beta}(\mathcal{M})}\). Theorem 5 extends to the case where the norm is raised to any power \(q>1\) rather than the second power, with the right-hand side raised to the same power: see Appendix B for details. We now consider variants of this prior that can be implemented in practice.
### Truncated Matern Gaussian Processes
The Riemannian Matern prior's covariance kernel cannot in general be computed exactly, since Definition 4 involves an infinite sum. Arguably the simplest way to implement these processes numerically is to truncate the respective infinite series in the Karhunen-Loeve expansion by taking the first \(J\) terms, which is also optimal in an \(L^{2}(\mathcal{M})\)-sense.
Note that the truncated prior is a randomly-weighted finite sum of Laplace-Beltrami eigenfunctions, which have different smoothness properties compared to the original prior: the truncated prior takes its values in \(\mathcal{C}^{\infty}(\mathcal{M})\) since the eigenfunctions of \(\mathcal{M}\) are smooth--see for instance De Vito et al. [16]. Nevertheless, if the truncation level is allowed to grow as the sample size increases, then the regularity of the process degenerates and one gets a function with essentially-finite regularity in the limit.
Truncated random basis expansions have been studied extensively in the Bayesian literature in the Euclidean setting--see for instance Arbel et al. [2] and Yoo et al. [64] or Ghosal and van der Vaart [20], Chapter 11 for examples with priors based on wavelet expansions. It is known that truncating the expansion at a high enough level usually allows one to retain optimality. Instead of truncating deterministically, it is also possible to put a prior on the truncation level and resort to MCMC computations which would then select the optimal number of basis functions adaptively, at the expense of a more computationally intensive method--this is done, for instance, in van der Meulen et al. [53] in the context of drift estimation for diffusion processes. Random truncation has been proven to lead in many contexts to adaptive posterior contraction rates, meaning that although the prior does not depend on the smoothness \(\beta\) of \(f_{0}\), the posterior contraction rate--up to possible \(\ln n\) terms--is of order \(n^{-\beta/(2\beta+d)}\): see for instance Arbel et al. [2] and Rousseau and Szabo [43].
By analogy of the Euclidean case with its random Fourier feature approximations [40], we can call the truncated version of Definition 4 the _manifold Fourier feature_ model, for which we now present our result.
**Theorem 6**.: _Let \(f\) be a Riemannian Matern Gaussian process prior on \(\mathcal{M}\) with smoothness parameter \(\nu>d/2\), modified to truncate the infinite sum to at least \(J_{n}\geq cn^{\frac{d(\min(1,\nu/\beta))}{2\nu+d}}\) terms, and let \(f_{0}\) satisfy Assumption 3. Then there is a \(C>0\) such that_
\[\mathbb{E}_{\boldsymbol{x},\boldsymbol{y}}\,\mathbb{E}_{f\sim\Pi(\cdot| \boldsymbol{x},\boldsymbol{y})}\|f-f_{0}\|_{L^{2}(p_{0})}^{2}\leq Cn^{-\frac{2 \min(\beta,\nu)}{2\nu+d}}. \tag{10}\]
The proof is essentially-the-same as the non-truncated Matern, but involves tracking dependence of the inequalities on the truncation level \(J_{n}\), which implicitly defines a sequence of priors rather than a single fixed prior.
This result is excellent news for the intrinsic models: it means that they inherit the optimality properties of the limiting one, even in the absence of the infinite sum--in spite of the fact that the corresponding finite-truncation prior places its probability on \(\mathcal{C}^{\infty}(\mathcal{M})\). Again, the constant \(C>0\) can be seen to depend on \(d\),\(\sigma_{f}^{2}\), \(\nu\),\(\kappa\),\(\beta\),\(p_{0}\),\(\sigma_{e}^{2}\), \(\|f_{0}\|_{H^{\beta}(\mathcal{M})}\), \(\|f_{0}\|_{B^{\beta}_{\infty\infty}(\mathcal{M})}\), and \(\|f_{0}\|_{\mathcal{C}\mathcal{H}^{\beta}(\mathcal{M})}\). This concludes our results for the intrinsic Riemannian Matern priors. We now study what happens if, instead of working with a geometrically-formulated model, we simply embed everything into \(\mathbb{R}^{d}\) and formulate our models there.
### Extrinsic Matern Gaussian Processes
The results of the preceding sections provide good reason to be excited about the intrinsic Riemannian Matern prior: the rates it obtains match the usual minimax rates seen for the Euclidean Matern prior and Euclidean data, provided that we match the smoothness \(\nu\) with the regularity of \(f_{0}\). Another possibility is to consider an extrinsic Gaussian process, that is, a Gaussian process defined over an ambient space. This has been considered by Yang and Dunson [63] for instance for the square-exponential process, in an adaptive setting where one does not assume that the regularity \(\beta\) of \(f_{0}\) is explicitly known, but where \(\beta\leq 2\). In this section we prove a non-adaptive analog of this result for the Matern process.
**Definition 7** (Extrinsic Matern prior).: _Assume that the manifold \(\mathcal{M}\) is isometrically embedded in the Euclidean space \(\mathbb{R}^{D}\), such that we can regard \(\mathcal{M}\) as a subset of \(\mathbb{R}^{D}\). Consider the Gaussian process with zero mean and kernel given by restricting onto \(\mathcal{M}\) the standard Euclidean Matern kernel_
\[k_{\nu,\kappa,\sigma_{f}^{2}}(x,x^{\prime})=\sigma_{f}^{2}\frac{2^{1-\nu}}{ \Gamma(\nu)}\bigg{(}\sqrt{2\nu}\frac{\|x-x^{\prime}\|_{\mathbb{R}^{D}}}{\kappa }\bigg{)}^{\nu}K_{\nu}\bigg{(}\sqrt{2\nu}\frac{\|x-x^{\prime}\|_{\mathbb{R}^{D }}}{\kappa}\bigg{)} \tag{11}\]
_where \(\sigma_{f},\kappa,\nu>0\) and \(K_{\nu}\) is the modified Bessel function of the second kind [22]._
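Evaluating this prior requires nothing beyond the Euclidean Matern kernel (11) applied to the embedded coordinates; a sketch using SciPy's modified Bessel function might look as follows (the diagonal is set to \(\sigma_{f}^{2}\), the limiting value as the distance tends to zero).

```python
import numpy as np
from scipy.special import gamma, kv

def euclidean_matern_kernel(X, Y, nu, kappa, sigma_f2):
    """Matern kernel of (11) between points given by their ambient R^D coordinates."""
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    scaled = np.sqrt(2.0 * nu) * dists / kappa
    with np.errstate(invalid="ignore"):
        K = sigma_f2 * (2.0 ** (1.0 - nu) / gamma(nu)) * scaled**nu * kv(nu, scaled)
    K[scaled == 0.0] = sigma_f2   # limit as the distance tends to zero
    return K
```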
Since the extrinsic Matern process is defined in a completely agnostic way with respect to the manifold geometry, we would expect it to be less performant when \(\mathcal{M}\) is known. However, it turns out that the extrinsic Matern process converges at the same rate as the intrinsic one, as given in the following claim.
**Theorem 8**.: _Let \(f\) be a mean-zero extrinsic Matern Gaussian process prior with smoothness parameter \(\nu>d/2\) on \(\mathcal{M}\), and let \(f_{0}\) satisfy Assumption 3. Then for some \(C>0\) we have_
\[\mathbb{E}_{\mathbf{x},\mathbf{y}}\,\mathbb{E}_{f\sim\Pi(\cdot|\mathbf{x},\mathbf{y})}\|f-f_{ 0}\|_{L^{2}(p_{0})}^{2}\leq Cn^{-\frac{2\min(\beta,\nu)}{2\nu+d}}. \tag{12}\]
Theorem 8 is a surprising result because the optimal rates in this setting only require the knowledge of the regularity \(\beta\), but not the knowledge of the manifold or the intrinsic dimension. More precisely, the prior is not designed to be an adaptive prior, since it is a fixed Gaussian process, but it surprisingly adapts to the dimension of the manifold, and thus to the manifold.
The proof is also based on control of concentration functions. The main difference is that, although the ambient process has a well known RKHS--the Sobolev space \(H^{s+D/2}\big{(}\mathbb{R}^{D}\big{)}\)--the restricted process has a non-explicit RKHS, which necessitates further analysis. We tackle this issue by using results from Grosse and Schneider [24] relating manifold and ambient Sobolev spaces by linear bounded trace and extension operators, and from Yang and Dunson [63] describing a general link between the RKHS of an ambient process and its restriction. This allows us to show that the restricted process has an RKHS that is actually norm-equivalent to the Sobolev space \(H^{\nu+d/2}(\mathcal{M})\), which allows us to conclude the result in the same manner as in the intrinsic case.
As consequence, our argument applies _mutatis mutandis_ in any setting where suitable trace and extension theorems apply, with the Riemannian Matern case corresponding to the usual Sobolev results. In particular, our arguments therefore apply directly to other processes possessing similar RKHSs, such as for instance various kernels defined on the sphere--see e.g. Wendland [61], Chapter 17 and Hubbert et al. [26]. The constant \(C>0\) can be seen to depend on \(d\),\(D\),\(\sigma_{f}^{2}\), \(\nu\),\(\kappa\),\(\beta\),\(p_{0}\),\(\sigma_{\varepsilon}^{2}\), \(\|f_{0}\|_{H^{\beta}(\mathcal{M})}\),\(\|f_{0}\|_{B^{\beta}_{\text{\tiny{$\infty$}}}(\mathcal{M})}\),\(\|f_{0}\|_{\mathcal{CH}^{\beta}(\mathcal{M})}\)--notice that here \(C\) depends implicitly on \(D\) because of the presence of trace and extension operator continuity constants. We now proceed to understand the significance of the overall results.
### Summary of Results
As a consequence of our previous results, fixing a single common data generating distribution determined by \(p_{0},f_{0}\), under suitable conditions the intrinsic Matern process, its truncated version, and the extrinsic Matern process all possess the _same_ posterior contraction rate with respect to the \(L^{2}(p_{0})\)-norm, which depends on \(d\), \(\nu\), and \(\beta\), and is optimal if the regularities of \(f_{0}\) and the prior match. These results imply the following immediate corollary, which follows by convexity of \(\|\cdot\|_{L^{2}(p_{0})}^{2}\) using Jensen's inequality.
**Corollary 9**.: _Under the assumptions of Theorems 5, 6 and 8, it follows that, for some \(C>0\)_
\[\mathbb{E}_{\mathbf{x},\mathbf{y}}\left\|m_{\Pi(\cdot|\mathbf{x},\mathbf{y})}-f_{0}\right\|_{L^ {2}(p_{0})}^{2}\leq Cn^{-\frac{2\min(\beta,\nu)}{2\nu+d}} \tag{13}\]
_where \(m_{\Pi(\cdot|\mathbf{x},\mathbf{y})}\) is the posterior mean given a particular value of \(\left(x_{i},y_{i}\right)_{i=1}^{n}\)._
When \(\nu=\beta\), the optimality of the rates we present in the manifold setting can be easily inferred by lower bounding the \(L^{2}\)-risk of the posterior mean by the \(L^{2}\)-risk over a small subset of \(\mathcal{M}\) and using charts, which translates the problem into the Euclidean framework for which the rate is known to be optimal--see for instance Tsybakov [52].
To contextualize this, observe that even in cases where the geometry of the manifold is non-flat, the asymptotic rates are unaffected by the choice of the prior's length scale \(\kappa\)--in either the intrinsic, or the extrinsic case--but only by the smoothness parameter \(\nu\). Indeed, the RKHS of the process is only determined--up to norm equivalence--by \(\nu\), which plays an important role in the proofs. This, and the fact that extrinsic processes attain the same rates, implies that the study of asymptotic posterior contraction rates _cannot detect geometry_ in our setting, as was already hinted by Yang and Dunson [63]. Hence, in the geometric setting, optimal posterior contraction rates should be thought of more as a basic property that any reasonable model should satisfy. Differences in performance will be down to constant factors alone--but as we will see, these can be significant. To understand these differences, we turn to empirical analysis.
## 4 Experiments
From Theorems 5, 6 and 8, we know that intrinsic and extrinsic Gaussian processes exhibit the same posterior contraction rates in the asymptotic regime. Here, we study how these rates manifest themselves in practice, by examining how worst-case errors akin to those of Corollary 9 behave numerically. Specifically, we consider the pointwise worst-case error
\[v^{(\tau)}(t)=\sup_{\|f_{0}\|_{H^{\nu+d/2}}\leq 1}\mathbb{E}_{\varepsilon_{i}\sim\mathrm{N}(0,\sigma_{\varepsilon}^{2})}\big{|}m_{\Pi(\cdot|\mathbf{x},\mathbf{y})}^{(\tau)}(t)-f_{0}(t)\big{|}^{2} \tag{14}\]
where \(m_{\Pi(\cdot|\mathbf{x},\mathbf{y})}^{(\tau)}\) is the posterior mean corresponding to the zero-mean Matern Gaussian process prior with smoothness \(\nu\), length scale \(\kappa\), amplitude \(\sigma_{f}^{2}\), which is intrinsic if \(\tau=\mathrm{i}\) or extrinsic if \(\tau=\mathrm{e}\). We use a Gaussian likelihood with noise variance \(\sigma_{\varepsilon}^{2}\) and observations \(y_{i}=f_{0}(x_{i})+\varepsilon_{i}\), and examine this quantity as a function of the evaluation location \(t\in\mathcal{M}\). By allowing us to assess how error varies in different regions of the manifold, this provides us with a fine-grained picture of how posterior contraction behaves.
One can show that \(v^{(\tau)}\) may be computed without numerically solving an infinite-dimensional optimization problem. Specifically, (14) can be calculated, in the respective intrinsic and extrinsic cases, using
\[v^{(\mathrm{i})}(t)=k^{(\mathrm{i})}(t,t)-\mathbf{K}^{(\mathrm{i})}_{t\mathbf{X}}\Big{(}\mathbf{K}^{(\mathrm{i})}_{\mathbf{X}\mathbf{X}}+\sigma_{\varepsilon}^{2}\mathbf{I}\Big{)}^{-1}\mathbf{K}^{(\mathrm{i})}_{\mathbf{X}t} \tag{15}\] \[v^{(\mathrm{e})}(t)\approx(\mathbf{K}^{(\mathrm{i})}_{t\mathbf{X}^{\prime}}-\mathbf{\alpha}_{t}\mathbf{K}^{(\mathrm{i})}_{\mathbf{X}\mathbf{X}^{\prime}})(\mathbf{K}^{(\mathrm{i})}_{\mathbf{X}^{\prime}\mathbf{X}^{\prime}})^{-1}(\mathbf{K}^{(\mathrm{i})}_{\mathbf{X}^{\prime}t}-\mathbf{K}^{(\mathrm{i})}_{\mathbf{X}^{\prime}\mathbf{X}}\mathbf{\alpha}_{t}^{\top})+\sigma_{\varepsilon}^{2}\mathbf{\alpha}_{t}\mathbf{\alpha}_{t}^{\top} \tag{16}\]
where, for the extrinsic case, \(\mathbf{\alpha}_{t}=\mathbf{K}^{(\mathrm{e})}_{t\mathbf{X}}(\mathbf{K}^{(\mathrm{e})}_{\mathbf{X}\mathbf{X}}+\sigma_{\varepsilon}^{2}\mathbf{I})^{-1}\), and \(\mathbf{X}^{\prime}\) is a set of points sampled uniformly from the manifold \(\mathcal{M}\), the size of which determines approximation quality. The intrinsic expression is simply the posterior variance \(k^{(\mathrm{i})}_{\Pi(\cdot|\mathbf{x},\mathbf{y})}(t,t)\), and its connection with worst-case error is a well-known folklore result mentioned somewhat implicitly in, for instance, Mutny and Krause [35]. The extrinsic expression is very closely related, and arises by numerically approximating a certain RKHS norm. A derivation of both is given in Appendix F. To assess the approximation error of this formula, we also consider an analog of (16) but instead defined for the intrinsic model, and compare it to (15): in all cases, the difference between the exact and approximate expression was found to be smaller than differences between models. By computing these expressions, we therefore obtain, up to numerics, the pointwise worst-case expected error in our regression model.
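For reference, (15)-(16) can be transcribed directly; the sketch below assumes callables `k_int` and `k_ext` that return intrinsic and extrinsic kernel matrices between two sets of points, and is only meant to show how the pieces fit together rather than to reproduce the authors' code.

```python
import numpy as np

def worst_case_errors(t, X, X_prime, k_int, k_ext, noise_var):
    """Pointwise worst-case errors (15) and (16) at evaluation points t."""
    # Intrinsic model: posterior variance, eq. (15).
    K_xx_i = k_int(X, X) + noise_var * np.eye(len(X))
    v_int = np.diag(k_int(t, t) - k_int(t, X) @ np.linalg.solve(K_xx_i, k_int(X, t)))

    # Extrinsic model: eq. (16), with alpha_t taken from the extrinsic posterior mean.
    K_xx_e = k_ext(X, X) + noise_var * np.eye(len(X))
    alpha = np.linalg.solve(K_xx_e, k_ext(X, t)).T            # row i is alpha_{t_i}
    resid = k_int(t, X_prime) - alpha @ k_int(X, X_prime)     # K_{tX'} - alpha_t K_{XX'}
    v_ext = (np.einsum("ij,ij->i", resid @ np.linalg.inv(k_int(X_prime, X_prime)), resid)
             + noise_var * np.einsum("ij,ij->i", alpha, alpha))
    return v_int, v_ext
```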
For \(\mathcal{M}\) we consider three settings: a dumbbell-shaped manifold, a sphere, and the dragon manifold from the Stanford 3D scanning repository. In all cases, we perform computations by approximating
the manifold using a mesh, and implementing the truncated Karhunen-Loeve expansion with \(J=500\) eigenpairs obtained from the mesh. We fix smoothness \(\nu=\frac{5}{2}\), amplitude \(\sigma_{f}^{2}=1\), and noise variance \(\sigma_{\varepsilon}^{2}=0.0005\), for both the intrinsic and extrinsic Matern Gaussian processes. Since the interpretation of the length scale parameter is manifold-specific, for the intrinsic Gaussian processes we set \(\kappa=200\) for the dumbbell, \(\kappa=0.25\) for the sphere, and \(\kappa=0.05\) for the dragon manifold. In all cases, this yielded functions that were neither close to being globally-constant, nor resembled noise. Each experiment was repeated \(10\) times to assess variability. Complete experimental details are given in Appendix G.2
Footnote 2: Code available at: [https://github.com/aterenin/geometric_asymptotics](https://github.com/aterenin/geometric_asymptotics).
The length scales \(\kappa\) are defined differently for intrinsic and extrinsic Matern kernels: in particular, using the same length scale in both models can result in kernels behaving very differently. To alleviate this, for the extrinsic process, we set the length scale by maximizing the extrinsic process' marginal likelihood using the full dataset generated by the intrinsic process, except in the dumbbell's case where the full dataset is relatively small, and therefore a larger set of 500 points was used instead. This allows us to numerically match intrinsic and extrinsic length scales to ensure a reasonably-fair comparison.
Figure 3 shows the mean and spatial standard deviation of \(v^{(\tau)}(t)\), where by _spatial standard deviation_ we mean the sample standard deviation computed with respect to locations in space, rather than with respect to different randomly sampled datasets. From this, we see that on the dumbbell and dragon manifold--whose geometry differs significantly from the respective ambient Euclidean spaces--intrinsic models obtain better mean performance. The standard deviation plot reveals that intrinsic models have errors that are less variable across space. This means that extrinsic models exhibit higher errors in some regions than in others--such as, for instance, regions where embedded Euclidean and Riemannian distances differ--whereas in intrinsic models the error decays in a more spatially-uniform manner.
In contrast, on the sphere, both models perform similarly. Moreover, both the mean and spatial standard deviation decrease at approximately the same rates, indicating that the extrinsic model's predictions are correct about-as-often as the intrinsic model's, as a function of space. This confirms the view that, since the sphere does not possess any bottleneck-like areas where embedded Euclidean distances are extremely different from their Riemannian analogs, it is significantly less affected by differences coming from embeddings.
Figure 3: Worst-case error estimates for the intrinsic and extrinsic processes, on the _dumbbell_, _sphere_, and _dragon_ manifolds (lower is better, \(y\) axis is in the logarithmic scale). We see that, on the dumbbell and dragon manifold, intrinsic models achieve lower expected errors than extrinsic models for the ranges considered (top), and that their expected error consistently varies less as a function of space (bottom). In contrast, on the sphere, both models achieve similar performance, with differences between models falling within the range of variability caused by different random number seeds. We also see that the difference between computing the pointwise worst-case error exactly and approximately, in the intrinsic case where computing this difference is possible, is small in all cases.
In total, our experiments confirm that there are manifolds on which geometric models can perform significantly better than non-geometric models. This phenomenon was also noticed in Dunson et al. [17], where a prior based on the eigendecomposition of a random geometric graph, which can be thought of as an approximation of our intrinsic Matern processes, is compared to a standard extrinsic Gaussian process. In our experiments, we see this through expected errors, mirroring prior results on Bayesian optimization performance. From our theoretical results, such differences cannot be captured through posterior contraction rates, and therefore would require sharper technical tools, such as non-asymptotic analysis, to quantify theoretically.
## 5 Conclusion
In this work, we studied the asymptotic behavior of Gaussian process regression with different classes of Matern processes on Riemannian manifolds. By using various results on Sobolev spaces on manifolds we derived posterior contraction rates for intrinsic Matern process defined via their Karhunen-Loeve decomposition in the Laplace-Beltrami eigenbasis, including processes arising from truncation of the respective sum which can be implemented in practice. Next, using trace and extension theorems which relate manifold and Euclidean Sobolev spaces, we derived similar contraction rates for the restriction of an ambient Matern process in the case where the manifold is embedded in Euclidean space. These theoretical asymptotic results were supplemented by experiments on several examples, showing significant differences in performance between intrinsic and extrinsic methods in the small sample size regime when the manifold's geometric structure differs from the ambient Euclidean space. Our work therefore shows that capturing such differences cannot be done through asymptotic contraction rates, motivating and paving the way for further work on non-asymptotic error analysis to capture empirically-observed differences between extrinsic and intrinsic models.
## Acknowledgments
The authors are grateful to Mojmir Mutny and Prof. Andreas Krause for fruitful discussions concerning this work. PR and JR were supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 834175). VB was supported by an ETH Zurich Postdoctoral Fellowship. AT was supported by Cornell University, jointly via the Center for Data Science for Enterprise and Society, the College of Engineering, and the Ann S. Bowers College of Computing and Information Science.
|
2303.17792 | The Maximum Chromatic Number of the Disjointness Graph of Segments on
$n$-point Sets in the Plane with $n\leq 16$ | Let $P$ be a finite set of points in general position in the plane. The
disjointness graph of segments $D(P)$ of $P$ is the graph whose vertices are
all the closed straight line segments with endpoints in $P$, two of which are
adjacent in $D(P)$ if and only if they are disjoint. As usual, we use
$\chi(D(P))$ to denote the chromatic number of $D(P)$, and use $d(n)$ to denote
the maximum $\chi(D(P))$ taken over all sets $P$ of $n$ points in general
position in the plane. In this paper we show that $d(n)=n-2$ if and only if
$n\in \{3,4,\ldots ,16\}$. | Jesús García-Davila, Jesús Leaños, Mario Lomelí-Haro, Luis Manuel Ríos-Castro | 2023-03-31T03:40:16Z | http://arxiv.org/abs/2303.17792v1 | The Maximum Chromatic Number of the Disjointness Graph of Segments on \(n\)-point Sets in the Plane with \(n\leq 16\)
###### Abstract
Let \(P\) be a finite set of points in general position in the plane. The _disjointness graph of segments_\(D(P)\) of \(P\) is the graph whose vertices are all the closed straight line segments with endpoints in \(P\), two of which are adjacent in \(D(P)\) if and only if they are disjoint. As usual, we use \(\chi(D(P))\) to denote the chromatic number of \(D(P)\), and use \(d(n)\) to denote the maximum \(\chi(D(P))\) taken over all sets \(P\) of \(n\) points in general position in the plane. In this paper we show that \(d(n)=n-2\) if and only if \(n\in\{3,4,\ldots,16\}\).
Keywords: Chromatic number, Disjointness graph of segments, Complete geometric graphs.
## 1 Introduction
The _chromatic number_ of a graph \(G\) is the minimum number of colors needed to color its vertices so that adjacent vertices receive different colors; it is denoted by \(\chi(G)\). For \(k,n\in\mathbb{Z}^{+}\) with \(k\leq n/2\), the _Kneser graph_\(KG(n,k)\) has as its vertices the \(k\)-subsets of \(\{1,2,\ldots,n\}\) and two of such \(k\)-subsets form an edge if and only if they are disjoint. Kneser conjectured [11] in 1956 that \(\chi(KG(n,k))=n-2k+2\). This conjecture was proved by Lovasz [12] and independently by Barany [5] in 1978.
Let \(P\) be a set of \(n\geq 2\) points in general position in the plane, and let \(\mathcal{P}\) be the set of all \(\binom{n}{2}\) closed straight line segments with endpoints in \(P\). The _disjointness graph of segments_\(D(P)\) of \(P\) is the graph whose vertex set is \(\mathcal{P}\), and two elements of \(\mathcal{P}\) are adjacent if and only if their corresponding segments are disjoint. See Figure 1 for an example.
The disjointness graph of segments \(D(P)\) was introduced in 2005 by Araujo et al. [4], as a geometric version of the Kneser graph \(KG(n,k)\) for \(k=2\). It follows from the definitions of \(KG(n,2)\) and \(D(P)\) that if \(|P|=n\geq 2\), then both have \(\binom{n}{2}\) vertices and \(KG(n,2)\) contains a subgraph which is isomorphic to \(D(P)\). Similarly, we note that the crossings between segments of \(\mathcal{P}\) are responsible for the edges of \(KG(n,2)\) that are not in \(D(P)\). Indeed, suppose that \(P=\{p_{1},p_{2},\ldots,p_{n}\}\) and consider the natural bijection \(f(i)\to p_{i}\) between \(\{1,2,\ldots,n\}\) and \(P\). Note that if \(\{i,j\}\) and \(\{k,l\}\) are adjacent in \(KG(n,2)\), then the corresponding segments of \(\mathcal{P}\) defined by \(\{p_{i},p_{j}\}\) and \(\{p_{k},p_{l}\}\) are also adjacent in \(D(P)\), unless they cross each other. In a nutshell, the difference between \(KG(n,2)\) and \(D(P)\) is completely determined by the crossings between segments of \(\mathcal{P}\). On the other hand, in [1] it was proved that the number of crossings in \(\mathcal{P}\) is at least \(0.37997\binom{n}{4}+\Theta(n^{3})\), and so the number of edges that are in \(KG(n,2)\) but not in \(D(P)\) is linear in the number of edges of \(KG(n,2)\).
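Because general position rules out collinear triples, two segments with distinct endpoints are disjoint exactly when they do not properly cross, so \(D(P)\) can be built by brute force with exact orientation tests; the sketch below is purely illustrative.

```python
from itertools import combinations

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    """True if segments ab and cd properly cross (points in general position)."""
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def disjointness_graph(points):
    """Vertices: all point pairs; edges: pairs of segments that are disjoint."""
    segments = list(combinations(range(len(points)), 2))
    edges = set()
    for s, t in combinations(segments, 2):
        if set(s) & set(t):
            continue  # segments sharing an endpoint intersect, hence are not adjacent
        a, b = points[s[0]], points[s[1]]
        c, d = points[t[0]], points[t[1]]
        if not segments_cross(a, b, c, d):
            edges.add((s, t))
    return segments, edges
```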
It is natural to ask about Kneser's conjecture in the context of the disjointness graph of segments. Since \(D(P)\) is a subgraph of \(KG(n,2)\), it is clear that \(\chi(D(P))\leq\chi(KG(n,2))=n-2\). Moreover, since
the number of edges of \(D(P)\) is significantly smaller than the number of edges of \(KG(n,2)\), one might expect that \(\chi(D(P))<n-2\) most of the time.
For \(n\in\mathbb{Z}^{+}\), let us define
\[d(n):=\max\{\chi(D(P))\ :\ P\text{ is an }n-\text{point set in general position in the plane}\}.\]
The following questions arise naturally from the above discussion.
**Question 1**.: _For what values of \(n\) is \(d(n)=n-2\)?_
**Question 2**.: _More generally, what is the value of \(d(n)\) for each \(n\in\mathbb{Z}^{+}\)?_
The systematic study of several combinatorial properties of \(D(P)\) began with the work of Araujo et al. in 2005 [4]. In particular, the following general bounds for \(d(n)\) with \(n\geq 3\) were proved in [4],
\[5\left\lfloor\frac{n}{7}\right\rfloor\leq d(n)\leq\min\left\{n-2,n+\frac{1}{2}- \frac{\left\lfloor\log\log(n)\right\rfloor}{2}\right\}.\]
We note that these inequalities imply that \(d(7)=5\).
The problem of determining the chromatic number \(\chi(D(P))\) of \(D(P)\) remains open in general. Furthermore, as far as we know, this problem has been solved only for two families of points: the convex sets and the double chains.
For \(m\in\mathbb{Z}^{+}\) we shall use \(C_{m}\) to denote a set of \(m\) points in (general and) _convex position_ in the plane. We recall that for \(k,l\in\mathbb{Z}^{+}\) a _double chain_\(C_{k,l}\) is a \((k+l)\)-point set in general position in the plane such that \(C_{k,l}\) is the disjoint union of \(C_{k}\) and \(C_{l}\), where the points are located in such a way that any point of \(C_{k}\) (resp. \(C_{l}\)) is below (resp. above) every straight line spanned by two points of \(C_{l}\) (resp. \(C_{k}\)). In Figure 1\((a)\) the sets \(T_{1}=\{t_{1}^{1},t_{1}^{2},t_{1}^{3}\}\) and \(T_{2}=\{t_{2}^{1},t_{2}^{2},t_{2}^{3}\}\) are two instances of \(C_{3}\), and \(T_{1}\cup T_{2}\) is an instance of \(C_{3,3}\).
In 2018, the exact value of \(\chi(D(C_{n}))\) was settled by Fabila-Monroy, Jonsson, Valtr and Wood [9]:
\[\chi(D(C_{n}))=n-\left\lfloor\sqrt{2n+\frac{1}{4}}-\frac{1}{2}\right\rfloor.\]
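For instance, for \(n=5\) this formula gives

\[\chi(D(C_{5}))=5-\left\lfloor\sqrt{10.25}-\frac{1}{2}\right\rfloor=5-2=3,\]

which is consistent with Propositions 6 and 7 below, where 4-colorings and 3-colorings of \(D(C_{5})\) are analyzed.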
Similarly, in 2020 Fabila-Monroy, Hidalgo-Toscano, Leanos and Lomeli-Haro showed in [8] that if \(C_{k,l}\) is a double chain with \(l\geq\max\{3,k\}\), then
\[\chi(D(C_{k,l}))=k+l-\left\lfloor\sqrt{2l+\frac{1}{4}}-\frac{1}{2}\right\rfloor.\]
Note that if \(l=k\), then \(\chi(D(C_{k,l}))\) provides the currently best lower bound of \(d(n)\), namely:
\[d(n)\geq n-\left\lfloor\sqrt{n+\frac{1}{4}}-\frac{1}{2}\right\rfloor.\]
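For example, for \(n=16\) (that is, \(k=l=8\)) this lower bound only gives

\[d(16)\geq 16-\left\lfloor\sqrt{16.25}-\frac{1}{2}\right\rfloor=16-3=13,\]

while Theorem 2 shows that in fact \(d(16)=14\); this is precisely the kind of gap that the construction of Section 2 closes for \(n\leq 16\).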
As we shall see, the exact values of \(\chi(D(C_{n}))\) and \(\chi(D(C_{k,l}))\) will be useful in our proof. Another important fact that we use is a result proved by Szekeres and Peters in [15] (2006), in which they confirmed the following conjecture for \(m=6\).
**Conjecture 1** (Erdos and Szekeres, 1935).: _Any set of \(2^{m-2}+1\) points in general position in the plane contains a convex \(m\)-gon._
The study of the combinatorial properties of the disjointness graph of segments and related graphs has received considerable attention lately. For more results about this kind of graphs we refer the reader to [2, 3, 4, 6, 10, 13, 14].
Our main objective in this paper is to give a full answer to Question 1. As we have mentioned above, by Lovasz's theorem [12] and the fact that \(D(P)\) is a subgraph of \(KG(n,2)\), it follows that \(\chi(D(P))\leq n-2\) whenever \(n=|P|\geq 3\). Our main result is the following theorem.
**Theorem 2**.: _For an integer \(n\geq 2\), let \(d(n)\) be defined as above. Then \(d(n)=n-2\) if and only if \(n\in\{3,4,\ldots,16\}\)._
In view of the above discussion, it is clear that in order to show that \(d(n)=n-2\) for \(n\in\{3,4,\ldots,16\}\), it suffices to exhibit an \(n\)-point set \(P\) in general position in the plane such that \(\chi(D(P))=n-2\). The rest of the paper is devoted to this end, and in fact the heart of our proof is to show that every subset \(X^{\prime}\) of the 16-point set \(X\) given in Figure 2 with \(|X^{\prime}|\geq 3\) satisfies the required inequality, namely that \(\chi(D(X^{\prime}))\geq|X^{\prime}|-2\).
In the early stages of this work, we were trying to attack this problem by computer search, but soon we convinced ourselves that the number of colorings of \(D(X)\) that must be considered grows exponentially with the number of vertices of \(D(X)\). As we shall see later, a surprising fact that illustrates the computational complexity of this problem is the following: \(D(X)\) has a subset \(\mathcal{S}\) with at least 100 vertices such that if \(v\in\mathcal{S}\), then \(D(X)\setminus\{v\}\) can be colored with only 13 colors.
The rest of the paper is organized as follows. In Section 2 we define the set \(X\) by means of the precise coordinates of its 16 points, and give a brief discussion on its geometric properties. In Section 3 we introduce some notation and terminology that we shall use in most of the proofs. In Section 4 we present a summary of basic (or known) facts that we shall use in Section 5. The more technical work of this paper is given in Section 5, there we prove the main claims behind the proof of Theorem 2. Finally, in Section 6 we establish Theorem 2 by combining in several ways the results stated in previous sections.
## 2 The \(16\)-point set \(X\)
We recall that if \(P\) and \(Q\) are two finite point sets in general position in the plane, then they are said to have the same _order type_ iff there is a bijection \(f:P\to Q\) such that each ordered triple \(abc\) in \(P\) has the same orientation as its image \(f(a)f(b)f(c)\). We shall write \(P\sim Q\) if \(P\) and \(Q\) have the same order type. In particular, it is well known (see for instance [1]) that if \(P\sim Q\), then \(D(P)\) and \(D(Q)\) are isomorphic.
For the rest of the paper, \(X\) denotes the 16-point set given in Figure 2 and we will refer to its points according to the labelling depicted there. In particular, we partition \(X\) in the subsets \(A,B,T_{1},T_{2}\), where \(A:=\{a_{1},a_{2},\ldots,a_{5}\}\), \(B:=\{b_{1},b_{2},\ldots,b_{5}\}\), \(T_{1}:=\{t_{1}^{1},t_{1}^{2},t_{1}^{3}\}\) and \(T_{2}:=\{t_{2}^{1},t_{2}^{2},t_{2}^{3}\}\). We recall that \(\mathcal{X}\) is the set of \(\binom{16}{2}\) closed straight line segments with endpoints in \(X\).
The following properties of \(X\) are easy to check:
* \(X\) is a 16-point set in general position in the plane which has no 6 points forming a convex hexagon,
* no segment of \(\mathcal{X}\) is horizontal or vertical,
* each segment with endpoints in \(T_{i}\) has negative slope for each \(i\in\{1,2\}\),
* \(A\cup B\sim C_{5,5}\), \(A\cup T_{1}\sim C_{5,3}\), \(T_{2}\cup B\sim C_{3,5}\), \(T_{1}\cup T_{2}\sim C_{3,3}\) and \(A\cup T_{2}\sim B\cup T_{1}\).
Figure 1: \((a)\) Let \(T_{1}=\{t_{1}^{1},t_{1}^{2},t_{1}^{3}\}\) and \(T_{2}=\{t_{2}^{1},t_{2}^{2},t_{2}^{3},\}\). Then \(T_{1}\cup T_{2}\) is a set of 6 points in general position in the plane. Each of \(T_{1}\) and \(T_{2}\) is an instance of \(C_{3}\), and \(T_{1}\cup T_{2}\) is an instance of the double chain \(C_{3,3}\). \((b)\) This is the complete geometric graph \(\mathcal{T}_{1}\cup\mathcal{T}_{2}\) induced by \(T_{1}\cup T_{2}\). The graph in \((c)\) is the disjointness graph of segments induced by \(T_{1}\cup T_{2}\), and we shall denote it by \(D(T_{1}\cup T_{2})\).
From now on, we will use these properties of \(X\) without explicit mention.
For \(e\in\mathcal{X}\) we shall use \(l(e)\) and \(r(e)\) to denote the leftmost and the rightmost point of \(e\), respectively. Since \(\mathcal{X}\) has no vertical segments, then \(l(e)\) and \(r(e)\) are well-defined for each \(e\in\mathcal{X}\). Then \(l(e)\) and \(r(e)\) are points of \(X\) and are the endpoints of \(e\). We remark that if \(e\) has both endpoints in \(T_{i}\) for some \(i\in\{1,2\}\), then \(l(e)\) (resp. \(r(e)\)) is also the topmost (resp. lowest) point of \(e\).
## 3 Notational conventions and terminology
Our aim in this section is to introduce some useful notational conventions, terminology and concepts that we shall use in the rest of the paper.
Throughout this section, \(P\) denotes a set of \(n\geq 2\) points in general position in the plane. If \(x\) and \(y\) are distinct points of \(P\), then we shall use \(xy\) to denote the closed straight line segment whose endpoints are \(x\) and \(y\). If \(R\) and \(S\) are disjoint nonempty subsets of \(P\), then \(R*S:=\{xy\ :\ x\in R\ \text{and}\ y\in S\}\). We simply write \(x*S\) (resp. \(R*y\)) whenever \(R=\{x\}\) (resp. \(S=\{y\}\)).
If \(x,y,z\) are three distinct points of \(P\), then we will refer to the triangle formed by them in any of the following ways: \(\Delta(x,y,z)\), \(\Delta(xy,z)\), \(\Delta(xy,xz)\), etc.
We shall denote by \(\overline{P}\) the boundary of the convex hull of \(P\).
If \(Q\subseteq P\) with \(|Q|\geq 2\), then we shall use the font style \(\mathcal{Q}\) to denote the set of all \(\binom{|Q|}{2}\) closed straight line segments with endpoints in \(Q\). We often make no distinction between the set of segments \(\mathcal{Q}\) and the complete geometric graph that it induces.
**Note 1**.: _From the definition of \(D(P)\) it follows that \(\chi(D(P))\) is the minimum number of colors in an edge-coloring of the complete geometric graph \(\mathcal{P}\) in which any two edges belonging to the same chromatic class cross or are incident. We remark that all our proofs are given in terms of this kind of edge-colorings of \(\mathcal{P}\). Unless otherwise stated, from now on, any coloring of \(D(P)\) must be assumed to be an edge-coloring of the complete geometric graph \(\mathcal{P}\)._
We classify each chromatic class \(c\) of a given coloring \(\gamma\) of \(\mathcal{P}\) as either thrackle or star. We call \(c\) a _thrackle_ if at least two segments in \(c\) cross each other, and otherwise \(c\) is a _star_. If \(c\) is a star whose segments are incident with exactly \(m\) points of \(P\), then \(c\) is an \(m\)_-star_. Clearly, each \(m\)-star \(c\) with \(m\geq 3\)
Figure 2: The 16-point set defined by these coordinates is \(X\). The ellipse on the right is a zoom of the ellipse on the left. We will derive the proof of Theorem 2 by showing that \(\chi(D(X^{\prime}))=|X^{\prime}|-2\) for each \(X^{\prime}\subseteq X\) with \(|X^{\prime}|\geq 3\). We remark that the 6-point set \(\{t_{1}^{1},t_{1}^{2},t_{1}^{3},t_{2}^{1},t_{2}^{2},t_{2}^{3}\}\) has the same order type as the one in Figure 1(a)-(b).
has a unique _apex_, i.e. a point of \(P\) that is incident with all the segments of \(c\). If \(c\) is a \(2\)-star then \(c\) consists of a single segment \(e\), and in this case both ends of \(e\) are considered apices of \(c\).
Let \(\gamma\) be a coloring of \(\mathcal{P}\) and let \(Q\subseteq P\). For exposition purposes, we abuse notation and use \(\gamma(Q)\) to refer to the set of colors of \(\gamma\) that are present in \(\mathcal{Q}\). We note that the restriction \(\gamma|_{\mathcal{Q}}\) of \(\gamma\) to \(\mathcal{Q}\) is a coloring of \(\mathcal{Q}\). We let \(\gamma^{*}(Q)\) denote the number of points in \(Q\) that are apices of some star of \(\mathcal{Q}\) with respect to \(\gamma|_{\mathcal{Q}}\).
For \(Q\subseteq P\), let \(\mathcal{Q}^{+}\) be the set of all segments of \(\mathcal{P}\) that have no endpoints in \(Q\) and cross a segment of \(\mathcal{Q}\). We say that \(Q\) is _separable with respect to \(\gamma\)_ whenever \(\gamma(e)\notin\gamma(Q)\) for any \(e\in\mathcal{Q}^{+}\). If \(\mathcal{Q}^{+}=\emptyset\), then \(Q\) is a _separable_ set of \(P\). Clearly, if \(Q\) is a separable set of \(P\), then \(Q\) is separable with respect to any \(\gamma\). For example, each of \(A,B,T_{1},T_{2},A\cup B,T_{1}\cup T_{2}\) is a separable set of \(X\).
## 4 Basic facts
Our aim in this section is to prove some basic facts, which we will often use in the rest of the paper.
Again, in this section \(P\) denotes a set of \(n\geq 3\) points in general position in the plane. We recall that if \(\gamma\) is a coloring of \(\mathcal{P}\), then \(\gamma(P)\) is the set of colors used by \(\gamma\) to color the segments of \(\mathcal{P}\).
The next proposition is an immediate consequence of Lovasz's theorem, but here we give an algorithmic proof because we will use the resulting coloring several times.
**Proposition 3**.: \(\chi(D(P))\leq|P|-2\)_._
Proof.: Let \(p_{1},p_{2},\ldots,p_{n}\) be any labeling of the points of \(P\). Color each segment of the triangle \(\Delta(p_{1},p_{2},p_{3})\) with color \(c_{1}\), and for each \(j\in\{4,5,\ldots,n\}\) use \(c_{j}\) to color each segment in \(\{p_{j}p_{i}\ :\ i=1,2,\ldots,j-1\}\). Since this defines a proper coloring of \(\mathcal{P}\) with \(n-2\) colors, we are done.
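Since we will reuse this coloring several times, it may be helpful to see it implemented. The Python sketch below is ours and merely illustrates the construction of the proof; it assumes the `combinations` import, the helper `segments_disjoint` and the point list `P` from the snippet in Section 1, and it checks that no two disjoint segments receive the same color.

```python
def proposition3_coloring(pts):
    # Color the triangle p1p2p3 with color 0, and for each j >= 4 give all
    # segments from p_j back to p_1, ..., p_{j-1} a fresh color: a star with
    # apex p_j.  This uses exactly len(pts) - 2 colors, as in Proposition 3.
    color = {}
    for i, j in combinations(range(3), 2):
        color[(i, j)] = 0
    for j in range(3, len(pts)):
        for i in range(j):
            color[(i, j)] = j - 2
    return color

def is_proper(color, pts):
    # A coloring of D(P) is proper iff no two disjoint segments share a color.
    return all(color[s] != color[t] or not segments_disjoint(s, t, pts)
               for s, t in combinations(list(color), 2))

col = proposition3_coloring(P)
print(max(col.values()) + 1, is_proper(col, P))  # expected: len(P) - 2, True
```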
Proposition 4 is not used in the rest of the paper, but it is relevant because it illustrates the tightness of Theorem 2 and, at the same time, the difficulty of proving it by computer search.
**Proposition 4**.: _Let \(\mathcal{S}:=\{t_{1}^{1}t_{2}^{3},b_{1}t_{2}^{2},b_{1}t_{2}^{2},b_{1}t_{2}^{1},b_{1}a_{5},b_{2}a_{5},b_{3}a_{1},b_{3}a_{2},b_{3}a_{3},b_{3}a_{4},b_{3}a_{5},b_{4}a_{1},b_{4}a_{2},b_{4}a_{3},b_{4}a_{4},b_{4}a_{5},b_{5}t_{1}^{2},b_{5}t_{1}^{2},b_{5}t_{2}^{2},b_{5}a_{1}\}\). For \(x_{1}x_{2}\in\mathcal{X}\setminus\mathcal{S}\), there are many ways to color \(D(X)\) with 14 colors in which the color of \(x_{1}x_{2}\) is not assigned to any other segment of \(\mathcal{X}\)._
Proof.: Let \(x_{1}x_{2}\in\mathcal{X}\setminus\mathcal{S}\). It is not hard to see that \(x_{1}x_{2}\) is a side (not a diagonal) of some convex 5-gon \(X_{5}\subset X\). Similarly, it is easy to find a valid 3-coloring \(\gamma^{\prime}\) of the 10 segments of \(\mathcal{X}_{5}\) in such a way that the color \(\gamma^{\prime}(x_{1}x_{2})\) does not appear in any other segment of \(\mathcal{X}_{5}\). In fact, there are exactly 2 ways to do this. Let \(x_{6},x_{7},\ldots,x_{16}\) be any labeling of the points of \(X\setminus X_{5}\). Following the same argument as in the proof of Proposition 3, we can extend \(\gamma^{\prime}\) to a coloring \(\gamma\) of \(D(X)\) by adding a new star \(S_{i}\) of color \(c_{i}\) with apex \(x_{i}\) for each \(x_{i}\in X\setminus X_{5}\).
**Proposition 5**.: _Let \(P^{\prime}\) be a proper subset of \(P\), and let \(\gamma\) be an optimal coloring of \(D(P)\). Then the following hold:_
1. _If_ \(|P^{\prime}|\geq 3\) _and_ \(\chi(D(P))=|P|-2\)_, then_ \(\chi(D(P^{\prime}))=|P^{\prime}|-2\)_._
2. _If_ \(P^{\prime}\) _is separable with respect to_ \(\gamma\) _and_ \(|\gamma(P^{\prime})|=|P^{\prime}|,\) _then_ \(\chi(D(P))\geq\chi(D(P\setminus P^{\prime}))+|P^{\prime}|\)_._
3. _If_ \(S_{1},\ldots,S_{r}\) _are different stars of_ \(\gamma\) _with apices_ \(v_{1},\ldots,v_{r}\)_, respectively, then_ \(\chi(D(P\setminus\{v_{1},\ldots,v_{r}\}))=\chi(D(P))-r\)_._
Proof.: By Proposition 3, in order to show \((i)\) it suffices to show that \(\chi(D(P^{\prime}))\geq|P^{\prime}|-2\). Seeking a contradiction, suppose that there exists a proper coloring \(\beta^{\prime}\) of \(D(P^{\prime})\) such that \(|\beta^{\prime}(P^{\prime})|<|P^{\prime}|-2\). Then, following the same argument as in the proof of Proposition 3, we can extend \(\beta^{\prime}\) to a coloring \(\beta\) of \(D(P)\) by adding a new star \(S_{i}\) of color \(c_{i}\) with apex \(p_{i}\) for each \(p_{i}\in P\setminus P^{\prime}\). Then the number of colors of \(\beta\) is less than \(|P^{\prime}|-2+|P\setminus P^{\prime}|=|P|-2\), contradicting the hypothesis that \(\chi(D(P))=|P|-2\).
Let \(P^{\prime}\) and \(\gamma\) be as in \((ii)\). Since \(P^{\prime}\) is a separable subset of \(P\) with respect to \(\gamma\), then \(\gamma(P^{\prime})\) and \(\gamma(P\setminus P^{\prime})\) are disjoint subsets of \(\gamma(P)\) and so \(\chi(P)=|\gamma(P)|\geq|\gamma(P\setminus P^{\prime})|+|\gamma(P^{\prime})| \geq\chi(P\setminus P^{\prime})+|P^{\prime}|\). This proves \((ii)\).
For \(i=1,\ldots,r\), let \(S_{i}\) and \(v_{i}\) be as in \((iii)\) and let \(Q=P\setminus\{v_{1},\ldots,v_{r}\}\). Since the restriction of \(\gamma\) to \(D(Q)\) is a coloring of \(D(Q)\) with at most \(\chi(D(P))-r\) colors, then \(\chi(D(Q))\leq\chi(D(P))-r\). On the other hand, if \(\chi(D(Q))<\chi(D(P))-r\) then we can proceed as in the proof of \((i)\) to obtain a coloring \(\beta\) of \(D(P)\) with less than \(\chi(D(Q))+r\) colors, which is a contradiction.
**Proposition 6**.: _Let \(\gamma\) be a 4-coloring of \(D(C_{5})\). If \(\gamma^{*}(C_{5})=5\), then \(C_{5}\) has two points \(p\) and \(q\) such that \(\{pq\}\) is a \(2-\)star of \(\gamma\) and neither \(p\) nor \(q\) is an apex of any other star of \(\gamma\)._
Proof.: We start by noting that any chromatic class of \(\gamma\) is formed by at most 5 segments of \(\mathcal{C}_{5}\). From \(|\mathcal{C}_{5}|=10\) and the hypothesis that \(\gamma\) has 4 chromatic classes it follows that \(\gamma\) has at most two \(2-\)stars. On the other hand, since each point of \(C_{5}\) is an apex of a star of \(\gamma\), then \(\gamma\) must contain at least one \(2-\)star, say \(S=\{pq\}\). We now show that if \(S^{\prime}\) is another star of \(\gamma\) with apex \(v\in\{p,q\}\), then \(\mathcal{C}_{5}\) has a segment that cannot be colored by \(\gamma\), contradicting the assumption that \(\gamma\) is a coloring of \(D(C_{5})\).
Suppose first that \(S^{\prime}\) is a \(2-\)star. Then, \(S^{\prime}=\{vw\}\) for some \(w\in C_{5}\setminus\{p,q\}\). Let \(x\) and \(y\) be the two points in \(C_{5}\setminus\{p,q,w\}\). By hypothesis, for each \(z\in\{x,y\}\), \(\gamma\) has a star \(S_{z}\) with apex \(z\). As \(S\) and \(S^{\prime}\) are the only \(2-\)stars of \(\gamma\), then \(S_{x}\neq S_{y}\), and so \(S,S^{\prime},S_{x}\), and \(S_{y}\) are the 4 chromatic classes of \(\gamma\). Since each of them is a star, then the segment joining the two points in \(C_{5}\setminus\{x,y,v\}\) cannot be colored by \(\gamma\).
Suppose now that \(S^{\prime}\) is an \(m\)-star with \(m\geq 3\), and let \(z_{1},z_{2}\), and \(z_{3}\) be the points of \(C_{5}\setminus\{p,q\}\). Again, for each \(z_{i}\) we know that \(\gamma\) has a star \(S_{z_{i}}\) with apex \(z_{i}\). Since \(\gamma\) has exactly 4 chromatic classes, then two of \(S_{z_{1}},S_{z_{2}},S_{z_{3}}\) must be the same. W.l.o.g. we assume \(S_{z_{1}}=S_{z_{2}}\). Then \(S_{z_{1}}\) must be a \(2-\)star, and so \(S,S^{\prime},S_{z_{1}}\), and \(S_{z_{3}}\) are the 4 chromatic classes of \(\gamma\). Since each of them is a star, then the segment joining the two points in \(C_{5}\setminus\{v,z_{1},z_{3}\}\) cannot be colored by \(\gamma\).
**Proposition 7**.: _If \(\gamma\) is a 3-coloring of \(D(C_{5})\), then \(\gamma^{*}(C_{5})\leq 2\)._
Proof.: Let \(v_{1},v_{2},\ldots,v_{5}\) be the points of \(C_{5}\), so that they appear in this cyclic order in \(\overline{C_{5}}\). Since each color of \(\gamma\) is in at most two segments of \(\overline{C_{5}}\), then the 3 colors \(c_{1},c_{2},c_{3}\) of \(\gamma\) appear in \(\overline{C_{5}}\). w.l.o.g. assume that \(\gamma(v_{1}v_{2})=c_{1}\), \(\gamma(v_{2}v_{3})=\gamma(v_{3}v_{4})=c_{2}\) and \(\gamma(v_{4}v_{5})=\gamma(v_{5}v_{1})=c_{3}\).
Suppose first that \(v_{1}v_{2}\) is a \(2-\)star of \(\gamma\). Then, \(\gamma(v_{2}v_{4})=c_{2}\) and \(\gamma(v_{4}v_{1})=c_{3}\), and so none of \(v_{3},v_{4},v_{5}\) can be an apex of \(\gamma\), implying that \(\gamma^{*}(C_{5})\leq 2\). If \(v_{1}v_{2}\) is not a \(2-\)star of \(\gamma\), then each color of \(\gamma\) appears in at least two segments of \(\mathcal{C}_{5}\), and so each star of \(\gamma\) has a unique apex. If the 3 chromatic classes of \(\gamma\) were all stars, then some of \(v_{1}v_{4}\) or \(v_{2}v_{4}\) could not be colored by \(\gamma\); hence \(\gamma\) has at most two stars, and therefore \(\gamma^{*}(C_{5})\leq 2\).
**Theorem 8** (Theorem 1 in [8]).: _If \(l,k\in\mathbb{Z}^{+}\) and \(l\geq\max\{3,k\}\), then_
\[\chi(D(C_{k,l}))=k+l-\left\lfloor\sqrt{2l+\frac{1}{4}}-\frac{1}{2}\right\rfloor.\]
**Corollary 9**.: _Let \(\gamma\) be a coloring of \(D(X)\). Then_
1. \(|\gamma(T_{1}\cup T_{2})|\geq 4\)_._
2. \(|\gamma(A\cup B)|\geq 8\)_._
3. \(|\gamma(B\cup T_{2})|\geq 6\)_._
4. \(|\gamma(A\cup T_{1})|\geq 6\)_._
5. _If_ \(|\gamma(T_{1}\cup T_{2})|\geq 6\)_, then_ \(|\gamma(X)|\geq 14\)_._
6. _If_ \(|\gamma(A\cup B)|\geq 10\)_, then_ \(|\gamma(X)|\geq 14\)_._
Proof.: Since \(T_{1}\cup T_{2}\sim C_{3,3}\), \(A\cup B\sim C_{5,5}\) and \(B\cup T_{2}\sim A\cup T_{1}\sim C_{3,5}\) then \(|\gamma(T_{1}\cup T_{2})|\geq 4\), \(|\gamma(A\cup B)|\geq 8\), \(|\gamma(B\cup T_{2})|\geq 6\) and \(|\gamma(A\cup T_{1})|\geq 6\), by Theorem 8.
Since \(T_{1}\cup T_{2}\) is a separable subset of \(X\), then \(|\gamma(X)|\geq|\gamma(A\cup B)|+|\gamma(T_{1}\cup T_{2})|\). If \(|\gamma(T_{1}\cup T_{2})|\geq 6\), then _(ii)_ implies _(v)_. Similarly, if \(|\gamma(A\cup B)|\geq 10\), then _(i)_ implies _(vi)_.
Most of our proofs are by case analysis, depending on the cardinality of the following sets \(\gamma(A),\gamma(B),\gamma(T_{1})\) and \(\gamma(T_{2})\). For the rest of the paper we will use the following additional notation.
* (N1) If \(|\gamma(A)|=3\), Proposition 7 implies that \(A\) has 3 points that are not apices of \(\gamma(A)\). We will denote by \(a_{i},a_{j},a_{k}\) these 3 points of \(A\), and w.l.o.g. we assume that \(i<j<k\). If \(|\gamma(B)|=3\) we define \(b_{p},b_{q},b_{r}\) with \(p<q<r\), similarly.
* (N2) If \(|\gamma(A)|=4\) and \(\gamma^{*}(A)=5\), Proposition 6 implies that \(A\) has 2 points which define a \(2-\)star of \(\gamma(A)\) and are such that none of them is an apex of any other star of \(\gamma(A)\). We will denote by \(a_{i}\) and \(a_{j}\) these 2 points of \(A\), and w.l.o.g. we assume that \(i<j\). If \(|\gamma(B)|=4\) and \(\gamma^{*}(B)=5\), we define \(b_{p}\) and \(b_{q}\) with \(p<q\), similarly.
* (N3) If \(|\gamma(T_{i})|=2\) for \(i\in\{1,2\}\), we let \(e_{i}\) denote the segment of \(T_{i}\) that has a different color than the other two segments of \(T_{i}\).
**Proposition 10**.: \(\chi(D(A\cup T_{2}))=6\) _and \(\chi(D(B\cup T_{1}))=6\)._
Proof.: Since \(A\cup T_{2}\sim B\cup T_{1}\), it is enough to show that \(\chi(D(B\cup T_{1}))=6\). Since \(\chi(D(B\cup T_{1}))\leq 6\) by Proposition 3, we need to show that \(\chi(D(B\cup T_{1}))\geq 6\). Let \(\gamma\) be a coloring of \(D(B\cup T_{1})\). Clearly, \(|\gamma(B)|\geq 3\), \(|\gamma(T_{1})|\geq 1\) and \(|\gamma(B\cup T_{1})|\geq|\gamma(B)|+|\gamma(T_{1})|\). Then either \(|\gamma(B)|\leq 4\) or we are done.
If \(|\gamma(B)|=3\), then \(b_{p}t_{1}^{1},b_{q}t_{1}^{2}\) and \(b_{r}t_{1}^{3}\) receive distinct colors, where \(b_{p},b_{q}\) and \(b_{r}\) are as in (N1). Since none of these 3 colors appears in \(\gamma(B)\), we are done.
Suppose now that \(|\gamma(B)|=4\). Then \(|\gamma(T_{1})|=1\), as otherwise we are done. If \(B\) contains a point \(b\) that is not an apex of \(\gamma(B)\), then \(\gamma(bt_{1}^{1})\) is the 6th required color. Thus we may assume that \(\gamma^{*}(B)=5\), and hence either \(\gamma(b_{p}t_{1}^{1})\) or \(\gamma(b_{q}t_{1}^{3})\) is the 6th required color, where \(b_{p}\) and \(b_{q}\) are as in (N2).
## 5 Technical claims behind the proof of Theorem 2
As we mentioned before, the bulk of this paper is the proof that
\[\chi(D(Q))\geq|Q|-2\text{ for any }Q\subseteq X\text{ with }|Q|\geq 3. \tag{1}\]
In this section we carry out this task. As we shall see, the "symmetry" of \(X\) and the large number of separable subsets that \(X\) contains will play a central role. In particular, in this section we will prove that Inequality (1) holds for about 15 representative (and strategic) subsets of \(X\) of cardinality at most 11.
Roughly speaking, our basic proof technique is the following: given \(Q\subset X\), a coloring \(\gamma\) of \(\mathcal{Q}\), and a nonempty subset \(\{f_{1},\ldots,f_{m}\}\) of \(\mathcal{Q}\) for which the set of colors \(\{\gamma(f_{1}),\ldots,\gamma(f_{m})\}\) is known, we then proceed to find an ordered sequence \(g_{1},\ldots,g_{l}\) of segments in \(\mathcal{Q}\setminus\{f_{1},\ldots,f_{m}\}\) such that the number of colors in \(\{\gamma(f_{1}),\ldots,\gamma(f_{m})\}\cup\{\gamma(g_{1}),\ldots,\gamma(g_{l})\}\) is equal to \(|Q|-2\). We emphasize that for each \(i\in\{1,\ldots,l\}\), the determination of the unknown color \(\gamma(g_{i})\) depends on the colors previously fixed, namely
\[\{\gamma(f_{1}),\ldots,\gamma(f_{m})\}\cup\{\gamma(g_{1}),\ldots,\gamma(g_{i- 1})\}.\]
**Note 2**.: _When we have already fixed \(|Q|-3\) colors in this process, and so it remains to prove the need for the last color, we often write \(\gamma(g_{j})\doteq c\) to mean that \(\gamma(g_{j})\) must be equal to the color \(c\), as otherwise \(\gamma(g_{j})\) is precisely the last required color._
**Lemma 11**.: _If \(Q\) is a subset of \(T_{1}\cup I\cup T_{2}\) with \(I\in\{A,B\}\) and \(|Q|\geq 3\), then \(\chi(D(Q))=|Q|-2\)._
Proof.: By Proposition 5\((i)\), it is enough to show the assertion for \(Q=T_{1}\cup I\cup T_{2}\). By Proposition 3, all we need to show is that \(\chi(D(Q))\geq 9\). We only discuss the case \(I=A\). The case \(I=B\) can be handled in a similar way. Thus we assume \(Q=T_{1}\cup A\cup T_{2}\).
Let \(\gamma\) be an optimal coloring of \(D(Q)\). Since \(|\gamma(A)|\geq 3\), then we can assume \(|\gamma(T_{1}\cup T_{2})|\in\{4,5\}\), as otherwise we are done. Similarly, since \(|\gamma(Q)|\geq|\gamma(A)|+|\gamma(T_{1}\cup T_{2})|\), then \(|\gamma(A)|\in\{3,4\}\).
We claim that if \(|\gamma(T_{i})|\geq 3\) for some \(i\in\{1,2\}\), then \(\chi(D(Q))\geq 9\). Indeed, since \(T_{i}\) is a separable subset of \(Q\) with respect to \(\gamma\), then \(|\gamma(D(Q))|\geq|\gamma(D(Q\setminus T_{i}))|+|\gamma(D(T_{i}))|\geq|\gamma(D(Q\setminus T_{i}))|+3\) by Proposition 5\((ii)\). Moreover, \(|\gamma(D(Q\setminus T_{i}))|\geq 6\) by Theorem 8 if \(i=2\) (respectively, by Proposition 10 if \(i=1\)), and the claim follows. Then we may assume that \(1\leq|\gamma(T_{1})|,|\gamma(T_{2})|\leq 2\).
Case 1. Suppose that \(|\gamma(A)|=3\). Let \(a_{i},a_{j},a_{k}\in A\) be as in (N1).
(1.1) If \(|\gamma(T_{1})|=1=|\gamma(T_{2})|\), then \(|\gamma(T_{1}\cup T_{2})|=5\) and one of \(a_{i}t_{1}^{1}\) or \(a_{j}t_{2}^{1}\) provides the 9th color.
(1.2) If \(|\gamma(T_{1})|=1\) and \(|\gamma(T_{2})|=2\), then none of \(\gamma(a_{i}t_{1}^{1}),\gamma(a_{j}t_{1}^{2})\), and \(\gamma(a_{k}t_{1}^{3})\) belongs to \(\gamma(A)\cup\gamma(T_{1})\), and so \(|\gamma(A\cup T_{1})|\geq 7\). This inequality together with \(|\gamma(T_{2})|=2\) imply the required inequality.
(1.3) If \(|\gamma(T_{1})|=2\) and \(|\gamma(T_{2})|=1\), then either \(l(e_{1})a_{i}\) or \(r(e_{1})t_{2}^{2}\) provides the 7th color, where \(e_{1}\) is as in (N3). The last two required colors are \(\gamma(t_{2}^{1}a_{j})\) and \(\gamma(t_{2}^{3}a_{k})\).
(1.4) Suppose that \(|\gamma(T_{1})|=2=|\gamma(T_{2})|\). We need to show the existence of 2 additional colors. Let \(e_{1}\) and \(e_{2}\) be as in (N3), let \(f_{1}:=l(e_{1})l(e_{2})\) and \(f_{2}:=r(e_{1})r(e_{2})\). We assume w.l.o.g. that \(\gamma(T_{1})=\{c_{4},c_{5}\},\gamma(T_{2})=\{c_{6},c_{7}\},\gamma(e_{1})=c_{4}\) and \(\gamma(e_{2})=c_{7}\).
We note that if \(\gamma(h)\notin\gamma(T_{2})\) for some \(h\in a_{k}*T_{2}\), then \(h\) and either \(l(e_{1})a_{i}\) or \(r(e_{1})a_{j}\) provide the required two colors. Then the color of any segment in \(a_{k}*T_{2}\) must be \(c_{6}\) or \(c_{7}\). For \(l\in\{1,2,3\}\), let \(h_{l}=a_{k}t_{2}^{l}\).
\(\bullet\) Suppose that \(e_{2}=t_{2}^{2}t_{2}^{3}\). Then \(\gamma(t_{2}^{1}t_{2}^{2})=\gamma(t_{2}^{1}t_{2}^{3})=c_{6}\), \(\gamma(h_{3})\doteqdot c_{6}\) and \(\gamma(h_{1})\doteqdot c_{7}\). Suppose first that \(\gamma(h_{2})=c_{7}\). If \(\gamma(f_{2})\neq c_{4}\), then \(f_{2}\) and \(t_{2}^{1}a_{j}\) provide the required colors. Then \(\gamma(f_{2})=c_{4}\), and \(f_{1}\) together with either \(l(f_{1})a_{i}\) or \(r(f_{1})a_{j}\) provide the required colors. Suppose now that \(\gamma(h_{2})=c_{6}\). We note that \(\gamma(t_{2}^{1}a_{i})\) and \(\gamma(t_{2}^{2}a_{j})\) cannot be \(c_{6}\), and moreover, exactly one of them must be \(c_{7}\) as otherwise we are done. Since if \(\gamma(t_{2}^{1}a_{i})=c_{7}\), then \(t_{2}^{2}a_{j}\) together with either \(l(e_{1})a_{i}\) or \(r(e_{1})t_{2}^{3}\) provide the two required colors. Then \(\gamma(t_{2}^{2}a_{j})=c_{7}\), and so \(t_{2}^{1}a_{i}\) together with either \(l(e_{1})t_{2}^{2}\) or \(r(e_{1})t_{2}^{2}\) provide the two required colors.
\(\bullet\) Suppose that \(e_{2}=t_{2}^{2}t_{2}^{3}\). Then \(\gamma(t_{2}^{1}t_{2}^{2})=\gamma(t_{2}^{1}t_{2}^{3})=c_{6}\), \(\gamma(h_{1})=c_{6}\) and \(\gamma(h_{3})=c_{7}\). We note that neither \(\gamma(t_{2}^{1}a_{i})\) nor \(\gamma(t_{2}^{2}a_{j})\) can be \(c_{7}\), and moreover, exactly one of them must be \(c_{6}\) as otherwise we are done. Suppose first that \(\gamma(t_{2}^{1}a_{i})=c_{6}\). Then \(\gamma(h_{2})\doteqdot c_{7}\), and so \(t_{2}^{2}a_{j}\) together with either \(l(e_{1})a_{i}\) or \(r(e_{1})t_{2}^{3}\) provide the two required colors. Suppose now that \(\gamma(t_{2}^{2}a_{j})=c_{6}\). Note that if \(\gamma(h_{2})=c_{6}\), then \(t_{2}^{1}a_{j}\) together with either \(l(e_{1})a_{i}\) or \(r(e_{1})t_{2}^{2}\) provide the two required colors. Then \(\gamma(h_{2})=c_{7}\), and hence \(t_{2}^{1}a_{i}\) and some of \(f_{1}\) or \(f_{2}\) give the two required colors.
\(\bullet\) Suppose that \(e_{2}=t_{2}^{1}t_{2}^{3}\). Then \(\gamma(t_{2}^{1}t_{2}^{2})=\gamma(t_{2}^{1}t_{2}^{3})=c_{6}\), \(\gamma(h_{1})=c_{7}\) and \(\gamma(h_{3})=c_{7}\). The required colors are given by \(t_{2}^{1}a_{j}\) and some segment of \(l(e_{1})a_{i}\) or \(r(e_{1})t_{2}^{3}\).
Case 2. Suppose that \(|\gamma(A)|=4\). Since \(|\gamma(T_{1})|=1=|\gamma(T_{2})|\) imply \(|\gamma(T_{1}\cup T_{2})|\geq 5\), and we know that \(|\gamma(A\cup T_{1}\cup T_{2})|\geq|\gamma(A)|+|\gamma(T_{1}\cup T_{2})|\), we can assume that some of \(|\gamma(T_{1})|\geq 2\) or \(|\gamma(T_{2})|\geq 2\) holds.
(2.1) Suppose \(\gamma^{*}(A)<5\). Let \(a\) be a point of \(A\) that is not an apex of \(\gamma(A)\).
\(\bullet\) If \(|\gamma(T_{1})|=2\) and \(|\gamma(T_{2})|=1\), then \(at_{2}^{3}\) together with either \(l(e_{1})t_{2}^{1}\) or \(r(e_{1})t_{2}^{2}\) give the two required colors.
\(\bullet\) If \(|\gamma(T_{1})|=1\) and \(|\gamma(T_{2})|=2\), then \(at_{1}^{1}\) together with either \(t_{1}^{2}l(e_{2})\) or \(t_{1}^{3}r(e_{2})\) give the two required colors.
\(\bullet\) Suppose that \(|\gamma(T_{1})|=2=|\gamma(T_{2})|\). Then we need to show the existence of one additional color. Let \(f_{1}:=l(e_{1})l(e_{2})\) and \(f_{2}:=r(e_{1})r(e_{2})\). We note that \(\{\gamma(f_{1}),\gamma(f_{2})\}=\{\gamma(e_{1}),\gamma(e_{2})\}\) as otherwise we are done. Then \(\gamma(f_{1})\neq\gamma(e_{i})\) for some \(i\in\{1,2\}\). Let \(v:=f_{1}\cap e_{i}\). Then \(v\in\{t_{1}^{1},t_{1}^{2},t_{2}^{1},t_{2}^{2}\}\). Since \(v\neq t_{2}^{2}\) implies that \(\gamma(av)\) is the required color, we can assume \(v=t_{2}^{2}\) and hence we must have \(e_{i}=t_{2}^{2}t_{2}^{3}\). Then \(\gamma(t_{2}^{2}r(e_{1}))\in\{\gamma(f_{1}),\gamma(f_{2})\}\) as otherwise we are done. If \(\gamma(t_{2}^{2}r(e_{1}))=\gamma(f_{1})\) (respectively, \(\gamma(t_{2}^{2}r(e_{1}))=\gamma(f_{2})\)), then \(\gamma(l(e_{1})a)\) (respectively, \(\gamma(t_{2}^{3}a)\)) is the required color.
(2.2) Suppose that \(\gamma^{*}(A)=5\). Let \(a_{i}\) and \(a_{j}\) be as in (N2).
(2.2.1) Suppose that \(|\gamma(T_{1})|=2\) and \(|\gamma(T_{2})|=1\). Clearly, \(\gamma(t_{2}^{1}a_{i}),\gamma(t_{2}^{3}a_{j})\notin\gamma(T_{2})\), and moreover, exactly one of these two must be \(\gamma(a_{i}a_{j})\) as otherwise we are done. If \(\gamma(t_{2}^{1}a_{i})=\gamma(a_{i}a_{j})\), then \(t_{2}^{3}a_{j}\) together with either \(l(e_{1})t_{2}^{1}\) or \(r(e_{1})t_{2}^{2}\) give the two required colors. Then \(\gamma(t_{2}^{3}a_{j})=\gamma(a_{i}a_{j})\), and so \(t_{2}^{1}a_{i}\) together with either \(l(e_{1})t_{2}^{1}\) or \(r(e_{1})t_{2}^{3}\) give the two required colors.
(2.2.2) Suppose that \(|\gamma(T_{1})|=1\) and \(|\gamma(T_{2})|=2\). We need to show the existence of two additional colors. Assume w.l.o.g. that \(\gamma(T_{2})=\{c_{6},c_{7}\}\) and that \(\gamma(e_{2})=c_{7}\).
\(\bullet\) Suppose that \(e_{2}=t_{2}^{2}t_{2}^{3}\). Then \(\gamma(t_{2}^{1}t_{2}^{2})=\gamma(t_{2}^{1}t_{2}^{3})=c_{6}\). Clearly, either \(\gamma(t_{1}^{2}t_{2}^{2})\) or \(\gamma(t_{1}^{3}t_{2}^{3})\) is the 8th color \(c_{8}\) and the other one must be \(c_{7}\), as otherwise we are done. If \(\gamma(t_{1}^{2}t_{2}^{2})=c_{7}\) and \(\gamma(t_{1}^{3}t_{2}^{3})=c_{8}\), then either \(a_{i}t_{1}^{1}\) or \(a_{j}t_{1}^{2}\) provides the 9th color, and if \(\gamma(t_{1}^{3}t_{2}^{3})=c_{7}\) and \(\gamma(t_{1}^{2}t_{2}^{2})=c_{8}\), then \(\gamma(t_{1}^{1}a_{i})\doteqdot\gamma(a_{i}a_{j})\), \(\gamma(t_{1}^{2}a_{j})\doteqdot c_{8}\) and \(\gamma(t_{1}^{3}t_{2}^{2})\doteqdot c_{7}\). These imply that \(\gamma(t_{2}^{3}a_{j})\) is the 9th color.
\(\bullet\) Suppose that \(e_{2}=t_{2}^{1}t_{2}^{2}\). Then \(\gamma(t_{2}^{1}t_{2}^{3})=\gamma
\(\bullet\) Suppose that \(e_{2}=t_{2}^{2}t_{2}^{3}\). Then \(\gamma(t_{2}^{1}t_{2}^{3})=\gamma(t_{2}^{2}t_{2}^{3})=c_{6}\). If \(\gamma(f_{1})=c_{7}\) and \(\gamma(f_{2})=c_{4}\), then either \(\gamma(u_{1}t_{2}^{2})\in\{c_{4},c_{7}\}\) or we are done. If \(\gamma(u_{1}t_{2}^{2})=c_{7}\) then either \(u_{1}a_{i}\) or \(t_{2}^{1}a_{j}\) must be colored with \(c_{9}\), and if \(\gamma(u_{1}t_{2}^{2})=c_{4}\) then \(\gamma(u_{1}a_{i})\stackrel{{\pm}}{{=}}\gamma(a_{i}a_{j})\) and either \(t_{2}^{2}a_{j}\) or \(t_{2}^{3}u_{2}\) is colored with \(c_{9}\). Suppose now that \(\gamma(f_{1})=c_{4}\) and \(\gamma(f_{2})=c_{7}\). Then either \(\gamma(u_{2}t_{2}^{1})\in\{c_{4},c_{7}\}\) or we are done. If \(\gamma(u_{2}t_{2}^{1})=c_{4}\) then either \(u_{1}a_{i}\) or \(t_{2}^{1}a_{j}\) must be colored with \(c_{9}\), and if \(\gamma(u_{2}t_{2}^{1})=c_{7}\), then \(\gamma(a_{i}t_{2}^{1})=c_{7}\), then \(\gamma(a_{i}t_{2}^{1})=c_{7}\), and so \(\gamma(u_{1}a_{i})=c_{9}\). \(\bullet\) Suppose that \(e_{2}=t_{2}^{2}t_{2}^{3}\). Then \(\gamma(t_{2}^{1}t_{2}^{2})=\gamma(t_{2}^{1}t_{2}^{3})=c_{6}\). If \(\gamma(f_{1})=c_{7}\) and \(\gamma(f_{2})=c_{4}\), then either \(u_{1}a_{i}\) or \(t_{2}^{3}a_{j}\) must be colored with \(c_{9}\). Suppose now that that \(\gamma(f_{1})=c_{4}\) and \(\gamma(f_{2})=c_{7}\). Then either \(\gamma(u_{2}t_{2}^{2})\in\{c_{4},c_{7}\}\) or we are done. If \(\gamma(u_{2}t_{2}^{2})=c_{4}\), then \(\gamma(u_{1}a_{i})\stackrel{{\pm}}{{=}}\gamma(a_{i}a_{j}),\gamma( t_{2}^{2}a_{j})\stackrel{{\pm}}{{=}}c_{6}\), and so \(\gamma(u_{1}t_{2}^{2})=c_{9}\). If \(\gamma(u_{2}t_{2}^{2})=c_{7}\), then \(\gamma(t_{2}^{3}a_{j})\stackrel{{\pm}}{{=}}\gamma(a_{i}a_{j}), \gamma(t_{2}^{2}a_{i})\stackrel{{\pm}}{{=}}c_{6}\), and so \(\gamma(u_{1}a_{i})=c_{9}\). \(\square\)
**Corollary 12**.: _Let \(\gamma\) be a coloring of \(D(X)\). If \(I\in\{A,B\}\) and \(|\gamma(I)|\geq 5\) then \(|\gamma(D(X))|\geq 14\)._
Proof.: Since \(I\) is a separable subset of \(X\), we have \(|\gamma(D(X))|\geq|\gamma(X\setminus I)|+|\gamma(I)|\geq 9+5\) by Lemma 11.
**Lemma 13**.: _Let \(A^{\prime}\subset A,B^{\prime}\subset B\) and \(T^{\prime}\subset(T_{1}\cup T_{2})\) be such that \(|A^{\prime}|=|B^{\prime}|=|T^{\prime}|=3\). If \(Q\) is a subset of \(A^{\prime}\cup T^{\prime}\cup B^{\prime}\) with \(|Q|\geq 3\), then \(\chi(D(Q))=|Q|-2\)._
Proof.: By Proposition 5\((i)\), it is enough to show the assertion for \(Q:=A^{\prime}\cup T^{\prime}\cup B^{\prime}\). By Proposition 3, all we need to show is that \(\chi(D(Q))\geq 7\). On the other hand, it is not hard to see that if \(T^{\prime}\) is concave up (respectively, concave down) then \(Q\sim A^{\prime}\cup T_{2}\cup B^{\prime}\) (respectively, \(Q\sim A^{\prime}\cup T_{1}\cup B^{\prime}\)). Then, w.l.o.g. we can assume that \(Q\) is either \(A^{\prime}\cup T_{1}\cup B^{\prime}\) or \(A^{\prime}\cup T_{2}\cup B^{\prime}\). We only analyze the case \(T^{\prime}=T_{1}\). The case \(T^{\prime}=T_{2}\) can be handled in a similar way.
Let \(\gamma\) be a coloring of \(D(Q)\). Clearly, each of \(A^{\prime},B^{\prime}\) and \(T_{1}\) is a separable subset of \(Q\). Since \(|\gamma(T_{1}\cup B^{\prime})|\geq 4\) by Lemma 11, we can assume \(1\leq|\gamma(A^{\prime})|\leq 2\), as otherwise we are done. Similarly, from \((A^{\prime}\cup B^{\prime})\sim C_{3,3}\sim(A^{\prime}\cup T_{1})\) and \(|\gamma(C_{3,3})|\geq 4\) (by Theorem 8), we can conclude that \(1\leq|\gamma(T_{1})|,|\gamma(B^{\prime})|\leq 2\). Let \(a_{i},a_{j},a_{k}\) (resp. \(b_{p},b_{q},b_{r}\)) be the points of \(A^{\prime}\) (respectively, \(B^{\prime}\)) with \(i<j<k\) (resp. \(p<q<r\)).
Case 1. Suppose that \(|\gamma(T_{1})|=1\).
(1.1) Suppose that \(|\gamma(A^{\prime})|=1\). Then, \(\gamma\) assigns distinct colors to \(a_{i}t_{1}^{1}\), \(a_{j}t_{1}^{2}\), and \(a_{k}t_{1}^{3}\) and hence \(|\gamma(A^{\prime}\cup T_{1})|\geq 5\). Since \(|\gamma(B^{\prime})|\geq 2\) implies the required inequality, we can assume that \(|\gamma(B^{\prime})|=1\). Then either \(a_{i}b_{p}\) or \(t_{1}^{1}b_{q}\) provides the 7th color.
(1.2) Suppose that \(|\gamma(A^{\prime})|=2\) and \(|\gamma(B^{\prime})|=1\). Let \(a^{\prime}\) be the segment of \(A^{\prime}\) that has a different color than the other two segments of \(A^{\prime}\). Since \(|\gamma(T_{1})|=1\), we need to show the existence of 3 additional colors. Clearly, \(c_{1}:=\gamma(b_{p}t_{1}^{1})\) and \(c_{2}:=\gamma(b_{r}t_{1}^{3})\) are 2 new colors. Then \(\gamma(t_{1}^{1}b_{q})\doteq c_{1}\) and so either \(l(a^{\prime})b_{p}\) or \(r(a^{\prime})t_{1}^{2}\) provides the 7th color.
(1.3) Suppose that \(|\gamma(A^{\prime})|=2\) and \(|\gamma(B^{\prime})|=2\). Let \(a^{\prime}\) (resp. \(b^{\prime}\)) be the segment of \(A^{\prime}\) (resp. \(B^{\prime}\)) that has a different color than the other two segments of \(A^{\prime}\) (resp. \(B^{\prime}\)). Since \(|\gamma(T_{1})|=1\), we need to show the existence of 2 additional colors, say \(c_{6}\) and \(c_{7}\). Clearly, one of \(l(b^{\prime})t_{1}^{1}\) or \(r(b^{\prime})t_{1}^{3}\) provides the 6th color \(c_{6}\) and the other one must be colored with \(\gamma(b^{\prime})\), as otherwise we are done. If \(\gamma(l(b^{\prime})t_{1}^{1})=c_{6}\) then either \(l(a^{\prime})t_{1}^{2}\) or \(r(a^{\prime})t_{1}^{3}\) is colored with \(c_{7}\). If \(\gamma(r(b^{\prime})t_{1}^{3})=c_{6}\) then either \(l(a^{\prime})t_{1}^{1}\) or \(r(a^{\prime})t_{1}^{2}\) is colored with \(c_{7}\).
Case 2. Suppose that \(|\gamma(T_{1})|=2\). Let \(e_{1}\) be as in (N3). Suppose that \(\gamma(T_{1})=\{c_{1},c_{3}\}\) and that \(\gamma(e_{1})=c_{1}\).
(2.1) Suppose that \(|\gamma(A^{\prime})|=1\). If \(|\gamma(B^{\prime})|=1\) then \(|\gamma(A^{\prime}\cup B^{\prime})|\geq 5\), and so \(|\gamma(Q)|\geq 7\), as required. Thus, we can assume that \(|\gamma(B^{\prime})|=2\). Let \(b
\(\bullet\) If \(\gamma(l(e_{1})a_{j})=c_{6}\) and \(\gamma(r(e_{1})a_{k})=c_{1}\), then \(\gamma(a_{j}r(b^{\prime}))\doteq c_{6}\) and \(\gamma(l(e_{1})a_{k})\doteq c_{1}\). If \(t_{1}^{3}\in e_{1}\), then \(t_{1}^{3}r(b^{\prime})\) provides the 7th color. If \(t_{1}^{3}\notin e_{1}\), then \(e_{1}=t_{1}^{1}t_{1}^{2}\) and \(\gamma(t_{1}^{1}r(b^{\prime}))\doteq c_{6}\), and either \(t_{1}^{2}r(b^{\prime})\) or \(t_{1}^{3}a_{k}\) provides the 7th color.
(2.2) Suppose that \(|\gamma(A^{\prime})|=2\) and \(|\gamma(B^{\prime})|=1\). Let \(a^{\prime}\) be the segment of \(A^{\prime}\) that has a different color than the other two segments of \(A^{\prime}\). We need to show the existence of 2 additional colors, say \(c_{6}\) and \(c_{7}\). Clearly, a segment in \(\{l(a^{\prime})b_{p},r(a^{\prime})b_{r}\}\) is colored with the 6th color \(c_{6}\) and the other one must be colored with \(\gamma(a^{\prime})\), as otherwise we are done. Let \([]\) be the quadrilateral formed by \(b_{p},b_{r},l(a^{\prime})\) and \(r(a^{\prime})\).
Suppose that \(\gamma(l(a^{\prime})b_{p})=\gamma(a^{\prime})\) and \(\gamma(r(a^{\prime})b_{r})=c_{6}\). Suppose first that \(T_{1}\) lies inside of \([]\).
\(\bullet\) If \(e_{1}=t_{1}^{1}t_{1}^{2}\), then \(\gamma(t_{1}^{1}b_{p})\doteq c_{1}\) and \(\gamma(r(a^{\prime})t_{1}^{2})\doteq c_{6}\). Then either \(b_{q}t_{1}^{2}\) or \(b_{r}t_{1}^{3}\) is colored with \(c_{7}\).
\(\bullet\) If \(e_{1}=t_{1}^{1}t_{1}^{3}\), then \(\gamma(t_{1}^{1}b_{q})\doteq c_{1}\). Then either \(b_{q}t_{1}^{1}\) or \(b_{q}t_{1}^{3}\) is colored with \(c_{7}\).
\(\bullet\) If \(e_{1}=t_{1}^{2}t_{1}^{3}\), then \(\gamma(t_{1}^{1}b_{q})\doteq c_{1},\gamma(r(a^{\prime})t_{1}^{2})\doteq c_{6}\) and \(\gamma(t_{1}^{1}b_{p})\doteq c_{3}\). Then either \(b_{q}t_{1}^{2}\) or \(b_{r}t_{1}^{3}\) is colored with \(c_{7}\).
Suppose now that \(T_{1}\) lies in the exterior of \([]\). Then \(T_{1}\) lies on the right of \([]\) and \(\gamma(r(a^{\prime})b_{q})\doteq c_{6}\).
\(\bullet\) If \(e_{1}=t_{1}^{1}t_{1}^{2}\), then \(\gamma(b_{r}t_{1}^{1})\doteq c_{1}\) and \(\gamma(b_{r}t_{1}^{3})\doteq c_{3}\). Then \(\gamma(b_{r}t_{1}^{2})\in\{c_{1},c_{3}\}\) or we are done. If \(\gamma(b_{r}t_{1}^{2})=c_{1}\), then either \(r(a^{\prime})t_{1}^{2}\) or \(b_{q}t_{1}^{4}\) must be colored with \(c_{7}\). If \(\gamma(b_{r}t_{1}^{2})=c_{3}\), then \(\gamma(r(a^{\prime})t_{1}^{2})\doteq c_{6}\), \(\gamma(r(a^{\prime})t_{1}^{3})\doteq c_{1}\) and so \(b_{p}t_{1}^{4}\) must be colored with \(c_{7}\).
\(\bullet\) If \(e_{1}=t_{1}^{1}t_{1}^{3}\), then \(\gamma(t_{1}^{1}b_{r})\doteq c_{1}\) and \(\gamma(t_{1}^{3}b_{r})\doteq c_{1}\). Then either \(b_{p}t_{1}^{1}\) or \(r(a^{\prime})t_{1}^{3}\) is colored with \(c_{7}\).
\(\bullet\) If \(e_{1}=t_{1}^{2}t_{1}^{3}\), then \(\gamma(t_{1}^{3}b_{r})\doteq c_{1}\), \(\gamma(r(a^{\prime})t_{1}^{2})\doteq c_{6},\gamma(t_{1}^{1}b_{p})\doteq c_{3}\) and \(\gamma(t_{1}^{2}b_{r})\doteq c_{1}\). Then either \(r(a^{\prime})t_{1}^{3}\) or \(b_{q}t_{1}^{2}\) is colored with \(c_{7}\).
Suppose now that \(\gamma(l(a^{\prime})b_{p})=c_{6}\) and \(\gamma(r(a^{\prime})b_{r})=\gamma(a^{\prime})\).
\(\bullet\) If \(e_{1}=t_{1}^{1}t_{1}^{2}\), then \(\gamma(t_{1}^{3}b_{r})\doteq c_{3},\gamma(t_{1}^{1}b_{q})\doteq c_{1},\gamma(t _{1}^{2}b_{q})\doteq c_{3}\) and \(\gamma(t_{1}^{3}r(a^{\prime}))\doteq\gamma(a^{\prime})\). These imply that either \(t_{1}^{2}(t^{\prime})\) or \(t_{1}^{1}b_{p}\) must be colored with \(c_{7}\).
\(\bullet\) If \(e_{1}=t_{1}^{1}t_{1}^{3}\), then either \(t_{1}^{1}b_{q}\) or \(t_{1}^{3}b_{r}\) is colored with \(c_{7}\).
\(\bullet\) If \(e_{1}=t_{1}^{2}t_{1}^{3}\), then \(\gamma(t_{1}^{3}b_{r})\doteq c_{1},\gamma(t_{1}^{1}b_{q})\doteq c_{3},\gamma(t _{1}^{2}b_{q})\doteq c_{3},\gamma(t_{1}^{2}b_{r})\doteq c_{1}\) and \(\gamma(t_{1}^{3}r(a^{\prime}))\doteq\gamma(a^{\prime})\). These imply that either \(t_{1}^{1}b_{p}\) or \(t_{1}^{2}l(a^{\prime})\) is colored with \(c_{7}\).
(2.3) Suppose that \(|\gamma(A^{\prime})|=2\) and \(|\gamma(B^{\prime})|=2\). Since \(|\gamma(T_{1})|=2\), we need to show the existence of 1 additional color. Let \(a^{\prime}\) (resp. \(b^{\prime}\)) be the segment of \(A^{\prime}\) (resp. \(B^{\prime}\)) that has a different color than the other two segments of \(A^{\prime}\) (resp. \(B^{\prime}\)). We note that \(\{\gamma(a^{\prime}),\gamma(b^{\prime})\}=\{\gamma(l(a^{\prime})l(b^{\prime})),\gamma(r(a^{\prime})r(b^{\prime}))\}\) as otherwise we are done.
Let \([]\) be the convex quadrilateral formed by the endpoints of \(a^{\prime}\) and \(b^{\prime}\), and let \(x\) (resp. \(y\)) be the endpoint of \(a^{\prime}\) (resp. \(b^{\prime}\)) that is incident with both colors \(\gamma(a^{\prime})\) and \(\gamma(b^{\prime})\). Then, either \(x=r(a^{\prime})\) or \(y=r(b^{\prime})\) holds. Suppose that \(T_{1}\) lies in the interior of \([]\). If \(t_{1}^{3}\in e_{1}\), then one of \(xl(e_{1})\) or \(yt_{1}^{3}\) provides the required color. If \(t_{1}^{3}\notin e_{1}\), then \(e_{1}=t_{1}^{1}t_{1}^{2}\) and one of \(xt_{1}^{2}\) or \(yt_{1}^{1}\) provides the required color. Thus, we can assume that \(T_{1}\) lies outside of \([]\). Let \(t_{1}^{m}
(1.1) Suppose that \(j=1\). Then \(|\gamma(T_{1})|=2\) and \(|\gamma(T_{2})|=1\). Note that if \(e=t_{1}^{1}t_{2}^{1}\), then one of \(\gamma(t_{2}^{3}a)\) or \(\gamma(t_{2}^{3}b)\) must be \(c_{6}\). Similarly, if \(e=t_{1}^{2}t_{2}^{2}\), then one of \(\gamma(t_{2}^{2}a)\) or \(\gamma(t_{2}^{3}b)\) must be \(c_{6}\). Thus we may assume that \(e=t_{1}^{3}t_{2}^{3}\). Then \(\gamma(t_{2}^{2}a)\doteq c_{0}\), \(\gamma(t_{2}^{3}b)\doteq\gamma(e)\), and either \(l(e_{1})t_{2}^{2}\) or \(r(e_{1})t_{2}^{2}\) must be colored with \(c_{6}\).
(1.2) Suppose that \(j=2\). Then \(|\gamma(T_{1})|=1\) and \(|\gamma(T_{2})|=2\). Note that if \(e=t_{1}^{2}t_{2}^{2}\), then one of \(\gamma(t_{1}^{3}a)\) or \(\gamma(t_{2}^{3}b)\) must be \(c_{6}\). Similarly, if \(e=t_{1}^{3}t_{2}^{3}\), then one of \(\gamma(t_{1}^{3}a)\) or \(\gamma(t_{2}^{3}b)\) must be \(c_{6}\). Thus we may assume that \(e=t_{1}^{1}t_{2}^{1}\). Then \(\gamma(t_{1}^{3}b)\doteq c_{0}\), \(\gamma(t_{1}^{3}a)\doteq\gamma(e)\), and so either \(l(e_{2})t_{1}^{2}\) or \(r(e_{2})t_{1}^{3}\) must be colored with \(c_{6}\).
(2) Suppose that \(|\gamma(T_{1})|=|\gamma(T_{2})|=2\). Suppose that \(\gamma(T_{1})=\{c_{1},c_{3}\},\gamma(T_{2})=\{c_{2},c_{4}\},\gamma(e_{1})=c_{1}\) and \(\gamma(e_{2})=c_{4}\). From \(|\gamma(T_{1}\cup T_{2})|=4\) it follows that there is a monochromatic triangle \(\Delta\) with vertices in the endpoints of \(e_{1}\) and \(e_{2}\). Then exactly one of \(e_{1}\in\Delta\) or \(e_{2}\in\Delta\) holds, and so \(\gamma(\Delta)\in\{c_{1},c_{4}\}\).
\(\bullet\) Suppose \(e_{1}\in\Delta\). Then \(\gamma(\Delta)=c_{1}\). If \(e_{1}=t_{1}^{1}t_{1}^{3}\), then either \(\gamma(t_{1}^{1}a)\) or \(\gamma(t_{1}^{3}b)\) must be \(c_{6}\). If \(e_{1}=t_{1}^{2}t_{1}^{3}\), then either \(\gamma(t_{1}^{2}a)\) or \(\gamma(t_{1}^{3}b)\) must be \(c_{6}\). Similarly, if \(e_{1}=t_{1}^{1}t_{1}^{2}\), then \(\gamma(t_{1}^{1}b)\doteq c_{0}\), \(\gamma(t_{1}^{1}a)\doteq c_{0}\), \(\gamma(t_{2}^{1}b)\doteq c_{3}\), and so \(\gamma(t_{1}^{3}a)=c_{6}\).
\(\bullet\) Suppose \(e_{2}\in\Delta\). Then \(\gamma(\Delta)=c_{4}\). If \(e_{2}=t_{2}^{1}t_{2}^{3}\), then either \(\gamma(t_{2}^{1}a)\) or \(\gamma(t_{2}^{3}b)\) must be \(c_{6}\). If \(e_{2}=t_{2}^{1}t_{2}^{2}\), then either \(\gamma(t_{2}^{1}a)\) or \(\gamma(t_{2}^{3}b)\) must be \(c_{6}\). Similarly, if \(e_{2}=t_{2}^{2}t_{2}^{3}\) then \(\gamma(t_{2}^{3}b)\doteq c_{0}\), \(\gamma(t_{2}^{3}a)\doteq c_{0}\), \(\gamma(t_{2}^{2}a)\doteq c_{2}\), and so \(\gamma(t_{2}^{1}b)=c_{6}\).
**Corollary 15**.: _Let \(\gamma\) be an optimal coloring of \(D(X)\). If there are \(a\in A\) and \(b\in B\) such that \(|\gamma(A\cup B\setminus\{a,b\})|\geq 8\) and \(\gamma(A\cup B\setminus\{a,b\})\cap\gamma(T_{1}\cup\{a,b\}\cup T_{2})=\emptyset\), then \(|\gamma(X)|\geq 14\)._
Proof.: From \(\gamma(A\cup B\setminus\{a,b\})\cap\gamma(T_{1}\cup\{a,b\}\cup T_{2})=\emptyset\) it follows that \(|\gamma(X)|\geq|\gamma(A\cup B\setminus\{a,b\})|+|\gamma(T_{1}\cup\{a,b\}\cup T _{2})|\geq 8+6\), due to Lemma 14.
**Lemma 16**.: _Let \(a\in A,b\in B\) and \(Q:=T_{1}\cup\{a,b\}\cup T_{2}\). If \(\gamma\) is a coloring of \(D(Q)\) such that \(\gamma(ab)\neq\gamma(g)\) for any vertex (segment) \(g\) of \(\mathcal{Q}\setminus\{ab\}\), then \(|\gamma(Q)|\geq 7\)._
Proof.: By rotating \(Q\) an angle \(\pi\) around the origin, if necessary, we may assume w.l.o.g that \(T_{1}\cup T_{2}\) lies on the right semiplane of the line spanned by \(ab\). Similarly, we may assume that \(\gamma(ab)=c_{0}\). By Corollary 9\((i)\) we know that \(|\gamma(T_{1}\cup T_{2})|\geq 4\). We proceed similarly as in the proof of Lemma 14.
(1) Suppose that \(|\gamma(T_{1})|=|\gamma(T_{2})|=1\). Assume w.l.o.g. that \(\gamma(T_{1})=c_{1}\) and \(\gamma(T_{2})=c_{2}\). Then \(\gamma(t_{1}^{1}t_{2}^{1})=c_{3},\gamma(t_{1}^{2}t_{2}^{2})=c_{4}\), \(\gamma(t_{1}^{3}t_{2}^{3})=c_{5}\), and so either \(\gamma(t_{1}^{1}b)\) or \(\gamma(t_{2}^{2}a)\) must be the required color.
(2) Suppose that \(|\gamma(T_{j})|=2\) and \(|\gamma(T_{3-j})|=1\) for some \(j\in\{1,2\}\). From \(|\gamma(T_{1}\cup T_{2})|\geq 4\) we know that there exists \(e\in\{t_{1}^{1}t_{2}^{1},t_{1}^{2}t_{2}^{2},t_{1}^{3}t_{2}^{3}\}\) such that \(\gamma(e)\notin\gamma(T_{1})\cup\gamma(T_{2})\).
(2.1) Suppose that \(j=1\). Let \(\gamma(T_{1})=\{c_{1},c_{3}\},\gamma(T_{2})=\{c_{2}\},\gamma(e_{1})=c_{1}\) and \(\gamma(e)=c_{4}\). If \(e=t_{1}^{1}t_{2}^{1}\), then \(\gamma(t_{2}^{3}a)\) and \(\gamma(t_{2}^{3}b)\) are the required colors, and if \(e=t_{1}^{2}t_{2}^{2}\) then \(\gamma(t_{2}^{2}a)\) and \(\gamma(t_{2}^{3}b)\) are the required colors.
Thus we can assume that \(e=t_{1}^{3}t_{2}^{3}\). Then \(\gamma(t_{2}^{2}a)=c_{5}\) and \(\gamma(t_{2}^{2}b)\doteq c_{4}\). If \(e_{1}=t_{1}^{1}t_{1}^{2}\) then \(\gamma(t_{1}^{1}b)\doteq c_{1}\) and so \(\gamma(t_{1}^{2}t_{2}^{2})\) is the required color. Then we can assume that \(e_{1}=t_{1}^{l}t_{1}^{3}\) for some \(l\in\{1,2\}\). Then \(\gamma(t_{1}^{1}t_{2}^{2})\doteq c_{1}\), \(\gamma(t_{1}^{3}t_{2}^{2})\doteq c_{1}\), \(\gamma(t_{1}^{3})\doteq c_{4}\) and \(\gamma(at_{2}^{3})\doteq c_{5}\). These imply that \(\gamma(t_{1}^{1}t_{2}^{1})\) is the required color.
(2.2) Suppose that \(j=2\). Let \(\gamma
(3.3) Suppose that \(e_{1}=t_{1}^{2}t_{1}^{3}\). Then either \(\gamma(at_{1}^{2})=c_{1}\) or \(\gamma(bt_{1}^{3})=c_{1}\) holds, as otherwise we are done.
\(\bullet\) Suppose that \(\gamma(at_{1}^{2})=c_{1}\). Then \(\gamma(bt_{1}^{3})=c_{5}\). Note that if \(e_{2}=t_{2}^{2}t_{2}^{l}\) for some \(l\in\{2,3\}\), then \(\gamma(at_{1}^{2})\doteq c_{2}\), \(\gamma(bt_{2}^{l})\doteq c_{5}\), \(\gamma(at_{1}^{3})\doteq c_{1}\), \(\gamma(at_{1}^{1})\doteq c_{3}\), \(\gamma(bt_{1}^{2})\doteq c_{5}\) and so \(\gamma(t_{1}^{3}t_{1}^{2})=c_{6}\). Similarly, if \(e_{2}=t_{2}^{2}t_{2}^{3}\) then \(\gamma(at_{2}^{3})\doteq c_{2}\), \(\gamma(bt_{2}^{2})\doteq c_{5}\), \(\gamma(at_{1}^{3})\doteq c_{1}\), \(\gamma(at_{1}^{1})\doteq c_{3}\), \(\gamma(bt_{1}^{2})\doteq c_{5}\), and so \(\gamma(t_{1}^{3}t_{1}^{2})=c_{6}\).
\(\bullet\) Suppose that \(\gamma(bt_{1}^{3})=c_{1}\). Then \(\gamma(at_{1}^{2})=c_{5}\). Note that if \(e_{2}=t_{2}^{2}t_{2}^{3}\) for some \(l\in\{1,2\}\), then \(\gamma(bt_{2}^{3})\doteq c_{2}\), \(\gamma(bt_{2}^{3})\doteq c_{2}\) and \(\gamma(at_{2}^{3})\doteq c_{5}\). Since these imply \(\gamma(t_{1}^{2}t_{2}^{2})=c_{6}\), we can assume \(e_{2}=t_{2}^{2}t_{2}^{2}\). Then \(\gamma(bt_{2}^{2})\doteq c_{2}\), \(\gamma(bt_{2}^{3})\doteq c_{4}\), \(\gamma(t_{1}^{2}t_{2}^{2})\doteq c_{5}\), and so \(\gamma(at_{2}^{2})=c_{6}\).
**Lemma 17**.: _Let \(a\in A,b\in B\) and let \(T_{j}\) be the triangle in \(\{T_{1},T_{2}\}\) that is closest to \(ab\). Let \(Q:=T_{j}\cup\{a,b,t\}\) with \(t\in T_{3-j}\) and let \(\gamma\) be a coloring of \(D(Q)\). If \(\gamma(ab)\neq\gamma(xt)\) for some fixed \(x\in\{a,b\}\) and \(\gamma(\ell)\notin\{\gamma(ab),\gamma(xt)\}\) for any \(\ell\in\mathcal{Q}\setminus\{ab,xt\}\), then \(|\gamma(Q)|\geq 6\)._
Proof.: By rotating \(Q\) an angle \(\pi\) around the origin and relabeling the points of \(Q\), if necessary, we may assume w.l.o.g. that \(T_{j}\cup\{t\}\) lies on the right semiplane of the line spanned by \(ab\). Then \(T_{j}=T_{1}\) and \(t\in T_{2}\). Let \(x\in\{a,b\}\) be as in the statement of lemma, let \(y\) be such that \(\{x,y\}=\{a,b\}\), and let \(\Delta:=\Delta(a,b,t)\). For brevity, let \(c_{0}=\gamma(ab)\), \(c_{1}=\gamma(xt)\) and \(c_{2}=\gamma(yt)\). Then \(\gamma(\Delta)=\{c_{0},c_{1},c_{2}\}\). Since \(|\gamma(Q)|\geq|\gamma(T_{1})|+|\gamma(\Delta)|\), we can assume \(|\gamma(T_{1})|\in\{1,2\}\), as otherwise we are done.
(1) Suppose that \(\gamma(T_{1})=\{c_{4}\}\). Then we can always connect the corners of \(T_{1}\) with the corners of \(\Delta\) (without creating any intersections) by means of three pairwise disjoint segments \(s_{1},s_{2}\) and \(s_{3}\). Since \(\gamma(s_{1}),\gamma(s_{2}),\gamma(s_{3}),c_{0},c_{1},c_{4}\) are pairwise distinct, we are done.
(2) Suppose now that \(\gamma(T_{1})=\{c_{4},c_{5}\}\). Let \(\gamma(e_{1})=c_{4}\). Clearly, \(c_{0},c_{1},c_{2},c_{4}\) and \(c_{5}\) are pairwise distinct.
\(\bullet\) Suppose that \(e_{1}=t_{1}^{1}t_{1}^{3}\). Then \(\{c_{2},c_{4}\}=\{\gamma(t_{1}^{3}y),\gamma(t_{1}^{3}t)\}\) or the 6th color is provided by some of \(t_{1}^{3}y\) or \(t_{1}^{3}t\). That equality implies that some of \(t_{1}^{1}x\) or \(t_{1}^{3}x\) provides the 6th color.
\(\bullet\) Suppose that \(e_{1}=t_{1}^{1}t_{1}^{2}\). If \(a=x\), then \(\gamma(t_{1}^{1}x)\doteq c_{4}\), \(\gamma(t_{1}^{2}x)\doteq c_{4}\) and either \(\gamma(t_{1}^{1}y)\) or \(\gamma(t_{1}^{2}t)\) is the 6th color. Then \(b=x\) and so \(\gamma(t_{1}^{1}x)\doteq c_{4}\), \(\gamma(t_{1}^{2}t)\doteq c_{2}\), \(\gamma(t_{1}^{3}y)\doteq c_{4}\), \(\gamma(t_{1}^{2}y)\doteq c_{2}\), and either \(\gamma(t_{1}^{2}x)\) or \(\gamma(t_{1}^{3}t)\) is the 6th color.
\(\bullet\) Suppose that \(e_{1}=t_{1}^{2}t_{1}^{3}\). If \(a=x\), then \(\gamma(t_{1}^{2}x)\doteq c_{4}\), \(\gamma(t_{1}^{3}x)\doteq c_{4}\), \(\gamma(t_{1}^{3}y)\doteq c_{2}\), \(\gamma(t_{1}^{3}t)\doteq c_{2}\), and either \(\gamma(t_{1}^{1}x)\) or \(\gamma(t_{1}^{3}y)\) is the 6th color. Then \(b=x\) and so \(\gamma(t_{1}^{3}x)\doteq c_{4}\), \(\gamma(t_{1}^{2}y)\doteq c_{2}\), \(\gamma(t_{1}^{2}t)\doteq c_{2}\) and \(\gamma(t_{1}^{3}t)\doteq c_{4}\). These imply that either \(\gamma(t_{1}^{2}x)\) or \(\gamma(t_{1}^{1}y)\) is the 6th color.
**Lemma 18**.: _Let \(a\in A,b\in B\) be such that \(T_{1}\) lies on the right semiplane of the line spanned by \(ab\). Let \(Q:=T_{1}\cup\{a,b,t_{2}^{i},t_{2}^{j}\}\) with \(1\leq i<j\leq 3\), and let \(\Delta^{\prime}:=\Delta(a,t_{2}^{i},t_{2}^{j})\). If \(\gamma\) is a coloring of \(D(Q)\), \(|\gamma(\Delta^{\prime})|=1\), \(\gamma(ab)\neq\gamma(bt_{2}^{j})\) and \(\gamma(\ell)\notin\{\gamma(ab),\gamma(bt_{2}^{j})\}\) for any \(\ell\in\mathcal{Q}\setminus\{ab,bt_{2}^{j}\}\), then \(|\gamma(Q)|\geq 7\)._
Proof.: For brevity, let \(c_{0}=\gamma(ab)\), \(c_{1}=\gamma(bt_{2}^{j})\), \(c_{2}=\gamma(at_{2}^{j})\) and \(\Delta=\overline{Q}\). Then \(\gamma(\Delta)=\{c_{0},c_{1},c_{2}\}\). Let \(c_{3}=\gamma(at_{1}^{2})\), \(c_{4}=\gamma(bt_{1}^{1})\) and \(c_{5}=\gamma(t_{1}^{3}t_{2}^{j})\). Clearly, \(c_{0},c_{1},\ldots,c_{5}\) are pairwise distinct. We need to show the existence of one additional color. If \(|\gamma(T_{1})|=1\), then \(\gamma(T_{1})\notin\{c_{0},c_{1},\ldots,c_{5}\}\) is the required color. Then we may assume that \(|\gamma(T_{1})|\geq 2\) and \(\gamma(T_{1})\subset\{c_{3},c_{4},c_{5}\}\), as otherwise we are done. We also note that if \(|\gamma(T_{1})|=3\), then \(\gamma(T_{1})=\{c_{3},c_{4},c_{5}\}\) and \(bt_{1}^{i}\) provides the required color. Then we can assume that \(|\gamma(T_{1})|=2\). If \(\gamma(T_{1})=\{c_{3},c_{4}\}\), then \(\gamma(bt_{2}^{i})\doteq c_{5}\), \(\gamma(at_{1
\(\bullet\) Suppose that \(|\gamma(\Delta)|=1\). Assume first that \(\Delta\) has a vertex in \(a\in A\) and the other two in \(b_{p},b_{q}\in B\) with \(p<q\). Since \(T_{1}\cup T_{2}\) lies in the interior of \(\Delta\), then \(p\in\{1,2,3\}\) and \(q\in\{4,5\}\). By applying Lemma 16 to \(T_{1}\cup\{a,b_{p}\}\cup T_{2}\) we obtain \(|\gamma(Q)|\geq 7\), as desired. The case in which \(\Delta\) has an endpoint in \(b\in B\) and the other two in \(A\) can be handled in a similar way.
\(\bullet\) Suppose that \(|\gamma(\Delta)|=2\). Then two sides, say \(\ell_{1}\) and \(\ell_{2}\), of \(\Delta\) receive the same color and they form a star of \(\gamma(Q)\). We can assume that \(\ell\notin\{\ell_{1},\ell_{2}\}\), as otherwise \(\ell\) is in a star of \(\gamma(Q)\), as claimed. Let \(v\) be the common endpoint of \(\ell_{1}\) and \(\ell_{2}\). Since \(\ell_{1}\) and \(\ell_{2}\) are clean in \(\mathcal{Q}\), the chromatic class \(\gamma(\ell_{1})=\gamma(\ell_{2})\) is a star of \(\gamma(Q)\) with apex in \(v\), and hence \(|\gamma(Q\setminus\{v\})|=|\gamma(Q)|-1\) by Proposition 5\((iii)\). Since \(\ell\in A*B\), then \(|\gamma(Q\setminus\{v\})|\geq 6\) by Lemma 14 and so \(|\gamma(Q)|\geq 7\) as required.
**Lemma 20**.: _Let \(\Delta_{0}\) be a triangle with vertices in \(A\cup B\), and let \(Q\subseteq T_{1}\cup\Delta_{0}\cup T_{2}\) with \(|Q|\geq 3\). If \(\gamma\) is an optimal coloring of \(D(Q)\), then \(|\gamma(Q)|\geq|Q|-2\)._
Proof.: By Proposition 5\((i)\), it is enough to verify the case in which \(Q:=T_{1}\cup\Delta_{0}\cup T_{2}\). Then, we need to show that \(|\gamma(Q)|\geq 7\). Since \(|\gamma(Q)|\geq|\gamma(T_{1}\cup T_{2})|+|\gamma(\Delta_{0})|\geq 4+|\gamma( \Delta_{0})|\), we can assume \(|\gamma(\Delta_{0})|\in\{1,2\}\).
Suppose first that the \(3\) points of \(\Delta_{0}\) are in \(A\). Since \(|\gamma(Q)|\leq 6\) would yield a coloring of \(D(T_{1}\cup A\cup T_{2})\) with at most \(|\gamma(Q)|+2\leq 8\) colors, and this contradicts Lemma 11, we conclude that \(|\gamma(Q)|\geq 7\). An analogous reasoning shows that \(|\gamma(Q)|\geq 7\) if \(\Delta_{0}\subset B\). Then \(\Delta_{0}\) has at least one vertex in each of \(A\) and \(B\). Let \(\ell,\ell^{\prime},\ell^{\prime\prime}\) be the sides of \(\Delta_{0}\), and assume w.l.o.g. that \(\ell,\ell^{\prime}\in A*B\). Then \(\ell^{\prime\prime}\) has both endpoints in either \(A\) or \(B\).
Case 1. Suppose that \(T_{1}\cup T_{2}\) lies in the interior of \(\Delta_{0}\). By Proposition 19 we can assume that \(\ell=ab\) belongs to a star with apex \(v\in\{a,b\}\), as otherwise we are done. If \(v\in\ell^{\prime\prime}\) then Lemma 14 implies \(|\gamma(Q\setminus\{v\})|\geq 6\) and so \(|\gamma(Q)|\geq 7\). Similarly, if \(v\notin\ell^{\prime\prime}\) then Lemma 11 and Proposition 5\((i)\) imply that \(|\gamma(Q\setminus\{v\})|\geq 6\), and so \(|\gamma(Q)|\geq 7\).
Case 2. Suppose that \(T_{1}\cup T_{2}\) lies in the exterior of \(\Delta_{0}\). Assume w.l.o.g. that \(\ell=ab\) (with \(a\in A\) and \(b\in B\)) is closer to \(O=(0,0)\) than \(\ell^{\prime}\). Then, \(\ell^{\prime}\) and \(\ell^{\prime\prime}\) are clean in \(\mathcal{Q}\).
**Claim 1**.: _If \(\gamma(Q)\) has a star with apex in a corner of \(\Delta_{0}\), then \(|\gamma(Q)|\geq 7\)._
Proof of Claim 1.: Suppose that \(v\) is a corner of \(\Delta_{0}\) that is apex of a star of \(\gamma(Q)\). If such \(v\in\ell^{\prime\prime}\), then \(|\gamma(Q\setminus\{v\})|\geq 6\) by Lemma 14, and so \(|\gamma(Q)|\geq 7\). Similarly, if \(v\notin\ell^{\prime\prime}\) then \(|\gamma(Q\setminus\{v\})|\geq 6\) by Lemma 11 and Proposition 5\((i)\), and hence \(|\gamma(Q)|\geq 7\).
(2.1) Suppose that \(|\gamma(\Delta_{0})|=1\). Then \(\ell,\ell^{\prime},\ell^{\prime\prime}\) receive the same color, and so \(\gamma\) and \(T_{1}\cup\{a,b\}\cup T_{2}\) satisfy the hypotheses of Lemma 16. Therefore, \(|\gamma(Q)|\geq 7\).
(2.2) Suppose that \(|\gamma(\Delta_{0})|=2\). Then, two sides of \(\Delta_{0}\) receive the same color, say \(c_{2}\). Suppose that \(\gamma(\Delta_{0})=\{c_{1},c_{2}\}\). By Corollary 9\((i)\), \(|\gamma(T_{1}\cup T_{2})|\geq 4\). Since there is nothing to prove if \(|\gamma(T_{1}\cup T_{2})|\geq 5\), we can assume that \(\gamma(T_{1}\cup T_{2})=\{c_{3},c_{4},c_{5},c_{6}\}\).
(2.2.1) Suppose that \(\gamma(\ell)=c_{1}\). Since \(\ell^{\prime}\) and \(\ell^{\prime\prime}\) are clean in \(\mathcal{Q}\) and \(\gamma(\ell^{\prime})=c_{2}=\gamma(\ell^{\prime\prime})\), then \(c_{2}\) must be a star of \(\gamma(Q)\) with apex \(v=\ell^{\prime}\cap\ell^{\prime\prime}\) and we are done by Claim 1.
(2.2.2) Suppose that \(\gamma(\ell)=c_{2}\).
**Claim 2**.: _We can assume that \(\gamma(\ell^{\prime})=c_{1}\) and \(\gamma(\ell)=\gamma(\ell^{\prime\prime})=c_{2}\)._
Proof of Claim 2.: Suppose that \(\gamma(\ell)\neq\gamma(\ell^{\prime\prime})\). From \(\gamma(\ell)=c_{2}\) it follows that \(\gamma(\ell^{\prime\prime})=c_{1}\), and so \(\gamma(\ell^{\prime})=c_{2}\). We recall that \(\ell^{\prime\prime}\) has its endpoints in either \(A\) or \(B\), and that it is clean in \(\mathcal{Q}\). It is easy to see that each segment colored with \(c_{1}\) intersects \(\ell\), and so we can recolor \(\ell\) with \(c_{1}\) without affecting the essential properties of \(\gamma(Q)\) (with the roles of \(c_{1}\) and \(c_{2}\) in \(\Delta_{0}\) interchanged).
**Claim 3**.: _If \(\gamma(Q)\) has three distinct stars with apices in \(T_{1}\cup T_{2}\), then Lemma 20 holds._
Proof of Claim 3.: Let \(u_{1},u_{2},u_{3}\in T_{1}\cup T_{2}\) be the apices of such stars of \(\gamma(Q)\). Then \(|\gamma(Q\setminus\{u_{1},u_{2},u_{3}\})|\geq 4\) by Lemma 13, and hence \(|\gamma(Q)|\geq 7\).
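Here each of the three stars accounts for exactly one color when its apex is removed (Proposition 5\((iii)\)), so, spelled out,
\[|\gamma(Q)|\ \geq\ |\gamma(Q\setminus\{u_{1},u_{2},u_{3}\})|+3\ \geq\ 4+3\ =\ 7.\]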
Let \(\ell^{\prime}\) be as above. Let us define \(U_{\ell^{\prime}}\subset T_{1}\cup T_{2}\) as in Definition 1. From the choice of \(\ell^{\prime}\) it is not hard to see that each point of \(U_{\ell^{\prime}}\) is the apex of a proper star of \(\gamma(Q)\). By Claim 3, we may assume that \(|U_{\ell^{\prime}}|\leq 2\). Let \(\{v^{\prime}_{0},\ldots,v^{\prime}_{s}\},\Delta_{\ell^{\prime}},\ell^{\prime}_{1},\ell^{\prime}_{2}\) and \(\Delta^{\prime}_{1},\Delta^{\prime}_{2}\) be the objects corresponding to \(\ell^{\prime}\) described in Definition 1. Since \(|U_{\ell^{\prime}}|\leq 2\), all these are well-defined. Let \(Q^{\prime}=Q\setminus U_{\ell^{\prime}}\). By Proposition 5\((iii)\), it is enough to show \(|\gamma(Q^{\prime})|\geq 7-|U_{\ell^{\prime}}|\). We note that \(\Delta_{\ell^{\prime}}\) is a separable subset of \(Q^{\prime}\) with respect to \(\gamma\). Then \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\Delta_{\ell^{\prime}})|+|\gamma(\Delta_{\ell^{\prime}})|\) by Proposition 5\((ii)\).
**Claim 4**.: \(\gamma(\Delta_{\ell^{\prime}})=\{c_{1}\}\) _or Lemma 20 holds._
Proof of Claim 4.: By Claim 2, \(\gamma(\ell^{\prime})=c_{1}\), and so \(c_{1}\in\gamma(\Delta_{\ell^{\prime}})\). Since \(Q^{\prime}\setminus\Delta_{\ell^{\prime}}\subset T_{1}\cup I\cup T_{2}\) for some \(I\in\{A,B\}\), Lemma 11 implies \(|\gamma(Q^{\prime}\setminus\Delta_{\ell^{\prime}})|\geq|Q^{\prime}\setminus\Delta_{\ell^{\prime}}|-2\), and so \(|\gamma(Q^{\prime})|\geq|Q^{\prime}\setminus\Delta_{\ell^{\prime}}|-2+|\gamma(\Delta_{\ell^{\prime}})|\). We note that \(|Q^{\prime}\setminus\Delta_{\ell^{\prime}}|=9-|U_{\ell^{\prime}}|-|\Delta_{\ell^{\prime}}|\).
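Spelled out, these two observations combine into the single estimate
\[|\gamma(Q^{\prime})|\ \geq\ |Q^{\prime}\setminus\Delta_{\ell^{\prime}}|-2+|\gamma(\Delta_{\ell^{\prime}})|\ =\ \bigl(9-|U_{\ell^{\prime}}|-|\Delta_{\ell^{\prime}}|\bigr)-2+|\gamma(\Delta_{\ell^{\prime}})|,\]
which is the inequality applied in the next paragraph.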
If \(|\gamma(\Delta_{\ell^{\prime}})|=|\Delta_{\ell^{\prime}}|=3\), then \(|\gamma(Q^{\prime})|\geq(9-|U_{\ell^{\prime}}|-|\Delta_{\ell^{\prime}}|)-2+| \Delta_{\ell^{\prime}}|=7-|U_{\ell^{\prime}}|\), as required. Then we can assume that \(|\gamma(\Delta_{\ell^{\prime}})|\in\{1,2\}\). Since \(|\gamma(\Delta_{\ell^{\prime}})|=1\) implies \(\gamma(\Delta_{\ell^{\prime}})=\{c_{1}\}\), as claimed, we must have \(|\gamma(\Delta_{\ell^{\prime}})|=2\). From \(v_{0}^{\prime}\notin U_{\ell^{\prime}}\) we know that \(\gamma(\ell^{\prime}_{1})\neq\gamma(\ell^{\prime}_{2})\). Since \(|\gamma(\Delta_{\ell^{\prime}})|=2\), we must have \(c_{1}\in\{\gamma(\ell^{\prime}_{1}),\gamma(\ell^{\prime}_{2})\}\). In any case, \(c_{1}\) is a star with apex in \(\Delta_{0}\). This last and Claim 1 imply \(|\gamma(Q)|\geq 7\). \(\triangle\)
**Claim 5**.: _Let \(f\) and \(g\) be the segments that join \(v_{s}^{\prime}\) with the endpoints of \(\ell^{\prime\prime}\). Then \(\gamma(f)=c_{2}=\gamma(g)\) or Lemma 20 holds._
Proof of Claim 5.: Let \([]\) be the convex quadrilateral formed by the \(3\) points of \(\Delta_{0}\) together with \(v_{s}^{\prime}\). Then either \(f\) or \(g\) is a side of \([]\) and the other one is a diagonal of \([]\). Suppose that \(f\) (resp. \(g\)) is a side (resp. diagonal) of \([]\). We start by showing that \(|\gamma([])|\in\{2,4\}\) implies \(|\gamma(Q^{\prime})|\geq 7-|U_{\ell^{\prime}}|\), as required.
Suppose that \(|\gamma([])|=2\). Then \(\gamma([])=\{c_{1},c_{2}\}\) and \(\gamma(f)=c_{2}\). Then \(\gamma(g)\neq c_{2}\) implies that \(c_{2}\) is a star of \(\gamma(Q)\) with apex in a corner \(v\) of \(\Delta_{0}\). From this and Claim 1 we have that \(|\gamma(Q)|\geq 7\) and so \(|\gamma(Q^{\prime})|\geq 7-|U_{\ell^{\prime}}|\).
If \(|\gamma([])|=4\) then \([]\) is a separable subset of \(Q^{\prime}\), and Lemma 11 implies \(|\gamma(Q^{\prime}\setminus[])|\geq 9-|U_{\ell^{\prime}}|-|[]|-2\), and hence \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus[])|+|\gamma([])|\geq(9-|U_{\ell^{\prime}}|-|[]|-2)+|[]|=7-|U_{\ell^{\prime}}|\). Thus we can assume that \(\gamma([])=\{c_{1},c_{2},c_{3}\}\) for some \(c_{3}\in\gamma(Q^{\prime})\setminus\{c_{1},c_{2}\}\). From the definition of \(f\) it is not hard to see that \(\ell^{\prime},\ell^{\prime\prime},f\) are consecutive in \([]\) and appear in this cyclic order. Then \(\ell^{\prime}\cap f=\emptyset\) and \(\gamma(\ell^{\prime})=c_{1}\) imply \(\gamma(f)\neq c_{1}\).
Let \(h\) be the side of \([]\) that joins \(v_{s}^{\prime}\) with \(v=\ell\cap\ell^{\prime}\). From Claim 4 and \(v_{0}^{\prime}\neq v_{s}^{\prime}\) it follows that \(\gamma(h)\neq c_{1}\). Similarly, \(h\cap\ell^{\prime\prime}=\emptyset\) and \(\gamma(\ell^{\prime\prime})=c_{2}\) imply \(\gamma(h)\neq c_{2}\). These and \(\gamma([])=\{c_{1},c_{2},c_{3}\}\) imply \(\gamma(h)=c_{3}\). From \(v_{s}^{\prime}\notin U_{\ell^{\prime}}\) we have \(\gamma(f)\neq c_{3}\) and so \(\gamma(f)=c_{2}\). We note that if \(\gamma(g)\neq c_{2}\), then \(c_{2}\) must be a star of \(\gamma(Q)\) with apex in \(\Delta_{0}\), and so we are done by Claim 1. \(\triangle\)
We are ready to complete the proof of (2.2.2) and so the proof of Lemma 20. Let \(a^{\prime}\in A\) and \(b^{\prime}\in B\) be such that \(\ell^{\prime}=a^{\prime}b^{\prime}\). We recall that \(\Delta_{l}^{\prime}=\Delta(\ell^{\prime}_{l},v^{\prime}_{l})\) for \(l\in\{1,2\}\). From \(v_{1}^{\prime}\notin U_{\ell^{\prime}}\) we know that \(\gamma(v_{1}^{\prime}a^{\prime})\neq\gamma(v_{1}^{\prime}b^{\prime})\) and so \(|\gamma(\Delta_{l}^{\prime})|=3=|\Delta_{l}^{\prime}|\) for some \(l\in\{1,2\}\).
Let \(f,g\) and \([]\) be as in the proof of Claim 5. Then \(f\) is the side of \([]\) that joins \(v_{s}^{\prime}\) with \(\ell^{\prime\prime}\) and \(\gamma(f)=c_{2}=\gamma(g)\). Since \(f\) is disjoint from any segment in \(\Delta_{l}^{\prime}\), then \(c_{2}\notin\gamma(\Delta_{l}^{\prime})\). From this and the fact that \(\Delta_{l}^{\prime}\) is a separable subset of \(Q^{\prime}\) we know that \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\Delta_{l}^{\prime})|+| \gamma(\Delta_{l}^{\prime})|\).
Let \(v=\ell\cap\ell^{\prime}\). If \(v\in\Delta_{l}^{\prime}\), then \(Q^{\prime}\setminus\Delta_{l}^{\prime}\subset T_{1}\cup I\cup T_{2}\) for some \(I\in\{A,B\}\), and Lemma 11 implies \(|\gamma(Q^{\prime}\setminus\Delta_{l}^{\prime})|\geq|Q^{\prime}\setminus\Delta_{l}^{\prime}|-2\). If \(v\notin\Delta_{l}^{\prime}\), then \(Q^{\prime}\setminus\Delta_{l}^{\prime}\subset T_{1}\cup\{a,b\}\cup T_{2}\) with \(\ell=ab\), and Lemma 16 and Proposition 5\((ii)\) imply \(|\gamma(Q^{\prime}\setminus\Delta_{l}^{\prime})|\geq|Q^{\prime}\setminus\Delta_{l}^{\prime}|-2\). In any case, we have \(|\gamma(Q^{\prime})|\geq|Q^{\prime}\setminus\Delta_{l}^{\prime}|-2+|\gamma(\Delta_{l}^{\prime})|\). Since \(|Q^{\prime}\setminus\Delta_{l}^{\prime}|=9-|U_{\ell^{\prime}}|-|\Delta_{l}^{\prime}|\), we have \(|\gamma(Q^{\prime})|\geq(9-|U_{\ell^{\prime}}|-|\Delta_{l}^{\prime}|)-2+|\gamma(\Delta_{l}^{\prime})|=7-|U_{\ell^{\prime}}|\), as required.
**Lemma 21**.: _Let \(\Delta_{U}\) be a triangle with corners in \(U\in\{A,B\}\), \(y\in(A\cup B)\setminus U\) and \(Q\subseteq T_{1}\cup\Delta_{U}\cup\{y\}\cup T_{2}\). If \(\gamma\) is an optimal coloring of \(D(Q)\), then \(|\gamma(Q)|\geq|Q|-2\)._
Proof.: By Proposition 5\((i)\), it is enough to verify the case in which \(Q=T_{1}\cup\Delta_{U}\cup\{y\}\cup T_{2}\). Then, we need to show that \(|\gamma(Q)|\geq 8\). We can assume that \(\gamma\) does not have a star with apex in a point \(v\in\Delta_{U}\cup\{y\}\). Indeed, if such a \(v\) exists, the required inequality follows by applying Lemma 20 and Proposition 5\((iii)\) to \(Q\setminus\{v\}\). Let \(u_{i},u_{j},u_{k}\) be the corners of \(\Delta_{U}\) and suppose that \(i<j<k\).
Let \(\Delta:=\Delta(u_{i},u_{k},y)\). Then \(u_{j}\) lies in the interior of \(\Delta\). Note that \(\
For \(\ell=u_{i}y\), let \(U_{\ell}\) be as in Definition 1. From the assertions in the previous paragraph it is not hard to see that each point of \(U_{\ell}\) is the apex of a proper star of \(\gamma(Q)\). By using the argument in the proof of Claim 3, we can assume that \(|U_{\ell}|\leq 2\). Let \(\{v_{0},\ldots,v_{s}\},\Delta_{\ell},\ell_{1},\ell_{2}\) and \(\Delta_{1},\Delta_{2}\) be as in Definition 1. Since \(|U_{\ell}|\leq 2\), all these are well-defined. Let \(Q^{\prime}=Q\setminus U_{\ell}\). By Proposition 5\((iii)\), it is enough to show that \(|\gamma(Q^{\prime})|\geq 8-|U_{\ell}|\).
From the definition of \(U_{\ell}\) we know that \(\Delta^{\prime}\subset Q^{\prime}\). Suppose first that \(\Delta^{\prime}=\Delta_{\ell}\), and let \([]\) be the convex quadrilateral formed by the \(3\) points of \(\Delta\) together with \(v_{s}\). Then \([]=\overline{Q^{\prime}}\). From \(v_{s}\notin U_{\ell}\) and \(\gamma(\Delta_{\ell})=\{c_{1}\}\) it follows that \(|\gamma([])|=4\). Then \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus[])|+4\). On the other hand, Lemma 11 implies \(|\gamma(Q^{\prime}\setminus[])|\geq(10-|U_{\ell}|)-4-2=4-|U_{\ell}|\), and hence \(|\gamma(Q^{\prime})|\geq 8-|U_{\ell}|\). Suppose finally that \(\Delta^{\prime}\neq\Delta_{\ell}\). From \(v\notin U_{\ell}\) it follows that \(\gamma(\ell_{1})\neq\gamma(\ell_{2})\), and so \(|\gamma(\Delta_{\ell})|=3\). Since \(\Delta_{\ell}\) is a separable subset of \(Q^{\prime}\), then \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\Delta_{\ell})|+3\). By Lemma 11 we know that \(|\gamma(Q^{\prime}\setminus\Delta_{\ell})|\geq|Q^{\prime}\setminus\Delta_{\ell}|-2=5-|U_{\ell}|\), and hence \(|\gamma(Q^{\prime})|\geq 8-|U_{\ell}|\), as required.
**Proposition 22**.: _Let \([]\) be a convex quadrilateral with corners in \(A\cup B\), let \(\ell\) be a side of \([]\) such that \(\ell\in A*B\), and let \(Q\subseteq T_{1}\cup[]\cup T_{2}\) with \(|Q|\geq 3\). If \(\gamma\) is an optimal coloring of \(D(Q)\) and \([]\) contains \(T_{1}\cup T_{2}\) in its interior, then \(\ell\) belongs to a star of \(\gamma(Q)\) or \(|\gamma(Q)|\geq|Q|-2\)._
Proof.: By Proposition 5\((i)\), it is enough to verify the case in which \(Q:=T_{1}\cup[]\cup T_{2}\). We note that \(\ell\) is clean in \(\mathcal{Q}\) and that \(|\gamma([])|\geq 2\). We need to show that \(|\gamma(Q)|\geq 8\). Since \(|\gamma(Q)|\geq|\gamma(T_{1}\cup T_{2})|+|\gamma([])|\geq 4+|\gamma([])|\), we can assume \(|\gamma([])|\in\{2,3\}\).
Let \(U_{\ell}\) be as in Definition 1. From the choice of \(\ell\) it is not hard to see that each point of \(U_{\ell}\) is the apex of a proper star of \(\gamma(Q)\). Again, by using the argument in the proof of Claim 3, we can assume that \(|U_{\ell}|\leq 2\). Then, if \(\{v_{0},\ldots,v_{s}\},\Delta_{\ell},\ell_{1},\ell_{2}\) and \(\Delta_{1},\Delta_{2}\) are as in Definition 1, they are well-defined because \(|U_{\ell}|\leq 2\). Let \(Q^{\prime}=Q\setminus U_{\ell}\). By Proposition 5\((iii)\), it is enough to show \(|\gamma(Q^{\prime})|\geq 8-|U_{\ell}|\).
Let \(\ell^{\prime}\) be the opposite side of \(\ell\) in \([]\), and note that \(c_{1}:=\gamma(\ell)\in\gamma(\Delta_{\ell})\). Since the set of points forming \(\Delta_{\ell}\) is a separable subset of \(Q^{\prime}\), then \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\Delta_{\ell})|+|\gamma( \Delta_{\ell})|\) by Proposition 5\((ii)\).
Since \(Q^{\prime}\setminus\Delta_{\ell}\subset T_{1}\cup\{a,b\}\cup T_{2}\) for \(ab=\ell^{\prime}\), Lemma 14 and Proposition 5\((ii)\) imply \(|\gamma(Q^{\prime}\setminus\Delta_{\ell})|\geq|Q^{\prime}\setminus\Delta_{\ell}|-2\), and so \(|\gamma(Q^{\prime})|\geq|Q^{\prime}\setminus\Delta_{\ell}|-2+|\gamma(\Delta_{\ell})|\). We note that \(|Q^{\prime}\setminus\Delta_{\ell}|=10-|U_{\ell}|-|\Delta_{\ell}|\).
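In other words,
\[|\gamma(Q^{\prime})|\ \geq\ |Q^{\prime}\setminus\Delta_{\ell}|-2+|\gamma(\Delta_{\ell})|\ =\ \bigl(10-|U_{\ell}|-|\Delta_{\ell}|\bigr)-2+|\gamma(\Delta_{\ell})|,\]
and it remains to examine the possible values of \(|\gamma(\Delta_{\ell})|\).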
If \(|\gamma(\Delta_{\ell})|=|\Delta_{\ell}|\), then \(|\gamma(Q^{\prime})|\geq(10-|U_{\ell}|-|\Delta_{\ell}|)-2+|\Delta_{\ell}|=8-|U_{ \ell}|\), as required. Then we must have \(|\gamma(\Delta_{\ell})|\in\{1,2\}\).
\(\bullet\) Suppose that \(|\gamma(\Delta_{\ell})|=1\). From \(v_{1}\notin U_{\ell}\) it follows that \(|\gamma(\Delta_{l})|=3\) for some \(l\in\{1,2\}\). Then \(\Delta_{l}\) is a separable subset of \(Q^{\prime}\), and hence \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\Delta_{l})|+|\gamma(\Delta_{l})|\). By Lemma 14 and Proposition 5\((i)\) we know that \(|\gamma(Q^{\prime}\setminus\Delta_{l})|\geq|Q^{\prime}\setminus\Delta_{l}|-2=(10-|U_{\ell}|)-2-|\Delta_{l}|\), and hence \(|\gamma(Q^{\prime})|\geq(10-|U_{\ell}|)-2-|\Delta_{l}|+3=8-|U_{\ell}|\), as required.
\(\bullet\) Suppose that \(|\gamma(\Delta_{\ell})|=2\). From \(v_{1}\notin U_{\ell}\) we know that \(\gamma(\ell_{1})\neq\gamma(\ell_{2})\), and hence \(c_{1}\in\{\gamma(\ell_{1}),\gamma(\ell_{2})\}\). In any case, we have that \(c_{1}\) must be a star with apex in an endpoint of \(\ell\), as required.
**Lemma 23**.: _Let \(a_{i},a_{j}\in A\), \(b_{p},b_{q}\in B\) and \(Q:=T_{1}\cup\{a_{i},a_{j},b_{p},b_{q}\}\cup T_{2}\). If \(\gamma\) is an optimal coloring of \(D(Q)\), then \(|\gamma(D(Q))|\geq 8\)._
Proof.: Let \([]\) be the convex quadrilateral defined by \(a_{i},a_{j},b_{p},b_{q}\). W.l.o.g. suppose that \(i<j\) and \(p<q\). We may assume that \(\gamma\) does not have a star with apex in a corner \(v\) of \([]\). Indeed, if such a \(v\) exists, then we can deduce the required inequality by applying Lemma 20 and Proposition 5\((iii)\) to \(Q\setminus\{v\}\). Similarly, we can assume that \(T_{1}\cup T_{2}\) lies in the exterior of \([]\), as otherwise \(|\gamma(Q)|\geq 8\) by Proposition 22.
From \(\gamma(a_{i}a_{j})\neq\gamma(b_{p}b_{q})\), we know that \(|\gamma([])|\geq 2\). Since \(|\gamma(Q)|\geq|\gamma(T_{1}\cup T_{2})|+|\gamma([])|\) and \(|\gamma(T_{1}\cup T_{2})|\geq 4\), either \(|\gamma([])|\in\{2,3\}\) or we are done. Let \(\ell_{1},\ell_{2},\ell_{3},\ell_{4}\) be the sides of \([]\) and suppose that they appear in this cyclic (clockwise) order. W.l.o.g. let \(\ell_{1}\) (resp. \(\ell_{3}\)) be the farthest (resp. closest) segment of \(\{a_{i}b_{p},a_{j}b_{q}\}\) to \((0,0)\). For \(i\in\{1,3\}\), let \(c_{i}=\gamma(\ell_{i})\). Clearly, \(c_{1}\neq c_{3}\).
Case 1. Suppose that \(|\gamma([])|=2\). By rotating \(Q\) an angle \(\pi\) around the origin and relabeling the \(10\) points of \(Q\), if necessary, we may assume w.l.o.g. that \(T_{1}\cup T_{2}\) lies on the right semiplane of the line spanned by \(\ell_{3}\). Then, either \(\gamma(\ell_{2})=c_{1},\gamma(\ell_{4})=c_{3}\) or \(\gamma(\ell_{2})=c_{3},\gamma(\ell_{4})=c_{1}\) holds.
Case 1.1. Suppose that \(|\gamma(T_{2})|=1\). Let \(\gamma(T_{2})=\{c_{2}\}\). We first analyze the case \(\gamma(\ell_{2})=c_{1}\) and \(\gamma(\ell_{4})=c_{3}\).
Clearly, \(c_{2}\notin\{c_{1},c_{3}\}\). Let \(\Delta:=\Delta(t_{2}^{1},t_{2}^{3},a_{j})\). From \(|\gamma(T_{2})|=1\) it follows that \(|\gamma(\Delta)|\in\{2,3\}\). Since \(\Delta\) is a separable subset of \(Q\), then \(|\gamma(Q)|\geq|\gamma(Q\setminus\Delta)|+|\gamma(\Delta)|\). Since Lemma 20 implies \(|\gamma(Q\setminus\Delta)|\geq|Q\setminus\Delta|-2\), we can assume that \(|\gamma(\Delta)|=2\) or we are done. Let \(c_{4}\) be such that \(\gamma(\Delta)=\{c_{2},c_{4}\}\). Then the sides \(a_{j}t_{2}^{1}\) and \(a_{j}t_{2}^{3}\) of \(\Delta\) are both colored with \(c_{4}\notin\{c_{1},c_{2},c_{3}\}\). From \(\gamma(a_{i}t_{2}^{3})=c_{1}\), and the fact that \(t_{2}^{1}t_{2}^{3}\) and \(a_{i}t_{2}^{3}\) are the only segments intersecting both \(a_{j}t_{2}^{1}\) and \(a_{j}t_{2}^{3}\), it follows that \(c_{4}\) must be a star with apex \(a_{j}\in[]\), and we are done.
Suppose now that \(\gamma(\ell_{2})=c_{3}\) and \(\gamma(\ell_{4})=c_{1}\). As \(\gamma\) has no apices in \([]\), then \(\gamma(a_{i}b_{q})=c_{1}\). As before, we can assume that any segment incident with \(b_{p}\) is colored with color \(c_{1}\). Let \(\Delta:=\Delta(t_{2}^{2},t_{2}^{3},b_{q})\). Then, \(\Delta\) is a separable subset of \(Q\) and so \(|\gamma(Q)|\geq|\gamma(Q\setminus\Delta)|+|\gamma(\Delta)|\). Since Lemma 20 implies \(|\gamma(Q\setminus\Delta)|\geq|Q\setminus\Delta|-2\) and \(|\gamma(T_{2})|=1\), then \(|\gamma(\Delta)|=2\) or we are done. Then \(b_{q}t_{2}^{2}\) and \(b_{q}t_{2}^{3}\) are both colored with \(c_{4}\notin\{c_{1},c_{2},c_{3}\}\). If \(|\gamma(T_{1})|=1\), then \(t_{1}^{2}t_{2}^{3}\) and \(t_{1}^{3}t_{2}^{2}\) provide the 6th and 7th color, and either \(a_{i}t_{1}^{1}\) or \(a_{j}t_{2}^{3}\) provides the 8th color.
Suppose now that \(|\gamma(T_{1})|=2\) with \(\gamma(T_{1})=\{c_{5},c_{6}\}\). Clearly, \(\{c_{1},c_{2},\ldots,c_{6}\}\) are pairwise distinct. Let \(e_{1}\) be as in (N3) and suppose that \(\gamma(e_{1})=c_{6}\). If \(\gamma(a_{j}t_{2}^{3})\neq c_{3}\), then \(\gamma(a_{j}t_{2}^{3})\) is the 7th color, and either \(l(e_{1})t_{2}^{1}\) or \(r(e_{1})t_{2}^{2}\) provides the 8th color. Thus we may assume that \(\gamma(a_{j}t_{2}^{3})=c_{3}\). Then some of \(l(e_{1})t_{2}^{1}\) or \(r(e_{1})t_{2}^{2}\) provides the 7th color \(c_{7}\), and the other one must be colored with \(c_{6}\), as otherwise we are done. If \(\gamma(l(e_{1})t_{2}^{1})=c_{6}\) and \(\gamma(r(e_{1})t_{2}^{2})=c_{7}\), then \(a_{i}t_{2}^{1}\) provides the 8th color. Then, we must have \(\gamma(r(e_{1})t_{2}^{2})=c_{6}\) and \(\gamma(l(e_{1})t_{2}^{1})=c_{7}\), and hence \(\gamma(a_{i}t_{2}^{1})\triangleq c_{7}\), \(\gamma(l(e_{1})t_{2}^{2})\triangleq c_{6}\) and \(\gamma(r(e_{1})t_{2}^{2})\triangleq c_{4}\). If \(e_{1}=t_{1}^{2}t_{1}^{3}\), then either \(a_{i}t_{1}^{1}\) or \(b_{q}t_{1}^{2}\) provides the 8th required color. Similarly, if \(e_{1}\neq t_{1}^{2}t_{1}^{3}\), then \(b_{q}t_{1}^{1}\) provides the 8th required color.
Case 1.2. Suppose that \(|\gamma(T_{1})|=1\) and \(|\gamma(T_{2})|=2\). Let \(\gamma(T_{1})=\{c_{2}\}\) and note that \(c_{2}\notin\{c_{1},c_{3}\}\).
(1.2.1) Suppose that \(\gamma(\ell_{2})=c_{3}\) and \(\gamma(\ell_{4})=c_{1}\). Since \(\gamma\) does not have a star with apex in \([]\), then \(\gamma(a_{i}b_{q})=c_{1}\) and as before, we can assume that any segment incident with \(b_{p}\) is colored with color \(c_{1}\).
Let \(\Delta:=\Delta(t_{1}^{1},t_{1}^{3},b_{q})\). From \(|\gamma(T_{1})|=1\) it follows that \(|\gamma(\Delta)|\in\{2,3\}\). We note that any segment of \(Q\) intersecting \(\Delta\) is incident with a point of \(\Delta\cup\{b_{p}\}\), and so \(|\gamma(Q)|\geq|\gamma(Q\setminus\Delta)|+|\gamma(\Delta)|\). Since Lemma 20 implies \(|\gamma(Q\setminus\Delta)|\geq|Q\setminus\Delta|-2\), then we must have \(|\gamma(\Delta)|=2\), or we are done. Let \(c_{4}\) be such that \(\gamma(\Delta)=\{c_{2},c_{4}\}\). Then \(b_{q}t_{1}^{1}\) and \(b_{q}t_{1}^{3}\) are both colored with \(c_{4}\notin\{c_{1},c_{2},c_{3}\}\). Since any segment that is incident with \(b_{p}\) is colored with \(c_{1}\), then \(c_{4}\) must be a star with apex \(b_{q}\) in a corner of \([]\), and so we are done.
(1.2.2) Suppose that \(\gamma(\ell_{2})=c_{1}\) and \(\gamma(\ell_{4})=c_{3}\). Since \(\gamma\) does not have a star with apex in \([]\), then \(\gamma(a_{j}b_{p})=c_{1}\) and as before, we can assume that any segment incident with \(a_{i}\) is colored with color \(c_{1}\).
Let \(\Delta:=\Delta(t_{1}^{1},t_{1}^{2},a_{j})\). From \(|\gamma(T_{1})|=1\) it follows that \(|\gamma(\Delta)|\in\{2,3\}\). We note that any segment of \(Q\) intersecting \(\Delta\) is incident with a point of \(\Delta\cup\{a_{i}\}\), and so \(|\gamma(Q)|\geq|\gamma(Q\setminus\Delta)|+|\gamma(\Delta)|\). Since Lemma 20 implies \(|\gamma(Q\setminus\Delta)|\geq|Q\setminus\Delta|-2\), then we must have \(|\gamma(\Delta)|=2\), or we are done. Let \(c_{4}\) be such that \(\gamma(\Delta)=\{c_{2},c_{4}\}\). Then \(a_{j}t_{1}^{1}\) and \(a_{j}t_{1}^{2}\) are both colored with \(c_{4}\notin\{c_{1},c_{2},c_{3}\}\).
Suppose that \(\gamma(T_{2})=\{c_{5},c_{6}\}\), and let \(e_{2}\) be as in (N3). Suppose that \(\gamma(e_{2})=c_{5}\). Clearly, some of \(t_{1}^{2}l(e_{2})\) or \(t_{1}^{3}r(e_{2})\) provides a new color, say \(c_{7}\), and hence \(\gamma(b_{p}t_{1}^{1})\triangleq c_{3}\).
If \(\gamma(t_{1}^{2}l(e_{2}))=c_{7}\), then \(\gamma(t_{1}^{3}r(e_{2}))\triangleq c_{5}\) and \(b_{q}t_{1}^{3}\) provides the 8th color. Thus, we may assume that \(\gamma(t_{1}^{3}r(e_{2}))=c_{7}\), and so \(\gamma(t_{1}^{2}l(e_{2}))\triangleq c_{5}\), \(\gamma(b_{q}t_{1}^{3})\triangleq c_{7}\), \(\gamma(b_{q}r(e_{2}))\triangleq c_{7}\), \(\gamma(t_{1}^{2}r(e_{2}))\triangleq c_{5}\) and \(\gamma(t_{1}^{1}l(e_{2}))\triangleq c_{4}\). If \(t_{2}^{3}\in e_{2}\), then \(a_{j}t_{2}^{3}\) provides the 8th color. Similarly, if \(t_{2}^{3}\notin e_{2}\) then some of \(b_{q}t_{2}^{3}\) or \(a_{j}t_{2}^{2}\) provides the 8th color. This proves Case 1
Suppose now that \(\gamma(\ell_{2})=c_{3}\) and \(\gamma(\ell_{4})=c_{1}\). Since \(\gamma\) does not have a star with apex in \([]\), then \(\gamma(a_{i}b_{q})=c_{1}\) and as before we can assume that any segment incident with \(b_{p}\) is colored with \(c_{1}\). Let \(c_{0}=\gamma(a_{j}w)\). If \(\gamma(b_{q}w)=c_{0}\), then \(c_{0}\) is a star with apex \(w\), and so \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\{w\})|+1\geq 6\) by Proposition 5 (_iii_) and Lemma 13. Thus, we may assume that \(\gamma(b_{q}w)\neq c_{0}\). Since \(c_{0}\notin\gamma(T_{1})\cup\{c_{1},\gamma(b_{q}w)\}\), then \(c_{0}=c_{3}\) or \(c_{0}\) is the 6th required color. Since \(c_{3}\) cannot be a star of \(\gamma\), then \(\gamma(a_{i}w)\triangleq c_{3}\). By applying Lemma 17 to \(Q^{\prime\prime}=Q^{\prime}\setminus\{a_{j},b_{p}\}\) we have \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime\prime})|\geq 6\), as required. \(\triangle\)
(1.3.1) Suppose that \(\gamma(\ell_{2})=c_{1}\) and \(\gamma(\ell_{4})=c_{3}\). Since \(\gamma\) does not have a star with apex in \([]\), then \(\gamma(a_{j}b_{p})=c_{1}\) and as before we can assume that any segment incident with \(a_{i}\) is colored with color \(c_{1}\).
**Claim 7**.: \(\gamma(Q)\) _has a proper star with apex \(u\in T_{2}\) or \(|\gamma(Q)|\geq 8\)._
Proof of Claim 7.: Suppose that \(t_{2}^{2}\) is not an apex of \(\gamma(Q)\). Then \(e_{2}=t_{2}^{2}t_{2}^{k}\) for some \(k\in\{1,3\}\). Let \(l\) be such that \(\{l,k\}=\{1,3\}\) and let \(\Delta:=\Delta(e_{2},a_{j})\). Additionally, we suppose that \(t_{2}^{l}\) is not an apex of \(\gamma(Q)\). This implies that \(\gamma(a_{j}t_{2}^{2})=c_{5}\) and, as a consequence, any segment of \(\mathcal{Q}\) coloured with \(c_{5}\) must be incident with a corner of \(\Delta\). Since \(e_{2}\) and \(a_{j}t_{2}^{2}\) are sides of \(\Delta\), \(|\gamma(\Delta)|\geq 2\).
We now note that no color of \(\gamma(\Delta)\) belongs to \(\gamma(Q^{\prime})\) for \(Q^{\prime}=Q\setminus\Delta\). Then \(|\gamma(Q)|\geq|\gamma(Q^{\prime})|+|\gamma(\Delta)|\). Since \(|\gamma(Q^{\prime})|\geq 5\) by Lemma 20, then \(|\gamma(\Delta)|=2\) or we are done. Then we must have that \(\gamma(a_{j}t_{2}^{k})=c_{6}\). From this last it follows that \(c_{6}\) must be a star of \(\gamma\) with apex \(t_{2}^{k}\in T_{2}\), as claimed. \(\triangle\)
By Claim 7 we know that \(\gamma(Q)\) has a star with apex \(u\in T_{2}\). By Proposition 5 (_iii_), it is enough to show \(|\gamma(Q^{\prime})|\geq 7\) for \(Q^{\prime}=Q\setminus\{u\}\). Let \(\{t_{2}^{l},t_{2}^{k}\}=T_{2}\setminus\{u\}\) with \(l<k\), and let \(\Delta^{\prime}:=\Delta(t_{2}^{l},t_{2}^{k},a_{j})\). Since \(\Delta^{\prime}\) is a separable subset of \(Q^{\prime}\), we may assume that \(|\gamma(\Delta^{\prime})|\in\{1,2\}\) or we are done by Lemma 13.
Suppose that \(|\gamma(\Delta^{\prime})|=1\). Then either \(l(e_{1})a_{j}\) or \(r(e_{1})t_{2}^{l}\) provides the 6th color of \(\gamma(Q^{\prime})\), and so \(\gamma(b_{q}t_{2}^{k})\triangleq c_{3}\). This and the fact that \(b_{q}\) cannot be an apex of any star of \(\gamma\) imply \(\gamma(b_{p}t_{2}^{k})\triangleq c_{3}\). By applying Lemma 18 to \(Q^{\prime\prime}=Q^{\prime}\setminus\{a_{i},b_{q}\}\) we have \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime\prime})|\geq 7\), as required.
Suppose now that \(|\gamma(\Delta^{\prime})|=2\). If \(\gamma(a_{j}t_{2}^{l})=\gamma(a_{j}t_{2}^{k})\) then \(a_{j}\) is an apex of \(\gamma(Q^{\prime})\) and we are done by Lemma 20. Then \(t_{2}^{l}t_{2}^{k}\) must have the same color \(c_{5}^{\prime}\) as one of \(a_{j}t_{2}^{l}\) or \(a_{j}t_{2}^{k}\). From this last it is easy to see that \(c_{5}^{\prime}\) must be a star of \(\gamma\) with apex \(v\in\{t_{2}^{l},t_{2}^{k}\}\), and so the required inequality follows by applying Claim 6 to \(Q\setminus\{u,v\}\). This proves (1.3.1).
(1.3.2) Suppose that \(\gamma(\ell_{2})=c_{3}\) and \(\gamma(\ell_{4})=c_{1}\). Since \(\gamma\) does not have a star with apex in \([]\), then \(\gamma(a_{i}b_{q})=c_{1}\) and as before we can assume that any segment incident with \(b_{p}\) is colored with color \(c_{1}\).
**Claim 8**.: _If \(u,v\in T_{1}\) are such that \(\gamma(Q\setminus\{u,v\})\subseteq\gamma(Q)\setminus\gamma(T_{1})\), then \(|\gamma(Q)|\geq 8\)._
Proof of Claim 8.: Suppose that \(T_{1}=\{u,v,w\}\). Since \(|\gamma(T_{1})|=2\) and \(\gamma(Q\setminus\{u,v\})\subseteq\gamma(Q)\setminus\gamma(T_{1})\), it is enough to show that \(|\gamma(Q^{\prime})|\geq 6\) for \(Q^{\prime}=Q\setminus\{u,v\}\).
Let \(c_{7}:=\gamma(b_{q}w)\) and note that \(c_{7}\notin\{c_{1},c_{3},c_{5},c_{6}\}\). Thus, we need to show the existence of one additional color. Since Lemma 13 implies that \(|\gamma(Q\setminus\{u,v,w\})|\geq 5\), we can assume that \(w\) is not an apex of \(\gamma(Q^{\prime})\). This last implies that \(\gamma(a_{i}w)\neq c_{7}\), and so \(\gamma(a_{i}w)\triangleq c_{3}\) and \(\gamma(a_{j}w)\triangleq c_{3}\). Then we can modify \(\gamma\), if necessary, by recoloring with \(c_{3}\) all segments in \(a_{i}*(T_{2}\cup\{w\})\). It is not hard to see this modification does not affect the essential properties of \(\gamma(Q^{\prime})\).
We may assume that \(c_{5}\) is a thrackle of \(\gamma(Q^{\prime})\). Indeed, if \(c_{5}\) is a star with apex \(x\in T_{2}\), then Proposition 5 (_iii_) and Lemma 13 imply \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\{x\})|+1\geq 6\), as required. Then \(e_{2}=t_{2}^{2}t_{2}^{k}\) for some \(k\in\{1,3\}\) and \(\gamma(a_{j}t_{2}^{2})=c_{5}\). These imply \(\gamma(a_{j}t_{2}^{k})\triangleq c_{6}\) and so \(c_{6}\) must be a star with apex \(t_{2}^{k}\). By Proposition 5 (_iii_) and Lemma 13 we have \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\{t_{2}^{k}\})|+1\geq 6\), as required. \(\triangle\)
**Claim 9**.: \(\gamma(Q)\) _has a proper star with apex \(u\in T_{1}\) or \(|\gamma(Q)|\geq 8\)._
Proof of Claim 9.: Suppose that \(t_{1}^{2}\) is not an apex of \(\gamma(Q)\). Then \(e_{1}=t_{1}^{2}t_{1}^{k}\) for some \(k\in\{1,3\}\). Let \(l\) be such that \(\{l,k\}=\{1,3\}\) and let \(\Delta:=\Delta(e_{1},b_{q})\). Additionally, we may assume that \(t_{1}^{l}\) is not an apex of \(\gamma(Q)\). This implies that \(\gamma(b_{q}t_{1}^{2})=c_{2}\) and, as a consequence, any segment of \(\mathcal{Q}\) coloured with \(c_{2}\) must be incident with a corner of \(\Delta\). Since \(e_{1}\) and \(b_{q}t_{1}^{2}\) are sides of \(\Delta\), \(|\gamma(\Delta)|\geq 2\).
We now note that no color of \(\gamma(\Delta)\) belongs to \(\gamma(Q^{\prime})\) for \(Q^{\prime}=Q\setminus\Delta\).
Claim 8. Thus, we can assume that \(|\gamma(\Delta^{\prime})|=1\) and that neither \(t_{1}^{l}\) nor \(t_{1}^{k}\) is an apex of \(\gamma(Q)\). By Proposition 5\((iii)\), it is enough to show \(|\gamma(Q^{\prime})|\geq 7\) for \(Q^{\prime}=Q\setminus\{u\}\). We remark that \(c_{1},c_{3},\gamma(\Delta^{\prime}),c_{5}\) and \(c_{6}=\gamma(e_{2})\) are pairwise distinct.
Clearly, one of \(t_{1}^{k}l(e_{2})\) or \(b_{q}r(e_{2})\) provides the 6th color of \(\gamma(Q^{\prime})\). If \(\gamma(b_{q}r(e_{2}))\) is that 6th color, then \(\gamma(t_{1}^{k}l(e_{2}))\doteq c_{6}\) and either \(t_{1}^{\prime}a_{i}\) or \(t_{1}^{\prime}a_{j}\) provides the 7th required color. Thus we may assume that \(\gamma(t_{1}^{k}l(e_{2}))\) is the 6th color, say \(c_{6}^{\prime}\). Then \(\gamma(b_{q}r(e_{2}))\doteq c_{6}\), \(\gamma(t_{1}^{\prime}l(e_{2}))=c_{6}^{\prime}\), \(\gamma(t_{1}^{l}a_{i})\doteq c_{3}\), \(\gamma(t_{1}^{l}a_{j})\doteq c_{3}\), \(\gamma(t_{1}^{k}a_{j})\doteq c_{6}^{\prime}\) and \(\gamma(b_{q}l(e_{2}))\doteq c_{6}\). The 7th color is given by \(t_{2}^{3}a_{j}\) if \(t_{2}^{3}\in e_{2}\), and by either \(t_{2}^{2}a_{j}\) or \(b_{q}t_{2}^{3}\) when \(t_{2}^{3}\notin e_{2}\). This proves (1.3.2).
Case 2. Suppose that \(|\gamma([])|=3\). Then one of \(\gamma(\ell_{2})\notin\{c_{1},c_{3}\}\) or \(\gamma(\ell_{4})\notin\{c_{1},c_{3}\}\) holds. Suppose that \(c_{2}=\gamma(\ell_{2})\notin\{c_{1},c_{3}\}\). Then \(\gamma(\ell_{4})\in\{c_{1},c_{3}\}\). Note that if \(\gamma(\ell_{4})=c_{1}\), then \(\ell_{3}\) can be recolored with \(c_{2}\) and we are done by Case 1. Then we can assume \(\gamma(\ell_{4})=c_{3}\). Since any segment that intersects \(\ell_{2}\) also intersects the diagonal \(\ell_{2}^{\prime}\) of \([]\) that goes from \(\ell_{1}\cap\ell_{4}\) to \(\ell_{2}\cap\ell_{3}\), we can assume that \(\gamma(\ell_{2}^{\prime})=c_{2}\). Similarly, if \(\ell_{4}^{\prime}\) is the diagonal of \([]\) that goes from \(\ell_{1}\cap\ell_{2}\) to \(\ell_{3}\cap\ell_{4}\), we can assume that \(\gamma(\ell_{4}^{\prime})=c_{3}\). Since \(c_{1}\) cannot be a star of \(\gamma(Q)\), then \(c_{1}\) must be a triangle \(\Delta\) with a side in \(\ell_{1}\).
Let \(U_{\ell_{1}}\) be as in Definition 1. From the definition of \(\ell_{1}\) it is not hard to see that each point of \(U_{\ell_{1}}\) is the apex of a proper star of \(\gamma\). Thus, we may assume that \(|U_{\ell_{1}}|\leq 2\), as otherwise Proposition 5\((iii)\) and Lemma 13 imply \(|\gamma(Q)|\geq 8\). Let \(\{v_{0},\ldots,v_{s}\}\) be as in Definition 1. We remark that \(s\geq 4\) because \(|U_{\ell_{1}}|\leq 2\). Let \(Q^{\prime}=Q\setminus U_{\ell_{1}}\). By Proposition 5\((iii)\), it is enough to show \(|\gamma(Q^{\prime})|\geq|Q^{\prime}|-2\).
Let \(v_{i}\in\{v_{0},\ldots,v_{s}\}\) be the corner of \(\Delta\) in \(T_{1}\cup T_{2}\). Suppose first that \(v_{i}=v_{0}\), let \(\Delta_{j}:=\Delta(v_{0}v_{1},x_{k})\) with \(x_{k}=\ell_{1}\cap\ell_{k}\) and \(k\in\{2,4\}\), and let \(c_{4}=\gamma(v_{0}v_{1})\). Clearly, \(c_{4}\notin\{c_{1},c_{2},c_{3}\}\), \(x_{2}=a_{i}\), and \(x_{4}=b_{p}\). It is not hard to see that \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\Delta_{j})|+|\gamma(\Delta_{j})|\). Since \(|\gamma(Q^{\prime}\setminus\Delta_{j})|\geq|Q^{\prime}\setminus\Delta_{j}|-2\) by Lemma 20, we can assume that \(|\gamma(\Delta_{j})|=2\) and so that \(\gamma(\Delta_{j})=\{c_{1},c_{4}\}\). From this last it follows that \(\gamma(v_{1}x_{2})=c_{4}=\gamma(v_{1}x_{4})\), contradicting that \(v_{1}\notin U_{\ell_{1}}\). Then we can assume that \(v_{i}\neq v_{0}\). Since \(v_{0}\notin U_{\ell_{1}}\), the triangle \(\Delta_{\ell_{1}}:=\Delta(\ell_{1},v_{0})\) satisfies \(|\gamma(\Delta_{\ell_{1}})|=3\). Suppose that some \(c_{j}\in\{c_{2},c_{3}\}\) is in \(\gamma(\Delta_{\ell_{1}})\). Since any segment of color \(c_{j}\) intersects \(\ell_{3}\), we can recolor \(\ell_{3}\) (if necessary) with \(c_{j}\). Then \(\gamma(Q^{\prime}\setminus\Delta_{\ell_{1}})=\gamma(Q^{\prime})\setminus\{c_{1},c_{k}\}\) with \(\{c_{j},c_{k}\}=\{c_{2},c_{3}\}\) and \(\ell_{3}\) is the only segment of \(Q^{\prime}\setminus\Delta_{\ell_{1}}\) colored with \(c_{j}\). Moreover, we note that in such a case \(Q^{\prime}\setminus\Delta_{\ell_{1}}\) and \(\ell_{3}\) satisfy the conditions of Lemma 16, and so we can conclude that \(|\gamma(Q^{\prime}\setminus\Delta_{\ell_{1}})|\geq|Q^{\prime}\setminus\Delta_{\ell_{1}}|-1\), or equivalently, \(|\gamma(Q^{\prime})|\geq|Q^{\prime}|-2\). Then we can assume that neither \(c_{2}\) nor \(c_{3}\) belongs to \(\gamma(\Delta_{\ell_{1}})\). From this fact it is easy to see that \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\Delta_{\ell_{1}})|+|\gamma(\Delta_{\ell_{1}})|\). Since \(|\gamma(\Delta_{\ell_{1}})|=3\), and \(|\gamma(Q^{\prime}\setminus\Delta_{\ell_{1}})|\geq|Q^{\prime}\setminus\Delta_{\ell_{1}}|-2\) by Lemma 14, then \(|\gamma(Q^{\prime})|\geq|Q^{\prime}|-2\). The case in which \(\gamma(\ell_{4})\notin\{c_{1},c_{3}\}\) can be handled in a similar way.
**Lemma 24**.: _Let \(\Delta_{U}\) be a triangle with corners in \(U\in\{A,B\}\), \(v_{p},v_{q}\in(A\cup B)\setminus U\) and \(Q:=T_{1}\cup\Delta_{U}\cup\{v_{p},v_{q}\}\cup T_{2}\). If \(\gamma\) is an optimal coloring of \(D(Q)\), then \(|\gamma(D(Q))|\geq 9\)._
Proof.: Let \(u_{i},u_{j},u_{k}\) be the corners of \(\Delta_{U}\) with \(i<j<k\). W.l.o.g. we assume that \(p<q\). We may assume that \(\gamma\) does not have a star with apex \(w\in\Delta_{U}\) (resp. \(w\in\{v_{p},v_{q}\}\)). Indeed, if such an apex \(w\) exists, then we can deduce the required inequality by applying Lemma 23 (resp. Lemma 21) and Proposition 5\((iii)\) to \(Q\setminus\{w\}\). From Lemma 11 we know that \(|\gamma(Q\setminus\Delta_{U})|\geq 6\). From this and the fact that \(\Delta_{U}\) is a separable triangle of \(Q\), we have that \(|\gamma(Q)|\geq|\gamma(Q\setminus\Delta_{U})|+|\gamma(\Delta_{U})|\geq 6+|\gamma(\Delta_{U})|\) and so we may assume that \(|\gamma(\Delta_{U})|\in\{1,2\}\), as otherwise we are done. Moreover, since \(|\gamma(\Delta_{U})|=2\) implies that some vertex \(w\) of \(\Delta_{U}\) is an apex of \(\gamma(Q)\), we may assume \(|\gamma(\Delta_{U})|=1\). Let \(c_{0}=\gamma(\Delta_{U})\) and let \([]\) be the convex quadrilateral defined by \(\{u_{i},u_{k},v_{p},v_{q}\}\).
the points of \(\Delta_{i}\) it is not hard to see that any segment colored with some color in \(\gamma(\Delta_{i})\) is incident with a point of \(\Delta_{i}\), and hence \(|\gamma(Q^{\prime})|\geq|\gamma(Q^{\prime}\setminus\Delta_{i})|+|\gamma(\Delta_{ i})|\).
If \(\Delta_{i}\cap\Delta_{U}\neq\emptyset\), then \(Q^{\prime}\setminus\Delta_{i}\subset Q\setminus(\Delta_{i}\cap\Delta_{U})\) and so Lemma 23 and Proposition 5\((ii)\) imply \(|\gamma(Q^{\prime}\setminus\Delta_{i})|\geq|Q^{\prime}\setminus\Delta_{i}|-2\). If \(\Delta_{i}\cap\Delta_{U}=\emptyset\), then \(Q^{\prime}\setminus\Delta_{i}\subset Q\setminus(\Delta_{i}\cap\{v_{p},v_{q}\})\) and so Lemma 21 and Proposition 5\((i)\) imply \(|\gamma(Q^{\prime}\setminus\Delta_{i})|\geq|Q^{\prime}\setminus\Delta_{i}|-2\). In any case, we have \(|\gamma(Q^{\prime})|\geq|Q^{\prime}\setminus\Delta_{i}|-2+|\gamma(\Delta_{i})|\). Since \(|Q^{\prime}\setminus\Delta_{i}|=11-|U_{\ell}|-|\Delta_{i}|\) we have \(|\gamma(Q^{\prime})|\geq(11-|U_{\ell}|-|\Delta_{i}|)-2+|\gamma(\Delta_{i})|=9 -|U_{\ell}|\), as required.
Case 2. Suppose that \(T_{1}\cup T_{2}\) lies in the exterior of \([]\). Then the index \(i\) of \(u_{i}\) is at most \(3\), and hence \(T_{1}\cup T_{2}\) must lie on the right side of \([]\), as depicted in Figure 3. Let \(\ell_{1}=u_{i}v_{p},\ell_{2}=v_{p}v_{q},\ell_{3}=v_{q}u_{k}\) and \(\ell_{4}=u_{i}u_{k}\) be the sides of \([]\) and let \(S=\{u_{i},u_{j},u_{k},v_{p},v_{q}\}\). By Lemma 13 we may assume that \(\gamma\) has at most \(2\) stars with apices in \(T_{1}\cup T_{2}\).
**Claim 10**.: _W.l.o.g. we may assume that the segments with both endpoints in \(S\) are coloured by \(\gamma\) as in some of the four cases illustrated in Figure 3._
Proof of Claim 10.: Suppose first that \(\Delta_{U}\subset A\). Thus the points of \(S\) are accommodated as depicted in Figure 3\((a)-(b)\). Note that if \(\mathcal{S}_{1}=\{v_{p}u_{j},v_{p}u_{k},v_{q}u_{i}\}\), then \(\{\ell_{1}\}\cup\mathcal{S}_{1}\) forms a thrackle and moreover, if \(\ell\in\mathcal{Q}\setminus\{\ell_{1}\}\) is such that \(\ell_{1}\cap\ell\neq\emptyset\), then \(\ell\cap\ell^{\prime}\neq\emptyset\) for each \(\ell^{\prime}\in\mathcal{S}_{1}\). From these it follows that we can recolour each segment of \(\mathcal{S}_{1}\) with color \(\gamma(\ell_{1})\) without affecting the essential properties of \(\gamma(Q)\). We now consider two subcases, depending on whether or not \(\gamma(\ell_{1})=\gamma(\ell_{2})\).
Suppose that \(\gamma(\ell_{1})=\gamma(\ell_{2})\). Then \(\gamma(u_{j}v_{q})\notin\{\gamma(\ell_{1}),\gamma(\ell_{4})\}\). We note that any \(\ell\in\mathcal{Q}\setminus\{u_{j}v_{q}\}\) with \(\gamma(\ell)=\gamma(u_{j}v_{q})\) must intersect \(\ell_{3}\). Then, if necessary, we can recolour \(\ell_{3}\) with color \(\gamma(u_{j}v_{q})\) without affecting the essential properties of \(\gamma(Q)\), and so we are in the case depicted in Figure 3\((a)\).
Suppose now that \(\gamma(\ell_{1})\neq\gamma(\ell_{2})\). We note that if \(\mathcal{S}_{2}=\{v_{q}u_{i},v_{q}u_{j},v_{q}u_{k}\}\), then \(\{\ell_{2}\}\cup\mathcal{S}_{2}\) is a thrackle. Moreover, note that if \(\gamma(\ell)\notin\{\gamma(\ell_{1}),\gamma(\ell_{4})\}\) and \(\ell\cap\ell_{2}\neq\emptyset\), then \(\ell\) intersects each segment of \(\mathcal{S}_{2}\), and so we can recolour each segment of \(\mathcal{S}_{2}\) with color \(\gamma(\ell_{2})\), placing us in the case depicted in Figure 3\((b)\).
An analogous reasoning shows that if \(\Delta_{U}\subset B\), then \(\gamma(S)\) is as in some of the cases depicted in Figure 3\((c)-(d)\).
From now on, we let \(c_{1}=\gamma(\ell_{1})\) and \(c_{3}=\gamma(\ell_{3})\). We recall that \(c_{0}=\gamma(\Delta_{U})\).
Figure 3: Here \(T_{1}\cup T_{2}\) lies in the exterior of the convex quadrilateral \([]\) formed by the thick segments. W.l.o.g. we may assume that if \(S=\{u_{i},u_{j},u_{k},v_{p},v_{q}\}\), then the segments with both endpoints in \(S\) are coloured by \(\gamma\) as illustrated in some of these four cases. In (b) \(f\in v_{p}*(T_{1}\cup T_{2})\) and \(g\in u_{i}*(T_{1}\cup T_{2})\). The existence of such an \(f\) (resp. \(g\)) follows from the assumption that \(v_{q}\) (resp. \(v_{p}\)) cannot be an apex of \([]\). Similarly, we can assume the existence of the corresponding \(f\) and \(g\) in (d).
Case 2.1. Suppose that the segments in \(\mathcal{S}\) are coloured by \(\gamma\) as in Figure 3\((a)\). By recoloring, if necessary, we may assume that each segment incident with \(v_{p}\) is coloured with \(c_{1}\). We remark that this assumption does not affect the essential properties of \(\gamma(Q)\).
Case 2.1.1. Suppose that \(|\gamma(T_{1})|=1\). Let \(\Delta:=\Delta(t_{1}^{1},t_{1}^{3},v_{q})\). We note that any segment that crosses \(\Delta\) is incident with \(v_{p}\) and so has color \(c_{1}\). Then \(\Delta\) is a separable subset of \(Q\) with respect to \(\gamma\) and so we can assume that \(|\gamma(\Delta)|\in\{1,2\}\), as otherwise we are done by Lemma 21. This assumption and \(|\gamma(T_{1})|=1\) imply that \(|\gamma(\Delta)|=2\). From this last it follows that \(\gamma\) has a star with apex \(v_{q}\in[]\), and so we are done by applying Lemma 21 and Proposition 5\((iii)\) to \(Q\setminus\{v_{q}\}\).
Case 2.1.2. Suppose that \(|\gamma(T_{1})|=2\) and \(|\gamma(T_{2})|=1\). We remark that the 5 colors in \(C=\{c_{0},c_{1},\gamma(T_{2})\}\cup\gamma(T_{1})\) are pairwise distinct. Let \(c_{6}=\gamma(t_{2}^{3}u_{k}),c_{7}=\gamma(t_{2}^{1}u_{j})\) and \(c_{8}=\gamma(t_{2}^{2}v_{q})\) and note that the colors in \(C\cup\{c_{6},c_{7},c_{8}\}\) are pairwise distinct. Then \(\gamma(l(e_{1})u_{i})\doteq\gamma(e_{1})\), \(\gamma(r(e_{1})u_{j})\doteq c_{7}\), \(\gamma(t_{2}^{2}u_{k})\doteq c_{6}\) and \(\gamma(t_{2}^{2}v_{q})\doteq c_{8}\). From these it follows that \(\gamma(r(e_{1})t_{2}^{2})\) is the 9th color.
Case 2.1.3. Suppose that \(|\gamma(T_{1})|=2=|\gamma(T_{2})|\). Let \(e_{2}\) be as in (N3).
\(\bullet\) Suppose that \(\gamma(Q)\) has 2 stars with apices \(u,v\in T_{1}\). It is enough to show that \(|\gamma(Q\setminus\{u,v\})|\geq 7\). Let \(w\) be such that \(T_{1}=\{u,v,w\}\). By Lemma 11 we may assume that no point of \(T_{2}\cup\{w\}\) is an apex of \(\gamma\). We note that the 5 colors in \(C=\{c_{0},c_{1},c_{3}\}\cup\gamma(T_{2})\) are pairwise distinct. Then \(c_{6}=\gamma(wu_{k})\) must be the 6th color, and so \(\gamma(wv_{q})\doteq c_{3}\). Suppose first that \(t_{2}^{3}\in e_{2}\). Since \(c_{3}\) cannot be a star, then \(\gamma(\Delta)=\gamma(e_{2})\) where \(\Delta:=\Delta(e_{2},v_{q})\). Then either \(l(e_{2})w\) or \(t_{2}^{3}u_{k}\) provides the 7th color. Suppose now that \(t_{2}^{3}\notin e_{2}\). Then \(e_{2}=t_{2}^{1}t_{2}^{2}\), and either \(\gamma(v_{q}t_{2}^{3})\) is the 7th required color or \(\gamma(v_{q}t_{2}^{3})=\gamma(t_{2}^{1}t_{2}^{3})=\gamma(t_{2}^{2}t_{2}^{3})\). Since these equalities imply that \(t_{2}^{3}\in T_{2}\) is an apex of \(\gamma\), we are done.
\(\bullet\) Suppose that \(\gamma(Q)\) has exactly 1 star with apex in \(T_{1}\). Let \(u\) be the only apex of \(\gamma\) in \(T_{1}\), and let \(t_{1}^{l},t_{1}^{m}\) be such that \(T_{1}=\{u,t_{1}^{l},t_{1}^{m}\}\) with \(l<m\). By Proposition 5\((iii)\), it is enough to show that \(|\gamma(Q\setminus\{u\})|\geq 8\). Let \(\Delta:=\Delta(t_{1}^{l},t_{1}^{m},v_{q})\). We note that any segment that crosses \(\Delta\) is incident with \(v_{p}\) and so has color \(c_{1}\). Then \(\Delta\) is a separable subset of \(Q\) with respect to \(\gamma\), and so we can assume that \(|\gamma(\Delta)|\in\{1,2\}\), as otherwise we are done by Lemma 21. Moreover, since \(|\gamma(\Delta)|=2\) implies that \(\gamma\) has an apex in a corner of \(\Delta\), and this is impossible by previous assumptions, we must have \(|\gamma(\Delta)|=1\). Then \(c_{7}=\gamma(t_{1}^{l}u_{k})\) is the 7th color, and so \(\gamma(t_{1}^{m}u_{k})\doteq c_{7}\), \(\gamma(t_{1}^{m}l(e_{2}))\doteq\gamma(e_{2})\) and \(\gamma(v_{q}r(e_{2}))\doteq c_{3}\). Then \(t_{1}^{l}u_{j}\) provides the 8th required color.
\(\bullet\) Suppose that no point of \(T_{1}\) is an apex of \(\gamma(Q)\). Then either \(e_{1}=t_{1}^{1}t_{1}^{2}\) or \(e_{1}=t_{1}^{2}t_{1}^{3}\). Let \(c_{2}\) (resp. \(c_{4}\)) be such that \(\{c_{2},\gamma(e_{1})\}=\gamma(T_{1})\) (resp. \(\{c_{4},\gamma(e_{2})\}=\gamma(T_{2})\)). We remark that the 7 colors in \(C=\{c_{0},\ldots,c_{4},\gamma(e_{1}),\gamma(e_{2})\}\) are pairwise distinct. Since \(c_{2}\) cannot be a star of \(\gamma(Q)\) and any segment incident with \(v_{p}\) is colored with \(c_{1}\), then \(\gamma(v_{q}t_{1}^{2})=c_{2}\).
\(-\) Suppose that \(e_{1}=t_{1}^{1}t_{1}^{2}\). From \(\gamma(v_{q}t_{1}^{2})=c_{2}\) it follows that \(c_{8}:=\gamma(t_{1}^{3}u_{k})\) is the 8th color. We may assume that \(\gamma(t_{1}^{1}v_{q})\in\{\gamma(e_{1}),c_{3}\}\), as otherwise \(\gamma(t_{1}^{1}v_{q})\) is the required color. If \(\gamma(t_{1}^{1}v_{q})=\gamma(e_{1})\), then \(\gamma(t_{1}^{2}u_{k})\doteq c_{8}\), and hence either \(l(e_{2})t_{1}^{3}\) or \(r(e_{2})v_{q}\) provides the 9th color. Suppose now that \(\gamma(t_{1}^{1}v_{q})=c_{3}\). Since \(c_{3}\) cannot be a star of \(\gamma(Q)\) with apex in \(v_{q}\), then neither \(l(e_{2})v_{q}\) nor \(r(e_{2})v_{q}\) is colored \(c_{3}\). Then \(\gamma(v_{q}l(e_{2}))\doteq\gamma(e_{2})\doteq\gamma(v_{q}r(e_{2}))\). If \(t_{2}^{3}\in e_{2}\), then either \(t_{1}^{3}l(e_{2})\) or \(t_{2}^{3}u_{k}\) provides the 9th color, and if \(t_{2}^{3}\notin e_{2}\) then \(\gamma(t_{1}^{3}t_{2}^{2})\doteq c_{8}\), \(\gamma(t_{2}^{3}v_{q})\doteq c_{4}\) and so the 9th required color is provided by \(t_{2}^{2}u_{k}\).
\(-\) Suppose that \(e_{1}=t_{1}^{2}t_{1}^{3}\). From \(\gamma(v_{q}t_{1}^{2})=c_{2}\) it follows that \(c_{8}:=\gamma(t_{1}^{1}u_{k})\) is the 8th color. We may assume that \(\gamma(t_{1}^{3}v_{q})\in\{\gamma(e_{1}),c_{3}\}\), as otherwise \(\gamma(t_{1}^{3}v_{q})\) is the required color. If \(\gamma(t_{1}^{3}v_{q})=\gamma(e_{1})\), then \(\gamma(t_{1}^{2}l(e_{2}))\doteq\gamma(e_{2})\), \(\gamma(r(e_{2})v_{q})\doteq c_{3}\), \(\gamma(t_{1}^{1}u_{j})\doteq c_{8}\) and so \(t_{1}^{2}u_{k}\) provides the 9th color. Suppose now that \(\gamma(t_{1}^{3}v_{q})=c_{3}\). Then \(\gamma(t_{1}^{1}u_{j})\doteq c_{8}\). Since \(c_{3}\) cannot be a star of \(\gamma(Q)\), then \(\gamma(v_{q}r(e_{2}))\doteq\gamma(e_{2})\), \(\gamma(t_{1}^{3}l(e_{2}))\doteq\gamma(e_{1}
Then \(\gamma(l(e_{1})t_{2}^{2})\stackrel{{\triangle}}{{=}}\gamma(e_{1})\), \(\gamma(r(e_{1})t_{2}^{2})\stackrel{{\triangle}}{{=}}\gamma(e_{1})\), \(\gamma(l(e_{1})u_{j})\stackrel{{\triangle}}{{=}}c_{8}\) and \(\gamma(t_{2}^{1}u_{k})\stackrel{{\triangle}}{{=}}c_{7}\). Then \(t_{2}^{3}r(e_{1})\) provides the 9th color.
Case 2.2.4. Suppose that \(|\gamma(T_{1})|=2=|\gamma(T_{2})|\). Let \(\gamma(T_{1})=\{c_{2},\gamma(e_{1})\}\) and \(\gamma(T_{2})=\{c_{4},\gamma(e_{2})\}\).
\(\bullet\) Suppose that \(\gamma(Q)\) has 2 stars with apices \(u,v\in T_{1}\). It is enough to show that \(|\gamma(Q\setminus\{u,v\})|\geq 7\). Let \(w\) be such that \(T_{1}=\{u,v,w\}\). By Lemma 11 we may assume that no point of \(T_{2}\cup\{w\}\) is an apex of \(\gamma(Q)\). We note that the 5 colors \(c_{0},c_{1},c_{3},c_{4},\gamma(e_{2})\) are pairwise distinct.
Then \(c_{6}:=\gamma(wu_{j})\) must be the 6th color, and so \(\gamma(wu_{k})\stackrel{{\triangle}}{{=}}c_{6}\). Since \(t_{2}^{2}\) cannot be an apex of \(\gamma\), then either \(e_{2}=t_{2}^{1}t_{2}^{2}\) or \(e_{2}=t_{2}^{2}t_{2}^{3}\). If \(e_{2}=t_{2}^{1}t_{2}^{2}\), then \(\gamma(t_{2}^{2}u_{k})\stackrel{{\triangle}}{{=}}\gamma(e_{2})\), \(\gamma(t_{2}^{2}u_{k})\stackrel{{\triangle}}{{=}}c_{4}\), \(\gamma(wt_{2}^{2})\stackrel{{\triangle}}{{=}}c_{6}\), and so \(\gamma(wt_{2}^{1})\in\{\gamma(e_{2}),c_{6}\}\). We note that if \(\gamma(wt_{2}^{1})=\gamma(e_{2})\) (respectively, \(\gamma(wt_{2}^{1})=c_{6}\)) then \(t_{2}^{1}\) (respectively, \(w\)) is an apex of \(\gamma\), contradicting that \(\gamma(T_{2}\cup\{w\})\) has no apices. Thus \(e_{2}=t_{2}^{2}t_{2}^{3}\) and so \(\gamma(t_{2}^{3}u_{k})\stackrel{{\triangle}}{{=}}\gamma(e_{2})\). Since \(\gamma(e_{2})\) cannot be a star, then \(\gamma(t_{2}^{2}u_{k})\stackrel{{\triangle}}{{=}}\gamma(e_{2})\), \(\gamma(t_{2}^{3}w)\stackrel{{\triangle}}{{=}}c_{6}\) and \(\gamma(t_{2}^{2}w)\stackrel{{\triangle}}{{=}}c_{6}\). But these imply that \(c_{6}\) is a star with apex in \(w\), a contradiction.
\(\bullet\) Suppose that \(\gamma(Q)\) has exactly 1 star with apex in \(T_{1}\). Let \(u\) be the only apex of \(\gamma\) in \(T_{1}\), let \(t_{1}^{l},t_{1}^{m}\) be such that \(T_{1}=\{u,t_{1}^{l},t_{1}^{m}\}\) with \(l<m\), and let \(e=t_{1}^{l}t_{1}^{m}\). It is enough to show that \(|\gamma(Q\setminus\{u\})|\geq 8\).
We note that the 6 colors \(c_{0},c_{1},c_{3},c_{4},\gamma(e),\gamma(e_{2})\) are pairwise distinct. Clearly, either \(t_{1}^{l}u_{j}\) or \(t_{1}^{m}u_{k}\) provides the 7th color \(c_{7}\).
\(-\) Suppose that \(\gamma(t_{1}^{l}u_{j})=c_{7}\). Then \(\gamma(t_{1}^{m}u_{k})\stackrel{{\triangle}}{{=}}\gamma(e)\). Since \(t_{1}^{m}\) cannot be an apex, there is \(h\in t_{1}^{l}*T_{2}\) such that \(\gamma(h)=\gamma(e)\).
If \(t_{2}^{3}\in e_{2}\), then \(\gamma(t_{2}^{3}u_{k})\stackrel{{\triangle}}{{=}}\gamma(e_{2})\), \(\gamma(l(e_{2})t_{1}^{m})\stackrel{{\triangle}}{{=}}\gamma(e)\) and \(\gamma(l(e_{2})v_{q})\stackrel{{\triangle}}{{=}}c_{3}\). These and the existence of \(h\) imply \(\gamma(t_{1}^{m}v_{p})\stackrel{{\triangle}}{{=}}c_{1}\) and \(\gamma(t_{1}^{m}u_{i})\stackrel{{\triangle}}{{=}}c_{1}\). Since \(t_{1}^{l}\) cannot be an apex, then \(t_{1}^{l}v_{p}\) cannot be colored \(c_{7}\), and so \(t_{1}^{l}v_{p}\) provides the required 8th color. Similarly, if \(t_{2}^{3}\notin e_{2}\) then \(e_{2}=t_{2}^{1}t_{2}^{2}\), and so \(\gamma(t_{2}^{2}u_{k})\stackrel{{\triangle}}{{=}}\gamma(e_{2})\), \(\gamma(t_{2}^{2}t_{1}^{m})\stackrel{{\triangle}}{{=}}\gamma(e)\) and \(\gamma(t_{2}^{2}v_{q})\stackrel{{\triangle}}{{=}}c_{3}\). These and the existence of \(h\) imply \(\gamma(t_{1}^{m}v_{p})\stackrel{{\triangle}}{{=}}c_{1}\) and \(\gamma(t_{1}^{m}u_{i})\stackrel{{\triangle}}{{=}}c_{1}\). Since \(t_{1}^{l}\) cannot be an apex, then \(t_{1}^{l}v_{p}\) cannot be colored \(c_{7}\), and so \(t_{1}^{l}v_{p}\) provides the required 8th color.
\(-\) Suppose that \(\gamma(t_{1}^{m}u_{k})=c_{7}\). Then \(\gamma(t_{1}^{l}u_{j})\stackrel{{\triangle}}{{=}}\gamma(e)\). Since \(t_{1}^{l}\) cannot be an apex, there is \(h\in t_{1}^{m}*\{u_{i},u_{j}\}\) such that \(\gamma(h)=\gamma(e)\). Then \(\gamma(t_{1}^{m}v_{q})\stackrel{{\triangle}}{{=}}c_{3}\), \(\gamma(t_{1}^{l}v_{p})\stackrel{{\triangle}}{{=}}c_{1}\) and \(\gamma(t_{1}^{l}u_{i})\stackrel{{\triangle}}{{=}}c_{1}\). Since \(t_{1}^{m}\) cannot be an apex, then \(t_{1}^{m}v_{p}\) cannot be colored \(c_{7}\), and so \(\gamma(t_{1}^{m}v_{p})\stackrel{{\triangle}}{{=}}c_{3}\). Then the triangle formed by \(e_{2}\) and \(v_{q}\) must be colored with \(\gamma(e_{2})\) and so \(\gamma(t_{1}^{m}l(e_{2}))\stackrel{{\triangle}}{{=}}c_{7}\). If \(t_{2}^{3}\in e_{2}\), then \(t_{2}^{3}u_{k}\) provides the required 8th color, and if \(t_{2}^{3}\notin e_{2}\) then \(e_{2}=t_{2}^{1}t_{2}^{2}\) and either \(t_{2}^{2}u_{k}\) or \(t_{2}^{3}v_{q}\) provides the required 8th color.
\(\bullet\) Suppose that \(\gamma\) has no stars in \(T_{1}\). Then either \(e_{1}=t_{1}^{1}t_{1}^{2}\) or \(e_{1}=t_{1}^{2}t_{1}^{3}\). Since \(c_{2}\) cannot be a star, there is \(h\in t_{1}^{2}*\{v_{p},v_{q}\}\) such that \(\gamma(h)=c_{2}\). We remark that the 7 colors \(c_{0},c_{1},c_{3},c_{2},c_{4},\gamma(e_{1}),\gamma(e_{2})\) are pairwise distinct.
\(-\) Suppose that \(e_{1}=t_{1}^{1}t_{1}^{2}\). Then either \(t_{1}^{1}u_{j}\) or \(t_{1}^{2}u_{k}\) needs a new color \(c_{8}\). Then \(\{\gamma(t_{1}^{1}u_{j}),\gamma(t_{1}^{2}u_{k})\}=\{\gamma(e_{1}),c_{8}\}\), as otherwise we are done. If \(\gamma(t_{1}^{1}u_{j})=\gamma(e_{1})\) and \(\gamma(t_{1}^{2}u_{k})=c_{8}\), then \(\gamma(t_{1}^{3}l(e_{2}))\stackrel{{\triangle}}{{=}}\gamma(e_{2})\), \(\gamma(t_{1}^{3}r(e_{2
color.
Case 2.3.3. Suppose that \(|\gamma(T_{1})|=2=|\gamma(T_{2})|\).
\(\bullet\) Suppose that \(\gamma(Q)\) has 2 stars with apices \(u,v\in T_{1}\). It is enough to show that \(|\gamma(Q\setminus\{u,v\})|\geq 7\). Let \(w\) be such that \(T_{1}=\{u,v,w\}\). By Lemma 11 we may assume that no point of \(T_{2}\cup\{w\}\) is an apex of \(\gamma\). We note that the 4 colors in \(C=\{c_{0},c_{1}\}\cup\gamma(T_{2})\) are pairwise distinct. Then \(c_{5}:=\gamma(wu_{i})\) must be the 5th color. Since \(w\) cannot be an apex of \(\gamma\), then \(c_{6}:=\gamma(wv_{q})\) must be the 6th color. We remark that possibly \(c_{3}\in\{c_{5},c_{6}\}\). Then either \(l(e_{2})u_{j}\) or \(r(e_{2})u_{k}\) provides the 7th color.
\(\bullet\) Suppose that \(\gamma(Q)\) has exactly 1 star with apex in \(T_{1}\). Let \(u\) be the only apex of \(\gamma\) in \(T_{1}\), and suppose that \(T_{1}=\{u,t^{l}_{1},t^{m}_{1}\}\) with \(l<m\). It is enough to show that \(|\gamma(Q\setminus\{u\})|\geq 8\). Let \(\Delta:=\Delta(t^{l}_{1},t^{m}_{1},v_{q})\). Since any segment that crosses a segment of \(\Delta\) is incident with a point of \(\Delta\cup\{v_{p}\}\), then \(\Delta\) is a separable subset of \(Q\) with respect to \(\gamma\) and so we can assume that \(|\gamma(\Delta)|\in\{1,2\}\), as otherwise we are done by Lemma 21. Let \(c_{2}:=\gamma(t^{l}_{1}t^{m}_{1})\). Note that the 6 colors in \(C=\{c_{0},c_{1},c_{2},c_{3}\}\cup\gamma(T_{2})\) are pairwise distinct.
If \(|\gamma(\Delta)|=1\) then \(\gamma(\Delta)=\{c_{2}\}\), and \(\gamma(t^{l}_{1}u_{k})\) must be the 7th color. Then the triangle \(\Delta^{\prime}:=\Delta(e_{2},t^{m}_{1})\) must be coloured with \(\gamma(e_{2})\) or we are done. Then either \(l(e_{2})v_{q}\) or \(r(e_{2})v_{q}\) provides the 8th color.
Suppose now that \(|\gamma(\Delta)|=2\). Then two sides of \(\Delta\) have the same color. Since \(t^{l}_{1}\) cannot be an apex, then \(\gamma(t^{l}_{1}v_{q})\neq c_{2}\), and so either \(\gamma(v_{q}t^{l}_{1})=\gamma(v_{q}t^{m}_{1})\) or \(\gamma(v_{q}t^{m}_{1})=c_{2}\). If \(\gamma(v_{q}t^{l}_{1})=\gamma(v_{q}t^{m}_{1})\), then \(c_{7}:=\gamma(v_{q}t^{l}_{1})\) must be the 7th color, as otherwise \(c_{3}=\gamma(v_{q}t^{l}_{1})\) is a star with apex \(v_{q}\in[]\) and we are done by Lemma 21. Then \(\gamma(t^{l}_{1}u_{k})\doteq c_{2}\) and either \(l(e_{2})t^{m}_{1}\) or \(r(e_{2})u_{k}\) provides the 8th color. Thus we may assume that \(\gamma(v_{q}t^{m}_{1})=c_{2}\) and hence \(c_{7}:=\gamma(t^{l}_{1}u_{k})\) must be the 7th color. Since \(t^{l}_{1}\) and \(t^{m}_{1}\) cannot be apices of \(\gamma\), then \(\gamma(v_{q}t^{l}_{1})\doteq c_{3}\) and \(\gamma(t^{m}_{1}u_{j})\doteq c_{7}\), respectively. If \(t^{2}_{2}\in e_{2}\) then either \(l(e_{2})u_{k}\) or \(t^{3}_{2}v_{q}\) provides the 8th color, and if \(t^{2}_{2}\notin e_{2}\) then either \(t^{2}_{2}u_{k}\) or \(t^{1}_{2}v_{q}\) provides the 8th color.
\(\bullet\) Suppose that no point of \(T_{1}\) is an apex of \(\gamma\). Then either \(e_{1}=t^{1}_{1}t^{2}_{1}\) or \(e_{1}=t^{2}_{1}t^{3}_{1}\) and there is a segment \(h\in t^{2}_{1}*\Delta_{U}\) such that \(\gamma(h)=\gamma(t^{1}_{1}t^{3}_{1})\). Let \(c_{2}\) and \(c_{4}\) be such that \(\{c_{2},\gamma(e_{1})\}=\gamma(T_{1})\) and \(\{c_{4},\gamma(e_{2})\}=\gamma(T_{2})\). We remark that the 7 colors in \(C=\{c_{0},\ldots,c_{4},\gamma(e_{1}),\gamma(e_{2})\}\) are pairwise distinct.
Let \(\Delta:=\Delta(t^{1}_{1},t^{2}_{1},v_{q})\). Since any segment that crosses \(\Delta\) is incident with \(v_{p}\) and all these are colored with \(c_{1}\), then \(\Delta\) is a separable subset of \(Q\) with respect to \(\gamma\), and so we can assume that \(|\gamma(\Delta)|\in\{1,2\}\), as otherwise we are done by Lemma 21.
Suppose that \(|\gamma(\Delta)|=1\). Then \(e_{1}=t^{1}_{1}t^{2}_{1}\) and \(\gamma(e_{1})=\gamma(\Delta)\). Then \(\gamma(t^{1}_{1}u_{k})\) is the 8th color. This last and the existence of \(h\) imply that either \(l(e_{2})t^{2}_{1}\) or \(r(e_{2})t^{3}_{1}\) provides the 9th color.
Suppose now that \(|\gamma(\Delta)|=2\). Since \(t^{1}_{1}\) cannot be an apex, we must have \(\gamma(t^{1}_{1}t^{2}_{1})\neq\gamma(t^{1}_{1}v_{q})\). Then either \(\gamma(t^{1}_{1}t^{2}_{1})=\gamma(t^{2}_{1}v_{q})\) or \(\gamma(t^{1}_{1}v_{q})=\gamma(t^{2}_{1}v_{q})\). If \(\gamma(t^{1}_{1}t^{2}_{1})=\gamma(t^{2}_{1}v_{q})\), then \(e_{1}=t^{1}_{1}t^{2}_{1}\) and \(c_{8}:=\gamma(t^{1}_{1}u_{k})\) is the 8th color. Since \(t^{1}_{1}\) cannot be an apex of \(\gamma\), then \(\gamma(t^{1}_{1}v_{q})\doteq c_{3}\). From this, the existence of \(h\), and our supposition that \(v_{q}\) cannot be an apex of \(\gamma\), it follows that \(\gamma(t^{3}_{1}v_{q})\) must be the 9th color. Suppose finally that \(\gamma(v_{q}t^{1}_{1})=\gamma(v_{q}t^{2}_{1})\). Since \(v_{q}\) cannot be an apex of \(\gamma\), then \(\gamma(v_{q}t^{1}_{1})\neq c_{3}\) and so \(\gamma(v_{q}t^{1}_{1})\) must be the 8th color. Then \(\gamma(r(e_{2})u_{k})\doteq\gamma(e_{2})\). If \(e_{1}=t^{1}_{1}t^{2}_{1}\), then either \(t^{1}_{1}u_{k}\) or \(t^{1}_{1}l(e_{2})\) provides the 9th color. Similarly, if \(e_{1}=t^{2}_{1}t^{3}_{1}\), either \(t^{3}_{1}u_{k}\) or \(t^{2}_{1}l(e_{2})\) provides the 9th color.
Case 2.4. Suppose that the segments in \(\mathcal{S}\) are coloured by \(\gamma\) as in Figure 3 (\(d\)).
Case 2.4.1. Suppose that \(|\gamma(T_{1})|=1=|\gamma(T_{2})|\). Let \(c_{6}:=\gamma(t^{1}_{1}t^{1}_{2})\), \(c_{7}:=\gamma(t^{2}_{1}t^{2}_{2})\) and \(c_{8}:=\gamma(t^{3}_{1}t^{3}_{2})\). Clearly, the colors \(c_{0},c_{1},c_{3},c_{6},c_{7},c_{8},\gamma(T_{1}),\gamma(T_{2})\) are pairwise distinct. Then \(\gamma(t^{3}_{1}u_{k})\doteq c_{8}\) and \(\gamma(t^{2}_{2}u_{k})\doteq c_{8}\), and so \(t^{3}_{1}u_{j}\) provides the 9th color.
Case 2.4.2. Suppose that \(|\gamma(T_{1})|=1\) and \(|\gamma(T_{2})|=2\). Let \(\gamma(T_{1})=\{c_{2}\}\), \(\gamma(T_{2})=\{c_{4},\gamma(e_{2})\}\), \(c_{7}:=\gamma(t^{1}_{1}u_{j})\) and
\(\gamma(wu_{i})\stackrel{{\cong}}{{=}}c_{1}\) and \(\gamma(t_{2}^{i}u_{k})\stackrel{{\cong}}{{=}}c_{4}\). Then either \(l(e_{2})v_{p}\) or \(r(e_{2})v_{q}\) provides the 7th color.
\(\bullet\) Suppose that \(\gamma(Q)\) has exactly 1 star with apex in \(T_{1}\). Let \(u\) be the only apex of \(\gamma\) in \(T_{1}\), let \(t_{1}^{l},t_{1}^{m}\) be such that \(T_{1}=\{u,t_{1}^{l},t_{1}^{m}\}\) with \(l<m\). We need to show \(|\gamma(Q\setminus\{u\})|\geq 8\). We note that \(c_{0},c_{1},c_{2}:=\gamma(t_{1}^{l}t_{1}^{m}),c_{3},c_{4},\gamma(e_{2})\) are pairwise distinct. Then either \(l(e_{2})u_{j}\) or \(r(e_{2})u_{k}\) provides the 7th color \(c_{7}\).
\(-\) Suppose that \(\gamma(r(e_{2})u_{k})=c_{7}\). Then \(\gamma(l(e_{2})u_{j})\stackrel{{\cong}}{{=}}\gamma(e_{2})\), \(\gamma(t_{1}^{l}u_{j})\stackrel{{\cong}}{{=}}c_{2}\), \(\gamma(t_{1}^{m}u_{j})\stackrel{{\cong}}{{=}}c_{2}\), \(\gamma(t_{1}^{l}u_{i})\stackrel{{\cong}}{{=}}c_{1}\) and \(\gamma(t_{1}^{m}u_{k})\stackrel{{\cong}}{{=}}c_{7}\). If \(t_{2}^{3}\in e_{2}\), then either \(t_{1}^{m}v_{p}\) or \(t_{2}^{3}v_{q}\) provides the 8th color. Otherwise \(e_{2}=t_{2}^{1}t_{2}^{2}\), and \(\gamma(t_{2}^{1}t_{1}^{m})\stackrel{{\cong}}{{=}}\gamma(e_{2})\), \(\gamma(t_{2}^{2}u_{j})\stackrel{{\cong}}{{=}}c_{7}\) and \(\gamma(t_{2}^{2}u_{k})\stackrel{{\cong}}{{=}}c_{4}\). Then either \(t_{1}^{m}v_{p}\) or \(t_{2}^{2}v_{q}\) provides the 8th color.
\(-\) Suppose now that \(\gamma(l(e_{2})u_{j})=c_{7}\). Then \(\gamma(r(e_{2})u_{k})\stackrel{{\cong}}{{=}}\gamma(e_{2})\). We may assume that \(\gamma(t_{1}^{m}u_{k})\in\{c_{2},c_{7}\}\), as otherwise \(\gamma(t_{1}^{m}u_{k})\) is the required 8th color. If \(\gamma(t_{1}^{m}u_{k})=c_{7}\), then \(\gamma(t_{1}^{l}l(e_{2}))\stackrel{{\cong}}{{=}}c_{2}\) and \(\gamma(t_{1}^{l}u_{j})\stackrel{{\cong}}{{=}}c_{2}\). Since these imply (a contradiction) that \(t_{1}^{l}\) is an apex of \(\gamma\), we may assume that \(\gamma(t_{1}^{m}u_{k})=c_{2}\). Again, since \(t_{1}^{m}\) cannot be an apex, then \(\gamma(t_{1}^{l}u_{k})\stackrel{{\cong}}{{=}}c_{2}\), \(\gamma(t_{1}^{l}u_{j})\stackrel{{\cong}}{{=}}c_{7}\), \(\gamma(t_{1}^{l}u_{i})\stackrel{{\cong}}{{=}}c_{1}\) and \(\gamma(l(e_{2})u_{k})\stackrel{{\cong}}{{=}}\gamma(e_{2})\). Then either \(t_{1}^{m}v_{p}\) or \(l(e_{2})v_{q}\) provides the 8th color.
\(\bullet\) Suppose that no point of \(T_{1}\) is an apex of \(\gamma(Q)\). Then \(e_{1}=t_{1}^{2}t_{1}^{m}\) for some \(m\in\{1,3\}\). Let \(c_{2}\) and \(c_{4}\) be such that \(\{c_{2},\gamma(e_{1})\}=\gamma(T_{1})\) and \(\{c_{4},\gamma(e_{2})\}=\gamma(T_{2})\). Since \(\gamma\) has no apices in \(T_{1}\), there is a segment \(h\in t_{1}^{2}*\Delta_{U}\) such that \(\gamma(h)=c_{2}\). We note that the 7 colors \(c_{0},\ldots,c_{4},\gamma(e_{1}),\gamma(e_{2})\) are pairwise distinct. Clearly, either \(l(e_{2})u_{j}\) or \(r(e_{2})u_{k}\) provides the 8th color \(c_{8}\).
\(-\) Suppose that \(\gamma(r(e_{2})u_{k})=c_{8}\). Then \(\gamma(l(e_{2})u_{j})\stackrel{{\cong}}{{=}}\gamma(e_{2})\), \(\gamma(t_{1}^{m}u_{j})\stackrel{{\cong}}{{=}}\gamma(e_{1})\) and \(\gamma(t_{1}^{2}v_{q})\stackrel{{\cong}}{{=}}c_{3}\). Since \(t_{1}^{m}\) cannot be an apex, there is \(h_{1}\in t_{1}^{2}*\Delta_{U}\) such that \(\gamma(h_{1})=\gamma(e_{1})\). The existence of \(h_{1}\) and \(h\) imply that \(\gamma(t_{1}^{1}v_{p})\stackrel{{\cong}}{{=}}c_{1}\) and so \(\gamma(u_{1}t_{1}^{1})\stackrel{{\cong}}{{=}}c_{1}\). Then either \(t_{1}^{2}v_{p}\) or \(t_{1}^{3}v_{q}\) provides the 9th color.
\(-\) Suppose that \(\gamma(l(e_{2})u_{j})=c_{8}\). Then \(\gamma(r(e_{2})u_{k})\stackrel{{\cong}}{{=}}\gamma(e_{2})\). Note that if \(\gamma(t_{1}^{m}u_{j})=c_{8}\), then the existence of \(h\) implies that \(\gamma(t_{1}^{4-m}l(e_{2}))\) is the 9th color and we are done. Thus \(\gamma(t_{1}^{m}u_{j})\stackrel{{\cong}}{{=}}\gamma(e_{1})\), and as in the previous paragraph, we can deduce \(\gamma(t_{1}^{2}v_{q})\stackrel{{\cong}}{{=}}c_{3}\) and that there is \(h_{1}\in t_{1}^{2}*\Delta_{U}\) such that \(\gamma(h_{1})=\gamma(e_{1})\). From the existence of \(h\) and \(h_{1}\) it follows that \(\gamma(t_{1}^{1}v_{p})\stackrel{{\cong}}{{=}}c_{1}\) and \(\gamma(t_{1}^{1}u_{i})\stackrel{{\cong}}{{=}}c_{1}\). Then either \(t_{1}^{2}v_{p}\) or \(t_{1}^{3}v_{q}\) provides the 9th color.
## 6 The proof of Theorem 2
We note that \(d(2)=1\) follows trivially. On the other hand, by the Szekeres and Peters result [15], we know that any point set \(P\) in the plane in general position with \(|P|\geq 17\) contains a subset \(Q\) such that \(Q\sim C_{6}\). Starting with a coloring \(\beta^{\prime}\) of \(D(Q)\) with 3 colors, we proceed as in the proof of Proposition 3 in order to extend \(\beta^{\prime}\) to a coloring \(\beta\) of \(D(P)\) by adding a new star \(S_{i}\) of color \(c_{i}\) with apex \(p_{i}\) for each \(p_{i}\in P\setminus Q\). Since \(\beta\) has exactly \(|P|-3\) colors, we have that \(d(n)\leq n-3\) for any integer \(n\geq 17\).
Let \(\gamma\) be an optimal coloring of \(D(X)\). By Lovasz's theorem and Proposition 5\((i)\), in order to show Theorem 2 it suffices to show that \(|\gamma(X)|\geq 14\). We analyze separately several cases, depending on the sizes of \(\gamma(A)\) and \(\gamma(B)\). By Corollary 12 we know that \(3\leq|\gamma(A)|,|\gamma(B)|\leq 4\).
Case 1. \(|\gamma(A)|=3\) and \(|\gamma(B)|=3\). It follows from Proposition 7 that there are \(a_{i},a_{j},a_{k}\in A\) (respectively, \(b_{p},b_{q},b_{r}\in B\)) with \(i<j<k\) (respectively, \(p<q<r\)) such that none of them is an apex of \(\gamma(A)\) (respectively, \(\gamma(B)\)). Then, \(c_{1}=\gamma(a_{i}b_{p}),c_{2}=\gamma(a_{j}b_{q})\) and \(c_{3}=\gamma(a_{k}b_{r})\) are pairwise distinct. Let \(ab\) be the segment in \(\{a_{i}b_{p},a_{j}b_{q},a_{k}b_{r}\}\) that is closest to \(t_{1}^{1}\). By the choice of \(ab\) we have that \(\gamma(A\cup B\setminus\{a,b\})\) is disjoint from \(\gamma
\(T_{1}\cup\{a_{i},a_{j},b_{p},b_{q},b_{r}\}\cup T_{2}\). Again, from Proposition 5\((ii)\) and the choice of \(a_{i},a_{j},b_{l_{1}},b_{l_{2}}\) we can deduce that \(|\gamma(X)|\geq|\gamma(Q)|+5\). By applying Lemma 24 to \(Q\) we know that \(|\gamma(Q)|\geq 9\), as required.
Case 3. Suppose that \(|\gamma(A)|=3\) and \(|\gamma(B)|=4\). This case can be handled in the same manner as Case 2 (just interchange the roles of \(A\) and \(B\)).
Case 4. Suppose that \(|\gamma(A)|=4\) and \(|\gamma(B)|=4\).
(4.1) Suppose that \(\gamma^{*}(I)\leq 4\) for some \(I\in\{A,B\}\). We only analyze the case \(I=A\) because the case \(I=B\) can be handled analogously. Then there is \(a\in A\) such that \(a\) is not an apex of \(\gamma(A)\).
\(\bullet\) Suppose that \(\gamma^{*}(B)\leq 4\). Then there is \(b\in B\) such that \(b\) is not an apex of \(\gamma(B)\). Let \(Q=T_{1}\cup\{a,b\}\cup T_{2}\). From Proposition 5\((ii)\) and the choice of \(a\) and \(b\) we can deduce that \(|\gamma(X)|\geq|\gamma(Q)|+|\gamma(A)|+|\gamma(B)|=|\gamma(Q)|+8\). By applying Lemma 14 to \(Q\) we obtain that \(|\gamma(Q)|\geq 6\), as required.
\(\bullet\) Suppose that \(\gamma^{*}(B)=5\). By Proposition 6, there are \(b_{p},b_{q}\in B\) with \(p<q\) such that \(e=b_{p}b_{q}\) is a \(2-\)star of \(\gamma(B)\) and neither \(b_{p}\) nor \(b_{q}\) is an apex of any other star of \(\gamma(B)\). Let \(Q=T_{1}\cup\{b_{p},b_{q},a\}\cup T_{2},A^{\prime}=A\setminus\{a\}\) and \(B^{\prime}=B\setminus\{b_{p},b_{q}\}\). From Proposition 5\((ii)\) and the choice of \(b_{p},b_{q}\) and \(a\) we can deduce that \(|\gamma(X)|\geq|\gamma(Q)|+|\gamma(A^{\prime})|+|\gamma(B^{\prime})|=|\gamma(Q )|+7\). By applying Lemma 20 to \(Q\) we obtain that \(|\gamma(Q)|\geq 7\), as required.
(4.2) Suppose that \(\gamma^{*}(A)=5=\gamma^{*}(B)\). By Proposition 6, there are \(a_{i},a_{j}\in A\) (resp. \(b_{p},b_{q}\in B\)) with \(i<j\) (resp. \(p<q\)) such that \(\{a_{i}a_{j}\}\) (resp. \(\{b_{p}b_{q}\}\)) is a \(2-\)star of \(\gamma(A)\) (resp. \(\gamma(B)\)) and none of \(a_{i},a_{j}\) (resp. \(b_{p},b_{q}\)) is an apex of any other star of \(\gamma(A)\) (resp. \(\gamma(B)\)). Let \(Q=T_{1}\cup\{a_{i},a_{j},b_{p},b_{q}\}\cup T_{2}\). From Proposition 5\((ii)\) and the choice of \(a_{i},a_{j},b_{p},b_{q}\) we can deduce that \(|\gamma(X)|\geq|\gamma(Q)|+6\). By applying Lemma 23 to \(Q\) we obtain that \(|\gamma(Q)|\geq 8\), as required.
|
2309.12869 | Coulomb and Higgs Phases of $G_2$-manifolds | Ricci flat manifolds of special holonomy are a rich framework as models of
the extra dimensions in string/$M$-theory. At special points in vacuum moduli
space, special kinds of singularities occur and demand a physical
interpretation. In this paper we show that the topologically distinct
$G_2$-holonomy manifolds arising from desingularisations of codimension four
orbifold singularities due to Joyce and Karigiannis correspond physically to
Coulomb and Higgs phases of four dimensional gauge theories. The results
suggest generalisations of the Joyce-Karigiannis construction to arbitrary
ADE-singularities and higher order twists which we explore in detail in
explicitly solvable local models. These models allow us to derive an
isomorphism between moduli spaces of Ricci flat metrics on these non-compact
$G_2$-manifolds and flat ADE-connections on compact flat 3-manifolds which we
establish explicitly for $\operatorname{SU}(n)$. | Bobby Samir Acharya, Daniel Andrew Baldwin | 2023-09-22T13:46:46Z | http://arxiv.org/abs/2309.12869v1 | # Coulomb and Higgs Phases of \(G_{2}\)-manifolds.
###### Abstract
Ricci flat manifolds of special holonomy are a rich framework as models of the extra dimensions in string/\(M\)-theory. At special points in vacuum moduli space, special kinds of singularities occur and demand a physical interpretation. In this paper we show that the topologically distinct \(G_{2}\)-holonomy manifolds arising from desingularisations of codimension four orbifold singularities due to Joyce and Karigiannis correspond physically to Coulomb and Higgs phases of four dimensional gauge theories. The results suggest generalisations of the Joyce-Karigiannis construction to arbitrary ADE-singularities and higher order twists which we explore in detail in explicitly solvable local models. These models allow us to derive an isomorphism between moduli spaces of Ricci flat metrics on these non-compact \(G_{2}\)-manifolds and flat ADE-connections on compact flat 3-manifolds which we establish explicitly for \(\mathrm{SU}(n)\).
KCL-PH-TH/2023-23
## 1 Introduction.
One of the important lessons from superstring/\(M\)-theory over the last four decades has been the significant role played by special kinds of singularities in space. The broad and rich framework of the underlying theory can often be used to show that certain singularities may be perfectly sensible physically and, moreover, often support localised, light, interacting degrees of freedom. The geometric and topological properties of such singularities often provide microscopic insights into the fundamental properties of the quantum field theories which describe these degrees of freedom. Therefore it is important to try to understand which kinds of singularities are physically sensible and to provide a description of the physics supported at such singularities.
The classic examples of such singularities are orbifold singularities in space [16, 15] of which the supersymmetric, conical ADE-singularities (\(\mathbb{C}^{2}/\Gamma_{ADE}\)) are perhaps the best understood. Other supersymmetric cases are also reasonably well understood to some extent, such as singularities in Calabi-Yau threefolds, but there is much more to explore; for instance much of the literature focuses on algebraic descriptions of such singularities whilst the properties of the spacetime background are less well studied. See [2] for further comments.
We will discuss four dimensional supersymmetric vacua of \(M\)-theory obtained by modelling the 7 extra dimensions by a space \(X\), with metric \(g\), whose holonomy group is the exceptional Lie group \(G_{2}\). In particular, for smooth \(X\), the physics of \(M\)-theory has only Abelian gauge symmetries and only neutral light particles. Non-Abelian gauge fields have been shown to arise from codimension four orbifold singularities of \(X\)[4, 1] whilst chiral fermions arise from particular conical codimension seven singularities [3]. In both cases, these degrees of freedom are in fact wrapped \(M2\)-branes which have collapsed to formally zero size (and mass) at the singularity. Such models of \(M\)-theory on \(G_{2}\)-holonomy spaces have been shown to give rise to models of physics beyond the Standard Model with a rich phenomenology [7, 6].
Proving the existence of \(G_{2}\)-holonomy metrics on a compact 7-manifold \(X\) is notoriously difficult. The known existence results involve surgery and gluing methods whereby one constructs \(X\) by gluing together non-compact model spaces along common boundary regions [21, 22, 26, 14, 28]. Often, one starts with a model space \(X_{0}\) that itself has very special kinds of singularities, removes a neighbourhood of the singular regions and glues in a suitable model space which gives a smooth \(X\). One then uses perturbation theory methods to prove the existence of the \(G_{2}\)-holonomy metric [21, 24]. Thankfully, the kinds of singularities which arise in gluing constructions themselves tend to have interesting and sensible physical interpretations, with localised light degrees of freedom, often describable by an interacting quantum field theory.
Joyce and Karigiannis [25] have shown that, under certain topological assumptions, perhaps the simplest orbifold singularities in a compact \(G_{2}\)-holonomy space \((X_{0},g_{0})\) can be desingularised to produce smooth topologically distinct \(G_{2}\)-manifolds \((X_{c},g_{c})\) and \((X_{h},g_{h})\) respectively. The goal of this paper is to interpret these results physically and to generalise them to more complicated singularities. The main conclusions are that the topologically distinct desingularisations considered by Joyce and Karigiannis are describable physically by the Coulomb and Higgs branches, respectively, of certain four dimensional gauge theories. The basic result is explained in section three, after reviewing the relevant features of Joyce-Karigiannis in section two.
In section four we introduce some simple, exactly solvable local models of the kind originally introduced in [1] and subsequently studied in [10, 31]. These models are obtained as fibrations of ADE-type ALE (or even ALF) spaces over compact flat 3-manifolds and we establish a correspondence between the (complexified) moduli space of \(G_{2}\)-holonomy metrics on these 7-manifolds, the moduli space of (complex) flat connections over the 3-manifolds and the classical moduli space of the physical four dimensional field theories. At the end of the paper we combine all the results to show that massless matter in fundamental representations of the gauge group arise at special points in the semi-classical moduli space of \(M\)-theory on certain _compact_\(G_{2}\)-holonomy manifolds and determine the light particle spectrum of most of Joyce's compact examples.
_Background Material and Notation._ In this paragraph, for the ease of the reader, we collect some background
material and definitions concerning \(G_{2}\)-manifolds. A \(G_{2}\)-structure on a compact 7-manifold \(X\) is defined by a 3-form \(\varphi\) which is \(G_{2}\)-invariant at each point wrt the natural action of \(G_{2}\) on the tangent spaces of \(X\) at each point: \(\mathbb{R}^{7}\equiv T(X)|_{pt}\). Since \(G_{2}\subset\mathrm{SO}(7)\), \(\varphi\) induces a metric \(g(X)\) and an orientation on \(X\). The holonomy group of \(g(X)\) is a subgroup of \(G_{2}\) if and only if \(\varphi\) is parallel wrt the Levi-Civita connection, \(\nabla_{g}\varphi=0\). This is equivalent to \(d\varphi=d^{*}\varphi=0\). If \(\nabla_{g}\varphi=0\) and the universal cover of \(X\) is compact then \(Hol(g(X))=G_{2}\). The latter conditions are equivalent to the existence of a single parallel spinor field, \(\eta\), \(\nabla_{g}\eta=0\), which also implies that \(G_{2}\)-manifolds preserve supersymmetry when used as models of the extra dimensions in superstring/\(M\)-theory. Since the Ricci tensor of a \(G_{2}\)-holonomy metric is identically zero, the classical vacua of such models have zero cosmological constant.
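For concreteness, a worked normalisation may be useful: in one common convention (signs and index orderings vary across the literature), the model 3-form on \(\mathbb{R}^{7}\), whose pointwise stabiliser in \(\mathrm{GL}(7,\mathbb{R})\) is \(G_{2}\), can be written as

\[\varphi_{0}=dx^{123}+dx^{145}+dx^{167}+dx^{246}-dx^{257}-dx^{347}-dx^{356},\qquad dx^{ijk}:=dx^{i}\wedge dx^{j}\wedge dx^{k},\]

so that a \(G_{2}\)-structure on \(X\) is precisely a 3-form \(\varphi\) which can be identified with \(\varphi_{0}\) at each point by a suitable choice of frame.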
## 2 Joyce-Karigiannis Manifolds.
In this section we give a brief overview of constructions of compact \(G_{2}\)-holonomy manifolds with emphasis on the Joyce-Karigiannis construction which will feature throughout this paper. For reasons of brevity our description will necessarily be sketchy with no analytic details at all on the existence results, but we encourage the reader to consult the original papers for more details.
The first examples of compact manifolds with \(G_{2}\)-holonomy are due to Joyce [21, 22]. These were constructed by a generalised Kummer construction, where one begins with a finite quotient of a 7-torus, \(T^{7}/\Gamma\), which is a singular \(G_{2}\)-orbifold. Then, for suitable choices of \(\Gamma\), one can remove the singular set and glue in special holonomy model spaces to produce a smooth 7-manifold, which, in favourable circumstances, will have a \(G_{2}\)-structure which is approximately close to being \(G_{2}\)-holonomy. For such \(G_{2}\)-structures, Joyce's main existence theorem asserts that one can perturb this \(G_{2}\)-structure to a genuinely \(G_{2}\)-holonomy structure. We will meet some explicit examples in section four.
Another construction is the twisted connected sum construction of [26, 14, 28] in which one glues together a pair of asymptotically cylindrical Calabi-Yau three folds times a circle in a specific way, proves the existence of an approximately \(G_{2}\)-holonomy structure and again one applies Joyce's existence theorem.
More recently, Joyce and Karigiannis constructed \(G_{2}\)-holonomy manifolds by resolving codimension four orbifold singularities. These are the focus of this paper. The starting point is \((X_{0},\varphi_{0})\), a compact \(G_{2}\)-holonomy orbifold with torsion free \(G_{2}\)-structure \(\varphi_{0}\). Further suppose that the orbifold singularities occur in codimension four along a connected 3-manifold \(L\). Then the singularities must be of \(ADE\) type, i.e. the fibers of the normal bundle to \(L\) will be of the form \(\mathbb{R}^{4}/\Gamma_{ADE}\) with \(\Gamma_{ADE}\) a finite subgroup of \(\mathrm{SU}(2)\) acting irreducibly on \(\mathbb{R}^{4}\). Joyce and Karigiannis restrict to the simplest case when \(\Gamma=\mathbb{Z}_{2}\). They show that, under certain conditions which we describe shortly, the orbifold singularities of \((X_{0},\varphi_{0})\) can be desingularised, by excising a neighbourhood of \(L\) and gluing in a certain family of Eguchi-Hanson 4-manifolds, \(M_{EH}=T^{*}S^{2}\), parametrised by \(L\). A key assumption is that \(L\) admits a nowhere vanishing harmonic 1-form, \(\alpha_{c}\), with respect to the induced metric on \(L\). The volume of the sphere at the origin of \(T^{*}S^{2}\) is controlled by the norm of \(\alpha_{c}\) times an overall scale \(t\), i.e. \(\mathrm{Vol}(S^{2})=\pi t|\alpha_{c}|\). This produces a smooth 7-manifold, \(X_{c}\), which they prove has metrics of \(G_{2}\)-holonomy. They also consider a \(\mathbb{Z}_{2}\)-twisted version of the construction which produces a different 7-manifold, \(X_{h}\). In these constructions, the model space \(M_{EH}\times L\) does not have a known exactly \(G_{2}\)-holonomy metric. This makes this construction different to those described above, since it is not based on gluing together model metrics. However, remarkably, the authors are able to prove that suitable cancellations occur, allowing for the existence of an approximately \(G_{2}\)-holonomy structure on the compact 7-manifold such that Joyce's existence result can again be applied. This proves that the 7-manifolds \(X_{c}\) and \(X_{h}\) admit metrics with \(G_{2}\)-holonomy. We will need to describe some aspects of the topology of these manifolds.
In the first case a small neighbourhood of the singular set is removed and replaced by \(M_{EH}\times L\). This gluing procedure increases both the second and third Betti numbers, in the sense that \(b^{i}(X_{c})=b^{i}(X_{0})+b^{i-2}(L)\) for
\(i=2,3\). The induced metric on \(M_{EH}\) admits a harmonic 2-form, \(\beta\), essentially the Poincare dual of the zero-section of \(T^{*}S^{2}\) and \(\beta\) extends to a harmonic form in \(X_{c}\). The wedge product of \(\beta\) with \(\alpha_{c}\) is a harmonic 3-form on \(X_{c}\). Furthermore, if \(\gamma\) is any other harmonic 1-form on \(L\), then \(\alpha_{c}+\epsilon\gamma\) will also be nowhere vanishing for small \(\epsilon\). This explains the Betti numbers of \(X_{c}\). In terms of homology, the Poincare dual of \(\beta\) in \(X_{c}\) is a 5-cycle of topology \(S^{2}\times L\), where \(S^{2}\) can be thought of as the zero-section of \(T^{*}S^{2}\). The dual of \(\alpha_{c}\wedge\beta\) is a 4-cycle with topology \(S^{2}\times\Sigma\), with \(\Sigma\) being the Poincare dual of \(\alpha_{c}\) in \(L\).
In the \(\mathbb{Z}_{2}\)-twisted case, the singular set is replaced by \((M_{EH}\times\hat{L})/\mathbb{Z}_{2}\) where the \(\mathbb{Z}_{2}\) acts non-trivially on \(M_{EH}\). This is a non-trivial fibration over \(L=\hat{L}/\mathbb{Z}_{2}\) with \(M_{EH}\) fibres. A key fact is that \(\beta\) is odd under this action and hence \(X_{h}\) does not inherit any additional harmonic 2-forms from the gluing and \(b^{2}(X_{h})=b^{2}(X_{0})\). The 3-form \(\alpha_{h}\wedge\beta\), however, is \(\mathbb{Z}_{2}\)-invariant and becomes a harmonic 3-form on \(X_{h}\) and hence \(b^{3}(X_{h})=b^{3}(X_{0})+b^{1}(L,\mathbb{Z}_{2})\), where the last term is the number of \(\mathbb{Z}_{2}\)-twisted harmonic 1-forms on \(L\). This is equal to the number of independent nowhere vanishing harmonic 1-forms on \(\hat{L}\) which are \(\mathbb{Z}_{2}\)-odd. The Poincare dual of this harmonic 3-form is topologically of the form \((S^{2}\times\hat{\Sigma})/\mathbb{Z}_{2}\), where \(S^{2}\) is the zero section in \(M_{EH}\) and \(\hat{\Sigma}\) is the Poincare dual of \(\alpha_{h}\) in \(\hat{L}\). We note in passing that these and other \(\mathbb{Z}_{2}\)-twisted harmonic fields have a variety of applications in mathematical gauge theories in various dimensions [35, 17, 36]
The reason that the harmonic 1-form is assumed to have no zeroes is that the volumes of the spheres in the Eguchi-Hanson spaces are directly proportional to the norm of \(\alpha_{c,h}\). If \(\alpha_{c,h}\) were allowed to have a zero the spheres would collapse to a point there and the total space would develop an additional singularity. Unfortunately, having control over the metric and curvature tensor in this more general situation is rather difficult, hence the assumption that \(\alpha_{c,h}\) has no zeroes. Physically, as we discuss in the next section, one actually expects the existence of light degrees of freedom, in fact chiral fermions, when \(\alpha_{c,h}\) has isolated zeroes.
## 3 Interpretation in \(M\)-theory
\(M\)-theory compactified on a manifold of \(G_{2}\)-holonomy \((X,\varphi)\) gives rise semi-classically to a four dimensional supergravity theory with \(b^{2}(X)\) U(1) vector multiplets, \(b^{3}(X)\) neutral chiral multiplets, \(\Phi_{i}\), and four supercharges (\(i=1,\dots,b^{3}(X)\)). The complex scalar fields, \(\phi_{j}=t_{j}+is_{j}\), in the chiral multiplets contain axions, \(t_{j}\), from harmonic modes of the 3-form field \(C\) and the moduli, \(s_{j}\), of the \(G_{2}\)-holonomy metric which appear as harmonic deformations of \(\varphi\).
Additional, physically relevant light particles can arise if \(X\) has special kinds of singularities. Non-abelian gauge fields of type \(ADE\) arise if \(X\) contains codimension four orbifold singularities of type \(ADE\)[4, 1]. Chiral fermions charged under such gauge symmetries will arise from additional, special kinds of conical codimension seven singularities [3].
In the Joyce-Karigiannis construction, we consider a \(G_{2}\)-orbifold with a codimension four \(A_{1}\)-singularity along a 3-manifold \(L\subset X\), hence our story begins with an SU(2) gauge theory on \(L\times\mathbb{R}^{3,1}\) with the latter factor being our four dimensional spacetime. Since \(G_{2}\)-holonomy preserves supersymmetry, this SU(2) gauge theory is supersymmetric. By integrating over \(L\) and neglecting massive modes, our goal is to provide a complete description of the low energy dynamics of this SU(2) gauge theory in the form of a four-dimensional effective field theory.
These \(M\)-theory backgrounds have been analysed previously, beginning in [1], and later in [5, 29, 10, 13]. To briefly summarise the analysis, one is considering 7d SU(2) supersymmetric Yang-Mills theory compactified on \(L\). The 7d theory in flat space is known to have three scalar fields \(\vec{\phi}\), each in the adjoint representation of SU(2). When compactified on \(L\) in a \(G_{2}\)-orbifold the three fields become the components of a 1-form field \(B\), again in the adjoint of SU(2). We thus have a Yang-Mills gauge field \(A\) and a 1-form Higgs field \(B\) as the bosonic fields on \(L\). These fields naturally pair up into a complex gauge field \(\mathcal{A}=A+iB\) and the condition on \(\mathcal{A}\) which minimises the potential whilst preserving supersymmetry is that \(\mathcal{A}\) is a harmonic flat connection on \(L\) [5, 29]. The space of classical vacua
of the low energy effective theory is therefore the space of flat complex SU(2) connections on \(L\). In general, this space will have distinct disconnected components. As we will show, the distinct components naturally correspond to the topologically distinct desingularisations of \(X_{0}\) constructed in [25]. Previous analyses have focused on the flat connections continuously connected to the identity.
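Concretely, writing \(\mathcal{A}=A+iB\) as above, one common way to summarise these vacuum conditions (schematically, and only as a guide to the statements of [5, 29]) is

\[F_{\mathcal{A}}=0\;\Longleftrightarrow\;F_{A}-\tfrac{1}{2}[B\wedge B]=0\ \ \text{and}\ \ d_{A}B=0,\qquad\text{supplemented by}\qquad d_{A}\star B=0,\]

where \(\star\) is the Hodge star of the induced metric on \(L\): the first two equations say that the complexified connection \(\mathcal{A}\) is flat, while the last says that \(B\) is covariantly co-closed, i.e. harmonic.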
### The Coulomb Phase
The identity connected component of the space of flat SU(2) connections is \(b^{1}(L)\)-dimensional. Once complexified by \(B\) we obtain \(b^{1}(L)\) massless chiral multiplets in the four-dimensional effective theory. These naturally match up with the \(b^{1}(L)\) moduli of \(X_{c}\) which desingularise the orbifold \(X_{0}\), as reviewed above.
A crucial fact about the Joyce-Karigiannis theorem is the assumption that the harmonic 1-form must be nowhere vanishing; whereas in the physical analysis, the nowhere zero condition is generally not required. The harmonic 1-form \(\alpha_{c}\) which appears in the Joyce-Karigiannis theorem is identified with \(B\) in the direction of the Cartan subalgebra of SU(2) and further can be identified with the volume and complex structure of the \(S^{2}\) in the centre of \(T^{*}S^{2}\) as it varies over \(L\). If \(\alpha_{c}\) had a zero at a point \(p\), \(B\) would vanish and hence SU(2) gauge symmetry would be restored at \(p\). Geometrically, the norm of \(\alpha_{c}\) controls the size of the two-sphere of \(M_{EH}\), hence, away from \(p\) the glued-in \(M_{EH}\) spaces are smooth. But at \(p\) the \(M_{EH}\) degenerates to an orbifold. At this point we expect that \(X\) itself develops a further singularity. In fact, the cone over \(\mathbb{CP}^{3}\) i.e. \(\mathbb{R}^{+}\times\mathbb{CP}^{3}\) is, topologically, a 3-dimensional family of Eguchi-Hanson spaces which at the origin degenerate to \(\mathbb{R}^{4}/\mathbb{Z}_{2}\), and this was precisely the description given in [3], where this additional singularity is interpreted as giving rise to a chiral fermion charged under the U(1) gauge symmetry. One would certainly like to have a better understanding of what the zeroes of harmonic 1-forms on \(L\) imply physically and for the would-be \(G_{2}\)-holonomy space \((X,\varphi)\).
In any case, when \(\alpha_{c}\) has no zeroes, at a generic point in moduli space, SU(2) is broken to its maximal torus and hence we refer to this branch of vacua as the Coulomb branch (hence the subscript on \(\alpha_{c}\)). Hence, the low energy description is simply an \(\mathcal{N}=1\) supersymmetric SU(2) gauge theory with \(b^{1}(L)\) adjoint chiral multiplets, as originally found in [1].
### The Higgs Phase
Interpretation of the dynamics in \(M\)-theory on \(X_{h}\) is one of the main results. We will see that in this case there is a non-identity connected component of the space of flat SU(2)-connections, which naturally corresponds to the moduli of \(X_{h}\). In this case we have \(L=\hat{L}/\mathbb{Z}_{2}\) and \(\hat{L}\) has a \(\mathbb{Z}_{2}\)-odd harmonic 1-form, \(\alpha_{h}\), with no zeroes. The existence of \(\alpha_{h}\) implies that \(b_{1}(\hat{L})\) is at least one and that \(\pi_{1}(\hat{L})\) contains an element \(g_{\alpha}\) of infinite order. Furthermore, this element is \(\mathbb{Z}_{2}\) odd, hence, if \(g_{\beta}\) gives the order two action on \(\hat{L}\),
\[g_{\beta}g_{\alpha}g_{\beta}^{-1}=g_{\alpha}^{-1} \tag{1}\]
In general, although \(g_{\beta}\) is of order two on \(\hat{L}\) it need not be of order two on its universal cover. Hence
\[g_{\beta}^{2}=g_{\gamma} \tag{2}\]
for some other element \(g_{\gamma}\). Vacua of the 7d SU(2) Yang-Mills theory on \(L\) are given by specifying a flat SU(2) connection on \(L\). Modulo conjugation, these are just given by a set of matrices in SU(2) satisfying the relations of \(\pi_{1}(L)\). In particular, we would like to satisfy the above two relations with SU(2) matrices, \(M_{\alpha},M_{\beta},M_{\gamma}\).
Without loss of generality, we can conjugate \(M_{\alpha}\) into the maximal torus and hence take \(M_{\alpha}\) to be diagonal. The first relation then asserts that \(M_{\beta}\)_permutes_ the two eigenvalues of \(M_{\alpha}\) and is thus in the Weyl group of SU(2):
\[M_{\alpha}=\begin{pmatrix}e^{i\theta_{3}}&0\\ 0&e^{-i\theta_{3}}\end{pmatrix},\quad M_{\beta}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix},\quad M_{\gamma}=-\mathbb{1} \tag{3}\]
We thus see that we have a one-dimensional space of vacua, which is in keeping with the one dimensional space of \(G_{2}\)-manifolds, \(X_{h}\), constructed by Joyce and Karigiannis. The fact that the Weyl group plays a key role is essential and was already anticipated by Joyce [23]. Also, though obvious, we note that these flat connections cannot be continuously deformed to the identity.
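As a quick illustrative check (not part of the original argument), one can verify numerically that the matrices in (3) satisfy the relations (1) and (2) for any value of \(\theta_{3}\). A minimal sketch in Python:

```python
import numpy as np

theta3 = 0.7  # arbitrary test value of the modulus
M_alpha = np.diag([np.exp(1j * theta3), np.exp(-1j * theta3)])
M_beta = np.array([[0, 1], [-1, 0]], dtype=complex)
M_gamma = -np.eye(2, dtype=complex)

# relation (1): M_beta M_alpha M_beta^{-1} = M_alpha^{-1}
assert np.allclose(M_beta @ M_alpha @ np.linalg.inv(M_beta), np.linalg.inv(M_alpha))
# relation (2): M_beta^2 = M_gamma
assert np.allclose(M_beta @ M_beta, M_gamma)
# all three matrices lie in SU(2): unitary with unit determinant
for M in (M_alpha, M_beta, M_gamma):
    assert np.allclose(M @ M.conj().T, np.eye(2))
    assert np.isclose(np.linalg.det(M), 1.0)
print("relations (1) and (2) hold for theta_3 =", theta3)
```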
Since the subgroup of \(\mathrm{SU}(2)\) generated by \(M_{\alpha}\) and \(M_{\beta}\) breaks \(\mathrm{SU}(2)\) to its centre, \(\mathbb{Z}_{2}\), this tells us that at generic points in its space of vacua, the gauge group of the low energy theory is broken to \(\mathbb{Z}_{2}\); however, since there are no fields in the 7d theory which are charged under the centre, the classical low energy theory has the gauge group effectively broken completely. As we will see, this component of the moduli space corresponds to a Higgs phase in the effective four dimensional theory, and hence the subscript on \(\alpha_{h}\).
The key is to note that at the origin of the moduli space, \(M_{\alpha}=\mathbb{1}\) and that the \(\mathrm{SO}(2)\) subgroup of \(\mathrm{SU}(2)\) consisting of real matrices remains unbroken by \(M_{\beta}\). This tells us that the gauge group of the four dimensional theory is \(\mathrm{SO}(2)\). There must therefore also be supersymmetric Higgs fields whose vacuum values break \(\mathrm{SO}(2)\) completely. The proposal is that there are precisely two complex, chiral superfields, \(\Phi_{1,2}\) transforming in the fundamental representation of \(\mathrm{SO}(2)\). This Higgs doublet contains four bosonic degrees of freedom of which one becomes the longitudinal component of the now massive gauge boson. Another is the Higgs boson itself and these two massive degrees of freedom comprise the degrees of freedom of a massive vector multiplet. The remaining two degrees of freedom remain massless and give rise to the expected complex one-dimensional space of vacua arising from the Joyce-Karigiannis construction.
The Higgs doublet is naturally associated with the \(\mathbb{Z}_{2}\)-twisted harmonic 1-form \(\alpha_{h}\) because at the origin of the moduli space the flat \(\mathrm{SU}(2)\) connection in the adjoint representation arises from a \(\mathbb{Z}_{2}\)-bundle via the natural inclusions \(\mathbb{Z}_{2}\subset\mathrm{SO}(3)\leftarrow\mathrm{SU}(2)\). Hence, at the origin of the moduli space, the Yang-Mills Laplacian effectively reduces to the Laplacian acting on \(\mathbb{Z}_{2}\)-twisted 1-forms. Another way to obtain this result is that if we look at how the \(\mathbb{Z}_{2}\) acts on the fields on \(\hat{L}\) it is via the combined action of the geometric action together with the gauge transformation by \(g_{\beta}\) and it is the \(\mathrm{SO}(2)\) doublet of fields in the low energy theory which are \(\mathbb{Z}_{2}\)-invariant.
To summarise: The \(\mathbb{Z}_{2}\)-twisted Joyce-Karigiannis construction of \(G_{2}\)-manifolds \((X_{h},\varphi)\) has a low energy description as a supersymmetric \(\mathrm{SO}(2)\) gauge theory with matter in the fundamental representation. The one-dimensional complexified moduli space of \(G_{2}\)-holonomy metrics corresponds naturally to the Higgs branch of this gauge theory.
Note that, in general \(L\) could have both ordinary harmonic as well as \(\mathbb{Z}_{2}\)-twisted harmonic 1-forms. In this case, there will clearly be both Coulomb and Higgs vacua arising from the same \(G_{2}\)-orbifold.
The picture developed above can clearly be generalised in two ways. First, it is clear that one can consider more general \(ADE\) singularities beyond \(\mathrm{SU}(2)\). Second, one may also consider higher order twists beyond \(\mathbb{Z}_{2}\). We will encounter both of these possibilities in what follows.
In the next section we introduce some simple explicit local models which are exactly solvable and which allow us to consider any \(ADE\) gauge group as well as higher order twists where we can prove that the gauge theory moduli space is the moduli space of \(G_{2}\)-holonomy metrics desingularising \(X_{0}\). Following that we will describe some compact \(G_{2}\)-manifolds which give rise to both Coulomb and Higgs branches classically.
## 4 Explicit Local Models
The simplest models of 3-manifolds, \(L\), admitting nowhere vanishing \(\mathbb{Z}_{2}\)-twisted harmonic 1-forms are when \(\hat{L}=\Sigma\times S^{1}\) with Riemannian product metric where the \(\mathbb{Z}_{2}\) acts simultaneously as an orientation reversing isometry of both the \(S^{1}\) and the compact Riemann surface \(\Sigma\). Then, the standard harmonic 1-form on \(S^{1}\) is nowhere vanishing and \(\mathbb{Z}_{2}\)-twisted in the quotient \(L\). We can simplify this even further by considering \(\Sigma=T^{2}\) with a flat metric i.e. we can take \(L\) to be a smooth \(\mathbb{Z}_{2}\)-quotient of the flat 3-torus. Since \(L\) is oriented, the \(\mathbb{Z}_{2}\) action is essentially unique.
If the coordinates of the 3-torus are denoted as \(y_{1,2,3}\) with periodicities chosen to be one, then the \(\mathbb{Z}_{2}\) action may be defined as:
\[(y^{1},y^{2},y^{3})\mapsto(-y^{1},-y^{2},y^{3}+1/2). \tag{4}\]
We see that \(dy^{1}\) and \(dy^{2}\) are both \(\mathbb{Z}_{2}\)-twisted harmonic 1-forms, whilst \(dy^{3}\) is an ordinary harmonic 1-form. Therefore if this \(L\) arises in the Joyce-Karigiannis construction, there will be a 2-parameter family of \(G_{2}\)-manifolds corresponding to a Higgs branch and a 1-parameter Coulomb branch1. We will next explicitly construct local models of such singular \(G_{2}\)-spaces before exhibiting the corresponding moduli spaces of flat SU(2) connections.
Footnote 1: Joyce explicitly constructs examples of compact \(G_{2}\)-manifolds by desingularising such singularities as we will describe later.
### A Simple Example
We are thus looking for a local \(G_{2}\)-holonomy orbifold \(X_{0}\) which fibers over \(L\) with fibers \(\mathbb{C}^{2}/\mathbb{Z}_{2}\). The simplest model arises by taking \(L\) to be flat and, since the ambient metric is Ricci flat, the 7-orbifold itself will be locally a Riemannian product i.e.
\[X_{o}=\frac{(T^{3}\times\mathbb{C}^{2}/\mathbb{Z}_{2})}{\mathbb{Z}_{2}} \tag{5}\]
with a flat metric. These locally flat models are _very special cases_ of the Joyce-Karigiannis construction where the harmonic 1-form is constant along \(L\). The fibration over \(L\) is however non-trivial as the requirement that the holonomy be contained in \(G_{2}\) requires the \(\mathbb{Z}_{2}\) which acts on \(T^{3}\) to also act on \(\mathbb{C}^{2}\). In fact, we can choose complex coordinates \((z_{1},z_{2})\) in which the \(\mathbb{Z}_{2}\)-action is \((z_{1},z_{2})\rightarrow(-z_{1},z_{2})\).
The torsion-free \(G_{2}\)-structure on (5) is,
\[\varphi=dy^{1}dy^{2}dy^{3}+d\vec{y}\cdot\vec{\omega}, \tag{6}\]
where \(\omega_{i}\) are the Kahler 2-forms on \(\mathbb{C}^{2}/\mathbb{Z}_{k}\) defining the flat hyperKahler structure. Then the metric is,
\[g=d\vec{y}^{2}+h \tag{7}\]
where \(h\) is the Euclidean metric on \(\mathbb{C}^{2}/\mathbb{Z}_{k}\).
The \(G_{2}\)-orbifold \((X_{0},\varphi)\) admits two topologically distinct smooth desingularisations \((X_{c},\varphi_{c})\) and \((X_{h},\varphi_{h})\) of the form
\[\frac{T^{3}\times M_{EH}}{\mathbb{Z}_{2}^{a}},\ \ a=c,h \tag{8}\]
with Ricci flat metrics
\[g_{a}=d\vec{y}^{2}+h_{EH} \tag{9}\]
where \(h_{EH}\) is the family of hyper-Kahler Eguchi-Hanson metrics on \(T^{*}S^{2}\). Both of these are just free \(\mathbb{Z}_{2}\) quotients of \(M_{EH}\times T^{3}\), differing by the action of the involution: as described in section two, in \(X_{c}\), \(H_{2}(M_{EH})\) is preserved by the \(\mathbb{Z}_{2}\), whereas in \(X_{h}\) it is odd. Note that the holonomy group of both of the metrics \(g_{a}\) is SU(2) \(\ltimes\mathbb{Z}_{2}\) and that this group is a subgroup of SU(3) \(\subset G_{2}\). Hence these special local models actually preserve \(\mathcal{N}=2\) supersymmetry in four dimensions2.
Footnote 2: We will describe a genuinely \(\mathcal{N}=1\) supersymmetric example at the end of this subsection.
\(X_{c}\) and \(X_{h}\) are topologically distinct since \(b^{2}(X_{c})=b^{3}(X_{c})=1\), whilst \(b^{2}(X_{h})=0\) and \(b^{3}(X_{h})=2\). Hence, whilst \(X_{c}\) has a one-dimensional moduli space of \(G_{2}\)-metrics, the moduli space of \(X_{h}\) is two-dimensional. The topologies of the compact 4-cycles which arise from the two distinct desingularisations are \(S^{2}\times T^{2}/\mathbb{Z}_{2}\) on the Coulomb branch and \((S^{2}\times T^{2})/\mathbb{Z}_{2}\) in the Higgs case. The latter may be regarded as the non-trivial \(S^{2}\)-bundle over the Klein bottle or as a particular \(T^{2}\)-fibration over \(\mathbb{RP}^{2}\). All of these cycles have calibrated (co-associative), i.e. supersymmetric, representatives.
#### 4.1.1 Flat Connections on \(L\)
Since this example is so explicit we can also be explicit about the gauge theory interpretation. The low energy field theory descriptions in four dimensions will be given by gauge theories whose moduli spaces of vacua are the moduli spaces of flat connections on \(L\). These models were first considered in the physics literature in [1] and the moduli spaces were considered by Barbosa in his PhD thesis [11].
The fundamental group of \(L\) in this case can be presented with four generators: three translations representing the fundamental group of \(T^{3}\) and another representing the \(\mathbb{Z}_{2}\) quotient. The explicit relations may be presented as:
\[\left<g_{1},\;g_{2},\;g_{3},\;g_{\beta}\;|\;g_{i}g_{j}=g_{j}g_{i}\quad i=1,2,3,\quad g_{\beta}^{2}=g_{3},\quad g_{\beta}g_{3}g_{\beta}^{-1}=g_{3},\quad g_{ \beta}g_{1,2}g_{\beta}^{-1}=g_{1,2}^{-1}\right> \tag{10}\]
There are several components to the moduli space of flat SU(2) connections on \(L\). First there are actually four one-dimensional Coulomb branches:
\[g_{1}\mapsto\pm\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\;g_{2}\mapsto\pm\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\;g_{3}\mapsto\begin{pmatrix}e^{i\theta_{3}}&0\\ 0&e^{-i\theta_{3}}\end{pmatrix},\;g_{\beta}\mapsto\begin{pmatrix}e^{i\theta_{ 3}/2}&0\\ 0&e^{-i\theta_{3}/2}\end{pmatrix} \tag{11}\]
Note that, at the origin (\(\theta_{3}=0\)), the centraliser of the solution in SU(2) is the whole group and away from the origin we break down to the maximal torus U(1).
Noticing that the last two group relations are exactly as described in the previous section, we learn that there is also a Higgs branch:
\[g_{1}\mapsto\begin{pmatrix}e^{i\theta_{1}}&0\\ 0&e^{-i\theta_{1}}\end{pmatrix},\;g_{2}\mapsto\begin{pmatrix}e^{i\theta_{2}}& 0\\ 0&e^{-i\theta_{2}}\end{pmatrix},\;g_{3}\mapsto\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix},\;g_{\beta}\mapsto\begin{pmatrix}0&1\\ -1&0\end{pmatrix} \tag{12}\]
These are all the flat SU(2)-connections and this result agrees with [9, 11]. The Coulomb branch solutions have one parameter \(\theta_{3}\), corresponding to \(b^{3}(X_{c})=1\), whilst the Higgs branch has two parameters as expected by \(b^{3}(X_{h})=2\) in perfect agreement with the moduli space of Ricci flat, special holonomy metrics described above.
#### 4.1.2 Field Theory Interpretation
The four dimensional field theory description is now straightforward: The classical dynamics on each of the four Coulomb branches is just pure \(\mathcal{N}=2\) SU(2) Yang-Mills theory. The Higgs branch description is given by \(\mathcal{N}=2\) supersymmetric SO(2) Yang-Mills with a hypermultiplet in the fundamental representation. At first sight it seems that there is a discrepancy in the comparison to the moduli space of \(M\)-theory in the Coulomb phase, since there is only one \(G_{2}\)-manifold \(X_{c}=\frac{T^{3}\times M_{EH}}{\mathbb{Z}_{2}^{c}}\) but four Coulomb branches. However, there are four spaces of vacua arising from \(X_{c}\) differing by the expectation values of the \(C\)-field. Flat \(C\)-fields on \(X_{c}\) are classified by \(H^{3}(X_{c},\text{U}(1))\) and this is given by \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\text{U}(1)\) and there are therefore four discrete families of one-dimensional flat \(C\)-field backgrounds. One may also think of these as the \(\mathbb{Z}_{2}^{c}\)-invariant harmonic \(C\)-fields on \(T^{3}\times M_{EH}\).
These locally flat explicit models clearly have natural generalisations to \(G_{2}\)-orbifolds of the form
\[X_{o}(\Gamma_{ADE},K)=\frac{\mathbb{C}^{2}/\Gamma_{ADE}\times T^{3}}{K} \tag{13}\]
where \(K\) is a finite group acting freely on \(T^{3}\), preserving orientation. The existence of these examples demonstrates that one can consider more general gauge groups as well as \(K\)-twisted harmonic 1-forms. In these examples, \(T^{3}/K\) is an orientable, compact Bieberbach 3-manifold, hence there are only six possibilities: \(K=1,\mathbb{Z}_{2},\mathbb{Z}_{3},\mathbb{Z}_{4},\mathbb{Z}_{6},\mathbb{Z}_{2}\times\mathbb{Z}_{2}\). The key to understanding the moduli spaces of Ricci flat metrics on these spaces reduces essentially to classifying the actions of \(K\) on \(\mathbb{C}^{2}/\Gamma_{ADE}\) which lift to actions on its hyperKahler desingularisation, \(\widetilde{\mathbb{C}^{2}/\Gamma_{ADE}}\). This amounts to classifying actions of \(K\) on \((\mathbb{C}^{2}/\Gamma_{ADE}\times T^{3})\) which preserve the \(G_{2}\)-structure, giving rise to Ricci flat metrics with holonomy group \(\mathrm{SU}(2)\ltimes K\) on the smooth 7-manifolds \((\widetilde{\mathbb{C}^{2}/\Gamma_{ADE}}\times T^{3})/K\). These metrics preserve four dimensional \(\mathcal{N}=2\) supersymmetry for \(K=1,\mathbb{Z}_{2},\mathbb{Z}_{3},\mathbb{Z}_{4},\mathbb{Z}_{6}\), but the case \(K=\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) has \(\mathcal{N}=1\) supersymmetry. This latter example was considered in [1] and the moduli spaces of flat \(\mathrm{SU}(2)\)-connections in [9]. We will be able to describe all components of the moduli space of Ricci flat metrics and compare that with the space of flat connections and then describe the low energy dynamics of the effective four dimensional field theory associated to each branch.
### \(\mathcal{N}=1\) Supersymmetric Example
The 3-manifold in this example is \(L=T^{3}/\mathbb{Z}_{2}\times\mathbb{Z}_{2}\). The action of \(K\) on the coordinates of \(\mathbb{C}^{2}/\mathbb{Z}_{2}\times T^{3}\) is given by:
\[g_{\beta_{1}}: (z^{1},z^{2},y^{1},y^{2},y^{3})\mapsto(-z^{1},z^{2},y^{1}+1/2,-y ^{2},-y^{3}) \tag{14}\] \[g_{\beta_{2}}: (z^{1},z^{2},y^{1},y^{2},y^{3})\mapsto(\bar{z}^{1},\bar{z}^{2},- y^{1},y^{2}+1/2,-y^{3}+1/2)\] (15) \[g_{\beta_{3}}: (z^{1},z^{2},y^{1},y^{2},y^{3})\mapsto(-\bar{z}^{1},\bar{z}^{2}, -y^{1}+1/2,-y^{2}+1/2,y^{3}+1/2) \tag{16}\]
The fundamental group of \(L\) can thus be described as having six generators \(g_{1,2,3}\), \(g_{\beta_{1}}\), \(g_{\beta_{2}}\), \(g_{\beta_{3}}\) with the relations
\[\langle g_{1},\;g_{2},\;g_{3},\;g_{\beta_{1}},\;g_{\beta_{2}},\; g_{\beta_{3}}\;|g_{i}g_{j}=g_{j}g_{i},\quad g_{\beta_{i}}^{2}=g_{i},\quad g _{\beta_{i}}g_{i}g_{\beta_{i}}^{-1}=g_{i},\quad i=1,2,3,\] \[g_{\beta_{i}}g_{j,k}g_{\beta_{i}}^{-1}=g_{j,k}^{-1},\quad i\neq j \neq k,\quad g_{\beta_{3}}g_{\beta_{2}}g_{\beta_{1}}=g_{1}g_{3}\rangle \tag{17}\]
Here \(g_{\beta_{3}}=g_{1}g_{\beta_{2}}g_{\beta_{1}}\).
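A small consistency check of these statements can be done directly on the universal cover \(\mathbb{C}^{2}\times\mathbb{R}^{3}\), where the unit translations \(g_{i}\) are visible rather than trivial. The following Python sketch (purely illustrative) verifies that the maps (14)-(16) square to the \(g_{i}\) and that \(g_{\beta_{3}}=g_{1}g_{\beta_{2}}g_{\beta_{1}}\):

```python
import numpy as np

def gb1(z1, z2, y1, y2, y3): return (-z1, z2, y1 + 0.5, -y2, -y3)
def gb2(z1, z2, y1, y2, y3): return (np.conj(z1), np.conj(z2), -y1, y2 + 0.5, -y3 + 0.5)
def gb3(z1, z2, y1, y2, y3): return (-np.conj(z1), np.conj(z2), -y1 + 0.5, -y2 + 0.5, y3 + 0.5)
def g1(z1, z2, y1, y2, y3):  return (z1, z2, y1 + 1.0, y2, y3)   # unit translation along y^1
def g2(z1, z2, y1, y2, y3):  return (z1, z2, y1, y2 + 1.0, y3)
def g3(z1, z2, y1, y2, y3):  return (z1, z2, y1, y2, y3 + 1.0)

p = (0.3 + 0.7j, -1.2 + 0.4j, 0.11, 0.23, 0.37)  # an arbitrary test point
eq = lambda a, b: np.allclose(np.asarray(a, dtype=complex), np.asarray(b, dtype=complex))

assert eq(gb1(*gb1(*p)), g1(*p))        # g_{beta_1}^2 = g_1
assert eq(gb2(*gb2(*p)), g2(*p))        # g_{beta_2}^2 = g_2
assert eq(gb3(*gb3(*p)), g3(*p))        # g_{beta_3}^2 = g_3
assert eq(gb3(*p), g1(*gb2(*gb1(*p))))  # g_{beta_3} = g_1 g_{beta_2} g_{beta_1}
print("generator relations verified at the test point")
```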
#### 4.2.1 Moduli space of Ricci flat metrics
In order to describe the possible smooth Ricci flat manifolds \((M_{EH}\times T^{3})/K\) we first have to describe how the \(\mathbb{Z}_{2}\)-symmetries \(g_{\beta_{1,2,3}}\) act on \(M_{EH}\). Since \(H_{2}(M_{EH})=\mathbb{Z}\), generated by the \(S^{2}\) in the centre, each \(g_{\beta_{i}}\) can either preserve or reverse the orientation of all classes in \(H_{2}(M_{EH})\). The action on homology is therefore specified by a sign for each of the three involutions. There are then three smooth possibilities according to how the \(g_{\beta_{i}}\) act on the \(S^{2}\) at the centre of \(M_{EH}\). These are given by the choices \((i):(+,-,-)\), \((ii):(-,+,-)\) and \((iii):(-,-,+)\), where, for example \((+,-,-)\) means that \(g_{\beta_{1}}\) preserves \(H_{2}(M_{EH})\) whilst \(g_{\beta_{2}}\) and \(g_{\beta_{3}}\) act as minus the identity. Since there is no \((+,+,+)\) case there are no compact 2-cycles or 5-cycles in \((M_{EH}\times T^{3})/K\) or equivalently no compactly supported harmonic 2-forms. In \(M\)-theory these would have given rise to \(\mathrm{U}(1)\) gauge fields in four dimensions and hence a Coulomb branch. Thus there is no continuous Coulomb branch of vacua. In case \((i)\) we see that there are
compact 4-cycles in \((M_{EH}\times T^{3})/K\) which are Poincare dual to \(\beta\wedge dy^{1}\). In case \((ii)\) instead, it is \(\beta\wedge dy^{2}\) which is invariant and in case \((iii)\) the Poincare dual of \(\beta\wedge dy^{3}\) is an invariant compact 4-cycle. The proof that these are the only possibilities will be given in section 4.4. The space of Ricci flat metrics resolving the orbifold singularities thus has three one-dimensional components, giving a space of vacua which is \(\mathbb{C}\cup\mathbb{C}\cup\mathbb{C}\).
We will now examine the moduli space of flat SU(2)-connections on \(L\) and see that it matches nicely with this description.
#### 4.2.2 Flat Connections on \(L\)
_Coulomb Branches._
First note that, since none of the \(dy^{i}\) are \(K\)-invariant, \(b^{1}(L)=0\), so there is no continuous moduli space of Coulomb vacua. There are non-trivial, discrete, Abelian connections however. Note that the Abelianisation of \(\pi_{1}(L)\) is \(H_{1}(L,\mathbb{Z})=\mathbb{Z}_{4}\times\mathbb{Z}_{4}\). This can be seen to be generated by \(g_{\beta_{1}}\) and \(g_{\beta_{2}}\) with the relations that both are order four. Thus the corresponding sixteen flat SU(2) connections on the Coulomb branch can be obtained by choosing \(g_{\beta_{1}}\) and \(g_{\beta_{2}}\) independently from the four diagonal matrices in the \(\mathbb{Z}_{4}\) subgroup of the maximal torus.
_Higgs Branches._
From the relations of the fundamental group as we have presented them, the strategy for finding flat connections should be clear. One chooses a flat connection in the Weyl group of SU(2) for each of the three non-trivial elements of \(K\). The remaining elements of the group are diagonal. This will give rise to three components of the space of flat connections, beyond the Coulomb branch above. The three components are, for \(i\neq j\neq k\),
\[g_{i}\mapsto\begin{pmatrix}e^{i\theta_{i}}&0\\ 0&e^{-i\theta_{i}}\end{pmatrix},\;g_{j} \mapsto\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix},\;g_{k}\mapsto\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}, \tag{18}\] \[g_{\beta_{i}}\mapsto\begin{pmatrix}e^{i\theta_{i}/2}&0\\ 0&e^{-i\theta_{i}/2}\end{pmatrix},\;g_{\beta_{j}} \mapsto\begin{pmatrix}0&1\\ -1&0\end{pmatrix},\;g_{\beta_{k}}\mapsto\begin{pmatrix}0&e^{i\theta_{i}/2}\\ -e^{-i\theta_{i}/2}&0\end{pmatrix} \tag{19}\]
The interpretation in the low energy effective theory of each branch is clear: an \(\mathcal{N}=1\) supersymmetric SO(2) gauge theory with chiral multiplets in the fundamental representation and a flat direction in the space of vacua along which the gauge group is spontaneously broken. The fact that there are three one-dimensional Higgs branches matches the three one-dimensional components of the moduli space of Ricci flat metrics found in the previous subsection.
### Moduli Space of Ricci Flat Metrics
In this subsection we describe the space of Ricci flat metrics with holonomy group SU(2) \(\rtimes K\) on the smooth families of 7-manifolds \(X_{a}(\Gamma_{ADE},K):=(\widetilde{\mathbb{C}^{2}/\Gamma_{ADE}}\times T^{3})/K\) which desingularise the flat orbifolds
\[X_{o}(\Gamma_{ADE},K)=\frac{\mathbb{C}^{2}/\Gamma_{ADE}\times T^{3}}{K} \tag{20}\]
and the parameter \(a\) collectively denotes the Coulomb or Higgs branch parameters of the family.
Since \(X_{a}(\Gamma_{ADE},K)\) is simply a free quotient of \(\widetilde{\mathbb{C}^{2}/\Gamma_{ADE}}\times T^{3}\), the moduli space of Ricci flat metrics is given by the subspace of Ricci flat metrics on \(\widetilde{\mathbb{C}^{2}/\Gamma_{ADE}}\) which admit the appropriate action of \(K\). We will first describe the answer to the problem for \(X_{a}(\Gamma_{A_{n}},K)\) where we can use the explicit form of the metrics given by Gibbons and Hawking [19]. This then provides enough insight to solve the problem completely in the general case.
#### 4.3.1 Multi-Centre Gibbons-Hawking Space
An \(A_{n-1}\) singularity is the singularity at the origin of \(\mathbb{C}^{2}/\mathbb{Z}_{n}\), where \(\mathbb{Z}_{n}\) acts as a subgroup of SU(2). The spaces \(\mathbb{C}^{2}/\Gamma\) (where \(\Gamma\) is a finite subgroup of SU(2)) all have topologically unique crepant resolutions, \(\widetilde{\mathbb{C}^{2}/\Gamma}\) which are hyperKahler and Asymptotically Locally Euclidean (ALE) [27].
For the \(A_{n-1}\) singularities, the corresponding ALE spaces are the \(n\)-centre Gibbons-Hawking spaces \(M_{GH}^{(n)}\) with explicit metrics given by
\[ds^{2} =g_{GH}=V(\vec{x})\;d\vec{x}\cdot d\vec{x}+V(\vec{x})^{-1}(dt+A_ {i}\;dx^{i})^{2} \tag{21}\] \[V(\vec{x}) =\sum_{\gamma=1}^{n}\frac{1}{|\vec{x}-\vec{a}_{\gamma}|}\] (22) \[\vec{\nabla}\times\vec{A} =\vec{\nabla}V(\vec{x})\quad\text{or equivalently}\quad*dA=dV. \tag{23}\]
where \(\vec{x}\in\mathbb{R}^{3}\) and \(t\in S^{1}\). There are \(3n\) parameters which appear as \(n\) 3-vectors \(\vec{a}_{\gamma}\) and are the centres of the harmonic functions appearing in the potential \(V(\vec{x})\). By a choice of coordinates we can assume that \(\sum_{\gamma}\vec{a}_{\gamma}=0\).
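Two special cases are worth keeping in mind (up to normalisation conventions for \(V\) and the period of \(t\), which differ across the literature): a single centre gives flat space, while two centres give the Eguchi-Hanson space appearing in the gluing construction,

\[n=1:\;V=\frac{1}{|\vec{x}|}\;\Rightarrow\;g_{GH}\ \text{is the flat metric on}\ \mathbb{C}^{2},\qquad n=2:\;V=\frac{1}{|\vec{x}-\vec{a}_{1}|}+\frac{1}{|\vec{x}-\vec{a}_{2}|}\;\Rightarrow\;g_{GH}=h_{EH}\ \text{on}\ T^{*}S^{2},\]

with the area of the zero-section 2-sphere in the second case proportional to the separation \(|\vec{a}_{1}-\vec{a}_{2}|\).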
There is a triplet of complex structures, \((I,J,K)\), given by,
\[Idx^{1}=V(\vec{x})^{-1}(dt+\vec{A}\cdot d\vec{x}),\qquad Idx^{2}=dx^{3} \tag{24}\]
with \(J\) and \(K\) given by cyclically permuting \(dx^{1,2,3}\). The hyper-Kahler forms are determined from the metric via, \(\omega_{I}(\cdot,\cdot)=g(I\cdot,\cdot)\) and similarly for \(J\) and \(K\). They are given by,
\[\omega_{I}\equiv\omega_{1}=(dt+A_{i}\;dx^{i})\wedge dx^{1}+V(\vec{x})\;dx^{2} \wedge dx^{3} \tag{25}\]
with \(\omega_{J,K}\) obtained by cyclic permutations of \(dx^{1,2,3}\).
The cohomology group \(H^{2}\left(M_{GH}^{(n)},\mathbb{Z}\right)\) is spanned by the \(L_{2}\)-normalisable 2-forms [20] which are the Poincare duals of the 2-spheres arising from line segments in \(\mathbb{R}^{3}\) connecting adjacent centres. Denote the 2-form associated to the centres \(a_{\gamma}\) and \(a_{\gamma+1}\) by \(\Gamma_{\gamma}\). We can write these explicitly: first define,
\[V_{\lambda}\equiv\frac{1}{|\vec{x}-\vec{a}_{\lambda}|},\quad V\equiv\sum_{\lambda=1}^{n}V_{\lambda} \tag{26}\]
then define the basis of anti-self-dual 2-forms,
\[\Sigma^{a}=e^{a}\,e^{4}-\frac{1}{2}\,\varepsilon^{a}{}_{bc}\,e^{b}\,e^{c}, \quad e^{1,2,3}=V^{1/2}dx^{1,2,3},\quad e^{4}=V^{-1/2}(dt+A_{i}\,dx^{i}) \tag{27}\]
To each centre we can associate a 2-form \(\Omega_{\gamma}\equiv-\partial_{a}\left(V_{\gamma}/V\right)\Sigma^{a}\) and define the following 2-form,
\[\Gamma_{\gamma}=-\frac{1}{4\pi}(\Omega_{\gamma}-\Omega_{\gamma+1}),\quad \gamma=1,...,n-1 \tag{28}\]
which is anti-self-dual and \(L_{2}\)-normalisable. Further, one can check that,
\[\int_{M_{GH}^{(n)}}\Gamma_{i}\wedge\Gamma_{j} \tag{29}\]
is minus the \(A_{n-1}\) Cartan matrix using the fact that [32, 33],
\[\int_{M_{GH}^{(n)}}\Omega_{i}\wedge\Omega_{j}=-16\pi^{2}\delta_{ij} \tag{30}\]
Having an explicit expression for \(\Gamma_{i}\) makes it easy to see how many \(K\)-invariant and \(K\)-twisted 2-forms we have on \(X_{a}(\Gamma_{ADE},K)\).
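Since the pairing (30) determines the intersection form of the \(\Gamma_{i}\) completely, the statement that (29) is minus the \(A_{n-1}\) Cartan matrix can be checked by pure linear algebra. A short illustrative sketch in Python (with \(n\) arbitrary):

```python
import numpy as np

def intersection_matrix(n):
    """Pairing of the Gamma_i of eq. (28), using int Omega_a ^ Omega_b = -16 pi^2 delta_ab."""
    omega_pairing = -16 * np.pi**2 * np.eye(n)
    # columns express Gamma_i = -(1/4pi)(Omega_i - Omega_{i+1}),  i = 1, ..., n-1
    C = np.zeros((n, n - 1))
    for i in range(n - 1):
        C[i, i] = -1.0 / (4 * np.pi)
        C[i + 1, i] = 1.0 / (4 * np.pi)
    return C.T @ omega_pairing @ C

def cartan_A(n):
    """Cartan matrix of A_{n-1}."""
    return 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)

n = 5
assert np.allclose(intersection_matrix(n), -cartan_A(n))
print(np.round(intersection_matrix(n)).astype(int))
```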
#### 4.3.2 Gibbons-Hawking Moduli space and Flat Connections on \(T^{3}\)
Here we explicitly demonstrate the relationship between the moduli space of flat \(SL(n,\mathbb{C})\) connections on a 3-torus and the Gibbons-Hawking moduli space. We will show that the \(M\)-theory moduli space is isomorphic to the space of flat connections.
The \(n\) centres \(\vec{a}_{\gamma}\) of the harmonic potential \(V\) are \(3n\) free parameters whose sum is fixed. Hence the moduli space of Ricci flat metrics is given by \((\mathbb{R}^{3})^{n-1}/S_{n}\), where we factor out by permutations of the centres, which act as the Weyl group of \(\mathrm{SU}(n)\). In fact there is a close connection between this moduli space and the space of flat \(\mathrm{SU}(n)\) connections on \(T^{3}\). A flat \(\mathrm{SU}(n)\) connection on \(T^{3}\) is just given by three commuting elements of \(\mathrm{SU}(n)\) and, in fact, is given by any three elements of the maximal torus, \(T(\mathrm{SU}(n))\), modulo the action of the Weyl group [18]. This space is \((T(\mathrm{SU}(n)))^{3}/S_{n}\). In \(M\)-theory this gets complexified to the space of \(SL(n,\mathbb{C})\) connections which is essentially \(((\mathbb{C}^{*})^{n-1})^{3}/S_{n}\) and points in this space are just triples of diagonal matrices of unit determinant, up to the Weyl group action. Denote these three diagonal matrices by \(M_{a}\), \(M_{b}\) and \(M_{c}\) and their diagonal elements as \((\lambda_{a_{1}},\lambda_{a_{2}},\cdots,\lambda_{a_{n}})\), \((\lambda_{b_{1}},\lambda_{b_{2}},\cdots,\lambda_{b_{n}})\) and \((\lambda_{c_{1}},\lambda_{c_{2}},\cdots,\lambda_{c_{n}})\). If we suggestively label the coordinates of a given centre by \((a_{\gamma},b_{\gamma},c_{\gamma})\), then the absolute values of the diagonal entries of the matrices are the exponentials of the entries: \(|\lambda_{a_{\gamma}}|=e^{a_{\gamma}}\), \(|\lambda_{b_{\gamma}}|=e^{b_{\gamma}}\) and \(|\lambda_{c_{\gamma}}|=e^{c_{\gamma}}\). This is the explicit relationship between the flat connections and the Ricci flat metrics. We can recover the full moduli space of flat \(SL(n,\mathbb{C})\)-connections by including the harmonic modes of the \(C\)-field in \(M\)-theory. In \(M\)-theory on \((\widetilde{\mathbb{C}^{2}/\mathbb{Z}_{n}}\times T^{3})\) with the metric \(g_{GH}+h\), in addition to the Gibbons-Hawking moduli \((a_{\gamma},b_{\gamma},c_{\gamma})\) we have the massless scalar fields arising from the harmonic modes of the \(C\)-field. These are given by \(H^{3}(\widetilde{\mathbb{C}^{2}/\mathbb{Z}_{n}}\times T^{3},\mathrm{U}(1))=(T(\mathrm{SU}(n)))^{3}\times S^{1}\), where the \(S^{1}\) is the set of four-dimensional axion VEVs and will play no further role. However, this description is not complete since it does not take into account the action of the non-identity connected diffeomorphisms of \((\widetilde{\mathbb{C}^{2}/\mathbb{Z}_{n}}\times T^{3})\) given by the permutation group \(S_{n}\). This group also acts as the Weyl group on \(H^{2}(\widetilde{\mathbb{C}^{2}/\mathbb{Z}_{n}})\) which induces the standard action of the Weyl group on the maximal torus \(T(\mathrm{SU}(n))\). Hence, for fixed axion VEV, the moduli space of \(C\)-fields is given by \((T(\mathrm{SU}(n)))^{3}/S_{n}\) and is precisely the moduli space of flat \(\mathrm{SU}(n)\) connections on \(T^{3}\). This shows that, for fixed \(T^{3}\) volume and axion VEV, the moduli space of \(M\)-theory on \((\widetilde{\mathbb{C}^{2}/\mathbb{Z}_{n}}\times T^{3})\) is isomorphic to the moduli space of flat \(SL(n,\mathbb{C})\) connections on \(T^{3}\).
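The dictionary just described is simple enough to write down directly. The following Python sketch is purely illustrative (the random "phases" stand in for the flat \(C\)-field periods): it builds three commuting \(SL(n,\mathbb{C})\) matrices from a set of centres summing to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
centres = rng.normal(size=(n, 3))
centres -= centres.mean(axis=0)           # enforce sum_gamma vec a_gamma = 0
phases = rng.uniform(0, 2 * np.pi, size=(n, 3))
phases -= phases.mean(axis=0)             # choose C-field periods multiplying to 1

# |lambda_{a_gamma}| = exp(a_gamma) etc., with the phases supplied by the C-field
M = [np.diag(np.exp(centres[:, k] + 1j * phases[:, k])) for k in range(3)]

for Mk in M:
    assert np.isclose(np.linalg.det(Mk), 1.0)          # each matrix lies in SL(n, C)
for i in range(3):
    for j in range(3):
        assert np.allclose(M[i] @ M[j], M[j] @ M[i])   # commuting, i.e. flat on T^3
print("built three commuting SL(%d, C) matrices from the centres" % n)
```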
#### 4.3.3 \(K\)-invariant Ricci flat metrics
We want to consider all of the \(K\)-actions on \(M_{GH}^{(n)}\) asymptotic to the action on \(\mathbb{C}^{2}/\mathbb{Z}_{n}\). We now investigate how these act on the coordinates of \(M_{GH}^{(n)}\). Quotients of Gibbons-Hawking spaces were also considered in the papers
[37, 34]. The \(G_{2}\)-structure on \(T^{3}\times M^{(n)}_{GH}\) is given by,
\[\varphi=dy^{1}dy^{2}dy^{3}+d\vec{y}\cdot\vec{\omega}, \tag{31}\]
and we require that this is \(K\)-invariant. From the first term we see that \(K\) must act on \(T^{3}\) such that \(d\vec{y}\mapsto M\,d\vec{y}\) where \(M\in\operatorname{SL}(3,\mathbb{R})\) and from the second term the action on \(M^{(n)}_{GH}\) must induce an action \(\vec{\omega}\mapsto N\,\vec{\omega}\) such that \(M^{T}=N^{-1}\). If we take the action on \(M^{(n)}_{GH}\) to be \((t,\vec{x})\mapsto(t^{\prime},L\,\vec{x})\) then, using equations (25) and (23), we see first that \(V(\vec{x})\) must be preserved which means \(L\in\operatorname{O}(3)\) and \(L\) preserves the set of centres \(\{a_{\gamma}\}\) (as discussed below) and second that \(A\mapsto\det(L)A\) and \(t^{\prime}=\det(L)\,t\). Then a calculation shows that \(\vec{\omega}\mapsto\det(L)\,L\,\vec{\omega}\). Thus \(N=\det(L)\,L\) and so \(M,N\in\operatorname{SO}(3)\). So even if the action on \(\vec{x}\) is in \(\operatorname{O}(3)\) the action on the \(T^{3}\) and the Kahler forms is always in \(\operatorname{SO}(3)\). In this paper, however, we will restrict to the cases where the action on the \(\mathbb{R}^{3}\) coordinates is in \(\operatorname{SO}(3)\), leaving the remaining \(\operatorname{O}(3)\) cases for future investigation.
The condition that \(V(\vec{x})\) must remain invariant gives an important insight. Since
\[|L\cdot\vec{x}-\vec{a}_{\gamma}|=|\vec{x}-L^{T}\cdot\vec{a}_{\gamma}| \tag{32}\]
whenever \(L\) is orthogonal, we see that the action of \(K\) on the coordinates \(x^{i}\) is equivalent to an action on the centres. Therefore, in order for \(V(\vec{x})\) to be invariant, the elements of \(K\) must permute the centres amongst themselves. This is the key restriction on the moduli space that we were seeking. The set of compatible \(K\)-actions is given by the set of homomorphisms \(\chi\) from
\[\chi:K\to S_{n} \tag{33}\]
Some comments are now in order. As explained in [23], non-identity elements of \(S_{n}\) are actually diffeomorphisms of \(M^{(n)}_{\operatorname{GH}}\) which are _disconnected_ from the identity. This is related then to the distinct topologies that can arise on the corresponding \(G_{2}\)-manifolds. This is also then clearly related to the distinct components of the moduli space of flat connections on \(L\). The insight gained from these examples allows us to address the general case: since the moduli space of ALE metrics on \(\widetilde{\mathbb{C}^{2}/\Gamma_{ADE}}\) is given by \((\mathfrak{h}_{ADE}\otimes\mathbb{R}^{3})/\operatorname{Weyl}(ADE)\), we must look for homomorphisms from \(K\) to the Weyl group. Actually, this is not quite the full answer as, in addition to the action of the Weyl group, the ADE Lie algebras \(\mathfrak{g}_{ADE}\) also admit outer automorphisms in general and these could also be induced by actions of \(K\), hence the final answer is given by
\[\chi:K\to\operatorname{Aut}(\Delta_{ADE})\ltimes\operatorname{Weyl}(ADE) \tag{34}\]
where \(\Delta_{ADE}\) is the Dynkin diagram associated to the \(ADE\) Lie algebra.
### Further Explicit Examples
**Example 1**
Our first example is to consider gauge group \(\operatorname{SU}(2)\) and \(K=\mathbb{Z}_{2}\) i.e. \(X_{o}(\Gamma_{A_{1}},\mathbb{Z}_{2})\). Let us consider therefore the most general \(\mathbb{Z}_{2}\)-invariant \(2\)-centre Gibbons-Hawking metric on \(\widetilde{\mathbb{C}^{2}/\Gamma_{A_{1}}}\). This amounts to finding all actions of \(K=\mathbb{Z}_{2}\) which act as
\[K:(\omega_{1},\omega_{2},\omega_{3})\longrightarrow(-\omega_{1},-\omega_{2}, \omega_{3}) \tag{35}\]
on the Kahler forms and preserve \(V(x_{1},x_{2},x_{3})\). In this case we see that the action on the coordinates is
\[K:(x_{1},x_{2},x_{3})\longrightarrow(-x_{1},-x_{2},x_{3}) \tag{36}\]
which, because of (32), is equivalent to the corresponding action on the centres \(\vec{a_{1}}\) and \(\vec{a_{2}}\) which appear in the Gibbons-Hawking potential. We are free to fix the sum of the two centres, \(\vec{a_{2}}=-\vec{a_{1}}\). We then see that there are two branches to the moduli space of \(K\)-invariant hyperKahler metrics, given by
\[\vec{a_{1}}=-\vec{a}_{2}=\begin{pmatrix}0\\ 0\\ c\end{pmatrix} \tag{37}\]
for any \(c\in\mathbb{R}\) and
\[\vec{a_{1}}=-\vec{a}_{2}=\begin{pmatrix}a\\ b\\ 0\end{pmatrix} \tag{38}\]
for any \(a,b\in\mathbb{R}\). In the first case we see that the two centres lie along the fixed point set of the \(K\)-action on \(\mathbb{R}^{3}\) and hence \(H_{2}(\widetilde{\mathbb{C}^{2}/\Gamma_{A_{1}}})\) is preserved by \(K\). This corresponds to the Coulomb branch solution. In the second case the two centres are permuted by \(K\) and hence the action of \(K\) on the homology is non-trivial. This corresponds to the Higgs branch, which we happily see is two-dimensional in this case, in agreement with our gauge theory result from subsection 4.1.1.
We can also count the \(L_{2}\)-normalisable harmonic forms on the smooth manifolds \(X_{a}(\Gamma_{A_{1}},\mathbb{Z}_{2})\) (where \(a=c\) corresponds to the Coulomb branch and \(a=h\) to the Higgs branch) by considering the action of \(K\) on the harmonic 2-form \(\Gamma_{1}\), defined in 4.3.1. In general, since the \(K\)-action permutes the set of centres, one can see that it also permutes the set of 2-forms \(\Omega_{i}\) and thus acts on the \(\Gamma_{i}\) as some invertible linear transformation. In this case, on the Coulomb branch \(\Gamma_{1}\) is \(K\)-invariant and on the Higgs branch,
\[\mathbb{Z}_{2}:\begin{pmatrix}\Omega_{1}\\ \Omega_{2}\end{pmatrix}\mapsto\begin{pmatrix}\Omega_{2}\\ \Omega_{1}\end{pmatrix} \tag{39}\]
and so \(\Gamma_{1}\mapsto-\Gamma_{1}\). Thus we see that \(X_{c}\) has \(b^{2}=1\) and \(b^{3}=1\) and \(X_{h}\) has \(b^{2}=0\) and \(b^{3}=2\) which give rise to the expected number of scalar field moduli and U(1) factors in the gauge group in the 4d theory arising from compactifying M-theory on \(X_{a}(\Gamma_{A_{1}},\mathbb{Z}_{2})\).
**Example 2**
This is the example corresponding to SU(2) gauge theory on \(T^{3}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2})\) i.e. \(M\)-theory on \(X_{o}(\Gamma_{A_{1}},\mathbb{Z}_{2}\times\mathbb{Z}_{2})\). The action of \(K\) on the \(\mathbb{R}^{3}\) coordinates of the Gibbons-Hawking metric is generated by the diagonal order two matrices in SO(3) of the form:
\[\beta:=(-1,-1,1)\ \ \gamma:=(-1,1,-1) \tag{40}\]
In this case there is no Ricci flat metric in which the action of \(K\) preserves the homology. Instead there are three components to the moduli space in which the two centres lie along each of the three coordinate axes:
\[\vec{a_{1}}=-\vec{a_{2}}=\begin{pmatrix}a\\ 0\\ 0\end{pmatrix},\ \ \ \begin{pmatrix}0\\ b\\ 0\end{pmatrix},\ \ \ \begin{pmatrix}0\\ 0\\ c\end{pmatrix} \tag{41}\]
and this is in perfect agreement with the three Higgs branches to the moduli space of SU(2) flat connections on \(T^{3}/K\) from subsection 4.2.2.
Here, as in the previous example, we can count the harmonic forms. Looking at the first branch of the moduli space, we can compute that both \(\beta\) and \(\gamma\) act as \(\Gamma_{1}\mapsto-\Gamma_{1}\) and so \(\beta\gamma\) acts trivially. Thus by wedging \(\Gamma_{1}\) with the harmonic 1-form on \(T^{3}\) that is odd under \(\beta\) and \(\gamma\) but even under \(\beta\gamma\) we get a \((\mathbb{Z}_{2}\times\mathbb{Z}_{2})\)-invariant harmonic 3-form. Thus we have \(b^{2}=0\) and \(b^{3}=1\) for the desingularisation of \(X_{o}(\Gamma_{A_{1}},\mathbb{Z}_{2}\times\mathbb{Z}_{2})\) corresponding to this branch and the same for the other two.
**Example 3**
Let's now consider \(\mathrm{SU}(3)\) gauge theory on \(T^{3}/\mathbb{Z}_{2}\), so \(\mathbb{Z}_{2}\)-invariant 3-centre Gibbons-Hawking metrics. Denote the centres as,
\[\begin{pmatrix}a_{1}\\ b_{1}\\ c_{1}\end{pmatrix},\begin{pmatrix}a_{2}\\ b_{2}\\ c_{2}\end{pmatrix},\begin{pmatrix}-a_{1}-a_{2}\\ -b_{1}-b_{2}\\ -c_{1}-c_{2}\end{pmatrix} \tag{42}\]
Since \(K\) is generated by \(g_{\beta}\) and preserves points of the form \((0,0,x_{3})\), we have \(K\)-invariant metrics if the three centres are all of this form. This is the two-dimensional Coulomb branch of the \(\mathcal{N}=2\) supersymmetric \(\mathrm{SU}(3)\) gauge theory. There are also solutions of the form,
\[\begin{pmatrix}a\\ b\\ c\end{pmatrix},\begin{pmatrix}-a\\ -b\\ c\end{pmatrix},\begin{pmatrix}0\\ 0\\ -2c\end{pmatrix} \tag{43}\]
This corresponds to an action of \(K=\mathbb{Z}_{2}\) which reverses the orientation of the \(S^{2}\) corresponding to the line segment joining the first two centres, but preserves the orientation of the other \(S^{2}\). Hence the Betti numbers of \(X_{h}(\mathbb{Z}_{3},\mathbb{Z}_{2})\) are \(b^{2}=1\) and \(b^{3}=3\), the latter corresponding to the three parameters above.
We can see this three dimensional moduli space in the \(\mathrm{SU}(3)\) gauge theory explicitly. Notice that when \(a=b=0\) the first two centres coincide and hence an \(A_{1}\)-singularity appears and that this solution intersects the Coulomb branch there. At this point the physical theory has an unbroken \(\mathrm{SO}(2)\) gauge symmetry as the \(S^{2}\) which appears when \(a\) and \(b\) are non-zero is \(\mathbb{Z}_{2}\)-odd. There is also an unbroken \(\mathrm{U}(1)\) gauge symmetry coming from the \(S^{2}\) which connects the first two centres with the third. The corresponding three dimensional moduli space of flat connections is given by,
\[g_{1}\mapsto\begin{pmatrix}\lambda_{a}&0&0\\ 0&\lambda_{a}^{-1}&0\\ 0&0&1\end{pmatrix},\;g_{2}\mapsto\begin{pmatrix}\lambda_{b}&0&0\\ 0&\lambda_{b}^{-1}&0\\ 0&0&1\end{pmatrix},\;g_{3}\mapsto\begin{pmatrix}-\lambda_{c}&0&0\\ 0&-\lambda_{c}&0\\ 0&0&\lambda_{c}^{-2}\end{pmatrix},\;g_{\beta}\mapsto\begin{pmatrix}0& \lambda_{c}^{1/2}&0\\ -\lambda_{c}^{1/2}&0&0\\ 0&0&\lambda_{c}^{-1}\end{pmatrix} \tag{44}\]
We see that at the origin of the moduli space the gauge group is \(\mathrm{SO}(2)\times\mathrm{U}(1)\subset\mathrm{SU}(2)\times\mathrm{U}(1)\subset \mathrm{SU}(3)\), where the \(\mathrm{U}(1)\) factor is in the direction of the Lie algebra which generates \(g_{3}\). Since \(g_{3}\) commutes with \(g_{1},g_{2}\) and \(g_{\beta}\), this \(\mathrm{U}(1)\) is unbroken for all values of the \(\lambda_{i}\). We propose that the massless hypermultiplet which appears at the origin is in the representation \(\mathbf{2_{0}}\) i.e. a doublet which is neutral under \(\mathrm{U}(1)\). Along the Higgs branch the \(\mathrm{SO}(2)\) vector multiplet combines with four real bosonic (plus fermionic) degrees of freedom from the hypermultiplet to become a long massive vector multiplet, leaving a massless spectrum consisting of a \(\mathrm{U}(1)\) vector multiplet plus a single neutral hypermultiplet. Thus the low energy moduli space is (1+2)-complex dimensional corresponding exactly to the three parameters, \((\lambda_{a},\lambda_{b},\lambda_{c})\) in the family of flat connections.
For this case we have two harmonic 2-forms, \(\Gamma_{1}\) and \(\Gamma_{2}\). On the Coulomb branch both 2-forms are invariant and so \(b^{2}=b^{3}=2\) for \(X_{c}(\mathbb{Z}_{3},\mathbb{Z}_{2})\) matching the expected two-dimensional moduli space and \(U(1)^{2}\) unbroken gauge symmetry at generic points. On the Higgs branch, the action on the 2-forms is,
\[\begin{pmatrix}\Gamma_{1}\\ \Gamma_{2}\end{pmatrix}\mapsto\begin{pmatrix}-\Gamma_{1}\\ \Gamma_{1}+\Gamma_{2}\end{pmatrix} \tag{45}\]
and so there is one \(\mathbb{Z}_{2}\)-odd 2-form, \(\Gamma_{1}\), and one \(\mathbb{Z}_{2}\)-even 2-form, \(\Gamma_{1}+2\Gamma_{2}\). Thus, for \(X_{h}(\mathbb{Z}_{3},\mathbb{Z}_{2})\) we have that \(b^{2}=1\) and \(b^{3}=3\), as anticipated (explicitly, if the action on \(T^{3}\) is \(\alpha:\;(y^{1},y^{2},y^{3})\mapsto(-y^{1},-y^{2},y^{3}+1/2)\), then the harmonic 3-forms are \(dy^{1}\wedge\Gamma_{1},\quad dy^{2}\wedge\Gamma_{1},\quad dy^{3}\wedge(\Gamma _{1}+2\Gamma_{2})\)).
#### 4.4.1 The general case for \(n\) centres
We will now describe the most general \(K\)-invariant Gibbons-Hawking Ricci flat metrics by simply describing \(K\)-invariant configurations of \(n\) centres in \(\mathbb{R}^{3}\) for arbitrary \(n\).
Firstly, we take \(K=\mathbb{Z}_{2}=\langle\alpha\rangle\). Then either one can place \(P_{1}\) centres in the \(\alpha\)-invariant subspace of \(\mathbb{R}^{3}\) (such that their centre of mass is at the origin) or one can arrange \(P_{2}\) pairs of centres to be exchanged under the action of \(\alpha\). So \(P_{1}+2P_{2}=n\) and the dimension of the branch of the moduli space is \(P_{1}+3P_{2}-1\). The dimension is this because the centres that are \(\alpha\)-invariant contribute 1 parameter each (their position on the axis of rotation) and each pair of centres exchanged under \(\alpha\) contributes 3 parameters (since such pairs will be of the form \(\{(a,b,c),(-a,-b,c)\}\)) and the centre of mass condition takes away 1 parameter. Secondly, take \(K=\mathbb{Z}_{2}\times\mathbb{Z}_{2}=\langle\alpha,\beta\rangle\). We can place \(P_{1}\) centres in the \(K\)-invariant subspace, in this case this is just the origin so this is the same as considering the case for \(n-P_{1}\) centres. We can arrange \(P_{2}\) pairs of centres to live in the invariant subspace of, say, \(\alpha\) and be exchanged in pairs under \(\alpha\) and \(\alpha\beta\) (plus cyclic permutation of \(\alpha\), \(\beta\) and \(\alpha\beta\)). Or we can arrange centres in \(P_{3}\) sets of four as a full orbit of the group. Then \(2P_{2}+4P_{3}=n\) and the dimension of the moduli space is \(P_{2}+3P_{3}\).
One could continue this reasoning for more general \(K\). Thus we can find the number of branches simply by finding all the ways of partitioning the integer \(n\) into the tuple \((P_{1},P_{2},...)\) subject to a linear condition on the \(P_{i}\) determined by the fact that the centres must be \(K\)-invariant. As an example, consider \(n=4\) and \(K=\mathbb{Z}_{2}\). We have 3 branches,
\[\begin{array}{c|c}(P_{1},P_{2})&d=P_{1}+3P_{2}-1\\ \hline(4,0)&3\\ (2,1)&4\\ (0,2)&5\end{array}\]
We can explicitly see these branches from the point of view of the centres,
\[(4,0) \leftrightarrow\left\{\vec{a}_{1}=\begin{pmatrix}0\\ 0\\ c_{1}\end{pmatrix},\;\vec{a}_{2}=\begin{pmatrix}0\\ 0\\ c_{2}\end{pmatrix},\;\vec{a}_{3}=\begin{pmatrix}0\\ 0\\ c_{3}\end{pmatrix},\;\vec{a}_{4}=\begin{pmatrix}0\\ 0\\ -c_{1}-c_{2}-c_{3}\end{pmatrix}\right\} \tag{46}\] \[(2,1) \leftrightarrow\left\{\vec{a}_{1}=\begin{pmatrix}a_{1}\\ b_{1}\\ c_{1}\end{pmatrix},\;\vec{a}_{2}=\begin{pmatrix}-a_{1}\\ -b_{1}\\ c_{1}\end{pmatrix},\;\vec{a}_{3}=\begin{pmatrix}0\\ 0\\ -c_{1}+d_{1}\end{pmatrix},\;\vec{a}_{4}=\begin{pmatrix}0\\ 0\\ -c_{1}-d_{1}\end{pmatrix}\right\}\] (47) \[(0,2) \leftrightarrow\left\{\vec{a}_{1}=\begin{pmatrix}a_{1}\\ b_{1}\\ c_{1}\end{pmatrix},\;\vec{a}_{2}=\begin{pmatrix}-a_{1}\\ -b_{1}\\ c_{1}\end{pmatrix},\;\vec{a}_{3}=\begin{pmatrix}a_{3}\\ b_{3}\\ -c_{1}\end{pmatrix},\;\vec{a}_{4}=\begin{pmatrix}-a_{3}\\ -b_{3}\\ -c_{1}\end{pmatrix}\right\} \tag{48}\]
and we can use the map described in subsection 4.3.2 to find the corresponding flat connections, thus giving a prediction for the number and dimension of the branches of the moduli space of flat \(\mathrm{SU}(n)\) connections on \(T^{3}/K\).
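The same counting can be carried out mechanically for any \(n\). The snippet below is only an illustrative sketch (the function name is ours): it enumerates the partitions \(P_{1}+2P_{2}=n\) together with the branch dimensions \(P_{1}+3P_{2}-1\), reproducing the table above for \(n=4\).

```python
def z2_branches(n):
    """K = Z_2 branches for n centres: P1 centres on the invariant axis plus
    P2 exchanged pairs, with P1 + 2*P2 = n and dimension P1 + 3*P2 - 1."""
    return [(n - 2 * p2, p2, (n - 2 * p2) + 3 * p2 - 1) for p2 in range(n // 2 + 1)]

print(z2_branches(4))   # [(4, 0, 3), (2, 1, 4), (0, 2, 5)]
```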
### Some Higher Rank Examples
In this section we describe some higher rank \(ADE\) Higgs branch solutions by embedding the basic \(\mathrm{SU}(2)\) flat connection on \(T^{3}/\mathbb{Z}_{2}\) of section 4.1.1 into higher rank groups.
For instance consider the maximal subgroup \(\mathrm{SU}(2)^{N}\) of \(\mathrm{SU}(2N)\) (or similarly for \(\mathrm{SU}(2N+1)\)). We can take \(N\) diagonal copies of the \(\mathrm{SU}(2)\) solution. The gauge group at the origin of the Higgs branch for the space,
\[\frac{T^{3}\times M_{GH}^{(2N)}}{\mathbb{Z}_{2}} \tag{49}\]
is the centraliser of \(N\) copies of \(g_{\beta}\) from (12) as a subgroup of \(\mathrm{SU}(2N)\). This subgroup is,
\[\mathrm{S}(\mathrm{U}(N)\times\mathrm{U}(N))\cong(\mathrm{SU}(N)\times\mathrm{ SU}(N)\times\mathrm{U}(1))/\mathbb{Z}_{N}, \tag{50}\]
The fields in the higher dimensional theory (i.e. before compactification onto \(T^{3}/\mathbb{Z}_{2}\)) transform in the adjoint of \(\mathrm{SU}(2N)\). So in order to see how the hypermultiplets in the lower dimensional theory transform we must look at the decomposition of this representation,
\[\mathrm{SU}(2N)\rightarrow(\mathrm{SU}(N)\times\mathrm{SU}(N)\times\mathrm{U} (1))/\mathbb{Z}_{N}\]
\[\mathbf{4N}^{2}-\mathbf{1}\rightarrow(\mathbf{N}^{2}-\mathbf{1},\mathbf{1})_ {0}+(\mathbf{1},\mathbf{N}^{2}-\mathbf{1})_{0}+(\mathbf{N},\bar{\mathbf{N}})_ {2}+(\bar{\mathbf{N}},\mathbf{N})_{-2}+(\mathbf{1},\mathbf{1})_{0}\]
which is the adjoint plus the bifundamental and its complex conjugate. The \(\mathcal{N}=2\) vector multiplet will transform in the adjoint and the two hypermultiplets in the bifundamentals; thus we have \(8N^{2}\) real scalars (the hypermultiplets contain \(8\) real scalar degrees of freedom and the dimension of the representation is \(N^{2}\)). Now we can move away from the origin of the moduli space by switching on VEVs. If we turn on the \((\mathbf{N},\bar{\mathbf{N}})_{2}\) field we break the \(\mathrm{U}(1)\) and the two \(\mathrm{SU}(N)\)'s break to a diagonal subgroup and \(4N^{2}\) of the real scalars become massive. The matter representation breaks to adjoints and singlets of this new group. If we give a VEV to one of the remaining scalars in the adjoint we break the group to the maximal torus \(\mathrm{U}(1)^{N-1}\), \(4(N^{2}-N)\) scalars become massive and we can break no further. So we are left with a \(\mathrm{U}(1)^{N-1}\) gauge group and \(8N^{2}-4N^{2}-4(N^{2}-N)=4N\) massless real scalars; these are the moduli on the Higgs branch and match the number of moduli in the explicit flat connection.
This corresponds to \((N-1)\)\(\mathbb{Z}_{2}\)-even spheres (giving \(N-1\) gauge bosons) and \(N\) odd spheres (giving \(2N\)_complex_ scalars). Indeed, several such \(\mathbb{Z}_{2}\) actions can be shown to exist when the centres are arranged in a regular polygon, for example. One can also embed the Coulomb branch solution diagonally with the Higgs branch solution to obtain a mixed branch, which would correspond to having more even spheres. We can achieve this by placing some of the centres on the line that is invariant under the \(\mathbb{Z}_{2}\) action.
Also we may explore the moduli spaces of \(D\) and \(E\) type singularities from the flat connections viewpoint. For the \(D\) case this comes from the maximal subgroup of \(\mathrm{Spin}(4N)\),
\[\mathrm{Spin}(4N)\rightarrow\mathrm{Spin}(4)^{N}\cong\mathrm{SU}(2)^{2N} \tag{51}\]
and then the centraliser of the diagonally embedded \(\mathrm{SU}(2)\) Higgs branch solution at the origin is,
\[\mathrm{Spin}(2N)\times\mathrm{Spin}(2N). \tag{52}\]
Away from the origin the gauge group gets completely broken and there are \(8N\) massless real scalars remaining. Similarly, for the \(\mathrm{SU}(2)^{8}\) subgroup of \(E_{8}\) one should get that the surviving gauge symmetry at the origin of the Higgs branch is,
\[\mathrm{Spin}(16)/\mathbb{Z}_{2}\subset E_{8}. \tag{53}\]
and again, away from the origin the gauge group is completely broken and there are 32 massless real scalars corresponding to 8 copies of the moduli of the basic \(\mathrm{SU}(2)\) solution.
Another way in which we can generalise is to consider more general twists than \(\mathbb{Z}_{2}\). For example, we could consider the space,
\[\frac{T^{3}\times M_{EH}}{\mathbb{Z}_{3}} \tag{54}\]
where the action on the torus is,
\[(y^{1},y^{2},y^{3})\mapsto(-y^{2},y^{1}-y^{2},y^{3}+1/3) \tag{55}\]
Then on \(T^{3}/\mathbb{Z}_{3}\) we have one harmonic 1-form, \(dx^{3}\), and two '\(\mathbb{Z}_{3}\)-twisted' harmonic 1-forms, \(e^{-2\pi i/3}\,dx^{1}+dx^{2}\) and \(e^{2\pi i/3}\,dx^{1}+dx^{2}\). However, for this example, the \(\mathbb{Z}_{3}\) can only act trivially on the 2-sphere in the Eguchi-Hanson space (since there are no non-trivial homomorphisms from \(\mathbb{Z}_{3}\) into \(\mathbb{Z}_{2}\)) and so we will only get a Coulomb branch. If we replace \(M_{EH}\) with \(M_{GH}^{(3)}\) then we do get a Higgs branch and the \(\mathbb{Z}_{3}\)-twisted 1-forms play the role that the \(\mathbb{Z}_{2}\)-twisted ones did in our previous discussion.
### D-type ALE space
For \(D_{n}\) singularities we can consider the moduli space of flat connections by thinking about configurations of centres as we did for the \(A_{n}\) case. Far away from the origin, the ALE space that is the resolution of a \(D_{n}\) singularity looks essentially like Gibbons-Hawking space modded out by a \(\mathbb{Z}_{2}\) action \((t,\vec{x})\mapsto(-t,-\vec{x})\) so now instead of thinking about centres in \(\mathbb{R}^{3}\) we can think about centres in \(\mathbb{R}^{3}/\mathbb{Z}_{2}\) or equivalently, centres in \(\mathbb{R}^{3}\) along with their \(\mathbb{Z}_{2}\) images.
We no longer require that the centres sum to zero, since this is trivially satisfied by including their \(\mathbb{Z}_{2}\) images. So the only constraints are that the set of centres is invariant under the \(K\) action and additionally that \(K\) acts on the Dynkin diagram defined by the 2-spheres as an element of \(\mathrm{Aut}(\Delta_{D_{n}})\ltimes\mathrm{Weyl}(D_{n})=\mathbb{Z}_{2}\ltimes (\mathbb{Z}_{2}^{n-1}\ltimes S_{n})\) (except for \(n=4\), when \(\mathrm{Aut}(\Delta_{D_{n}})=S_{3}\)). The \(\mathbb{Z}_{2}^{n-1}\) factor is important; it corresponds to multiplying an even number of the coordinates of a \(D_{n}\) root vector by \(-1\). Geometrically, this corresponds to sending an even number of centres to their \(\mathbb{Z}_{2}\) images. Thus, on top of the constraint that \(K\) preserves the set of centres, we also have that if \(K\) sends \(m\) centres to their \(\mathbb{Z}_{2}\) images, \(m\) must be even (note that for the \(A\)-type examples the fact that \(K\) preserves the set of centres automatically means it acts as an element of the Weyl group on the 2-spheres, so we got no further constraint like we do here).
As an example, consider \(D_{2}\) with \(K=\mathbb{Z}_{2}\). We expect this to coincide with two copies of our \(A_{1}\) example by virtue of the isomorphism \(\mathfrak{so}(4)\cong\mathfrak{su}(2)\times\mathfrak{su}(2)\). The allowed configurations of centres (omitting the orientifold
images) are,
\[\left\{\vec{a}_{1}=\begin{pmatrix}0\\ 0\\ c_{1}\end{pmatrix},\;\vec{a}_{2}=\begin{pmatrix}0\\ 0\\ c_{2}\end{pmatrix}\right\} \tag{56}\] \[\left\{\vec{a}_{1}=\begin{pmatrix}a_{1}\\ b_{1}\\ c_{1}\end{pmatrix},\;\vec{a}_{2}=\begin{pmatrix}-a_{1}\\ -b_{1}\\ c_{1}\end{pmatrix}\right\}\] (57) \[\left\{\vec{a}_{1}=\begin{pmatrix}a_{1}\\ b_{1}\\ 0\end{pmatrix},\;\vec{a}_{2}=\begin{pmatrix}a_{2}\\ b_{2}\\ 0\end{pmatrix}\right\} \tag{58}\]
where the first branch corresponds to two copies of the \(A_{1}\) Coulomb branch, the second to one copy of the \(A_{1}\) Coulomb branch and one copy of the Higgs branch, and the third to two copies of the \(A_{1}\) Higgs branch. By looking at the corresponding flat connection, we see that on the first branch the full \(\mathrm{SO}(4)\) is unbroken at the origin of the branch and at generic points this is broken to \(\mathrm{SO}(2)^{2}\), on the second branch there is an unbroken \(\mathrm{SO}(2)\times\mathrm{SU}(2)\) at the origin and at generic points an unbroken \(\mathrm{SO}(2)\), and on the third branch there is an unbroken \(\mathrm{SO}(2)\times\mathrm{SO}(2)\) at the origin which is completely broken at generic points. This matches nicely with what we would expect from our considerations for \(A_{1}\).
Next, we consider \(D_{3}\) with \(K=\mathbb{Z}_{2}\). This example should give the same answer as the \(A_{3}\) example, thanks to the isomorphism \(\mathfrak{so}(6)\cong\mathfrak{su}(4)\). The allowed configurations of centres are,
\[\left\{\vec{a}_{1}=\begin{pmatrix}0\\ 0\\ c_{1}\end{pmatrix},\;\vec{a}_{2}=\begin{pmatrix}0\\ 0\\ c_{2}\end{pmatrix},\;\vec{a}_{3}=\begin{pmatrix}0\\ 0\\ c_{3}\end{pmatrix}\right\} \tag{59}\] \[\left\{\vec{a}_{1}=\begin{pmatrix}a_{1}\\ b_{1}\\ c_{1}\end{pmatrix},\;\vec{a}_{2}=\begin{pmatrix}-a_{1}\\ -b_{1}\\ c_{1}\end{pmatrix},\;\vec{a}_{3}=\begin{pmatrix}0\\ 0\\ c_{2}\end{pmatrix}\right\}\] (60) \[\left\{\vec{a}_{1}=\begin{pmatrix}a_{1}\\ b_{1}\\ 0\end{pmatrix},\;\vec{a}_{2}=\begin{pmatrix}a_{2}\\ b_{2}\\ 0\end{pmatrix},\;\vec{a}_{3}=\begin{pmatrix}0\\ 0\\ c_{3}\end{pmatrix}\right\} \tag{61}\]
where, using the notation of subsection 4.4, the first branch corresponds to the \(A_{3}\) branch labelled \((4,0)\), the second branch to \((2,1)\) and the third to \((0,2)\) and dimensions of the branches agree as expected. Note how, for example, the 6d solution given by \(\{\vec{a}_{1}=(a_{1},b_{1},0),\;\vec{a}_{2}=(a_{2},b_{2},0),\;\vec{a}_{3}=(a_ {3},b_{3},0)\}\) is not allowed since the \(K=\mathbb{Z}_{2}\) action sends an odd number of centres to their images.
### Compact Examples
Finally, we briefly also discuss compact examples, where now the low energy theory is a four dimensional supergravity theory coupled to vector multiplets and chiral multiplets. The general strategy is clear: the chiral multiplet representations under the \(ADE\)-gauge symmetries are determined by the Weyl group action and flat connections as above.
For the basic Joyce-Karigiannis examples, the answer (as given in section 3) is clear when one considers the \(\mathbb{Z}_{2}\)-twisted desingularisation of an \(A_{1}\)-singularity: one obtains an \(\mathrm{SO}(2)\) gauge theory with a chiral multiplet in the fundamental representation. We point out that several of Joyce's original examples are of this form [22].
#### 4.7.1 Joyce Examples
All of the examples in [22, 24] are obtained by considering a quotient of a flat 7-torus by a finite group \(\Gamma\), producing an orbifold whose singularities, in favourable cases, can be resolved, yielding a smooth 7-manifold with metrics of \(G_{2}\)-holonomy. In [22], in all except two examples (17 and 18), all of the singularities that occur are of the kinds described in this paper since they are all locally modelled on \(X_{0}(\Gamma_{ADE},K)\). Hence, the results presented here allow one to simply read off the key ingredients of the low energy field theory arising from those examples. For instance, all of the examples of Table 1 of [22] have \(A_{1}\)-singularities fibered over \(T^{3}\) or \(T^{3}/\mathbb{Z}_{2}\). Hence for each \(T^{3}\) one has an \(\mathrm{SU}(2)\) vector multiplet plus three adjoint chiral multiplets. Each \(T^{3}/\mathbb{Z}_{2}\) instead gives rise either to an \(\mathrm{SO}(2)\) vector multiplet and a doublet of chiral multiplets or to an \(\mathrm{SU}(2)\) vector multiplet and a single adjoint chiral multiplet, depending on which choice of resolution one makes.
Higher order singularities of the form \(X_{0}(\mathbb{Z}_{3},\mathbb{Z}_{2})\) also occur, e.g. in example 15 of [22] the singular set contains one component with singularity modelled on \(X_{0}(\mathbb{Z}_{3},\mathbb{Z}_{2})\). As discussed in section 4.4, there are two distinct resolutions of \(X_{0}\), one which gives rise to an \(\mathrm{SU}(3)\) vector multiplet plus an adjoint chiral multiplet (and a two-dimensional space of Coulomb vacua) and a second which gives rise to an \(\mathrm{SO}(2)\times\mathrm{U}(1)\) vector multiplet with a pair of chiral multiplets both in the fundamental of \(\mathrm{SO}(2)\) (and a three-dimensional space of vacua).
Joyce's examples were extended by Barrett [12] and Reidegeld [30] and include cases containing \(A_{n}\), \(D_{n}\) and \(E_{6}\) singularities fibered over \(T^{3}/K\) for various \(K\). The corresponding gauge and matter representations can similarly be obtained from the results presented here [8].
Acknowledgements.
We would like to thank R. Barbosa, L. Foscolo, D. Joyce, S. Karigiannis and J. Lotay for discussions. The work of BSA and DB is supported by a grant from the Simons Foundation (#488569, Bobby Acharya)
|
2309.12841 | Reward Function Design for Crowd Simulation via Reinforcement Learning | Crowd simulation is important for video-games design, since it enables to
populate virtual worlds with autonomous avatars that navigate in a human-like
manner. Reinforcement learning has shown great potential in simulating virtual
crowds, but the design of the reward function is critical to achieving
effective and efficient results. In this work, we explore the design of reward
functions for reinforcement learning-based crowd simulation. We provide
theoretical insights on the validity of certain reward functions according to
their analytical properties, and evaluate them empirically using a range of
scenarios, using the energy efficiency as the metric. Our experiments show that
directly minimizing the energy usage is a viable strategy as long as it is
paired with an appropriately scaled guiding potential, and enable us to study
the impact of the different reward components on the behavior of the simulated
crowd. Our findings can inform the development of new crowd simulation
techniques, and contribute to the wider study of human-like navigation. | Ariel Kwiatkowski, Vicky Kalogeiton, Julien Pettré, Marie-Paule Cani | 2023-09-22T12:55:30Z | http://arxiv.org/abs/2309.12841v1 | # Reward Function Design for Crowd Simulation via Reinforcement Learning
###### Abstract.
Crowd simulation is important for video-games design, since it enables to populate virtual worlds with autonomous avatars that navigate in a human-like manner. Reinforcement learning has shown great potential in simulating virtual crowds, but the design of the reward function is critical to achieving effective and efficient results. In this work, we explore the design of reward functions for reinforcement learning-based crowd simulation. We provide theoretical insights on the validity of certain reward functions according to their analytical properties, and evaluate them empirically using a range of scenarios, using the energy efficiency as the metric. Our experiments show that directly minimizing the energy usage is a viable strategy as long as it is paired with an appropriately scaled guiding potential, and enable us to study the impact of the different reward components on the behavior of the simulated crowd. Our findings can inform the development of new crowd simulation techniques, and contribute to the wider study of human-like navigation.
## 1. Introduction
Reinforcement Learning (RL) holds a unique potential for simulation of human crowds, offering flexibility and power that traditional control or planning algorithms often lack. However, successfully using RL for this purpose brings about new challenges, primarily rooted in the need to design an effective reward function.
The design of the reward function is crucial for the success of RL algorithms in real-world applications. The balance between sparsity and density of rewards has major implications for the performance of these algorithms. Sparse rewards may lead to the standard algorithms not converging in reasonable time. Conversely, overly dense reward could potentially impact the optimal policy and the relative performances of various suboptimal policies. This issue is particularly relevant in the context of simulating human crowds where, apart from clear objectives like navigation and collision avoidance, the goal of reproducing human-like behavior remains somewhat vague.
During locomotion, humans tend to move at a certain comfortable speed that is specific to the individual, usually around \(1.3\,\mathrm{m/s}\) (Whittle, 2008). Following Guy et al. (2010), this is a result of minimizing the energy expended when moving between two points. In principle, this measure could be used as a reward function for an RL agent to optimize. In practice, however, this tends to be ineffective due to the unique structure of energy minimization, where agents must take short-term negative rewards to obtain long-term positive rewards. The typical solution is designing an artificial reward function, lacking an explicit connection to the energy minimization aspect, but focusing on rewarding movement towards the goal at the right speed.
We propose the development of a more principled reward function that takes into consideration energy efficiency of motion, serving as a proxy for human-likeness. This choice stems from the lack
of metrics that specifically quantify human-likeness in existing literature. It is important to note that energy efficiency does not fully describe human behavior, ignoring aspects like long-term goals and subtle inter-personal interactions. Nonetheless, this approach lays the groundwork for more advanced future methods.
We validate our approach both theoretically and empirically. First, we analyze the properties of various reward functions under the discounted utility paradigm. Second, we train RL agents using these reward functions, and compare their performance using the metric of energy usage.
Our contributions are:
1. Physically-based extension of the energy usage model that accounts for acceleration.
2. Evaluation of various reward functions as proxies for energy minimization.
## 2. Related Work
Crowd simulation has gained considerable attention in the field of computer graphics, artificial intelligence, and robotics. Early techniques relied on rule-based systems, and force-based or velocity-based methods (see (Toll and Pettre, 2021) for a review). Recently, there has been an increasing interest in employing Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) for crowd simulation (Kwiatkowski et al., 2022). In this section, we briefly summarize prior work that is relevant to RL crowds simulation.
**Reinforcement Learning.** RL is an approach to learning sequential decision-making processes, where agents interact with their environment to maximize cumulative rewards. State-of-the-art RL algorithms frequently use neural networks, such as in the Policy Gradient Theorem (Sutton et al., 1999) and Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017). The latter has become the de facto standard on-policy algorithm due to its simplicity and efficiency, and is the algorithm we use in this work.
**Reward function.** Designing the right reward function is a critical aspect of RL as it shapes the agent's behavior and learning process. It is often nontrivial and requires striking a balance between simplicity and expressiveness (Ng et al., 1999). Sparse rewards may lead to difficulties in exploration, while overly dense rewards can result in unintended behaviors or suboptimal solutions (Sutton and Barto, 2018). Several works have addressed reward function design, including inverse reinforcement learning (IRL) (Abbeel and Ng, 2004; Ng and Russell, 2000), which aims to learn the reward function by observing expert demonstrations, and reward shaping (Ng et al., 1999), which augments the original reward function to guide the agent's learning towards a desired behavior. In multiagent settings, designing the reward function becomes even more challenging, as the interactions between agents need to be considered (Leibo et al., 2017). As such, the importance of reward function design in RL cannot be overstated, as it directly influences the agent's learning efficiency, generalization capability, and ultimately, the quality of the learned policy. In this work, we draw from the idea of using a potential term, and adapt it to the crowd simulation setting.
**Crowd Simulation via DRL.** Various studies have applied DRL to crowd simulation tasks. Long et al. (2018) focus on multiagent robotic navigation tasks, while Lee et al. (2018) demonstrate that a single trained RL agent can control multiple agents in diverse crowd scenarios. Sun et al. (2019) train groups of agents by following leader agents, and other works (Xu et al., 2020; Zheng and Liu, 2019) combine DRL with velocity obstacle components for collision-free movement. To generate high-quality trajectories, Xu and Karamouzas (2021) use real-world human trajectory data to train a supervised model that evaluates the human-likeness of generated trajectories. Hu et al. (2022) and Panayiotou et al. (2022) employ parametric RL approaches to produce heterogeneous behaviors and configurable agent personalities. Lv et al. (2022) model realistic crowds in combat simulations using the concept of emotional contagion. Kwiatkowski et al. (2023) explore the impact of observation spaces on the effectiveness of RL for crowd simulation. In this work, we introduce a more principled approach of designing the reward function for human-like crowds.
**Energy efficiency.** A commonly used objective for generating and evaluating trajectories is the Principle of Least Effort (PLE). Its origins trace back to Zipf (1949), who proposes that human behavior is broadly characterized by minimizing the perceived effort. Taking energy consumption as a measure of effort, this implies a formulation of human-like trajectories being the energy-efficient ones, which has also been used in prior work on crowd simulation (Bruneau et al., 2015; Guy et al., 2010). In our work, we extend this paradigm to also be applicable to training crowds with RL.
## 3. Energy Usage Model
In this work, we follow the hypothesis of the Principle of Minimum Energy (PME) as stated by Guy et al. (2010), according to which humans tend to choose their trajectories based on minimizing the energy usage. Therefore, we use the energy efficiency as the main benchmark for the quality of a given trajectory. While it does not fully describe human-likeness, it is well-defined and easy to estimate with a simple model.
As a starting point, we consider a model of energy usage based on biomechanical research (Whittle, 2008), and used as a metric in a number of works concerning crowd simulation (Bruneau et al., 2015; Guy et al., 2010; Hu et al., 2022; Kwiatkowski et al., 2023; Xu and Karamouzas, 2021). We estimate the energy used in a discrete timestep \(\Delta t\) as:
\[E=(e_{\text{s}}+e_{\text{w}}v^{2})\Delta t \tag{1}\]
where \(e_{\text{s}}\) and \(e_{\text{w}}\) are parameters specific to a given person, with typical values of \(e_{\text{s}}=2.23\) and \(e_{\text{w}}=1.26\) in SI units, computed per unit mass (Whittle, 2008).
It is important to keep in mind that this model does not account for acceleration or turning, and instead only applies to linear motion. In this case, the optimal velocity (i.e. one that minimizes the energy usage on a fixed straight trajectory) is \(v^{*}=\sqrt{e_{\text{s}}/e_{\text{w}}}\). This value emerges from integrating the energy usage across the entire path - moving too quickly uses too much energy, and moving too slowly extends the duration of the trajectory, also increasing the energy usage.
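As a concrete illustration, the base model and the resulting optimal speed can be computed directly (a minimal sketch; the variable and function names are ours):

```python
import numpy as np

E_S, E_W = 2.23, 1.26                  # per-unit-mass parameters (Whittle, 2008)

def energy_linear(v, dt):
    """Energy used in one timestep at a constant speed v (Equation 1)."""
    return (E_S + E_W * v**2) * dt

v_opt = np.sqrt(E_S / E_W)             # ~1.33 m/s, close to typical walking speed
```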
### Acceleration correction
In order to improve the energy estimation, we expand the model in Equation 1 so that it also considers the acceleration of agents throughout their trajectories. We start by deriving its basic form. Consider a body moving at a constant velocity \(v\), subject to a force
opposite to the direction of movement \(F_{d}=-\lambda v\). In Newtonian mechanics, we know that the amount of energy used during displacement is \(E=Fs\), where \(F\) is the applied force, and \(s\) is the distance. To adapt this to our discrete model, we factor out the timestep, obtaining \(E=Fv\,\Delta t\). Substituting the force of drag \(F_{d}\), and setting \(\lambda=e_{w}\) we get:
\[E=-\lambda v^{2}\Delta t=-e_{w}v^{2}\Delta t \tag{2}\]
This is the energy lost due to drag in each timestep. To counteract it, the agent needs to use energy equal to the absolute value of this quantity. Combining it with a constant basal energy usage of \(e_{s}\Delta t\), we get \(E=e_{s}\Delta t+e_{w}v^{2}\Delta t\), recovering Equation 1.
To extend this reasoning, consider an agent that moves at velocities \(\mathbf{v}_{0}\) and \(\mathbf{v}\) in two consecutive timesteps, that is with an acceleration \(\mathbf{a}=\frac{\mathbf{v}-\mathbf{v}_{0}}{\Delta t}\). Assume that the agent is applying a certain force \(\mathbf{F}\) in an arbitrary direction in order to modify its velocity. Using simple Euler integration, we have:
\[\mathbf{v}=\mathbf{v}_{0}+\mathbf{F}\Delta t-e_{w}\mathbf{v}_{0}\Delta t=(1-e_{w}\Delta t)\mathbf{v}_{0}+\mathbf{F}\Delta t \tag{3}\]
Transforming this to obtain the force, we get:
\[\mathbf{F}=\frac{1}{\Delta t}(\mathbf{v}-(1-e_{w}\Delta t)\mathbf{v}_{0})=\frac{1}{\Delta t}(\mathbf{v}-\mathbf{v}_{0}+e_{w}\mathbf{v}_{0}\Delta t) \tag{4}\]
From this we can compute the energy usage as follows:
\[E =\mathbf{F}\cdot\mathbf{v}\Delta t\] \[=\mathbf{v}\cdot\mathbf{v}-\mathbf{v}\cdot\mathbf{v}_{0}+e_{w}\mathbf{v}_{0}\cdot\mathbf{v}\Delta t\] \[=\mathbf{v}\cdot\left(\frac{\mathbf{v}-\mathbf{v}_{0}}{\Delta t}\right)\Delta t+e_{w}\mathbf{v}_{0}\cdot\mathbf{v}\Delta t\] \[=(\mathbf{v}\cdot\mathbf{a}+e_{w}\mathbf{v}_{0}\cdot\mathbf{v})\Delta t \tag{5}\]
Again taking the absolute value and adding a basal energy usage, we obtain our proposed model for energy usage:
\[E=(e_{s}+|\mathbf{v}\cdot\mathbf{a}+e_{w}\mathbf{v}_{0}\cdot\mathbf{v}|)\ \Delta t \tag{6}\]
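A direct implementation of Equation 6 can be sketched as follows (the function name is ours; the model reduces to Equation 1 when the velocity is unchanged between timesteps):

```python
import numpy as np

E_S, E_W = 2.23, 1.26

def energy_step(v, v_prev, dt):
    """Energy used in one timestep (Equation 6), given the current and the
    previous 2D velocity vectors; reduces to Equation 1 when v == v_prev."""
    v, v_prev = np.asarray(v, dtype=float), np.asarray(v_prev, dtype=float)
    a = (v - v_prev) / dt                       # finite-difference acceleration
    return (E_S + abs(np.dot(v, a) + E_W * np.dot(v_prev, v))) * dt
```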
To better understand Equation 6, consider an agent moving with linear acceleration \(a\) in the following four cases:
1. Constant motion \(a=0\)
2. Acceleration \(a>0\iff v>v_{0}\)
3. Passive deceleration \(0>a>-e_{w}v_{0}\iff v_{0}>v>(1-e_{w}\Delta t)v_{0}\)
4. Active deceleration \(a<-e_{w}v_{0}\iff v<(1-e_{w}\Delta t)v_{0}\)
In the first case \(a=0\), the agent moves at a constant speed \(v=||\mathbf{v}_{0}||=||\mathbf{v}||\). The energy usage is then:
\[E=e_{s}\Delta t+|0+e_{w}v^{2}|\Delta t=(e_{s}+e_{w}v^{2})\Delta t \tag{7}\]
which agrees with Equation 1.
If \(a>0\), the agent increases its movement speed. The energy usage then simplifies to:
\[E =(e_{s}+av+e_{w}v_{0}v)\Delta t\] \[=(e_{s}+e_{w}(v-a\Delta t)v+av)\Delta t\] \[=(e_{s}+e_{w}v^{2}+(1-e_{w}\Delta t)av)\Delta t\] \[\approx e_{s}\Delta t+e_{w}v^{2}\Delta t+av\Delta t \tag{8}\]
where the term \(av\Delta t\) corresponds to the additional kinetic energy needed to move at a velocity \(v\).
If \(a<0\), the agent decelerates. Note, however, that there are two distinct possibilities. If the agent simply stops putting in effort, it will automatically slow down by a factor of \((1-e_{w}\Delta t)\). We call any deceleration below this threshold **passive deceleration**, which decreases the energy usage. In contrast, if the agent wants to slow down to a speed lower than \((1-e_{w}\Delta t)v_{0}\), this is **active deceleration**, which requires using additional energy.
We depict this relationship in Figure 2. When the velocity remains constant at \(v=v_{0}=1.3\,\text{m/s}\), the energy usage is the same in both models. The lowest energy usage (i.e. only from the basal metabolic rate) occurs at \(v=(1-e_{w}\Delta t)v_{0}=1.28\,\text{m/s}\), when the agent decelerates naturally.
## 4. Navigation Reward Design
Our main goal in this work is designing a reward function which, when optimized, leads to a policy that minimizes the energy usage, as estimated using the model from Section 3. In this section, we discuss a few issues in designing such a reward function.
### Energy as reward
A natural starting point is simply using a reward equal to the negative energy usage:
\[R=-e_{s}\Delta t-e_{w}v^{2}\Delta t \tag{9}\]
or
\[R=-e_{s}\Delta t-|\mathbf{v}\cdot\mathbf{a}+e_{w}\mathbf{v}_{0}\cdot\mathbf{v}|\Delta t \tag{10}\]
This formulation has two critical issues, which make it unfit for being used as a reward function directly. To see this, consider the base reward of Equation 9 for simplicity.
#### 4.1.1. Local optimum
In an RL training procedure, each agent begins by taking random actions. In the case of microscopic crowd simulation, that corresponds to choosing a direction, and setting either the velocity or the acceleration in that direction. If an action leads to a higher reward, its probability increases, and if it leads to a lower reward, its probability decreases.
Consider an agent with a simple objective of moving to a specific location, maximizing the reward from Equation 9. The reward is
accumulated from the beginning of the episode, until the agent reaches the goal, or until a predefined time limit. Note that with this structure, the real penalty for not reaching the goal is delivered by the agent having to accumulate the negative reward until the time limit. If the time limit is sufficiently high, it is better for the agent to spend some energy in order to reach its goal and not use any energy afterwards, as compared to spending a long time at rest, even without using energy for movement.

Figure 2. Energy used in a single timestep when moving at a velocity of \(v\), after having the velocity of \(1.3\,\text{m/s}\) in the previous timestep, with \(\Delta t=0.01\,\text{s}\).
However, during training, the agent is more likely to try to move, but fail reaching the goal. It then gets the full time-based penalty, but also a penalty for using additional energy for movement. The agent does not know how to reduce the time-based penalty, but it can decrease its energy usage by slowing down. Eventually, it will settle into a local optimum of standing still, which is a failure case.
#### 4.1.2. Global optimum
The second problem is related to the fact that modern RL algorithms predominantly use the discounted utility paradigm, weighing future rewards with an exponentially decaying discount factor. Similarly to not reaching the goal, the penalty for moving too slowly is that the agent will have to spend energy in many more timesteps towards the end of the episode. When making a decision at the beginning of the episode, those rewards are heavily discounted, and thus less important.
Consider now the following experiment: the agent travels in a straight line, and has to reach \(x=d\) while moving at a constant velocity \(v\). The reward is discounted exponentially with a discount factor \(\gamma\). In Figure 3, we show the discounted reward for some values of \(d\) and \(\gamma\). The global optimum for a typical discount factor around \(0.99\) is \(v=0\), which corresponds to the agent not moving at all. Depending on the exact values, the optimal value may be anywhere between \(0\) and \(\sqrt{\frac{e_{s}}{e_{w}}}\), which is a significant problem if our goal is training an agent whose optimal velocity is exactly \(\sqrt{\frac{e_{s}}{e_{w}}}\).
The fact that discounting changes the optimal policy is not necessarily unexpected. Naik et al. (2019) show that using discounted rewards when training RL agents may change the optimal policy. In many practical problems, this is not a big concern, and the discount factor is treated as yet another hyperparameter. In this case, however, the discount factor directly impacts the properties of the environment.
#### 4.1.3. Possible solutions
There are various ways to tackle the problems described above, but it is important to note that both of them have to be solved together. In order to avoid the local optimum of standing still, we could employ a curriculum-based approach, where the agents initially learn to navigate a short distance without any obstacle. As the training progresses, the distance and the number of agents can be increased, with the hope that the agents will not stop moving.
To fix the issue with the global optimum, the obvious solution is not using any reward discounting. In practice, however, this turns out to be much more unstable and difficult to train. Alternatively, a different non-exponential discounting method could be employed, so that the variance of the gradient estimation is low enough for efficient training, but the optimal velocity remains correct.
Both of these solutions add a non-negligible amount of complexity to the learning algorithm. While in certain situations that might be acceptable, note that all these issues stem from the simple scenario of a single agent navigating to a goal in an energy-efficient manner. With more complicated applications, the complexity is likely to become even higher, e.g. via a curriculum designed for a different objective.
To avoid the compounding complexity, we instead propose changing the reward function. Ideally, it should remain similar to the energy usage so that the emergent behavior is still energy-efficient. It should also tackle both of the aforementioned issues - that is, the reward for moving towards the goal should be higher than for standing still, and the optimal velocity should be invariant under temporal discounting.
### Energy-based potential
Adding a guiding potential to the reward function is a common technique of making sparse rewards more dense. Ng et al. (1999) show that adding a reward of the form \(R(s,a,s^{\prime})=\gamma\Phi(s^{\prime})-\Phi(s)\) does not change the optimal policy for the \(\gamma\)-discounted rewards. Note that this assumes that the discounted reward is the true objective of the RL task. This is not true in the case of navigation, as we generally want the global energy usage to be optimal. Nevertheless, it can serve as inspiration for designing an analogous guiding term.
In the context of human navigation, there is a simple heuristic that we can use as a guiding potential - the distance from the goal. Consider the following reward function:
\[r(\mathbf{v})=-e_{s}\Delta t-e_{w}v^{2}\Delta t+\tilde{c}_{p}\mathbf{v}\cdot\hat{\mathbf{g}} \tag{11}\]
where \(\hat{\mathbf{g}}\) is a unit vector pointing from the agent to the goal. Note that the potential term \(\mathbf{v}\cdot\hat{\mathbf{g}}\) is equal to the change in the distance between the agent and its goal in two consecutive timesteps.
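In code, this reward can be sketched as follows (an illustration only; \(\tilde{c}_{p}\) is kept as a parameter, with the discounting-invariant value \(2\sqrt{e_{s}e_{w}}\) derived below used as the default):

```python
import numpy as np

E_S, E_W = 2.23, 1.26

def reward_with_potential(v, goal_dir, dt, c_p_tilde=2.0 * np.sqrt(E_S * E_W)):
    """Energy term plus guiding potential (Equation 11).
    v: 2D velocity vector; goal_dir: unit vector from the agent to its goal."""
    v = np.asarray(v, dtype=float)
    energy = E_S * dt + E_W * float(np.dot(v, v)) * dt
    potential = c_p_tilde * float(np.dot(v, goal_dir))
    return -energy + potential
```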
This induces a total discounted reward of:
\[R^{\prime}=\int_{0}^{T}e^{t\ln\gamma}\left(-e_{s}-e_{w}v^{2}+\tilde{c}_{p}\,\mathbf{v}\cdot\hat{\mathbf{g}}\right)dt \tag{12}\]

Figure 3. Normalized discounted reward, with energy optimization as the direct objective. Depending on the distance \(d\) and the discount factor \(\gamma\), the global optimum is different, and in some cases, the optimal behavior is standing still with \(v=0\).
To obtain a bound on the value of \(c_{p}\), we set the condition that when moving directly towards the goal, \(R(v^{*})>R(0)\), i.e. it is better to move towards the goal than stand still. This implies that \(\tilde{c}_{p}>\sqrt{e_{s}e_{w}}\). For simplicity of further analysis, we define \(c_{p}=\frac{\tilde{c}_{p}}{\sqrt{e_{s}e_{w}}}\).
### Discounting invariance
With a simple simulation, it is clear that there is a nontrivial interaction between the values of the discount factor \(\gamma\), the coefficient \(c_{p}\), and the resulting optimal velocity.
Consider the discounted sum of rewards defined in Equation 11, with a simple policy of moving towards the goal with a speed \(v\). With a continuous model of the problem, we can define the discounted sum of rewards as:
\[R^{\prime} =\int_{0}^{T}e^{t\ln\gamma}\left(-e_{s}-e_{w}v^{2}+c_{p}\sqrt{e_{s}e_{w}}\,v\right)dt\] \[=\frac{1-\gamma^{\frac{d}{v}}}{-\ln\gamma}\left(-e_{w}v^{2}+c_{p}\sqrt{e_{s}e_{w}}\,v-e_{s}\right) \tag{13}\]
We differentiate this expression w.r.t. \(v\) to obtain an expression for the optimal velocity, and interpret it as an implicit function whose roots correspond to the optimal velocity with a given discount factor \(\gamma\):
\[F(v,\gamma)=\frac{\left(-2e_{w}v+\tilde{c}_{p}\right)\left(1-\gamma^{\frac{d}{v}}\right)}{-\ln\gamma}-\frac{\left(-e_{w}v^{2}+\tilde{c}_{p}v-e_{s}\right)\gamma^{\frac{d}{v}}d}{v^{2}}=0 \tag{14}\]
Solving this analytically for \(v\) is difficult. Instead, we consider the implicit derivative:
\[\frac{dv}{d\gamma}=-\frac{dF}{d\gamma}/\frac{dF}{dv} \tag{15}\]
While the resulting expression is highly complex, it is solvable for \(c_{p}\) analytically, yielding the result:
\[\frac{dv}{d\gamma}=0\iff c_{p}=2 \tag{16}\]
This means that using the reward from Equation 11 with \(\tilde{c}_{p}=2\sqrt{e_{s}e_{w}}\), the optimal velocity is independent of the discount factor. Note that if we consider non-exponential discounting as a weighted sum of exponential discounting, this conclusion extends to other discounting methods, enabling the application of methods like hyperbolic discounting (Fedus et al., 2019) or arbitrary non-exponential discounting (Kwiatkowski et al., 2023).
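This invariance can also be checked numerically. The sketch below (illustrative code) evaluates Equation 13 on a grid of speeds and confirms that the maximizer stays at \(v^{*}=\sqrt{e_{s}/e_{w}}\) across discount factors when \(c_{p}=2\):

```python
import numpy as np

E_S, E_W = 2.23, 1.26
V_STAR = np.sqrt(E_S / E_W)

def discounted_return(v, gamma, d=20.0, c_p=2.0):
    """Discounted return of Equation 13 for straight-line motion at speed v > 0."""
    c_tilde = c_p * np.sqrt(E_S * E_W)
    return (1 - gamma ** (d / v)) / (-np.log(gamma)) * (-E_W * v**2 + c_tilde * v - E_S)

speeds = np.linspace(0.2, 3.0, 2801)
for gamma in (0.9, 0.99, 0.999):
    best = speeds[np.argmax([discounted_return(v, gamma) for v in speeds])]
    print(f"gamma={gamma}: best speed ~ {best:.3f} (v* = {V_STAR:.3f})")
```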
### Non-finishing penalty
When measuring the energy usage as a reward function, or even as a metric, there is another consideration that stems from the RL setting - the time limit. While theoretically an agent could infinitely explore until they reach the goal, this is impractical. Instead, RL algorithms typically set a maximum number of timesteps allowed in an episode. After this limit passes, the episode terminates, regardless of the state that the agent is in.
In principle, the value of the time limit should not matter as long as it is sufficient to reach the goal. However, the structure of the energy-based reward (Equations 9 and 10) makes it potentially impactful. Let \(T\) be the time limit in seconds, \(d\) the total distance from the goal. Moving in a straight line at the optimal velocity \(v^{*}\), the time needed to reach the goal is \(T^{*}=\frac{d}{v^{*}}=\sqrt{\frac{e_{w}}{e_{s}}}d\), and the energy used in this process is \(2\sqrt{e_{s}e_{w}}d\). If this energy is greater than that of standing still until the end of the episode \(e_{s}T\), then the optimal policy according to the metric may indeed be simply standing still.
To prevent this, one option is simply setting the time limit so that \(T>2\frac{d}{v^{*}}\), in which case moving at the optimal velocity will result in a lower energy usage than standing still until the end of the episode. This corresponds to an episode length more than twice as long as it would take the agent to reach the goal moving at optimal velocity. A significant drawback of this approach is its inefficiency, as the duration of each episode is significantly extended, which increases the amount of time necessary to collect experience for training. Furthermore, complex scenarios with many agents may extend the optimal trajectories in ways that are difficult to predict before training the agents.
Instead we propose two variants of a heuristic that is added as an additional penalty at the end of the episode if a given agent has not reached its goal. In the first variant, we use the **optimal** heuristic - if the agent is at a distance \(d\) from its goal, it incurs a penalty of \(2\sqrt{e_{s}e_{w}}d\), which corresponds to the energy cost it would take to reach the goal moving at the optimal speed in a straight line. In the second variant, instead of using the optimal speed, we use the **average** speed towards the goal across the agent's trajectory to estimate the remaining energy cost.
Both of these variants have their flaws. Using the optimal heuristic, in certain cases it may be beneficial for agents to only move part of the way, and then stop when they encounter a more dense situation, which requires more energy to navigate. While the average heuristic avoids this issue by directly tying the final penalty to the agent's past performance, the estimated velocity has to be capped at a minimum value (in our experiments: \(0.1\,\mathrm{m/s}\)). This avoids issues where the agent has made very little progress towards the goal (which leads to very high penalties, destabilizing the training), or even made negative progress by moving farther from the goal, leading to a negative energy cost and a positive final reward.
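Both variants can be written down explicitly (a sketch; the function name is ours, and the \(0.1\,\mathrm{m/s}\) cap matches the value used in our experiments):

```python
import numpy as np

E_S, E_W = 2.23, 1.26

def nonfinishing_penalty(dist_left, mean_speed=None, v_min=0.1):
    """End-of-episode penalty for an agent that has not reached its goal.
    mean_speed=None gives the 'optimal' variant (straight line at v*);
    otherwise the 'average' variant, with the estimated speed capped at v_min."""
    if mean_speed is None:
        return 2.0 * np.sqrt(E_S * E_W) * dist_left
    v = max(float(mean_speed), v_min)
    return (E_S + E_W * v**2) * (dist_left / v)
```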
### Alternative approaches
In existing literature, most approaches to crowd simulation via RL disregard the problems of energy efficiency, and of encouraging agents to prefer an intermediate velocity throughout their motion. The most common approach to obtain motion with a given velocity \(v^{*}\) is simply setting \(v^{*}\) as the maximum in the environment dynamics (Hu et al., 2022; Long et al., 2018; Sun et al., 2019; Xu et al., 2020). This is then combined with a guiding potential and a one-time reward for reaching the goal, and due to the incentive structure of the discounted utility paradigm common in RL, this leads to the agents mostly moving at the "optimal" (i.e. maximum) speed. The downside of this approach is that agents are unable to
move faster than that predefined limit, in contrast with humans, who tend to easily walk, when needed, a little slower or faster than their optimal comfortable speed.
Other works (Kwiatkowski et al., 2023; Lee et al., 2018; Xu and Karamouzas, 2021) include a velocity-dependent reward term that incentivizes moving at a specific speed which is below the highest allowed speed. Here we analyze and compare each of those approaches.
Lee et al. (2018) use a function they call FLOOD, defined as follows:
\[FLOOD(v,v_{min},v_{max})=|\min(v-v_{min},0)|+|\max(v-v_{max},0)| \tag{17}\]
where \(v_{min},v_{max}\) define the range of comfortable speed. When applied to the linear velocity, this term disincentivizes velocities outside of the preferred range. While this structure is not directly connected with energy optimization, it serves a similar purpose of controlling the movement speed.
A similar structure was used by Xu and Karamouzas (2021). In their reward function, they use a velocity regularization term:
\[r(v)=\exp\left(\sigma_{0}||\mathbf{v}-\mathbf{v}^{*}||\right) \tag{18}\]
where \(\sigma_{0}\) is a parameter, and \(\mathbf{v}^{*}\) is a vector pointing towards the goal, whose magnitude is equal to the optimal velocity. In our energy optimization framework, it is \(\mathbf{v}^{*}=v^{*}\hat{\mathbf{g}}=\sqrt{e_{s}/e_{w}}\,\hat{\mathbf{g}}\).
Consider the term within the exponent \(||\mathbf{v}-\mathbf{v}^{*}||=||\mathbf{v}-v^{*}\hat{\mathbf{g}}||\), and take its square. Interpreting this as a scalar product, we have \((\mathbf{v}-v^{*}\hat{\mathbf{g}})\cdot(\mathbf{v}-v^{*}\hat{\mathbf{g}})\). When we multiply the terms, substitute \(v^{*}=\sqrt{e_{s}/e_{w}}\) and use the fact that \(||\hat{\mathbf{g}}||=1\), we get \(v^{2}-2\sqrt{e_{s}/e_{w}}\,\mathbf{v}\cdot\hat{\mathbf{g}}+\frac{e_{s}}{e_{w}}=\frac{1}{e_{w}}\left(e_{s}+e_{w}v^{2}-2\sqrt{e_{s}e_{w}}\,\mathbf{v}\cdot\hat{\mathbf{g}}\right)\). This happens to be proportional to the discounting-invariant energy usage with potential. Note, however, that the final reward used by Xu and Karamouzas (2021) applies additional operations to this value (square root and exponent).
Kwiatkowski et al. (2023) use an explicit potential term, and a speed similarity term \(c_{v}|v-v^{*}|^{c_{e}}\) which does not take into account the direction of the movement. With the exponent \(c_{e}=2\), this expands to \(\tilde{c}_{p}\,\mathbf{v}\cdot\hat{\mathbf{g}}-v^{2}+2\sqrt{\frac{e_{s}}{e_{w}}}v-\frac{e_{s}}{e_{w}}\), which is equal to the energy usage with potential, but with an additional positive term proportional to the agent's speed. This results in a bicycle-like behavior where an agent prefers to artificially extend its trajectory while maintaining its optimal speed, instead of simply slowing down.
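For concreteness, the three velocity-shaping terms discussed above can be sketched as follows; coefficient names and default values are placeholders rather than the exact settings used in the cited works.

```python
import numpy as np

def flood(v, v_min, v_max):
    """Lee et al. (2018), Eq. (17): penalizes speeds outside the comfortable range [v_min, v_max]."""
    return abs(min(v - v_min, 0.0)) + abs(max(v - v_max, 0.0))

def exponential_velocity_matching(v_vec, v_star_vec, sigma_0):
    """Xu and Karamouzas (2021), Eq. (18): regularizes the velocity vector towards v*."""
    return np.exp(sigma_0 * np.linalg.norm(np.asarray(v_vec) - np.asarray(v_star_vec)))

def speed_similarity(v, v_star, c_v=1.0, c_e=2.0):
    """Kwiatkowski et al. (2023): compares speeds only, ignoring the direction of motion."""
    return c_v * abs(v - v_star) ** c_e
```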
## 5. Reward Evaluation
In this section, we empirically evaluate our proposed reward structure, and compare it to previously proposed formulations. We also perform an ablation on various parts of the reward function to investigate their importance and impact on the final results.
### Experimental setup
We performed the experimental evaluation on five crowd scenarios:
1. Circle - agents start at the perimeter of a circle, and must reach the antipodal point of the circle. We apply noise to both the start and goal positions, and add stationary obstacles in the middle of the circle.
2. Corridor - agents start at two ends of a corridor and must reach the opposite end.
3. Crossing - agents start at southern and western ends of perpendicularly crossed corridors, and must reach the northern and eastern ends, respectively.
4. Choke - agents must pass from west to east through a narrow opening in a wall.
5. Car - agents must wait for a moving obstacle to open a passage to the goal.
In each scenario, all agents are given a time limit of 200 time-steps, each lasting 0.1 s. Each agent is removed from the simulation once it touches its goal. Following the classification by Kwiatkowski et al. (2023), we use Egocentric observations with Polar Acceleration dynamics. Each agent has randomly sampled parameters of \(e_{s},e_{w}\) as defined in Section 3. These values are included in the observation, and used to compute the individual reward of each agent.
The main metric we use for evaluation is Energy+, defined as energy usage with the acceleration correction (Equation 8), plus the non-finishing penalty using the average heuristic (Section 4.4). The penalty is meant to additionally penalize agents which do not reach their goals in time, to ensure that agents cannot hack the reward function by stopping in the middle of the trajectory.
### Reward function structure
Throughout the various reward functions we evaluate in this work, we use the following components:
1. Basal energy usage \(r_{b}=-e_{s}\)
2. Velocity-based energy usage \(r_{v}=-e_{w}v^{2}\)
3. Dynamics-based energy usage \(r_{d}=-|\mathbf{v}\cdot\mathbf{a}+e_{w}\,\mathbf{v}\cdot\mathbf{v}|\)
4. Guiding potential \(r_{p}=2\sqrt{e_{s}e_{w}}\,\mathbf{v}\cdot\hat{\mathbf{g}}\)
5. Preferred speed matching \(r_{s}=|v-v^{*}|^{c_{e}}\)
6. Speeding penalty \(r_{z}=\max(v-v^{*},0)^{c_{e}}\)
7. Exponential velocity matching \(r_{m}=\exp(\sigma_{0}||\mathbf{v}-\mathbf{v}^{*}||)\)
8. Final non-finishing penalty using the optimal speed heuristic (Section 4.4) \(r_{o}\)
9. Final non-finishing penalty using the average speed heuristic (Section 4.4) \(r_{a}\)
10. One-time goal-reaching reward \(r_{g}\)
11. Constant collision penalty for each frame when an agent collides with another agent or an obstacle \(r_{c}\)
A complete reward function is a weighted sum of a subset of these terms. For terms (1)-(4) and (8)-(9), their coefficients are equal to 1 due to their physics-based formulation. Terms (1)-(3) and (5)-(7) are also multiplied by the duration of the timestep in the simulation.
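As an illustration of how the physics-based components combine into a per-step reward, the following sketch implements components (1), (3) and (4), omitting the collision and goal terms. The dynamics-based term follows our reading of component (3) above, and the guiding potential is written as a per-step potential difference, which accumulates to \(2\sqrt{e_{s}e_{w}}\) times the net displacement towards the goal; this per-step bookkeeping is an assumption made for the example.

```python
import numpy as np

def energy_phase_reward(v, a, dist_prev, dist_curr, e_s, e_w, dt):
    """Per-step reward from components (1), (3) and (4).

    v, a                 : velocity and acceleration vectors of the agent
    dist_prev, dist_curr : distance to the goal before and after the step
    """
    r_basal = -e_s * dt                                         # (1) basal energy usage
    r_dynamics = -abs(np.dot(v, a) + e_w * np.dot(v, v)) * dt   # (3) energy usage with acceleration
    # (4) guiding potential, accumulated as the progress made towards the goal
    r_potential = 2.0 * np.sqrt(e_s * e_w) * (dist_prev - dist_curr)
    return r_basal + r_dynamics + r_potential
```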
We primarily focus on evaluating the following reward functions. Note that all of these variants include components (10) and (11) (goal-reaching and collision penalty, respectively).
* (a) **Base curriculum** - a curriculum which initially has components (4), (6) (with \(c_{e}=2\)), and after 200 training steps, switches to (1), (3), (4), (9)
* (b) **Base curriculum (no acceleration)** - like (a), but using component (2) instead of (3)
* (c) **Base curriculum (no heuristic)** - like (a), but without component (9)
* (d) **Base curriculum (optimal heuristic)** - like (a), but using component (8) instead of (9)
* (e) **Energy** - components (1), (3), (4)
* (f) **Energy (no acceleration)** - components (1), (2), (4)
* (g) **Energy (no potential)** - components (1), (2)
* (h) **Speed matching** - components (4), (5), based on Kwiatkowski et al. (2023b)
* (i) **Speeding penalty** - components (4), (6), based on Lee et al. (2018)
* (j) **Exponential** - component (7), based on Xu and Karamouzas (2021)
We trained agents using each of these reward functions, and summarize the results in Section 6.
Furthermore, to investigate the importance of the potential term, we also evaluated the following reward functions:
* (A) same as reward (a), serving as a baseline
* (B) same as (A), but without component (4) (potential)
* (C) same as (B), but also without component (9) (non-finishing penalty)
* (D) same as (B), but also without component (10)
* (E) same as (D), but with component (8) instead of (9)
* (F) same as (C), but also without component (10) (goal). The second phase of the curriculum only uses components (1) and (3).
* (G) same as (F), but the discount factor is set to \(\gamma=1\) throughout the training
We describe the results of these experiments in Section 6.1.
## 6. Results
While the details differ based on the scenario, in all of them except for the Car scenario, the best-performing reward is a curriculum leading to energy optimization. In the Car scenario, the best-performing reward in terms of the Energy+ metric is directly optimizing energy from the beginning.
The benefit of the curriculum becomes apparent when we consider the progression of the training. We show the success rates in the Circle scenario as a function of the training steps in Figure 4. This scenario has a difficult coordination task embedded in it - when agents travel through the central part of the scene, they must avoid many other agents moving in all directions to prevent collisions. Each collision may lead to additional energy usage in order to resume movement, which effectively increases the collision penalty. Because of this, agents learn the navigation task much more slowly. Conversely, using a simple speeding penalty for the initial part of the training allows the agents to quickly reach a high success rate, which is then maintained after the reward is switched to energy optimization.
On the other hand, in the Car scenario, the best-performing variant is direct energy optimization. This is because agents trained with speeding penalty (as opposed to energy minimization) initially converge to attempting to quickly go in front of the car, passing before it hits them. In contrast, agents trained to minimize energy usage simply wait for the obstacle to pass, or start moving behind it. It is difficult to progressively switch from the former to the latter behavior, so the curriculum fails to produce efficient behavior.
### Is potential necessary?
In Section 4.1, we provide theoretical justification for why simply optimizing energy is likely to fail. The data in Table 1 confirms at least the local optimum argument - directly optimizing the energy usage consistently leads to the worst performance, corresponding to standing still. To empirically validate our global optimum argument, we conducted additional experiments on the Circle scenario, using reward functions (A)-(G).
Figure 4. Success rates of agents trained with certain reward functions in the Circle scenario.
Figure 5. Energy+ metric as a function of training progress with various reward functions. To maintain the performance from the first stage of the training, it is necessary to either use a potential term, or set the discount factor to \(\gamma=1\). Agents without a potential or a final heuristic converge to standing still, while other variants’ performance significantly degrades.
We show the results in terms of the Energy+ values in Figure 5. The **no potential** variant maintains a reasonable performance, but its energy efficiency drops compared to the baseline. Both variants without the final non-finishing penalty (with or without the goal reward - (C) and (F) respectively) rapidly deteriorate to a policy which stays still for the entire duration of the episode. The variants that retain some of their performance are (B) and (D), i.e. ones which still use the average heuristic penalty for not reaching the goal, however their success rate is significantly lower than the baseline. Using the optimal heuristic (E) instead of the average heuristic degrades performance significantly, leading agents to slowly approach the goal, abusing the generous reward they receive at the end of the episode. Finally, using pure energy optimization in a curriculum without a discount factor retains the same performance as the base curriculum.
This confirms that, in the absence of additional goals and with a discount factor of \(\gamma=0.99\), using energy as a reward without a guiding potential fails to converge to a valid policy, even when initialized with a goal-seeking policy trained with a different reward function. This may be mitigated by including a guiding potential, which in some cases enables effective end-to-end training using that reward function. Alternatively, if the training converges without discounting, i.e. with \(\gamma=1\), pure energy may also be a valid approach as a second (or later) stage of a curriculum. This is consistent with the analysis by Naik et al. (2019), who describe theoretical problems with the discounted utility paradigm.
### Impact of acceleration
In order to evaluate the impact of the acceleration correction to the energy estimation introduced in Section 3.1, we compare agents trained with the base curriculum, with and without the acceleration correction. We show the histogram of accelerations, collected across 8 independent training runs in the Circle scenario, in Figure 6.
The average magnitude of the acceleration across the trajectories is \(0.339\,\mathrm{m}\mathrm{/}\mathrm{s}^{2}\) with the acceleration correction in the energy estimation, and \(0.679\,\mathrm{m}\mathrm{/}\mathrm{s}^{2}\) without it. This result is statistically significant with \(p<0.01\) using the two-sample Kolmogorov-Smirnov test. This shows that including the acceleration in energy estimation successfully leads to smoother behavior. At the same time, the energy usage without the acceleration correction remains similar for both variants - \(49.77\pm 1.077\) and \(50.47\pm 1.284\) respectively,
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & **Circle** & **Crossing** & **Corridor** & **Car** & **Choke** \\ \hline
**Base curriculum** & **58.2 \(\pm\) 0.54** & 66.38 \(\pm\) 1.39 & 77.56 \(\pm\) 6.19 & 110.95 \(\pm\) 3.99 & 94.97 \(\pm\) 4.03 \\ \hline
**Base curriculum (no acceleration)** & 61.62 \(\pm\) 0.82 & 72.56 \(\pm\) 1.14 & 85.26 \(\pm\) 6.07 & 112.81 \(\pm\) 2.51 & 112.18 \(\pm\) 5.49 \\ \hline
**Base curriculum (no heuristic)** & 59.18 \(\pm\) 0.51 & **65.81 \(\pm\) 1.03** & **63.29 \(\pm\) 0.32** & 95.63 \(\pm\) 8.31 & 114.78 \(\pm\) 12.55 \\ \hline
**Base curriculum (optimal heuristic)** & 59.17 \(\pm\) 1.01 & 67.56 \(\pm\) 2.12 & 69.34 \(\pm\) 2.1 & 103.76 \(\pm\) 6.57 & **94.53 \(\pm\) 7.99** \\ \hline
**Energy** & 74.59 \(\pm\) 2.48 & 73.55 \(\pm\) 3.36 & 96.19 \(\pm\) 9.02 & **85.05 \(\pm\) 9.36** & 105.58 \(\pm\) 9.09 \\ \hline
**Energy (no acceleration)** & 67.1 \(\pm\) 1.97 & 81.32 \(\pm\) 3.26 & 102.75 \(\pm\) 5.08 & 108.32 \(\pm\) 1.26 & 106.39 \(\pm\) 5.43 \\ \hline
**Energy (no potential)** & 459.53 \(\pm\) 53.16 & 454.01 \(\pm\) 5.29 & 450.65 \(\pm\) 5.06 & 463.26 \(\pm\) 1.44 & 460.28 \(\pm\) 1.0 \\ \hline
**Speed matching** & 60.13 \(\pm\) 0.71 & 81.04 \(\pm\) 5.66 & 68.35 \(\pm\) 1.7 & 126.28 \(\pm\) 2.03 & 276.55 \(\pm\) 14.99 \\ \hline
**Speeding penalty** & 58.55 \(\pm\) 0.87 & 88.47 \(\pm\) 4.42 & 98.71 \(\pm\) 6.06 & 119.72 \(\pm\) 1.27 & 130.03 \(\pm\) 5.66 \\ \hline
**Exponential** & 63.5 \(\pm\) 1.73 & 77.18 \(\pm\) 4.33 & 85.33 \(\pm\) 4.91 & 107.66 \(\pm\) 0.7 & 129.9 \(\pm\) 9.4 \\ \hline \end{tabular}
\end{table}
Table 1. Mean value of the Energy+ metric after training in a given scenario, using a given reward function. Each value is based on 8 independent training runs. Lower is better.
Figure 6. Histogram of accelerations in the Circle scenario, trained with and without the acceleration-based term in the reward function.
indicating that the reduced acceleration does not come at the cost of otherwise less efficient movement.
## 7. Conclusions
In this work, we introduce two contributions: a new, more accurate way to estimate energy usage in the context of crowd simulation, and a novel reward function formulation for training agents navigating in an energy-efficient manner. We demonstrate a successful curriculum learning approach, where an initial speeding penalty is replaced by a simpler energy optimization formulation in later training stages. This method allows the agents to learn basic navigation first, and then focus on efficiency.
Our experiments on several crowd navigation scenarios show that training using an energy-based reward consistently outperforms other reward functions used in prior work. A critical component of this reward structure is the guiding potential, which ensures that agents navigate towards the goal, and do not simply stay still to minimize the energy usage. We empirically verify this conclusion through additional experiments that exclude this term from the reward function.
Interestingly, in some scenarios, such as the Car scenario, a curriculum approach does not provide any benefits, and agents perform optimally when trained directly with the energy optimization reward. This can be attributed to the specific nature of this scenario, where the initial policy learned by the agents with a speeding penalty makes them rush in front of the moving obstacle, a strategy that contrasts with the more efficient wait-and-follow behavior obtained under direct energy optimization. This highlights the potential need for a more scenario-specific reward formulation or a flexible curriculum training approach that can adjust itself based on the scenario complexity and nature.
Furthermore, our analysis of discount factor effects on training outcomes with a pure energy reward function aligns with the theoretical discussions raised by Naik et al. (2019). It shows that if the training is conducted without discounting, using energy as a reward without a guiding potential can converge to a valid policy when initialized with a goal-seeking policy trained with a different reward function. This discovery invites future research to explore the utility of different discounting paradigms in such energy optimization tasks and potentially other reinforcement learning applications.
While the current results are promising, several directions remain for future work. The energy estimation for motion with acceleration could be made more accurate by considering the agent's physical model more closely. Additionally, the potential function could be replaced with a more sophisticated heuristic that considers the actual shortest path to the goal, taking into account other agents and obstacles. Another possible direction could be developing an adaptive curriculum that considers the nature of the scenario or the learning progress of the agent. Finally, integrating this energy-efficient approach with social norms and richer models of crowd behavior could lead to generating even more realistic behaviors with RL.
|
2309.06834 | Linear Scaling Approach for Optical Excitations Using Maximally
Localized Wannier Functions | We present a theoretical method for calculating optical absorption spectra
based on maximally localized Wannier functions, which is suitable for large
periodic systems. For this purpose, we calculate the exciton Hamiltonian, which
determines the Bethe-Salpeter equation for the macroscopic polarization
function and optical absorption characteristics. The Wannier functions are
specific to each material and provide a minimal and therefore computationally
convenient basis. Furthermore, their strong localization greatly improves the
computational performance in two ways: first, the resulting Hamiltonian becomes
very sparse and, second, the electron-hole interaction terms can be evaluated
efficiently in real space, where large electron-hole distances are handled by a
multipole expansion. For the calculation of optical spectra we employ the
sparse exciton Hamiltonian in a time-domain approach, which scales linearly
with system size. We demonstrate the method for bulk silicon - one of the most
frequently studied benchmark systems - and envision calculating optical
properties of systems with much larger and more complex unit cells, which are
presently computationally prohibitive. | Konrad Merkel, Frank Ortmann | 2023-09-13T09:33:00Z | http://arxiv.org/abs/2309.06834v1 | # Linear Scaling Approach for Optical Excitations Using Maximally Localized Wannier Functions
###### Abstract
We present a theoretical method for calculating optical absorption spectra based on maximally localized Wannier functions, which is suitable for large periodic systems. For this purpose, we calculate the exciton Hamiltonian, which determines the Bethe-Salpeter equation for the macroscopic polarization function and optical absorption characteristics. The Wannier functions are specific to each material and provide a minimal and therefore computationally convenient basis. Furthermore, their strong localization greatly improves the computational performance in two ways: first, the resulting Hamiltonian becomes very sparse and, second, the electron-hole interaction terms can be evaluated efficiently in real space, where large electron-hole distances are handled by a multipole expansion. For the calculation of optical spectra we employ the sparse exciton Hamiltonian in a time-domain approach, which scales linearly with system size. We demonstrate the method for bulk silicon - one of the most frequently studied benchmark systems - and envision calculating optical properties of systems with much larger and more complex unit cells, which are presently computationally prohibitive.
## I Introduction
Simulations of optical properties such as UV-vis-NIR absorption or reflection spectra are crucial for designing or improving opto-electronic devices with novel materials. In this context, accurate theoretical predictions help to find suitable materials much faster and at lower cost, thus complementing and guiding experimental efforts. However, calculating optical properties is computationally demanding, which limits calculations to small systems with only a few atoms per unit cell. The reason is that optical properties are inherently affected by many-body effects. For example, the optical response of semiconductors and insulators is determined by the Coulomb interaction between electrons and holes in a material, which leads to the formation of bound electron-hole states called excitons [1; 2; 3]. For the calculation of optical properties such as UV-vis-NIR absorption spectra
it is therefore necessary to describe two-particle states of electrons and holes that are created upon optical excitation. A suitable description of such many-body effects can be derived in terms of a Bethe-Salpeter equation (BSE) [3; 4; 5; 6; 7; 8; 9; 10] for the polarization function. For almost all real materials, however, this BSE is too difficult to solve. Important simplifications can be obtained for non-spin-polarized systems, where the BSE splits into singlet and triplet parts, which can be treated independently[3]. Optical transitions, described by transition matrix elements that are diagonal in spin space, cannot induce spin-flips, and it is sufficient to calculate the singlet case only, which is already a huge simplification. Furthermore, the singlet-BSE can be rewritten into a generalized eigenvalue problem and further simplified by performing the Tamm-Dancoff approximation for electronically gapped systems [3; 11; 12]. The resulting Hamiltonian matrix is still very large and dense but can in principle be diagonalized for small system sizes using popular simulation packages[13; 14; 15; 16]. In addition, very dense \(\mathbf{k}\)-meshes are needed in order to obtain converged results, a problem that is known from the independent particle picture [17] and which becomes more severe for excitons. This has led to strategies like the use of hybrid meshes [18; 19], where specific parts of the Brillouin zone are sampled with higher precision. Despite all these works on different computational aspects, it is still challenging to include exciton effects in the calculation of optical absorption spectra, in particular for systems with many atoms per unit cell.
In this paper we present an approach based on maximally localized Wannier functions (MLWF)[20; 21], which can deal with large and/or complex systems. MLWF are directly obtained from underlying quasi-particle wave functions and represent a minimal basis set that is adapted to the specific material. Moreover, they can be obtained for specific bands, e.g., near the band gap, making the calculation independent of the number of atoms in a unit cell. Furthermore, we show that the resulting representation has important computational advantages, namely that the Hamiltonian matrix becomes very sparse, and can therefore be solved very efficiently, thus enabling optical calculations of large systems. For convenience, we use the term LSWO (linear scaling Wannier optics) for the presentation of the entire approach.
## II Theory: Optical properties and exciton Hamiltonian
### General formalism
We start from the two-particle eigenvalue problem in Tamm-Dancoff approximation[3; 11; 12],
\[\sum_{v^{\prime}c^{\prime}\mathbf{k}^{\prime}}H_{cv\mathbf{k},\,c^{\prime}v ^{\prime}\mathbf{k}^{\prime}}A^{\Lambda}_{c^{\prime}v^{\prime}\mathbf{k}^{\prime}}=E^{ \Lambda}A^{\Lambda}_{cv\mathbf{k}}, \tag{1}\]
where \(c\) and \(v\) label the conduction and valence bands, respectively, and \(A\) describes the exciton amplitude. The crystal momentum \(\mathbf{k}\) is the same for electron and hole because only vertical excitations are considered in the optical limit. The hermitian singlet-exciton Hamiltonian \(H\) is given by
\[H_{cv\mathbf{k},\,c^{\prime}v^{\prime}\mathbf{k}^{\prime}}=\left[E^{\text{cond.}}_{c}(\mathbf{k})-E^{\text{val.}}_{v}(\mathbf{k})\right]\delta_{cc^{\prime}}\delta_{vv^{\prime}}\delta_{\mathbf{k}\mathbf{k}^{\prime}}-H^{\text{SC}}_{cv\mathbf{k},\,c^{\prime}v^{\prime}\mathbf{k}^{\prime}}+2H^{\text{LFE}}_{cv\mathbf{k},\,c^{\prime}v^{\prime}\mathbf{k}^{\prime}} \tag{2}\]
and consists of effective single-particle contributions from conduction and valence band structures (first term), which are diagonal with respect to \(\mathbf{k}\), and two-particle contributions from screened electron-hole interactions \(H^{\text{SC}}\) and local field effects \(H^{\text{LFE}}\), which couple different \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\) via Coulomb interaction. While the occurrence of a screened electron-hole interaction is intuitively plausible, the local field effects (LFE) term seems less obvious and some comments are appropriate. LFE arise when the system is inhomogeneous on the microscopic scale, i.e. the microscopic dielectric function \(\epsilon_{\mathbf{G}\mathbf{G}^{\prime}}\) is not diagonal with respect to reciprocal lattice vectors \(\mathbf{G}\)[22; 23; 24]. By including LFE in the Hamiltonian, it is ensured that one can later calculate the macroscopic rather than the microscopic dielectric tensor directly from \(E^{\Lambda}\) and \(A^{\Lambda}\). Note that the LFE matrix elements are in the form of electron-hole pair exchange interactions.[25]
\(H^{\text{SC}}\) and \(H^{\text{LFE}}\) can be obtained from single-particle Bloch functions for conduction \(\phi_{c\mathbf{k}}(\mathbf{x})\) and valence states \(\phi_{v\mathbf{k}}(\mathbf{x})\). A natural choice for \(\phi_{c\mathbf{k}}(\mathbf{x})\) and \(\phi_{v\mathbf{k}}(\mathbf{x})\) are Kohn-Sham orbitals leading to
\[H^{\text{SC}}_{cv\mathbf{k},\,c^{\prime}v^{\prime}\mathbf{k}^{\prime}}= \int dx\int dx^{\prime}\phi^{*}_{c\mathbf{k}}(\mathbf{x})\phi^{*}_{v^{ \prime}\mathbf{k}^{\prime}}(\mathbf{x}^{\prime})W(\mathbf{x}-\mathbf{x}^{\prime})\phi_{v\mathbf{ k}}(\mathbf{x}^{\prime})\phi_{c^{\prime}\mathbf{k}^{\prime}}(\mathbf{x}), \tag{3}\] \[H^{\text{LFE}}_{cv\mathbf{k},\,c^{\prime}v^{\prime}\mathbf{k}^{\prime}}= \int dx\int dx^{\prime}\phi^{*}_{c\mathbf{k}}(\mathbf{x})\phi^{*}_{v^{ \prime}\mathbf{k}^{\prime}}(\mathbf{x}^{\prime})\left[\frac{1}{\Omega}\sum_{\mathbf{G} \neq 0}\tilde{V}(|\mathbf{G}|)e^{i\mathbf{G}(\mathbf{x}-\mathbf{x}^{\prime})}\right]\phi_{v \mathbf{k}}(\mathbf{x})\phi_{c^{\prime}\mathbf{k}^{\prime}}(\mathbf{x}^{\prime}), \tag{4}\]
where \(W(\mathbf{x}-\mathbf{x}^{\prime})\) is the screened Coulomb interaction and \(\tilde{V}(|\mathbf{q}+\mathbf{G}|)=\frac{4\pi e^{2}}{\epsilon_{0}}\frac{1}{|\mathbf{q}+ \mathbf{G}|^{2}}\) is the Fourier transformed bare Coulomb potential. The screening might be obtained from different approaches, including a GW calculation or using a model screening function or just a constant relative permittivity. Here, we use a model dielectric function \(\epsilon^{-1}(\mathbf{q})=1-(\eta+\alpha q^{2}/q_{\text{TF}}^{2})^{-1}\) that has been shown to yield good results for typical semiconductors [26]. The parameter \(\eta=(1-\epsilon_{\infty}^{-1})^{-1}\) with
the electronic dielectric constant \(\epsilon_{\infty}\) of the material, and \(q_{\rm TF}\) is the Thomas-Fermi wave vector. The dimensionless parameter \(\alpha=1.563\) has been shown to be rather universal [26]. The screened Coulomb potential is then obtained from \(W(\mathbf{q})=\epsilon^{-1}(\mathbf{q})\tilde{V}(\mathbf{q})\). We assume a static screening, i.e. no time dependence, which is the most frequent approach. However, we note that current efforts also investigate extensions to the frequency dependence of screening [27; 28]. By taking the Fourier transform we obtain the corresponding potential in real space,
\[W(\mathbf{x}-\mathbf{x}^{\prime}) =\frac{1}{4\pi\epsilon_{0}\epsilon_{\infty}|\mathbf{x}-\mathbf{x}^{\prime }|}+\left(1-\epsilon_{\infty}^{-1}\right)\frac{\exp\left[\frac{-q_{\rm TF}|\mathbf{ x}-\mathbf{x}^{\prime}|}{\sqrt{(1-\epsilon_{\infty}^{-1})\alpha}}\right]}{4\pi \epsilon_{0}|\mathbf{x}-\mathbf{x}^{\prime}|}\] \[=V_{\rm scr}(|\mathbf{x}-\mathbf{x}^{\prime}|)+\left(1-\epsilon_{\infty} ^{-1}\right)V_{\rm Yuk}(|\mathbf{x}-\mathbf{x}^{\prime}|), \tag{5}\]
which is the superposition of a screened Coulomb and a Yukawa potential. A more detailed derivation can be found in Section B of the appendix.
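For illustration, the model screening and the resulting \(W(\mathbf{q})\) can be sketched in a few lines; the overall charge/unit prefactor of the bare potential is deliberately left as a placeholder, since the expressions above keep it implicit.

```python
import numpy as np

def inverse_epsilon(q, eps_inf, q_tf, alpha=1.563):
    """Model inverse dielectric function eps^-1(q) = 1 - (eta + alpha q^2/q_TF^2)^-1."""
    eta = 1.0 / (1.0 - 1.0 / eps_inf)
    return 1.0 - 1.0 / (eta + alpha * (q / q_tf) ** 2)

def screened_coulomb_q(q, eps_inf, q_tf, alpha=1.563, prefactor=1.0):
    """W(q) = eps^-1(q) * V(q) with the bare potential V(q) = prefactor / q^2."""
    return inverse_epsilon(q, eps_inf, q_tf, alpha) * prefactor / q ** 2
```

For silicon, \(\epsilon_{\infty}=11.68\) is the value used later in Section IV.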
Independently of the type of screening, the numerical evaluation of Eq. (1) can be quite expensive because a very fine \(\mathbf{k}\)-mesh is usually required to obtain converged results and the Hamiltonian matrix that needs to be diagonalized is very large and, in general, dense. Furthermore, the underlying Bloch functions, which are needed for the evaluation of Eq. (3) and Eq. (4), are delocalized, which leads to additional challenges for numerical calculations. These obstacles are circumvented by transforming the above equations into a localized basis of Wannier functions, as explained below.
### Exciton-Hamiltonian in basis of MLWF
For an efficient treatment of the exciton problem in Eq. (1), it is advantageous to employ a localized basis of MLWF \(w_{m\mathbf{R}}(\mathbf{x})\). MLWF are routinely used to investigate single-particle observables [29; 21] and have been shown to be advantageous for many-body first-principles calculations, including electron-electron interactions and screening[30; 31], spin excitations[32] or quadratic optical response[33]. They are directly related to the underlying Bloch functions \(\phi_{n\mathbf{k}}(\mathbf{x})\) by the transformation,
\[w_{m\mathbf{R}}(\mathbf{x}):=\frac{1}{\sqrt{N_{\Omega}}}\sum_{n\mathbf{k}}e^{-i\mathbf{k}\mathbf{ R}}U_{mn}(\mathbf{k})\phi_{n\mathbf{k}}(\mathbf{x}), \tag{6}\]
where \(\mathbf{R}\) represents a unit cell vector and \(U(\mathbf{k})\) is a unitary matrix. It can be chosen such that the obtained Wannier functions are maximally localized, i.e. their spread \(\left[\langle\mathbf{x}^{2}\rangle-\langle\mathbf{x}\rangle^{2}\right]\) is minimal. To be more precise, \(U(\mathbf{k})\) disentangles the individual energy bands in case of band crossings or
degeneracies and fixes the \(\mathbf{k}\)-dependent gauge phase \(e^{i\theta(\mathbf{k})}\) that each Bloch function has. \(U(\mathbf{k})\) can be obtained from an optimization algorithm[20; 21] for specific groups of bands, e.g. all valence bands. The obtained MLWF are orthogonal to each other and must be real valued[20]. Owing to translational symmetry, MLWF at different unit cells \(\mathbf{R}\) have the same shape and are related to each other by \(w_{m\mathbf{R}}(\mathbf{x})=w_{m0}(\mathbf{x}-\mathbf{R})\), which is known as shift property.
For the LSWO approach it is advantageous to obtain MLWF for conduction and valence bands near the fundamental band gap separately. Therefore, the obtained MLWF keep the character of either an electron or a hole. We denote them as conduction-WF and valence-WF in the following. Even though the conduction and valence MLWF are obtained separately, they are orthogonal since valence and conduction states are non-degenerate for all \(\mathbf{k}\)-points. Hence, they represent a suitable basis for the excitonic two-particle Hilbert space.
As mentioned above, only a subspace of the two-particle Hilbert space in which electrons and holes have the same momentum is relevant for the calculation of optical properties. This means we need to transform the Bloch representation with the indexes \(cv\mathbf{k}\) into a real-space description of MLWF with indexes \(mn\mathbf{S}\). This mapping is achieved by a unitary transformation of the two particle basis using the matrix
\[F_{mn\mathbf{S},\,cv\mathbf{k}}=\frac{1}{\sqrt{N_{\Omega}}}e^{ik\mathbf{S}}U_{cm}^{*}(\bm {k})U_{nv}(\mathbf{k}), \tag{7}\]
where the \(U\) matrices are obtained from Wannier transformations of valence and conduction bands and the unit cell vector \(\mathbf{S}=\mathbf{R}-\mathbf{L}\) is the distance between electron unit cell \(\mathbf{R}\) and hole unit cell \(\mathbf{L}\). Excitonic wave functions in the optical subspace (i.e. at vanishing photon momentum \(\mathbf{q}\to 0\)) are obtained by
\[\xi_{mn\mathbf{S}}(\mathbf{x},\mathbf{x}^{\prime}) =\sum_{cv\mathbf{k}}F_{mn\mathbf{S},\,cv\mathbf{k}}\,\phi_{ck}^{*}(\mathbf{x}) \phi_{v\mathbf{k}}(\mathbf{x}^{\prime})\] \[=\frac{1}{\sqrt{N_{\Omega}}}\sum_{\mathbf{R}}w_{m\mathbf{R}}(\mathbf{x})w_{n,\mathbf{R}-\mathbf{S}}(\mathbf{x}^{\prime}). \tag{8}\]
We have used that MLWF are real and therefore the excitonic wave function fulfills \(\xi_{mn\mathbf{S}}=\xi_{mn\mathbf{S}}^{*}\). Eq. (8) is a manifestation of the convolution theorem in terms of Bloch functions and corresponding MLWF. At this point we should mention that the use of the variable \(\mathbf{R}\) (electron unit cell) as summation index by no means introduces any asymmetry in the treatment of electrons and holes. The same result can also be expressed by centre of mass and relative coordinates. The centre of mass motion is not relevant for optics due to translational symmetry of the crystal and only the relative distance \(\mathbf{S}\) between electron and hole remains in \(\xi_{mn\mathbf{S}}\).
We also use \(F_{mn\boldsymbol{S},\,cv\boldsymbol{k}}\) to transform Eq. (1) into the Wannier basis,
\[\sum_{m^{\prime}n^{\prime}\boldsymbol{S}^{\prime}}\tilde{H}_{mn \boldsymbol{S},\,m^{\prime}n^{\prime}\boldsymbol{S}^{\prime}}B^{\Lambda}_{m^{ \prime}n^{\prime}\boldsymbol{S}^{\prime}}=E^{\Lambda}B^{\Lambda}_{mn \boldsymbol{S}}, \tag{9}\]
where the exciton eigenvector is obtained as
\[B^{\Lambda}_{mn\boldsymbol{S}}=\sum_{cv\boldsymbol{k}}F_{mn \boldsymbol{S},\,cv\boldsymbol{k}}\,A^{\Lambda}_{cv\boldsymbol{k}} \tag{10}\]
and the exciton Hamiltonian becomes
\[\tilde{H}_{mn\boldsymbol{S},\,m^{\prime}n^{\prime}\boldsymbol{S}^ {\prime}}= \sum_{cv\boldsymbol{k}}\sum_{c^{\prime}v^{\prime}\boldsymbol{k}^ {\prime}}F_{mn\boldsymbol{S},\,cv\boldsymbol{k}}\,H_{cv\boldsymbol{k},\,c^{ \prime}v^{\prime}\boldsymbol{k}^{\prime}}\,F^{*}_{c^{\prime}v^{\prime} \boldsymbol{k}^{\prime},\,m^{\prime}n^{\prime}\boldsymbol{S}^{\prime}}\] \[= \tilde{H}^{\text{band}}_{mn\boldsymbol{S},\,m^{\prime}n^{\prime} \boldsymbol{S}^{\prime}}-\tilde{H}^{\text{SC}}_{mn\boldsymbol{S},\,m^{\prime} n^{\prime}\boldsymbol{S}^{\prime}}+2\tilde{H}^{\text{LFE}}_{mn\boldsymbol{S},\,m^{ \prime}n^{\prime}\boldsymbol{S}^{\prime}}. \tag{11}\]
According to Eq. (2) the single-particle band contributions are obtained as
\[\tilde{H}^{\text{band}}_{mn\boldsymbol{S},\,m^{\prime}n^{\prime} \boldsymbol{S}^{\prime}}=H^{\text{cond.}}_{m^{\prime}m}(\boldsymbol{S}- \boldsymbol{S}^{\prime})\delta_{nn^{\prime}}-H^{\text{val.}}_{nn^{\prime}}( \boldsymbol{S}-\boldsymbol{S}^{\prime})\delta_{mm^{\prime}}, \tag{12}\]
where \(H^{\text{cond.}}_{m^{\prime}m}(\boldsymbol{S}-\boldsymbol{S}^{\prime})\) and \(H^{\text{val.}}_{nn^{\prime}}(\boldsymbol{S}-\boldsymbol{S}^{\prime})\) are the single-particle Wannier Hamiltonians for conduction and valence bands, respectively. They are directly accessible from the Wannier transformation of the first-principles electronic structure. [20; 21]
The screened electron-hole interaction can be obtained by virtue of Eq. (8) and by applying the shift property of MLWF (see appendix),
\[\tilde{H}^{\text{SC}}_{mn\boldsymbol{S},\,m^{\prime}n^{\prime} \boldsymbol{S}^{\prime}} =\int dx\int dx^{\prime}\,\xi_{mn\boldsymbol{S}}(\boldsymbol{x}, \boldsymbol{x}^{\prime})W(\boldsymbol{x}-\boldsymbol{x}^{\prime})\xi_{m^{ \prime}n^{\prime}\boldsymbol{S}^{\prime}}(\boldsymbol{x},\boldsymbol{x}^{ \prime})\] \[=\sum_{\boldsymbol{A}}\tilde{W}^{mm^{\prime}}_{nn^{\prime}}( \boldsymbol{A},\boldsymbol{S},\boldsymbol{S}^{\prime}), \tag{13}\]
with the general Coulomb matrix elements
\[\tilde{W}^{mm^{\prime}}_{nn^{\prime}}(\boldsymbol{A},\boldsymbol{S },\boldsymbol{S}^{\prime}) =\int dx\int dx^{\prime}\,w_{m0}(\boldsymbol{x})w_{m^{\prime} \boldsymbol{A}}(\boldsymbol{x})W(\boldsymbol{x}-\boldsymbol{x}^{\prime})w_{n^ {\prime},\boldsymbol{A}-\boldsymbol{S}^{\prime}}(\boldsymbol{x}^{\prime})w_{n,-\boldsymbol{S}}(\boldsymbol{x}^{\prime})\] \[=\tilde{W}^{m^{\prime}m}_{n^{\prime}n}(-\boldsymbol{A}, \boldsymbol{S}^{\prime},\boldsymbol{S}), \tag{14}\]
which depend on three different unit cell vectors (corresponding to three \(\boldsymbol{k}\)-vectors in reciprocal space). \(\tilde{H}^{\text{SC}}_{mn\boldsymbol{S},\,m^{\prime}n^{\prime}\boldsymbol{S} ^{\prime}}\) only depends on two unit cell vectors because electrons and holes have the same momentum. For a more intuitive and physically comprehensible description, we introduce the unit cell vectors \(\boldsymbol{R}_{c}\), \(\boldsymbol{R}_{v}\), and \(\boldsymbol{R}_{D}\), which correspond to the relative shifts between conduction WFs, between valence WFs, and to the electron-hole distance, respectively. We substitute \(\boldsymbol{A}=\boldsymbol{R}_{c}\), \(\boldsymbol{S}=-\boldsymbol{R}_{D}\) and \(\boldsymbol{S}^{\prime}=-\boldsymbol{R}_{D}+\boldsymbol{R}_{c}-\boldsymbol{R}_ {v}\) in Eq. (14) and use the shift property of MLWF to obtain
\[\tilde{W}^{mm^{\prime}}_{nn^{\prime}}(\boldsymbol{A}=\boldsymbol{R}_{c},\boldsymbol{S}=-\boldsymbol{R}_{D},\boldsymbol{S}^{\prime}=-\boldsymbol{R}_{D}+\boldsymbol{R}_{c}-\boldsymbol{R}_{v})=W^{mm^{\prime}}_{nn^{\prime}}(\boldsymbol{R}_{c},\boldsymbol{R}_{v},\boldsymbol{R}_{D})=\int d^{3}x\int d^{3}x^{\prime}\rho_{mm^{\prime}\boldsymbol{R}_{c}}(\boldsymbol{x})W(\boldsymbol{x}-\boldsymbol{x}^{\prime}-\boldsymbol{R}_{D})\rho_{nn^{\prime}\boldsymbol{R}_{v}}(\boldsymbol{x}^{\prime}), \tag{15}\]
where \(\rho_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})=w_{m0}(\mathbf{x})w_{m^{\prime}\mathbf{R}_{c}}(\mathbf{x})\) and \(\rho_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x})=w_{n0}(\mathbf{x})w_{n^{\prime}\mathbf{R}_{v}}(\mathbf{x})\) are (overlap) densities of two electrons and (overlap) densities of two holes, respectively.
Before we come to the integration strategy in Sect. III, we comment on the distance dependence of these matrix elements. Since the overlap between two different MLWF is exponentially suppressed with increasing distance, it is clear that the overlap densities vanish for large values of \(\mathbf{R}_{c}\) and \(\mathbf{R}_{v}\). Therefore, the corresponding Coulomb integrals Eq. (15) also vanish rapidly for large displacements \(\mathbf{R}_{c}\) or \(\mathbf{R}_{v}\). This substantially reduces the number of calculations required and constitutes a significant advantage over a plane wave basis set. In contrast, \(\mathbf{R}_{D}\) is associated with long-range Coulomb interactions, which always yields contributions that decay very slowly. Substituting back the original variables \(\mathbf{S}\), \(\mathbf{S}^{\prime}\), and \(\mathbf{A}\), we see that finite Coulomb integrals contribute only to matrix elements \(\tilde{H}^{\rm SC}_{mn\mathbf{S},\,m^{\prime}n^{\prime}\mathbf{S}^{\prime}}\) near the diagonal and \(\mathbf{R}_{D}\) corresponds to the position along the diagonal. The matrix representation is therefore very sparse. This is a great advantage for numerical computations, since diagonalization or alternative treatments can be performed very efficiently and with low memory requirements. It is thus not surprising that other localized basis sets leading to sparse representations of Coulomb interactions have shown large performance advantages for GW calculations in the past. [34; 35] The diagonal elements for which \(m=m^{\prime}\), \(n=n^{\prime}\), and \(\mathbf{R}_{c}=\mathbf{R}_{v}=0\) (or alternatively \(\mathbf{A}=0\) and \(\mathbf{S}=\mathbf{S}^{\prime}=-\mathbf{R}_{D}\)) are expected to yield the largest contributions to \(\tilde{H}^{\rm SC}\). They represent interactions of classical charge densities with total charge of one, because MLWF are normalized. The non-diagonal elements of \(\tilde{H}^{\rm SC}\) correspond to interactions where at least one density is an overlap density, i.e. \(\rho_{mm^{\prime}\mathbf{R}_{c}}\) or \(\rho_{nn^{\prime}\mathbf{R}_{v}}\) contains two different MLWF. Such overlap densities have zero total charge because MLWF are orthogonal. We therefore expect the non-diagonal elements to be significantly smaller. Finally, contributions from LFE, Eq. (4), are calculated in analogy to Eq. (13),
\[\tilde{H}^{\rm LFE}_{mn\mathbf{S},\,m^{\prime}n^{\prime}\mathbf{S}^{\prime}} =\int dx\int dx^{\prime}\,\xi_{mn\mathbf{S}}(\mathbf{x},\mathbf{x})\bar{V}( \mathbf{x}-\mathbf{x}^{\prime})\xi_{m^{\prime}n^{\prime}\mathbf{S}^{\prime}}(\mathbf{x}^{ \prime},\mathbf{x}^{\prime})\] \[=\int dx\int dx^{\prime}\,w_{m0}(\mathbf{x})w_{n,-\mathbf{S}}(\mathbf{x}) \left[\sum_{\mathbf{G}\neq 0}\tilde{V}(|\mathbf{G}|)e^{i\mathbf{G}(\mathbf{x}-\mathbf{x}^{ \prime})}\right]w_{m^{\prime}0}(\mathbf{x}^{\prime})w_{n^{\prime},-\mathbf{S}^{\prime} }(\mathbf{x}^{\prime}). \tag{16}\]
This matrix is, like \(\tilde{H}^{\rm SC}\), very sparse since the overlap between MLWF is exponentially suppressed with increasing distance. Consequently, only matrix elements with small values \(\mathbf{S}\) and \(\mathbf{S}^{\prime}\), where electron and hole have closest distance, are affected by LFE. In the limiting case of strongly localized Wannier functions only matrix elements with \(\mathbf{S}=\mathbf{S}^{\prime}=0\) would contribute. We thus have a complete description of the singlet exciton Hamiltonian in the Wannier basis Eq. (9) that can be used to calculate optical properties.
### Optical properties
The macroscopic dielectric function \(\epsilon^{\rm M}(\hat{\mathbf{q}},\omega)\) could be calculated within the original Bloch representation directly from the solutions of Eq. (1) and the optical transition matrix elements \(M_{cv\mathbf{k}}(\hat{\mathbf{q}})\) that can be obtained from conduction and valence Bloch functions,
\[M_{cv\mathbf{k}}(\hat{\mathbf{q}})=\lim_{\mathbf{q}\to 0}\frac{e}{\sqrt{4\pi\epsilon_{0}}|\mathbf{q}|i} \int d^{3}x\phi^{*}_{c\mathbf{k}}(\mathbf{x})e^{i\mathbf{q}\mathbf{x}}\phi_{v\mathbf{k}}(\mathbf{x}). \tag{17}\]
The macroscopic dielectric function is given as[3]
\[\epsilon^{\rm M}(\hat{\mathbf{q}},\omega)=1+\frac{4\pi}{\Omega}\sum_{\Lambda}\left| \sum_{cv\mathbf{k}}M^{*}_{cv\mathbf{k}}(\hat{\mathbf{q}})A^{\Lambda}_{cv\mathbf{k}}\right|^{2} \left[\frac{1}{E^{\Lambda}-\hbar(\omega+i\eta)}+\frac{1}{E^{\Lambda}+\hbar( \omega+i\eta)}\right]. \tag{18}\]
Like in the previous section we transform these expressions into the basis of MLWF by utilizing the matrix \(F_{mn\mathbf{S},\,cv\mathbf{k}}\) to calculate \(\epsilon^{\rm M}(\hat{\mathbf{q}},\omega)\) directly from \(B^{\Lambda}_{mn\mathbf{S}}\) and corresponding transition matrix elements. The transformation is applied to the scalar product in Eq. (18),
\[\sum_{cv\mathbf{k}}M^{*}_{cv\mathbf{k}}(\hat{\mathbf{q}})A^{\Lambda}_{cv\mathbf{k}} =\sum_{mn\mathbf{S}}\sum_{c^{\prime}v^{\prime}\mathbf{k}^{\prime}}M^{*}_{ c^{\prime}v^{\prime}\mathbf{k}^{\prime}}(\hat{\mathbf{q}})F^{*}_{c^{\prime}v^{\prime}\mathbf{k}^{ \prime},\,mn\mathbf{S}}\sum_{cv\mathbf{k}}F_{mn\mathbf{S},\,cv\mathbf{k}}\,A^{\Lambda}_{cv\mathbf{ k}}\] \[=\sum_{mn\mathbf{S}}\tilde{M}^{*}_{mn\mathbf{S}}(\hat{\mathbf{q}})B^{\Lambda} _{mn\mathbf{S}}, \tag{19}\]
where \(\tilde{M}^{*}_{mn\mathbf{S}}(\hat{\mathbf{q}})=\sum_{c^{\prime}v^{\prime}\mathbf{k}^{\prime }}M^{*}_{c^{\prime}v^{\prime}\mathbf{k}^{\prime}}(\hat{\mathbf{q}})F^{*}_{c^{\prime}v^ {\prime}\mathbf{k}^{\prime},\,mn\mathbf{S}}\) was defined in the last step. Using Eq. (8) we can rewrite the transition matrix elements in terms of MLWF,
\[\tilde{M}^{*}_{mn\mathbf{S}}(\hat{\mathbf{q}})=\lim_{\mathbf{q}\to 0}\frac{ie}{\sqrt{4\pi \epsilon_{0}}|\mathbf{q}|}\frac{1}{\sqrt{N_{\Omega}}}\sum_{\mathbf{R}}\int d^{3}x\,w_{ m0}(\mathbf{x})e^{-i\mathbf{q}(\mathbf{x}+\mathbf{R})}w_{n,-\mathbf{S}}(\mathbf{x}). \tag{20}\]
Taylor expanding the exponential up to linear order (higher orders are irrelevant in the optical limit \(q\to 0\)) [36; 37] we get
\[\tilde{M}^{*}_{mn\mathbf{S}}(\hat{\mathbf{q}})=\frac{e\sqrt{N_{\Omega}}}{\sqrt{4\pi \epsilon_{0}}}\hat{\mathbf{q}}\int d^{3}x\,w_{m0}(\mathbf{x})\mathbf{x}w_{n,-\mathbf{S}}(\mathbf{x }). \tag{21}\]
From Eq. (21) we can see that the transition matrix elements are proportional to transition dipole moments, i.e. dipole moments of electron-hole overlap densities, which nicely connects to expectations from finite systems. The evaluation of transition dipole moments does not cause any problems (like one would have with delocalized Bloch functions) since Wannier functions are localized in real space. Finally, the macroscopic dielectric function becomes
\[\epsilon^{\rm M}(\hat{\mathbf{q}},\omega)=1+\frac{4\pi}{\Omega}\sum_{\Lambda}\left| \sum_{mn\mathbf{S}}\tilde{M}^{*}_{mn\mathbf{S}}(\hat{\mathbf{q}})B^{\Lambda}_{mn\mathbf{S}} \right|^{2}\left[\frac{1}{E^{\Lambda}-\hbar(\omega+i\eta)}+\frac{1}{E^{\Lambda }+\hbar(\omega+i\eta)}\right]. \tag{22}\]
With Eqs. (22),(21) and (11) the entire problem is formulated in the Wannier basis. The remaining task is to evaluate all required matrix elements for the screened Coulomb interaction and LFE in this basis, which will be discussed below.
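Before turning to these matrix elements, we note that the dipole integral entering Eq. (21) is straightforward to evaluate once the MLWF are sampled on a real-space grid; a minimal sketch (prefactors and the projection onto \(\hat{\mathbf{q}}\) are omitted, and argument names are placeholders) reads:

```python
import numpy as np

def transition_dipole(w_m0, w_n_minus_S, grid_xyz, dV):
    """Dipole moment of the electron-hole overlap density w_{m0}(x) x w_{n,-S}(x), cf. Eq. (21).

    w_m0, w_n_minus_S : real-valued MLWF on the same real-space grid, shape (nx, ny, nz)
    grid_xyz          : Cartesian coordinates of the grid points, shape (nx, ny, nz, 3)
    dV                : volume element of the grid
    """
    overlap = (w_m0 * w_n_minus_S)[..., None]               # shape (nx, ny, nz, 1)
    return (overlap * grid_xyz).sum(axis=(0, 1, 2)) * dV    # Cartesian 3-vector
```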
## III Numerical evaluation of two-particle matrix elements and macroscopic dielectric function
### Evaluating Coulomb matrix elements in the basis of MLWF
For the numerical evaluation of the screened Coulomb interaction we insert the model-screened potential Eq. (5) into Eq. (15) and evaluate the Coulomb and Yukawa potentials separately,
\[W^{mm\prime}_{nn^{\prime}}(\mathbf{R}_{c},\mathbf{R}_{v},\mathbf{R}_{D})= \int d^{3}x\int d^{3}x^{\prime}\rho_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x} )V_{\rm scr}(|\mathbf{x}-\mathbf{x}^{\prime}-\mathbf{R}_{D}|)\rho_{nn^{\prime}\mathbf{R}_{v}}( \mathbf{x}^{\prime})\] \[+\left(1-\epsilon_{\infty}^{-1}\right)\int d^{3}x\int d^{3}x^{ \prime}\rho_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})V_{\rm Yuk}(|\mathbf{x}-\mathbf{x}^{\prime }-\mathbf{R}_{D}|)\rho_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x}^{\prime}). \tag{23}\]
While the integral with the Yukawa potential (second term of Eq. (23)) can be solved efficiently in reciprocal space, the numerical evaluation of the Coulomb integral (first term of Eq. (23)) is quite challenging, because the potential diverges in both real and reciprocal space for \(\mathbf{x}\to 0\) and \(\mathbf{q}\to 0\). However, the integral is nevertheless finite as can be shown on general grounds. The problem is further complicated by the fact that MLWF are typically obtained numerically from DFT or GW calculations and analytic forms are usually unknown. Strategies to circumvent such issues include expansions of MLWF using spherical harmonics and appropriate radial functions [38; 39], where the Coulomb integrals can be rewritten and partly solved analytically, or attempts to expand MLWF around the origin in \(\mathbf{k}\)-space by a suitable Taylor expansion. While the latter is numerically inconvenient, the expansion in spherical harmonics can provide satisfactory results for simple systems [38], especially when the Wannier functions are expressed in the form of atomic orbitals and only a small number of expansion coefficients are needed. This, however, may not be the case, which means that in general an extremely large set of spherical harmonics becomes necessary, especially when satellite structures far away from the charge centre exist. Alternatively, one might consider choosing a different system of functions where the Coulomb integrals can be solved analytically. A well-known example is Gaussian basis functions, which are routinely used in quantum chemistry codes[40]. However, an expansion of MLWF in terms of such basis functions is usually very complicated and requires sophisticated optimization and fitting algorithms. Despite some proof of principle studies [41], there are no commonly available tools to perform such an elaborate task. Here, we want to use a numerical method that yields satisfactory results for all types of MLWF and is easily applicable. This method follows the ab-initio philosophy in the sense that we avoid any fitting.
The numerical evaluation of the first term of Eq. (23) is performed in multiple steps. We start by introducing auxiliary densities \(\rho^{\rm aux}_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})\) and \(\rho^{\rm aux}_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x})\) for each \(\rho_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})\) and \(\rho_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x})\)
respectively. These auxiliary densities are Gaussian functions with the constraint that they have the same charge as the corresponding overlap density, i.e.,
\[\int d^{3}x\,\rho^{\text{aux}}_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})=\int d^{3}x\,\rho _{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x}). \tag{24}\]
The centre and variance of each Gaussian function is in general not important, albeit specific choices might be numerically favourable. We continue by adding and subtracting auxiliary densities for each integral and separate four different terms,
\[\int d^{3}x\int d^{3}x^{\prime}\,\big{[}\rho_{mm^{\prime}\mathbf{R}_{c }}(\mathbf{x})-\rho^{\text{aux}}_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})+\rho^{\text{aux}}_ {mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})\big{]}\times\] \[\times V_{\text{scr}}(\mathbf{x}-\mathbf{x}^{\prime}-\mathbf{R}_{D})\,\big{[} \rho_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x}^{\prime})-\rho^{\text{aux}}_{nn^{\prime} \mathbf{R}_{v}}(\mathbf{x}^{\prime})+\rho^{\text{aux}}_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x }^{\prime})\big{]}\] \[= I_{1}+I_{2}+I_{3}+I_{4}, \tag{25}\]
where the individual contributions are given by,
\[I_{1}= \int d^{3}x\int d^{3}x^{\prime}[\rho_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{ x})-\rho^{\text{aux}}_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})]V_{\text{scr}}(\mathbf{x}- \mathbf{x}^{\prime}-\mathbf{R}_{D})[\rho_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x}^{\prime})- \rho^{\text{aux}}_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x}^{\prime})],\] \[I_{2}= \int d^{3}x\int d^{3}x^{\prime}[\rho_{mm^{\prime}\mathbf{R}_{c}}(\bm {x})-\rho^{\text{aux}}_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})]V_{\text{scr}}(\mathbf{x}- \mathbf{x}^{\prime}-\mathbf{R}_{D})\rho^{\text{aux}}_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x}^ {\prime}),\] \[I_{3}= \int d^{3}x\int d^{3}x^{\prime}\rho^{\text{aux}}_{mm^{\prime}\bm {R}_{c}}(\mathbf{x})V_{\text{scr}}(\mathbf{x}-\mathbf{x}^{\prime}-\mathbf{R}_{D})[\rho_{nn^{ \prime}\mathbf{R}_{v}}(\mathbf{x}^{\prime})-\rho^{\text{aux}}_{nn^{\prime}\mathbf{R}_{v}}( \mathbf{x}^{\prime})],\] \[I_{4}= \int d^{3}x\int d^{3}x^{\prime}\rho^{\text{aux}}_{mm^{\prime}\bm {R}_{c}}(\mathbf{x})V_{\text{scr}}(\mathbf{x}-\mathbf{x}^{\prime}-\mathbf{R}_{D})\rho^{\text {aux}}_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x}^{\prime}). \tag{26}\]
The last term \(I_{4}\) can be evaluated analytically because only Gaussian functions are involved. For instance, choosing radially symmetric Gaussians \(\rho^{\text{aux}}_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})=\big(\frac{\alpha}{\pi}\big)^{3/2}\,e^{-\alpha|\mathbf{x}-\mathbf{B}|^{2}}\) and \(\rho^{\text{aux}}_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x})=\big(\frac{\gamma}{\pi}\big)^{3/2}\,e^{-\gamma|\mathbf{x}-\mathbf{C}|^{2}}\), one obtains[40],
\[I_{4}= \frac{1}{\epsilon_{0}\epsilon_{\infty}|\mathbf{B}-\mathbf{C}-\mathbf{R}_{D}|} \,\operatorname{erf}\left[\sqrt{\frac{\alpha\gamma}{\alpha+\gamma}}|\mathbf{B}- \mathbf{C}-\mathbf{R}_{D}|\right]. \tag{27}\]
The remaining three terms \(I_{1}\), \(I_{2}\) and \(I_{3}\) are solved in Fourier space. This is demonstrated for \(I_{1}\), which, in Fourier space reads
\[I_{1}=\frac{1}{(2\pi)^{3}}\int d^{3}q\,e^{iq\mathbf{R}_{D}}f_{mm^{\prime}\mathbf{R}_{c }}(\mathbf{q})\tilde{V}_{\text{scr}}(\mathbf{q})f_{nn^{\prime}\mathbf{R}_{v}}(-\mathbf{q}), \tag{28}\]
where the Fourier transformed quantities are
\[f_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{q}) =\int d^{3}x\,e^{-iq\mathbf{x}}[\rho_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})- \rho^{\text{aux}}_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})], \tag{29}\] \[f_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{q}) =\int d^{3}x\,e^{-iq\mathbf{x}}[\rho_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x})- \rho^{\text{aux}}_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x})] \tag{30}\]
and the Fourier transformed potential \(\tilde{V}_{\text{scr}}(\mathbf{q})\propto q^{-2}\). The divergence at \(\mathbf{q}\to 0\) is integrable, i.e. the integral is finite for all finite regions including volumes around the origin.
Since the auxiliary densities have the same charge as the corresponding overlap densities (cf. Eq. (24)), it becomes clear that \(f_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{q}=0)=f_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{q}=0)=0\) by construction. For a discrete numerical evaluation of the integral Eq. (28), this means that the \(\mathbf{q}=0\) term can be omitted, since it must be zero (finite value times zero). The only remaining task is to perform the \(\mathbf{q}\)-sum for all \(\mathbf{q}\neq 0\), where no problems occur, and we obtain
\[I_{1}\simeq\frac{\Delta V_{q}}{N_{\rm grid}}\sum_{\mathbf{q}\neq 0}\,e^{i\mathbf{q}\mathbf{R} _{D}}f_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{q})\tilde{V}_{\rm scr}(\mathbf{q})f_{nn^{\prime }\mathbf{R}_{v}}(-\mathbf{q}). \tag{31}\]
Integrals \(I_{2}\) and \(I_{3}\) are solved in full analogy. After summation and back substitution we obtain the desired (screened) Coulomb matrix elements Eq. (14).
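A condensed sketch of this scheme for a single pair of overlap densities is given below; grid conventions, unit prefactors and the choice of the Gaussian widths are assumptions made purely for illustration, and \(I_{2}\), \(I_{3}\) would follow the same pattern as \(I_{1}\).

```python
import numpy as np
from scipy.special import erf

def gaussian_density(grid_xyz, center, width):
    """Normalized Gaussian (unit charge) used as auxiliary density."""
    r2 = np.sum((grid_xyz - center) ** 2, axis=-1)
    return (width / np.pi) ** 1.5 * np.exp(-width * r2)

def i4_analytic(B, C, R_D, alpha, gamma, prefactor=1.0):
    """Gaussian-Gaussian term I4, cf. Eq. (27); the screening/unit prefactor is left to the caller."""
    dist = np.linalg.norm(np.asarray(B) - np.asarray(C) - np.asarray(R_D))
    return prefactor * erf(np.sqrt(alpha * gamma / (alpha + gamma)) * dist) / dist

def i1_reciprocal(delta_rho_1, delta_rho_2, V_scr_q, q_vectors, R_D, dV, supercell_volume):
    """Charge-neutral term I1, cf. Eqs. (28)-(31); the q = 0 point is dropped because the
    Fourier transforms of the charge-compensated densities vanish there by construction."""
    f1 = np.fft.fftn(delta_rho_1) * dV                 # discrete version of Eq. (29)
    f2 = np.fft.fftn(delta_rho_2) * dV                 # discrete version of Eq. (30)
    phase = np.exp(1j * np.tensordot(q_vectors, np.asarray(R_D), axes=([-1], [0])))
    summand = phase * f1 * V_scr_q * np.conj(f2)       # f2(-q) = conj(f2(q)) for real densities
    summand.flat[0] = 0.0                              # omit the divergent q = 0 term
    return summand.sum().real / supercell_volume       # prefactor from the discretized q-integral
```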
### Evaluating LFE in the basis of MLWF
The numerical calculation of LFE matrix elements in Eq. (16) is much easier than that of the screened Coulomb interaction because the potential involved is not divergent (the \(\mathbf{G}=0\) term is excluded). The potential in Fourier space is obtained as,
\[\tilde{\tilde{V}}(\mathbf{q})=\int d^{3}x\,e^{-i\mathbf{q}\mathbf{x}}\sum_{\mathbf{G}\neq 0} \tilde{V}(|\mathbf{G}|)e^{i\mathbf{G}\mathbf{x}}=(2\pi)^{3}\sum_{\mathbf{G}\neq 0}\tilde{V}(| \mathbf{G}|)\delta(\mathbf{q}-\mathbf{G}). \tag{32}\]
The overlap densities are now between conduction and valence WF and are known as transition densities. We denote their Fourier transform as
\[f_{mn-\mathbf{S}}(\mathbf{q})=\int d^{3}x\,e^{-i\mathbf{q}\mathbf{x}}\rho_{mn-\mathbf{S}}(\mathbf{x}). \tag{33}\]
Finally, Eq. (16) becomes
\[\tilde{H}^{\rm LFE}_{mn\mathbf{S},\,m^{\prime}n^{\prime}\mathbf{S}^{\prime}}=\sum_{ \mathbf{G}\neq 0}f_{mn-\mathbf{S}}(\mathbf{G})\tilde{V}(|\mathbf{G}|)f_{m^{\prime}n^{ \prime}-\mathbf{S}^{\prime}}(-\mathbf{G}), \tag{34}\]
which can be easily evaluated numerically with a Fast Fourier algorithm.
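As a sketch, the evaluation of Eq. (34) then amounts to two fast Fourier transforms and a sum over reciprocal lattice vectors; normalization constants and unit conventions are left to the caller.

```python
import numpy as np

def lfe_matrix_element(rho_mnS, rho_mpnpSp, V_bare_G, dV):
    """Local-field-effect matrix element, cf. Eqs. (33) and (34).

    rho_mnS, rho_mpnpSp : transition densities w_{m0}(x) w_{n,-S}(x) and w_{m'0}(x) w_{n',-S'}(x)
                          on the real-space grid of one unit cell
    V_bare_G            : bare Coulomb potential on the corresponding G-grid,
                          with the G = 0 entry set to zero
    """
    f1 = np.fft.fftn(rho_mnS) * dV        # f_{mn-S}(G)
    f2 = np.fft.fftn(rho_mpnpSp) * dV     # f_{m'n'-S'}(G)
    return np.sum(f1 * V_bare_G * np.conj(f2)).real
```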
### Time domain approach for calculating the macroscopic dielectric function
We now have everything at hand to construct the exciton Hamiltonian in the basis of MLWF. The remaining task would be to solve the eigenvalue equation and use Eq. (22) to obtain the macroscopic dielectric function \(\epsilon^{\rm M}\). Numerically this could be done by using a sparse matrix diagonalization algorithm. However, we want to use a time-domain approach[42] which allows us to calculate \(\epsilon^{\rm M}\) without a formal high-scaling diagonalization or restrictions to a small number of
eigenvalues. Therefore, we rewrite Eq. (22) in the time domain by taking a Fourier transform. We start with the dielectric function in the Cartesian direction \(\hat{\mathbf{e}}_{j}\),
\[\epsilon^{\rm M}_{jj}(\omega)=1+\frac{4\pi}{\Omega}\sum_{\Lambda} \left|\sum_{mn\mathbf{S}}\tilde{M}^{*}_{mn\mathbf{S}}(\hat{\mathbf{e}}_{j})B^{\Lambda}_{mn \mathbf{S}}\right|^{2}\left[\frac{1}{E^{\Lambda}-\hbar(\omega+i\eta)}+\frac{1}{E^{ \Lambda}+\hbar(\omega+i\eta)}\right] \tag{35}\]
This is equivalent to a time-domain formulation [42],
\[\epsilon^{\rm M}_{jj}(\omega)=1-\frac{8\pi}{\Omega\hbar}\int_{0}^{ \infty}dt\,e^{i(\omega+i\eta)t}\,{\rm Im}\left[\sum_{mn\mathbf{S}}\tilde{M}^{*}_{ mn\mathbf{S}}(\hat{\mathbf{e}}_{j})\psi^{(j)}_{mn\mathbf{S}}(t)\right], \tag{36}\]
where the time-initial state is given by \(\psi^{(j)}_{mn\mathbf{S}}(t=0)=\tilde{M}^{*}_{mn\mathbf{S}}(\hat{\mathbf{e}}_{j})\) and is propagated with the exciton Hamiltonian,
\[\psi^{(j)}_{mn\mathbf{S}}(t)=\sum_{m^{\prime}n^{\prime}\mathbf{S}^{\prime}}\left( \exp\left[\frac{-it}{\hbar}\tilde{H}\right]\right)_{mn\mathbf{S},\,m^{\prime}n^{ \prime}\mathbf{S}^{\prime}}\psi^{(j)}_{m^{\prime}n^{\prime}\mathbf{S}^{\prime}}(t=0). \tag{37}\]
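A compact sketch of this time-domain evaluation for a sparse Hamiltonian is shown below; for brevity it propagates with SciPy's `expm_multiply` rather than the Chebyshev expansion used in Section IV, and all quantities are assumed to be given in mutually consistent units.

```python
import numpy as np
from scipy.sparse.linalg import expm_multiply

def dielectric_function_td(H_sparse, M_conj, omegas, volume, hbar, eta, t_max, n_steps):
    """Macroscopic dielectric function from Eqs. (36) and (37).

    H_sparse : sparse exciton Hamiltonian in the MLWF basis
    M_conj   : vector of transition matrix elements, i.e. the initial state psi(0)
    """
    dt = t_max / n_steps
    times = np.arange(n_steps + 1) * dt
    psi = M_conj.astype(complex)
    corr = np.empty(n_steps + 1, dtype=complex)
    corr[0] = M_conj @ psi
    for i in range(1, n_steps + 1):
        psi = expm_multiply((-1j * dt / hbar) * H_sparse, psi)   # one step of Eq. (37)
        corr[i] = M_conj @ psi                                   # autocorrelation entering Eq. (36)
    eps = np.empty(len(omegas), dtype=complex)
    for j, w in enumerate(omegas):
        kernel = np.exp(1j * (w + 1j * eta) * times)             # damped Fourier kernel
        eps[j] = 1.0 - 8.0 * np.pi / (volume * hbar) * np.trapz(kernel * corr.imag, times)
    return eps
```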
## IV Computational details
To demonstrate our approach for the example of silicon crystals, which have been frequently studied experimentally and theoretically in the past,[11; 42; 43; 13] we proceed in multiple steps. First, electronic states are obtained using density functional theory (DFT) with the PBE exchange-correlation functional and PAW pseudo potentials[44; 45] as implemented in the vasp code[46; 47]. We use an energy cut-off of \(350\,\)eV and an \(11\times 11\times 11\) Monkhorst-Pack \(\mathbf{k}\)-point grid for converged DFT calculations. From these results, we calculate four MLWF which correspond to all valence bands and six MLWF for the lowest-energy conduction bands separately by utilizing the wannier90 code[48]. It was carefully checked that all obtained MLWF are real-valued and reproduce the DFT band structure very accurately. The obtained Wannier functions are very localized with maximal spreads of \(2.18\,\)Å\({}^{2}\) for valence WF and \(5.25\,\)Å\({}^{2}\) for conduction WF. Since the underlying DFT-GGA calculations do not provide the correct band gap, we apply a scissors shift of \(0.9\,\)eV which is similar to previously calculated quasi-particle shifts[3]. The Wannier Hamiltonians for valence and conduction bands provide all single-particle contributions of the exciton Hamiltonian Eq. (12). The two-particle integrals entering \(\tilde{H}^{\rm SC}\) and \(\tilde{H}^{\rm LFE}\) are evaluated on a regular grid in Fourier space as described in Sections III.1 and III.2, which captures a supercell of \(11\times 11\times 11\) primitive unit cells. The grid is determined by the Fourier space grid of the vasp calculation. (Overlap-)densities and auxiliary functions are also constructed on this real space grid and Fourier transformations (c.f. Eqs. (29), (30) and (33)) are performed using the FFTW library[49]. For the screening model introduced in Sect. II.1 we use \(\epsilon_{\infty}=11.68\) for Si. From
the obtained single-particle and two-particle contributions we construct the exciton Hamiltonian Eq. (11) in a sparse matrix format where \(\mathbf{S},\mathbf{S}^{\prime}\) run over 61 lattice vectors in each direction for converged results. To test the capability of the LSWO approach we also performed calculations with 111 lattice vectors in each direction, which is equivalent to 1.37 million \(\mathbf{k}\)-points.
The time evolution for the calculation of \(\epsilon^{\mathrm{M}}\) (c.f. Section III.3) is performed by a Chebyshev polynomial expansion [50; 51] of the time evolution operator, which has proven to be very accurate and efficient in the past[52; 53; 54]. We set the maximum time to 14.77 ps, use 2000 time steps and 16 polynomials. When calculating the spectrum we assumed a broadening of \(\eta=65\,\mathrm{meV}\). Fig. S-2 shows the time-autocorrelation function which enters Eq. (36).
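A generic sketch of such a Chebyshev step propagator is shown below (illustrative only: the toy Hamiltonian and the spectral bounds are assumptions, while the expansion order of 16 polynomials mirrors the value quoted above). Each step requires only sparse matrix-vector products, which is what makes the approach attractive for very sparse exciton Hamiltonians:

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import diags, random as sprandom
from scipy.sparse.linalg import eigsh
from scipy.special import jv

def chebyshev_step(H, psi, dt, hbar, emin, emax, n_poly=16):
    """One step psi -> exp(-i H dt / hbar) psi, expanded in Chebyshev polynomials
    of the rescaled Hamiltonian; only sparse matrix-vector products are needed."""
    a, b = 0.5 * (emax - emin), 0.5 * (emax + emin)
    x = a * dt / hbar
    Ht = lambda v: (H @ v - b * v) / a          # spectrum mapped into [-1, 1]
    t_prev, t_curr = psi, Ht(psi)               # T_0 psi and T_1 psi
    out = jv(0, x) * t_prev + 2.0 * (-1j) * jv(1, x) * t_curr
    for k in range(2, n_poly):
        t_prev, t_curr = t_curr, 2.0 * Ht(t_curr) - t_prev   # Chebyshev recursion
        out = out + 2.0 * (-1j) ** k * jv(k, x) * t_curr
    return np.exp(-1j * b * dt / hbar) * out

# Self-check on a small sparse Hermitian matrix against the dense matrix exponential.
rng = np.random.default_rng(0)
N, hbar, dt = 400, 0.6582119569, 0.5            # size, eV*fs, fs
A = sprandom(N, N, density=0.02, random_state=0)
H = ((A + A.T) * 0.05 + diags(np.linspace(3.0, 5.5, N))).tocsr()
emin = eigsh(H, k=1, which='SA', return_eigenvectors=False)[0]
emax = eigsh(H, k=1, which='LA', return_eigenvectors=False)[0]
psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
approx = chebyshev_step(H, psi0, dt, hbar, emin, emax)
exact = expm(-1j * H.toarray() * dt / hbar) @ psi0
print(np.max(np.abs(approx - exact)))           # close to machine precision
```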
We also carefully tested the implementation of the LSWO approach at multiple levels. This includes the comparison to an analytic Wannier-Mott exciton model and the reproduction of its energies. The interested reader is referred to Section C.1 of the appendix for more details.
Figure 1: (a): Examples of overlap densities \(\rho_{nn^{\prime}\mathbf{R}_{v}}\) for valence WF with \(n=n^{\prime}\) and different \(\mathbf{R}_{v}\). Yellow colours represent positive and blue negative values. All densities are plotted for the same iso-value magnitude of 0.001 and blue lines indicate the Si crystal. (b): Coulomb integrals for different hole-hole distances \(r_{v}\) and electron-electron distances \(r_{c}\) in the corresponding overlap densities \(\rho_{nn^{\prime}\mathbf{R}_{v}}\) and \(\rho_{mm^{\prime}\mathbf{R}_{c}}\). While \(\mathbf{R}_{v}\) and \(\mathbf{R}_{c}\) are only unit cell vectors, \(r_{v}\) and \(r_{c}\) also consider the position of Wannier centres within their unit cell.
## V Results
### Overlap densities and Coulomb integrals
Before discussing the optical absorption of bulk Si, we investigate more closely the distance-dependence of the two-particle contributions of the exciton Hamiltonian. We start by discussing the overlap densities \(\rho_{mm^{\prime}\mathbf{R}_{c}}(\mathbf{x})\) and \(\rho_{nn^{\prime}\mathbf{R}_{v}}(\mathbf{x})\), which contribute to the screened Coulomb interaction via Eq. (15). Fig. 1(a) shows selected overlap densities \(\rho_{nn^{\prime}\mathbf{R}_{v}}\) of the valence WF (with \(n=n^{\prime}\) and different \(\mathbf{R}_{v}\)). In this case, the overlap density for \(\mathbf{R}_{v}=0\) is a classical charge density in the shape of \(\sigma\)-bonded combination of \(sp^{3}\) hybrid orbitals. The density is positive everywhere (yellow colour) with total charge of one. On the other hand, finite shifts \(\mathbf{R}_{v}\) introduce negative regions (blue colour) in \(\rho_{nn\mathbf{R}_{v}}\) and result in a total charge of zero. It is clearly seen that large values of \(\mathbf{R}_{v}\) lead to smaller overlaps as expected.
The implications of the decay of the Coulomb integrals \(W^{mm^{\prime}}_{nn^{\prime}}(\mathbf{R}_{c},\mathbf{R}_{v},\mathbf{R}_{D})\) with distance are shown in Fig. 1(b). Blue stars denote data with varying distance between conduction WF \(r_{c}\) and orange dots show data with varying distance between valence WF \(r_{v}\). The distances \(r_{c}\) and \(r_{v}\) depend on the unit cell separation \(\mathbf{R}_{c}\) and \(\mathbf{R}_{v}\), respectively, and on the position of the Wannier centres within the unit cell. It is clearly visible that already small separations in the overlap densities of a few angstroms lead to much smaller values in the Coulomb integral. The largest Coulomb integrals are observed for \(r_{c}=r_{v}=0\), where classical charge densities (with total charge of one) interact with each other. Our above discussion has therefore been confirmed numerically. Furthermore, \(W^{mm^{\prime}}_{nn^{\prime}}(\mathbf{R}_{c},\mathbf{R}_{v},\mathbf{R}_{D})\) is more sensitive to \(r_{v}\) than \(r_{c}\) because valence WFs are more localized than conduction WFs. In both cases, the overlap densities \(\rho_{mm^{\prime}\mathbf{R}_{c}}\) and \(\rho_{nn^{\prime}\mathbf{R}_{v}}\) vanish for large separations where the Coulomb integrals become zero. As a consequence, the corresponding screened Coulomb operator \(\tilde{H}^{\rm SC}\) is very sparse and the largest values contribute to the diagonal of the Hamiltonian matrix, as suggested. Similar results can be found for \(\tilde{H}^{\rm LFE}\) (not shown), which leads to a very sparse total exciton Hamiltonian.
We next turn to the diagonal elements of the Hamiltonian that correspond to electron-hole interaction of classical charge densities. They are shown in Fig. 2 for different distances between electrons and holes, which depends on the unit cell distance \(\mathbf{R}_{D}\) and the positions of the MLWF (charge centres) within a unit cell. The Coulomb integrals \(W^{mm}_{nn}(0,0,\mathbf{R}_{D})\) become smaller with increasing distance and can be approximated for distances larger than \(10\,\mathrm{\AA}\) by the monopole-monopole interaction (grey dashed line). Notable deviations from the monopole-monopole approximation are
found here only when electron and hole densities start overlapping at smaller distances. As a result of the multipole expansion, only a relatively small fraction of the Coulomb integrals need to be calculated numerically, which reduces the computational effort substantially. For example, in the present study, we only need to compute 2496 out of 5.4 million density-density Coulomb integrals in full detail (less than \(0.5\,\%\) for a \(61\times 61\times 61\) supercell with 4 valence and 6 conduction WFs) and assume the monopole-monopole approximation for the vast majority of terms. In general, the value of \(10\,\mathrm{\AA}\) does not have to be universal and deviations from the leading monopole-monopole term could occur also at larger distances, for instance in systems with Wannier functions that are less strongly localized. However, we are confident that systems with larger orbital spreads can also be treated very efficiently.
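The distance-based switching between explicitly evaluated integrals and the monopole-monopole term can be sketched as follows (a minimal illustration; the cutoff follows the 10 Å criterion above, and the explicit grid integral is represented by a placeholder callable):

```python
import numpy as np
from scipy.constants import e, epsilon_0

EPS_INF = 11.68              # screening constant used for Si
R_CUT = 10.0e-10             # m; beyond this distance the monopole-monopole term is used

def monopole_term(r):
    """Screened monopole-monopole interaction (in eV) of two unit charge densities
    whose centres are separated by r (in metres)."""
    return e / (4.0 * np.pi * epsilon_0 * EPS_INF * r)

def density_density_W(r, grid_integral):
    """Full numerical integral only for nearly overlapping densities, otherwise monopole."""
    return grid_integral(r) if r < R_CUT else monopole_term(r)

# The large-distance tail is cheap to evaluate analytically:
for r_ang in (12.0, 20.0, 50.0):
    print(f"r = {r_ang:4.1f} A:  W ~ {1e3 * monopole_term(r_ang * 1e-10):6.1f} meV")
```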
### Optical absorption spectrum
With the obtained exciton Hamiltonian we calculate the optical absorption spectrum of Si. Fig. 3(a) shows a comparison of the LSWO approach (black solid line) to experimental data (orange dashed line). The spectrum contains the peaks \(E_{1}\) and \(E_{2}\) (naming convention from Ref. [56]), in good agreement with experiment. Most importantly, the characteristic (direct) exciton peak at \(E_{1}=3.5\,\mathrm{eV}\) is a clear sign of bound exciton states that arise from electron-hole interactions. This peak is not present at the GW or DFT level of theory, as shown by the dotted gray line. Compared to
Figure 2: Screened density-density Coulomb interaction (\(m=m^{\prime}\), \(n=n^{\prime}\), \(\mathbf{R}_{c}=\mathbf{R}_{v}=0\)) between conduction and valence WF. The interaction is dominated by the monopole-monopole interaction (dashed line). Only interactions between overlapping densities with small distances differ significantly.
the quasiparticle spectrum, the excitonic effects result in a significantly redshifted spectrum, as generally expected, since the redshift is a consequence of the electron-hole interaction. Residual deviations of the exciton spectrum from experiment might be related to the screening model (which is frequently used but still remains an approximation) or to missing quasi-particle corrections in the band structure that go beyond a scissors shift. Fig. 3(b) compares LSWO results to other theoretical calculations. The height of the \(E_{1}\) exciton peak varies significantly among different methods, which might be related to different treatments of the screening. Our results are closely comparable to the approach by Marini[57] and perform better than the others in the literature.
### Scaling and performance of the LSWO approach
Finally, we discuss the performance and scaling with respect to the size of the exciton Hamiltonian, which depends on the number of valence and conduction states and the number of \(\mathbf{k}\)-points (or equivalently \(\mathbf{S}\)-points in Eq. (9)). The overall performance depends on two parts, i.e., firstly
Figure 3: Absorption spectrum for silicon. (a): Comparison of the calculated MLWF-based spectrum (solid black) with calculations without electron-hole interaction (dotted grey) and experiment[55] (dashed orange). Peak labels are in agreement with previous conventions[56]. (b): Comparison with other theoretical calculations. References: Gajdos[14], Puschnig[13], Schmidt[42], Arnaud[43], Marini[57]
the calculation of all required matrix elements of the Hamiltonian and secondly the evaluation of the optical absorption spectrum using the time evolution approach. Fig. 4 shows the scaling of both parts for various numbers of \(\mathbf{k}\)-points. All computations are performed on a single CPU core and normalized to a reference computation. Note that in the current implementation we do not exploit the symmetry of the crystal.
The most time-consuming part for the construction of the exciton Hamiltonian, which is shown in Fig. 4(a), is the evaluation of the Coulomb and LFE integrals that enter \(\hat{H}^{\rm SC}\) and \(\hat{H}^{\rm LFE}\). In contrast, the time required to generate the single-particle contributions of the Hamiltonian, i.e. valence and conduction bands, is negligible. As a result, the computing time scales with the number of two-particle integrals that need to be evaluated numerically on a real space grid. As we have shown in the previous section, the majority of such integrals either vanish if \(\mathbf{R}_{c}\) or \(\mathbf{R}_{v}\) deviate sufficiently from zero, or become analytical monopole-monopole interactions for larger values of \(\mathbf{R}_{D}\). Consequently, only a finite number of integrals need to be evaluated, leading to a saturation of CPU time in Fig. 4(a). This plateau is already reached for a supercell of \(7\times 7\times 7\)-unit cells (corresponding to a \(\mathbf{k}\)-lattice of the same dimensions) which can be done with moderate effort. Once all integrals have been obtained, one can proceed to even denser \(\mathbf{k}\)-grids (corresponding to very large supercells \(\mathbf{S}\)) without additional effort for the computation of \(\tilde{H}\).
The second step that is crucial to the performance of the LSWO method is the time evolution
Figure 4: Scaling behaviour for (a) construction of the exciton Hamiltonian and (b) calculation of the optical absorption spectrum. \(N\) is the rank of the Hamiltonian \(N=N_{\rm el}\cdot N_{\rm h}\cdot N_{\mathbf{k}}\). For comparison, a direct diagonalization of the exciton Hamiltonian in the Bloch basis (dense matrix) scales with \(\mathcal{O}(N^{3})\). Using the time evolution approach of Ref. [42] scales with \(\mathcal{O}(N^{2})\). The legend is shared for both figures. Calculations are performed on a single CPU core.
with the exciton Hamiltonian, which is shown in Fig. 4(b). This time propagation is performed in a step-by-step fashion, where each time step has the computational complexity of a sparse matrix-vector multiplication. Such operations can be performed very efficiently in linear scaling as shown in the figure. For comparison, the time-evolution approach in a Bloch representation, where the Hamiltonian is dense, would scale with \(\mathcal{O}(N^{2})\)[42], which is similar to implementations that use a Lanczos-Haydock approach as implemented in the Yambo code [58]. Note that a direct diagonalization of the Hamiltonian scales with \(\mathcal{O}(N^{2})\) in the case of a sparse matrix or with \(\mathcal{O}(N^{3})\) in the case of a dense matrix.
## VI Conclusion and Outlook
We have presented a method for describing the exciton Hamiltonian of the Bethe-Salpeter equation using maximally localized Wannier functions, which represent a minimal, spatially localized and material-specific basis set that accurately represents the quasiparticle band structure. The electron-hole interaction, i.e., local field effects and screened Coulomb attraction, are evaluated numerically in this basis, where the required number of two-particle matrix elements to be computed is greatly reduced due to the localized character of Wannier functions. Moreover, Coulomb integrals where electron and hole densities have large distances can be treated very efficiently in monopole approximation. Therefore this description in real space leads to a very sparse exciton Hamiltonian that can be calculated and used with high efficiency and offers intuitive user control over the simulations. With this implementation at hand, the macroscopic dielectric function for optical properties is calculated in the time domain using a linear-scaling algorithm. We have demonstrated the approach for a Si crystal where the optical subspace was constructed with millions of simple unit cells (corresponding to millions of \(\mathbf{k}\)-points). The calculated absorption spectrum agrees well with experimental results.
In the future, we expect that the described LSWO approach will be very efficient for materials with many atoms per unit cell, which are not accessible with current alternative implementations. We hope that excitonic effects in optical spectra, which are relevant in a large number of crystalline systems, become more easily accessible.
## VII Data availability statement
The data that support the findings of this study are available in this article, the appendix or upon reasonable request from the authors.
## Appendix A Step-by-step derivation for screened Coulomb interaction
We insert Eq. (8) into \(\tilde{H}^{\text{SC}}\) and use the shifting property of Wannier functions, i.e. \(w_{m\mathbf{R}}(\mathbf{x})=w_{m0}(\mathbf{x}-\mathbf{R})\),
\[\tilde{H}^{\text{SC}}_{mn\mathbf{S},\,m^{\prime}n^{\prime}\mathbf{S}^{ \prime}}=\int dx\int dx^{\prime}\,\xi_{mn\mathbf{S}}(\mathbf{x},\mathbf{x}^{\prime})W(\mathbf{ x}-\mathbf{x}^{\prime})\xi_{m^{\prime}n^{\prime}\mathbf{S}^{\prime}}(\mathbf{x},\mathbf{x}^{ \prime})\] \[=\frac{1}{N_{\Omega}}\sum_{\mathbf{R}\mathbf{R}^{\prime}}\int dx\int dx^{ \prime}\,w_{m\mathbf{R}}(\mathbf{x})w_{m^{\prime}\mathbf{R}^{\prime}}(\mathbf{x})W(\mathbf{x}-\bm {x}^{\prime})w_{n^{\prime},\mathbf{R}^{\prime}-\mathbf{S}^{\prime}}(\mathbf{x}^{\prime})w _{n,\mathbf{R}-\mathbf{S}}(\mathbf{x}^{\prime})\] \[=\frac{1}{N_{\Omega}}\int dx\int dx^{\prime}\,\sum_{\mathbf{R}\mathbf{R} ^{\prime}}w_{m0}(\mathbf{x}-\mathbf{R})w_{m^{\prime}\mathbf{R}^{\prime}}(\mathbf{x})W(\mathbf{x}- \mathbf{x}^{\prime})w_{n^{\prime},\mathbf{R}^{\prime}-\mathbf{S}^{\prime}}(\mathbf{x}^{\prime })w_{n,\mathbf{R}-\mathbf{S}}(\mathbf{x}^{\prime})\] \[=\frac{1}{N_{\Omega}}\sum_{\mathbf{R}\mathbf{R}^{\prime}}\int dx\int dx^{ \prime}\,w_{m0}(\mathbf{x})w_{m^{\prime}\mathbf{R}^{\prime}}(\mathbf{x}+\mathbf{R})W(\mathbf{x}+ \mathbf{R}-\mathbf{x}^{\prime})w_{n^{\prime},\mathbf{R}^{\prime}-\mathbf{S}^{\prime}}(\mathbf{x}^ {\prime})w_{n,\mathbf{R}-\mathbf{S}}(\mathbf{x}^{\prime})\] \[=\frac{1}{N_{\Omega}}\sum_{\mathbf{R}\mathbf{R}^{\prime}}\int dx\int dx^{ \prime}\,w_{m0}(\mathbf{x})w_{m^{\prime}\mathbf{R}^{\prime}-\mathbf{R}}(\mathbf{x})W(\mathbf{x}- \mathbf{x}^{\prime})w_{n^{\prime},\mathbf{R}^{\prime}-\mathbf{R}-\mathbf{S}^{\prime}}(\mathbf{x}^ {\prime})w_{n,-\mathbf{S}}(\mathbf{x}^{\prime})\] \[=\frac{1}{N_{\Omega}}\int dx\int dx^{\prime}\,\sum_{\mathbf{A}\mathbf{B}} w_{m0}(\mathbf{x})w_{m^{\prime}\mathbf{A}}(\mathbf{x})W(\mathbf{x}-\mathbf{x}^{\prime})w_{n^{ \prime},\mathbf{A}-\mathbf{S}^{\prime}}(\mathbf{x}^{\prime})w_{n,-\mathbf{S}}(\mathbf{x}^{\prime})\] \[=\sum_{\mathbf{A}}\int dx\int dx^{\prime}\,w_{m0}(\mathbf{x})w_{m^{\prime }\mathbf{A}}(\mathbf{x})W(\mathbf{x}-\mathbf{x}^{\prime})w_{n^{\prime},\mathbf{A}-\mathbf{S}^{\prime}} (\mathbf{x}^{\prime})w_{n,-\mathbf{S}}(\mathbf{x}^{\prime}) \tag{10}\]
with \(\mathbf{A}=\mathbf{R}^{\prime}-\mathbf{R}\) and \(\mathbf{B}=\mathbf{R}^{\prime}+\mathbf{R}\).
An alternative form can be derived easily,
\[\tilde{H}^{\text{SC}}_{mn\mathbf{S},\,m^{\prime}n^{\prime}\mathbf{S}^{ \prime}}=\] \[=\sum_{\mathbf{A}}\int dx\int dx^{\prime}\,w_{m0}(\mathbf{x})w_{m^{ \prime}\mathbf{A}}(\mathbf{x})W(\mathbf{x}-\mathbf{x}^{\prime})w_{n^{\prime},\mathbf{A}-\mathbf{S}^{ \prime}}(\mathbf{x}^{\prime})w_{n,-\mathbf{S}}(\mathbf{x}^{\prime})\] \[=\sum_{\mathbf{A}}\int dx\int dx^{\prime}\,w_{m0}(\mathbf{x})w_{m^{ \prime}\mathbf{A}}(\mathbf{x})W(\mathbf{x}-\mathbf{x}^{\prime})w_{n^{\prime},\mathbf{A}-\mathbf{S}^{ \prime}}(\mathbf{x}^{\prime})w_{n,0}(\mathbf{x}^{\prime}+\mathbf{S})\] \[=\sum_{\mathbf{A}}\int dx\int dx^{\prime}\,w_{m0}(\mathbf{x})w_{m^{ \prime}\mathbf{A}}(\mathbf{x})W(\mathbf{x}-(\mathbf{x}^{\prime}-\mathbf{S}))w_{n^{\prime},\mathbf{A}- \mathbf{S}^{\prime}}(\mathbf{x}^{\prime}-\mathbf{S})w_{n,0}(\mathbf{x}^{\prime})\] \[=\sum_{\mathbf{A}}\int dx\int dx^{\prime}\,w_{m0}(\mathbf{x})w_{m^{ \prime}\mathbf{A}}(\mathbf{x})W(\mathbf{x}-\mathbf{x}^{\prime}+\mathbf{S})w_{n^{\prime},\mathbf{A}+\bm {S}-\mathbf{S}^{\prime}}(\mathbf{x}^{\prime})w_{n,0}(\mathbf{x}^{\prime})\] \[=\sum_{\mathbf{A}}\tilde{W}^{mm^{\prime}}_{nn^{\prime}}(\mathbf{A},\mathbf{S}, \mathbf{S}^{\prime}) \tag{11}\]
We finally show that the hermiticity relation of the Hamiltonian can be traced back to relations between single Coulomb integrals \(\tilde{W}^{mm^{\prime}}_{nn^{\prime}}(\mathbf{A},\mathbf{S},\mathbf{S}^{\prime})\). For this we substitute \(\mathbf{A}\to-\mathbf{A}\).
\[\tilde{W}^{mm^{\prime}}_{nn^{\prime}} (-\mathbf{A},\mathbf{S},\mathbf{S}^{\prime})=\] \[=\int dx\int dx^{\prime}\,w_{m0}(\mathbf{x})w_{m^{\prime}-\mathbf{A}}( \mathbf{x})W(\mathbf{x}-\mathbf{x}^{\prime}+\mathbf{S})w_{n^{\prime},-\mathbf{A}+\mathbf{S}-\mathbf{S}^{ \prime}}(\mathbf{x}^{\prime})w_{n,0}(\mathbf{x}^{\prime})\] \[=\int dx\int dx^{\prime}\,w_{m0}(\mathbf{x})w_{m^{\prime}0}(\mathbf{x}+ \mathbf{A})W(\mathbf{x}-\mathbf{x}^{\prime}+\mathbf{S})w_{n^{\prime}0}(\mathbf{x}^{\prime}+\mathbf{A}- \mathbf{S}+\mathbf{S}^{\prime})w_{n,0}(\mathbf{x}^{\prime})\] \[=\int dx\int dx^{\prime}\,w_{m0}(\mathbf{x}-\mathbf{A})w_{m^{\prime}0}( \mathbf{x})W(\mathbf{x}-\mathbf{A}-(\mathbf{x}^{\prime}-\mathbf{A}+\mathbf{S}-\mathbf{S}^{\prime})+\mathbf{S})\times\] \[\times w_{n^{\prime}0}(\mathbf{x}^{\prime})w_{n,0}(\mathbf{x}^{\prime}- \mathbf{A}+\mathbf{S}-\mathbf{S}^{\prime})\] \[=\int dx\int dx^{\prime}\,w_{m\mathbf{A}}(\mathbf{x})w_{m^{\prime}0}(\mathbf{ x})W(\mathbf{x}-\mathbf{x}^{\prime}+\mathbf{S}^{\prime})w_{n^{\prime}0}(\mathbf{x}^{\prime})w_{n, \mathbf{A}-\mathbf{S}+\mathbf{S}^{\prime}}(\mathbf{x}^{\prime})\] \[=\tilde{W}^{m^{\prime}m}_{n^{\prime}n}(\mathbf{A},\mathbf{S}^{\prime},\bm {S}) \tag{10}\]
Performing the sum over \(\mathbf{A}\) on both sides, we obtain the hermiticity relation of the Hamiltonian.
## Appendix B Model screening potential
We start from the screened Coulomb interaction as defined in Section II.1 and define \(\alpha^{\prime}=\alpha/q_{\rm TF}^{2}\) for simplicity,
\[W(\mathbf{q})=\epsilon^{-1}(\mathbf{q})V(\mathbf{q})=\left(1-\frac{1}{\eta+\alpha^{\prime }q^{2}}\right)\frac{1}{\epsilon_{0}q^{2}}. \tag{11}\]
A simple rearrangement of the terms yields the Coulomb and Yukawa potentials in reciprocal space,
\[W(\mathbf{q}) =\left(1-\frac{1}{\eta}\right)\frac{1}{\epsilon_{0}q^{2}}+\left( \frac{1}{\eta}-\frac{1}{\eta+\alpha^{\prime}q^{2}}\right)\frac{1}{\epsilon_{0} q^{2}}\] \[=\frac{1}{\epsilon_{\infty}}\frac{1}{\epsilon_{0}q^{2}}+\frac{ \alpha^{\prime}q^{2}}{\eta(\eta+\alpha^{\prime}q^{2})}\frac{1}{\epsilon_{0}q ^{2}}\] \[=\frac{1}{\epsilon_{0}\epsilon_{\infty}q^{2}}+\frac{1}{\eta \epsilon_{0}}\frac{\alpha^{\prime}}{\eta+\alpha^{\prime}q^{2}}\] \[=\frac{1}{\underbrace{\epsilon_{0}\epsilon_{\infty}q^{2}}_{=\text { Coulomb}}}+\left(1-\epsilon_{\infty}^{-1}\right)\underbrace{\frac{1}{\epsilon_ {0}}\frac{1}{q^{2}+\frac{q_{\rm TF}^{2}}{\alpha(1-\epsilon_{\infty}^{-1})}}}_ {=\text{Yukawa}}. \tag{12}\]
The Fourier transform then yields Eq. (5).
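The rearrangement above can be checked numerically in a few lines (a sketch with placeholder values for \(\alpha\) and \(q_{\rm TF}\), and \(\epsilon_{0}\) set to one, since only the equivalence of the two forms is tested):

```python
import numpy as np

eps_inf = 11.68                      # epsilon_infinity used for Si
alpha, q_tf = 1.5, 1.0               # placeholder model parameters (q_tf in 1/Angstrom)
eta = 1.0 / (1.0 - 1.0 / eps_inf)    # from 1 - 1/eta = 1/eps_inf (q -> 0 screening limit)
alpha_p = alpha / q_tf**2            # alpha' = alpha / q_TF^2

q = np.linspace(0.05, 5.0, 200)      # wave vectors in 1/Angstrom
w_original = (1.0 - 1.0 / (eta + alpha_p * q**2)) / q**2          # original form of W(q)
w_coulomb = 1.0 / (eps_inf * q**2)                                # screened Coulomb part
kappa2 = q_tf**2 / (alpha * (1.0 - 1.0 / eps_inf))                # Yukawa screening parameter
w_yukawa = (1.0 - 1.0 / eps_inf) / (q**2 + kappa2)                # Yukawa part
print(np.max(np.abs(w_original - (w_coulomb + w_yukawa))))        # ~1e-16: the two forms agree
```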
Figure S-1: Convergence of Eq. (13) for matrix element \(\mathbf{S}=\mathbf{S}^{\prime}=0\), \(m=m^{\prime}=1\), \(n=n^{\prime}=1\)
## Appendix C Implementation test
We have carefully and extensively tested all implementations, of which we want to discuss one particular test case that demonstrates the ability to compute excitons. For this purpose, we propose a simple test system that can be solved analytically. It consists of one orbital per unit cell in a cubic lattice of length \(L\) and nearest neighbor transfer integrals for electrons and holes. The electronic structure is given by a tight-binding model,
\[H_{\mathrm{el}} =\sum_{<ij>}-t_{\mathrm{el}}\ a_{i}^{\dagger}a_{j}+E_{0},\] \[H_{\mathrm{h}} =\sum_{<ij>}t_{\mathrm{h}}\ h_{i}^{\dagger}h_{j}, \tag{10}\]
whose band energies are
\[E_{\mathrm{el}}(\mathbf{k}) =-2t_{\mathrm{el}}\left(\cos(k_{x}L)+\cos(k_{y}L)+\cos(k_{z}L) \right)+E_{0}, \tag{11}\] \[E_{\mathrm{h}}(\mathbf{k}) =2t_{\mathrm{h}}\left(\cos(k_{x}L)+\cos(k_{y}L)+\cos(k_{z}L) \right). \tag{12}\]
We construct the exciton Hamiltonian and include the electron-hole interaction. For simplicity we choose a static screening with \(\epsilon_{\infty}\) and do not include local field effects. The resulting model is given by
\[H(\mathbf{k},\mathbf{k}^{\prime})=\left[E_{\mathrm{el}}(\mathbf{k})-E_{ \mathrm{h}}(\mathbf{k})\right]\delta_{\mathbf{k}\mathbf{k}^{\prime}}-\frac{1}{\epsilon_{ \infty}}\tilde{V}(\mathbf{k}-\mathbf{k}^{\prime}), \tag{13}\]
where \(\tilde{V}(\mathbf{k}-\mathbf{k}^{\prime})\) is the bare Coulomb potential in \(\mathbf{k}\)-space. The model system is therefore similar to the Wannier-Mott exciton model [59]. To obtain an analytical solution of this model, we perform a Taylor expansion of the band energies around \(\mathbf{k}=0\)
\[E_{\mathrm{el}}(\mathbf{k})-E_{\mathrm{h}}(\mathbf{k})\approx E_{0}-2(t_{\mathrm{el}}+t_{\mathrm{h}})\left(3-\frac{1}{2}L^{2}|\mathbf{k}|^{2 }+\frac{1}{24}L^{4}|\mathbf{k}|^{4}-...\right) \tag{14}\]
By expanding the exciton Hamiltonian up to second order we obtain the hydrogen-like problem,
\[H(\mathbf{k}\mathbf{k}^{\prime})= \frac{\hbar^{2}\mathbf{k}^{2}}{2\mu}\delta_{\mathbf{k}\mathbf{k}^{\prime}}- \frac{1}{\epsilon_{\infty}}\tilde{V}(\mathbf{k}-\mathbf{k}^{\prime})+E_{\mathrm{g}}, \tag{15}\]
with an effective mass \(\mu=\frac{\hbar^{2}}{2(t_{\mathrm{el}}+t_{\mathrm{h}})L^{2}}\) and \(E_{\mathrm{g}}=E_{0}-6(t_{\mathrm{el}}+t_{\mathrm{h}})\) the band gap without electron-hole interaction. The exciton energies follow a Rydberg series,
\[E_{n}= E_{\mathrm{g}}-\frac{R_{\mathrm{ex}}}{n^{2}\epsilon_{\infty}^{2}} \tag{16}\]
where the exciton Rydberg energy \(R_{\rm ex}\) and exciton Bohr radius \(a_{\rm B}\) are,
\[R_{\rm ex}= \frac{e^{4}\mu}{2(4\pi\epsilon_{0})^{2}\hbar^{2}},\] \[a_{\rm B}= \frac{4\pi\epsilon_{0}\epsilon_{\infty}\hbar^{2}}{\mu e^{2}}. \tag{10}\]
We note that this result can be further improved by calculating the energy shifts due to the \(k^{4}\) term in Eq. (14), which would correspond to a relativistic correction of the hydrogen atom (fine structure without spin-orbit coupling). In complete analogy, these shifts can be calculated using perturbation theory (more details on the derivation can be found in Ref. [60]),
\[\Delta E_{nl}= -\frac{1}{12}\frac{E_{n}^{2}}{(t_{\rm el}+t_{\rm h})}\left[\frac{4 n}{(l+1/2)}-3\right]. \tag{11}\]
The analytical model will be compared with the results of our Wannier implementation. Towards this end, the exciton Hamiltonian is set up in real space using the tight-binding models for valence and conduction bands (cf. Eq. (11)) and a statically screened monopole-monopole interaction. The results can then be compared for various model parameters (\(L\), \(t_{\rm el}\), \(t_{\rm h}\) or \(\epsilon_{\infty}\)). For converged numerical results, it is necessary to ensure that the size of the supercell (corresponding to the number of \(\mathbf{k}\) points) is large enough to host the eigenfunctions (hydrogen-like wavefunctions). More specifically, it must be much larger than the exciton Bohr radius \(a_{\rm B}\). To avoid discretization errors, the spacing of the lattice points must be small compared to \(a_{\rm B}\) so that the eigenfunction can be represented on a real space lattice. By varying the parameters, one can obtain converged numerical results that
are arbitrarily close to the analytical result. One example is shown in Fig. S-3, where the parameters are \(L=5\,\mathrm{\AA}\), \(t_{\mathrm{el}}=t_{\mathrm{h}}=8\,\mathrm{eV}\), and \(\epsilon_{\infty}=1\). The calculations are performed in a \(700\times 700\times 700\) supercell and we have used an efficient Lanczos algorithm to calculate the density of states (DOS). The figure shows perfect agreement between the numerical and analytical results, demonstrating the correctness of our implementation and the ability to simulate various excitons.
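For orientation, the analytical reference values entering this comparison follow directly from the effective-mass and Rydberg-series expressions above; a small sketch with the quoted test parameters (the printed numbers are consistency checks of the formulas, not data extracted from Fig. S-3):

```python
import numpy as np
from scipy.constants import hbar, e, epsilon_0, m_e

# Test-model parameters quoted above
L = 5.0e-10                       # lattice constant (m)
t_el = t_h = 8.0 * e              # transfer integrals (J)
eps_inf = 1.0

mu = hbar**2 / (2.0 * (t_el + t_h) * L**2)                            # effective exciton mass
R_ex = e**4 * mu / (2.0 * (4.0 * np.pi * epsilon_0)**2 * hbar**2)     # exciton Rydberg energy
a_B = 4.0 * np.pi * epsilon_0 * eps_inf * hbar**2 / (mu * e**2)       # exciton Bohr radius

print(f"mu   = {mu / m_e:.4f} m_e")
print(f"R_ex = {1e3 * R_ex / e:.1f} meV")
print(f"a_B  = {1e10 * a_B:.1f} Angstrom   (supercell edge: {700 * L * 1e10:.0f} Angstrom)")
# Binding energies E_g - E_n = R_ex / (n^2 eps_inf^2) of the Rydberg series
for n in (1, 2, 3):
    print(f"n = {n}: {1e3 * R_ex / (n**2 * eps_inf**2) / e:6.1f} meV below the gap")
```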
## Appendix D Acknowledgements
We would like to thank the Deutsche Forschungsgemeinschaft for financial support [CRC1415, projects No. OR-349/3 and OR-349/11 and the Cluster of Excellence e-conversion (Grant No. EXC2089)]. Grants for computer time from the Zentrum fur Informationsdienste und Hochleistungsrechnen of TU Dresden and the Leibniz Supercomputing Centre in Garching (SuperMUC-NG) are gratefully acknowledged.
We would like to acknowledge F. Bechstedt and J. Furthmuller for fruitful discussions about the numerical evaluation of Coulomb integrals.
## Appendix E Competing Interests
There are no competing interests to declare.
|
2309.08819 | The Cohen-Macaulay type of edge-weighted r-path ideals | We describe combinatorially the Cohen-Macaulay type of edge-weighted r-path
suspensions of edge-weighted graphs for an arbitrary positive integer r. The
computation of the Cohen-Macaulay type of edge-weighted suspensions of
edge-weighted graphs becomes a special case of r = 1. | Shuai Wei | 2023-09-16T00:31:40Z | http://arxiv.org/abs/2309.08819v1 | # The Cohen-Macaulay type of edge-weighted \(r\)-path ideals
###### Abstract.
We describe combinatorially the Cohen-Macaulay type of edge-weighted \(r\)-path suspensions of edge-weighted graphs for an arbitrary positive integer \(r\). The computation of the Cohen-Macaulay type of edge-weighted suspensions of edge-weighted graphs becomes a special case of \(r=1\).
## Introduction
**Assumption.** Throughout, let \(G\) be a (finite simple) graph with vertex set \(V=V(G)=\{v_{1},\dots,v_{d}\}\) of cardinality \(d\geq 1\) and edge set \(E=E(G)\). An edge between vertices \(v_{i}\) and \(v_{j}\) is denoted \(v_{i}v_{j}\). Let \(\mathbb{K}\) be a field and set \(R=\mathbb{K}[X_{1},\dots,X_{d}]\). Set \(\mathfrak{m}=(X_{1},\dots,X_{d})R\). Fix an integer \(r\in\mathbb{N}=\{1,2,\dots\}\) and set \(R^{\prime}=\mathbb{K}[\{X_{i,j}\mid i=1,\dots,d,j=0,\dots,r\}]\). An _edge-weighting_ on \(G\) is a function \(\omega:E\to\mathbb{N}\), and \(G_{\omega}\) denotes a graph \(G\) equipped with an edge-weighting \(\omega\).
Combinatorial commutative algebra uses combinatorics and graph theory to understand certain algebraic constructions; it also uses algebra to understand certain objects in combinatorics and graph theory. In this paper, we explore aspects of this area via edge ideals and path ideals of edge-weighted graphs.
The _edge ideal_ of \(G\) introduced by Villarreal [7] is the ideal \(I(G)\) of \(R\) that is "generated by the edges of \(G\)":
\[I(G)=(X_{i}X_{j}\mid v_{i}v_{j}\in E)R.\]
Villarreal [7] characterizes the trees \(T\) for which \(I(T)\) is Cohen-Macaulay: these are the "suspensions" or "whiskered trees", i.e., trees obtained from a subtree \(U\) by adding an edge \(v_{i}v_{i1}\) to each vertex \(v_{i}\) of \(U\):
It is straightforward to show that the elements \(v_{i}-v_{i1}\) form a maximal regular sequence on \(R^{\prime}/I(T)\) such that the ensuing quotient is \(R/(I(U)+\langle x_{1}^{2},\dots,x_{d}^{2}\rangle)\). From this, one readily computes the Cohen-Macaulay type of \(R^{\prime}/I(T)\) as the number of ideals in an irredundant irreducible decomposition of \(I(U)\), in other words, the number of minimal vertex covers of \(U\). For instance, in the displayed example, the type of \(R^{\prime}/I(T)\) is \(2\), either by the decomposition \(I(U)=\langle v_{1}v_{2},v_{2}v_{3}\rangle=\langle v_{1},v_{3}\rangle\cap \langle v_{2}\rangle\) or by the minimal vertex covers \(\{v_{1},v_{3}\}\) and \(\{v_{2}\}\). The goal of this paper is to extend this computation to the following more general constructions.
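As a small illustration (not part of the original argument), the number of minimal vertex covers, and hence the type, can be checked by brute force for small trees; the path on \(v_{1},v_{2},v_{3}\) reproduces the count of \(2\) from the example above:

```python
from itertools import combinations

def minimal_vertex_covers(vertices, edges):
    """All minimal vertex covers of a graph (brute force; fine for small trees)."""
    covers = []
    for k in range(len(vertices) + 1):          # enumerate by increasing size
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(u in s or v in s for (u, v) in edges):
                # a cover found now is minimal iff it contains no smaller cover found earlier
                if not any(c <= s for c in covers):
                    covers.append(s)
    return covers

# The subtree U from the example: the path v1 - v2 - v3
U_vertices = ["v1", "v2", "v3"]
U_edges = [("v1", "v2"), ("v2", "v3")]
covers = minimal_vertex_covers(U_vertices, U_edges)
print(covers)                                   # two minimal covers: {v2} and {v1, v3}
print("Cohen-Macaulay type of R'/I(T):", len(covers))   # 2, matching I(U) = <v1,v3> cap <v2>
```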
Paulsen and Sather-Wagstaff [6] generalized Villarreal's construction in one direction with the edge ideal of an edge-weighted graph \(G_{\omega}\): the ideal \(I(G_{\omega})\) of \(R\) which is "generated by all weighted-edges
2306.06069 | Gemtelligence: Accelerating Gemstone classification with Deep Learning | The value of luxury goods, particularly investment-grade gemstones, is
greatly influenced by their origin and authenticity, sometimes resulting in
differences worth millions of dollars. Traditionally, human experts have
determined the origin and detected treatments on gemstones through visual
inspections and a range of analytical methods. However, the interpretation of
the data can be subjective and time-consuming, resulting in inconsistencies. In
this study, we propose Gemtelligence, a novel approach based on deep learning
that enables accurate and consistent origin determination and treatment
detection. Gemtelligence comprises convolutional and attention-based neural
networks that process heterogeneous data types collected by multiple
instruments. Notably, the algorithm demonstrated comparable predictive
performance to expensive laser-ablation inductively-coupled-plasma
mass-spectrometry (ICP-MS) analysis and visual examination by human experts,
despite using input data from relatively inexpensive analytical methods. Our
innovative methodology represents a major breakthrough in the field of gemstone
analysis by significantly improving the automation and robustness of the entire
analytical process pipeline. | Tommaso Bendinelli, Luca Biggio, Daniel Nyfeler, Abhigyan Ghosh, Peter Tollan, Moritz Alexander Kirschmann, Olga Fink | 2023-05-31T14:35:02Z | http://arxiv.org/abs/2306.06069v1 | # Gemtelligence: Accelerating Gemstone classification with Deep Learning
###### Abstract
The value of luxury goods, particularly investment-grade gemstones, is greatly influenced by their origin and authenticity, sometimes resulting in differences worth millions of dollars. Traditionally, human experts have determined the origin and detected treatments on gemstones through visual inspections and a range of analytical methods. However, the interpretation of the data can be subjective and time-consuming, resulting in inconsistencies. In this study, we propose Gemtelligence, a novel approach based on deep learning that enables accurate and consistent origin determination and treatment detection. Gemtelligence comprises convolutional and attention-based neural networks that process heterogeneous data types collected by multiple instruments. Notably, the algorithm demonstrated comparable predictive performance to expensive laser-ablation inductively-coupled-plasma mass-spectrometry (ICP-MS) analysis and visual examination by human experts, despite using input data from relatively inexpensive analytical methods. Our innovative methodology represents a major breakthrough in the field of gemstone analysis by significantly improving the automation and robustness of the entire analytical process pipeline.
## Introduction
Gemstones, both natural and synthetic, are highly prized for their rarity and beauty and are commonly used in jewelry for both adornment and investment purposes. Some of these minerals can be worth over a million dollars per gram, making them some of the most concentrated physical capital in the world. In addition to factors such as species and aesthetic quality, the value of a gemstone is also influenced by its geographic origin and any potential treatments it may have undergone after being mined. Identifying these treatments, which can include exposure to electromagnetic radiation [1], heating [2, 3], or the infusion of oils or other substances [4], is crucial for determining the true value of a precious stone and upholding consumer trust in the jewelry industry. Unfortunately, the artisanal nature of gemstone mining results in a fragmented and opaque supply chain, making it difficult to reliably track the origin and treatment of individual stones.
To minimize investment risks, buyers and sellers often require that gemstones are accompanied by an independent laboratory report that confirms the physical characteristics, treatment status, and geographic source of the stones. In the conventional practice of determining the authenticity and source of gemstones, skilled human experts with two to six years of training in gemology play a pivotal role [5]. These specialists have traditionally relied on optical microscopy to identify structures and inclusions within the gemstones that can provide clues about their origin and any treatments they may have undergone. However, this task is
exceptionally difficult, as gemstones from different locations may exhibit strikingly similar features due to shared geological histories [6]. Furthermore, with the advancement of treatment techniques for gemstones, the detection of such treatments has become increasingly challenging for even the most experienced human examiners [7; 8].
In response to these pressing challenges and the demands of the industry, state-of-the-art gemology laboratories have introduced a variety of new analytical instruments in their regular workflow [9], including ultraviolet-visible-near-infrared spectroscopy (UV) [10], Fourier-transform infrared-spectroscopy (FTIR) [11], energy-dispersive X-ray fluorescence (XRF) [12] and laser-ablation inductively-coupled-plasma mass spectrometry (ICP-MS) [13]. The UV, FTIR, and XRF devices come at a combined cost of around 200,000 USD and can be operated by technical staff with just a brief introductory training. However, the ICP-MS instrument alone, albeit providing the most comprehensive data for identifying origins, costs approximately 500,000 USD and demands one or more qualified operators with extensive theoretical and practical training. In addition, ICP-MS uses a laser to ablate a small volume from the gems, making it, therefore, destructive on a micro-scale level [14]. Despite all these data sources, the tasks of determining the origin and detecting treatment with high accuracy remain very challenging because the differences in physical, spectroscopic, and chemical properties between stones from different origins are often subtle. This is especially true for blue sapphires, which are among the "big 3" gemstone species that top-tier gemology laboratories most often assess [15]. Moreover, even with strict adherence to laboratory protocols, such as restricting experts' access to analytical results during microscopic examinations and ensuring at least two independent conclusions, it remains a challenging task to obtain a consistent outcome from a combination of visual and analytical data. Therefore, the final decision is inevitably susceptible to subjective biases. In addition, as gemstones are long-term investments, they re-enter the market from time to time and new evaluations of the same stones are conducted. Inconsistencies in the origin or treatment determination of the same stone over time, which massively affect a stone's value, can undermine the confidence in this asset class. Crucially, some gemstones cannot be classified unambiguously and with high confidence by the experts. While ICP-MS analysis can help mitigate this issue, it can also result in significant costs. For all these reasons, the advancement of innovative methods that effectively leverage data obtained from affordable instruments while maximizing accuracy and robustness is of considerable practical significance.
Modern machine- and deep-learning algorithms have revolutionized the analysis and interpretation of large and complex datasets in various fields, including but not limited to material science [16; 17; 18], geoscience [19; 20], and computational chemistry [21; 22; 23] allowing for more accurate and efficient data processing. However, their application to gemology is still in its infancy. Conventional machine-learning techniques in gemology that typically involve feature extraction methods followed by simple downstream algorithms, have provided promising results in automating various geological tasks, such as categorizing gemstones by type and shape [24; 25; 26], distinguishing real and synthetic gemstones [27], and even more complicated tasks like grading gemstones [28], [29]. Nevertheless, such techniques are restricted to analyzing only one type of data source at a time, such as images, spectra, or tabular data [30], limiting their capability of detecting artificial treatments or correctly identifying the origin of the gemstone. As such, these challenging tasks still heavily rely on human expertise.
Herein, we propose Gemtelligence (Fig. 1), a deep learning-based method that automates the determination of the country of origin (OD) and detection of treatment (TD) of gemstones at a fraction of the time and cost, outperforming human experts' evaluations and without relying on costly measurements. This study is the first of its kind to address both OD and TD of valuable gemstones using a novel deep learning approach specifically tailored to handle varied and multi-modal analytical data acquired from different testing devices. Crucially, we conduct our experimental evaluation on a large collection of high-quality gemstones, thereby,
allowing us to examine the performance of our algorithm in real-world scenarios. A part of the aforementioned data, along with the source code of our model, will be made available for public use to facilitate the benchmarking and reproducibility of the results presented in this work.
The primary innovation of the proposed approach lies in its multi-modal design, which is custom-tailored to effectively process and integrate varied and diverse analytical data acquired from different instruments. Gemtelligence consists of a combination of strided convolutional neural networks [31] and a variant of the popular Transformer architecture [32]. Particularly, for processing spectral data, we draw inspiration from the work of Ho et al. [33] and use a modified version of their architecture with a bigger kernel size to increase the receptive field in order to capture more global features. The transformer-like component of Gemtelligence is used to process tabular data and is based on the architecture introduced in [34]. The final architecture combines all these elements in a single model enabling end-to-end multi-modal training. As robustness and consistency are key desiderata in gemstone analysis, we include an additional confidence-thresholding scheme in our pipeline allowing users to control the trade-off between the degree of automation provided by Gemtelligence (i.e. the number of stones that can be processed automatically) and its level of accuracy. As illustrated in Fig. 1 (c), given a new test stone, we compare the most likely prediction of Gemtelligence with a threshold value and we accept the prediction only if the corresponding probability exceeds this threshold.
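A schematic PyTorch sketch of this kind of multi-modal design is given below. It is an illustration only: the layer sizes, the per-element tokenization of the tabular data, the zero-masking of missing sources, and all other hyperparameters are our own assumptions and are not taken from the published architecture.

```python
import torch
import torch.nn as nn

class SpectrumEncoder(nn.Module):
    """Strided 1-D CNN for a UV/FTIR spectrum (illustrative layer sizes)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=15, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, out_dim),
        )
    def forward(self, x):                         # x: (batch, 1, n_wavelengths)
        return self.net(x)

class TabularEncoder(nn.Module):
    """Transformer-style encoder for elemental concentrations (one token per element)."""
    def __init__(self, n_features, out_dim=64, d_model=32):
        super().__init__()
        self.embed = nn.Linear(1, d_model)        # scalar feature value -> token embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, out_dim)
    def forward(self, x):                         # x: (batch, n_features)
        tokens = self.embed(x.unsqueeze(-1))      # (batch, n_features, d_model)
        return self.out(self.encoder(tokens).mean(dim=1))

class GemClassifier(nn.Module):
    """Fuses the available sources; missing sources are zeroed out via the mask."""
    def __init__(self, n_elements=16, n_classes=4):
        super().__init__()
        self.uv, self.ftir = SpectrumEncoder(), SpectrumEncoder()
        self.xrf = TabularEncoder(n_elements)
        self.head = nn.Sequential(nn.Linear(3 * 64, 64), nn.ReLU(), nn.Linear(64, n_classes))
    def forward(self, uv, ftir, xrf, mask):       # mask: (batch, 3), 1 = source present
        z = torch.cat([self.uv(uv) * mask[:, :1], self.ftir(ftir) * mask[:, 1:2],
                       self.xrf(xrf) * mask[:, 2:3]], dim=1)
        return self.head(z)                       # class logits

model = GemClassifier()
logits = model(torch.randn(2, 1, 1024), torch.randn(2, 1, 1024),
               torch.randn(2, 16), torch.ones(2, 3))
probs = torch.softmax(logits, dim=1)              # per-class probabilities for two stones
```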
Figure 1: **a.** Gemtelligence can process measurements from four distinct data sources: FTIR and UV (spectroscopy analysis) and ICP-MS and XRF (elemental analysis). **b.** With a negligible inference time compared to human experts, it can predict the probability of a gemstone’s origin or whether it has undergone heat treatment. Not all data types are required for inference; missing sources can be masked out as illustrated by switch symbols in the figure. **c.** If the maximum probability exceeds a predefined threshold (top panel), the stone prediction can be confidently accepted. If the maximum probability falls below the threshold (bottom panel), however, the output should be discarded and the stone should be further analyzed via standard methods such as microscopy and expert analysis. The value of the threshold, selected during the confidence-thresholding phase, determines the balance between the number of stones that can be processed automatically and the accuracy achieved by the model.
The value of this threshold is determined by imposing that the trained model attains the desired accuracy level on the training set. More details about the model architecture and the confidence-thresholding procedure can be found in the Methods section.
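The calibration step can be sketched as follows (a simplified stand-in for the procedure detailed in the Methods section; the function names and the synthetic data are illustrative):

```python
import numpy as np

def calibrate_threshold(probs, labels, target_accuracy=0.98):
    """Smallest threshold such that predictions whose maximum class probability
    exceeds it reach the target accuracy on the calibration (training) set."""
    confidence, predicted = probs.max(axis=1), probs.argmax(axis=1)
    correct = predicted == labels
    for t in np.sort(np.unique(confidence)):
        kept = confidence >= t
        if correct[kept].mean() >= target_accuracy:
            return t
    return 1.0                                    # target unreachable: reject everything

def apply_threshold(probs, threshold):
    """Class index for confident predictions, -1 for stones left to the experts."""
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    return np.where(conf >= threshold, pred, -1)

# Toy usage with synthetic softmax outputs standing in for the model.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=500)
logits = rng.normal(size=(500, 4))
logits[np.arange(500), labels] += 2.0             # the toy "model" is usually right
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

t = calibrate_threshold(probs, labels, target_accuracy=0.98)
decisions = apply_threshold(probs, t)
print(f"threshold {t:.3f}, automated fraction {np.mean(decisions >= 0):.2f}")
```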
Our work contributes to the burgeoning field of laboratory automation [35, 36, 37, 38], which has seen a rising focus on leveraging Artificial Intelligence (AI) techniques to streamline the time-consuming and repetitive pipelines typical of applied scientific research. As shown by our empirical evaluation, the implementation of Gemtelligence in gemological laboratories can assist human experts in the time-consuming task of data assessment and interpretation, allowing them to focus on more high-value activities, including research and development.
## Results
In this section, we present the results of Gemetelligence for the two challenging tasks of OD and TD of blue sapphires. Blue sapphire is a type of corundum, \(\mathrm{Al}_{2}\mathrm{O}_{3}\), with a blue hue caused by the presence of trace amounts of Fe and Ti. Our focus on sapphires stems from two key factors. Firstly, sapphires are widely acknowledged to present more difficulties in achieving accurate OD than other gemstones [15]. Secondly, the TD of sapphires has not been researched as extensively as that of other gemstones like rubies. In the following, we first introduce the tasks, datasets, training, and testing pipelines. Then, we assess Gemetelligence's performance in processing diverse multi-modal data, through various ablation studies and in comparison to human experts.
### Background and Experimental setup
Origin Determination.The problem of OD can be cast as a classification task: based on the laboratory tests performed on a particular gemstone, the goal is to determine its geographical origin out of a discrete set of candidates. This study focuses on the top four significant sources of high-value blue sapphires, which make up over 90% of the market's high-quality blue sapphire volume. These sapphires were created through metamorphism and are sourced from Kashmir1, Burma/Myanmar, Sri Lanka, and Madagascar. The geological environment and rock varieties in which blue sapphires develop have specific attributes [39], resulting in slightly varying gemological characteristics [15]. This allows for distinguishing the origins by examining these features. However, identifying the origin (OD) remains a challenging task since the physical, chemical, and spectroscopic properties of blue sapphires from distinct sources often have considerable overlap [9]. Furthermore, blue sapphire has a very limited range of routinely detectable trace elements in comparison to other gemstones, substantially limiting the effectiveness of traditional multi-component statistical analysis for OD. Microscopy and ICP-MS are widely considered the most reliable analytical evaluations for OD. UV and XRF are also employed, though their analysis results are not as reliable as those from microscopy and ICP-MS. The usefulness of FTIR in identifying the origin of sapphires from metamorphic rocks is limited as those sapphires have very few origin-specific phases that can be detected by FTIR, such as aluminum oxides-hydroxides and structurally bonded OH groups [40].
Footnote 1: In the present context, the term Kashmir refers to the Jammu and Kashmir region at the northern tip of India, between Pakistan and China. There, in the Padar area, in the Kudi valley near the village of Sumjam, high-quality blue sapphires were found during a few years only in the second half of the \(19^{th}\) century.
Artificial Heat Treatment Detection.Artificial heat treatment is the process of heating gemstones to improve their visual appearance, clarity, and color. To determine if a stone has been artificially heat
treated, the primary method used is visual microscopic inspection complemented with spectral measurement techniques. Heat treatment may result in structural changes, particularly in microscopic or submicroscopic small inclusions. The term "inclusions" refers to phases (solid, liquid, or gaseous) that are trapped or formed in the crystal during or after the growth of the stone in the earth [41]. For instance, an inclusion in a gemstone might become unstable when heated and begins to disintegrate. FTIR and UV analysis are typically employed to support heat treatment detection on sapphires. Since elemental analysis methods such as XRF and ICP-MS cannot capture the underlying physical change of a stone undergoing heat treatment, they are not used for this task. Hence, we also avoid using them as input to our algorithm in order to prevent the introduction of spurious correlations. We frame the TD problem as a binary classification task, comprising a "treated" and a "non-treated" class.
Training and Testing Datasets.The data used for training Gemtelligence comprises over 5500 blue sapphire records obtained from the Gubelin Gem Lab over the course of ten years using various protocols differing from each other in the number and type of analyses performed. A full protocol involves taking two optical and two chemical spectroscopic measurements (UV, FTIR, XRF, and ICP-MS) for each gemstone. However, in cases where a reduced protocol is followed, certain analyses may be omitted. To assign OD and TD labels, two or more experienced professionals initially assign candidates for each measurement independently and then visually examine the stone. By comparing the outcomes of each independent analysis, a consensus is reached for the final assignment, which is regarded as our ground truth. We evaluate the performance of both OD and TD models using five-fold cross-validation. The main advantage of this procedure is that it provides a more reliable estimate of the model's performance than a simple train/test split, especially when the dataset is relatively small, as in our case. The data are randomly split into five sets, of which one is used for testing and the remaining four are used for training. This process is repeated for each fold. The training data are used to train, validate, and calibrate the model, while the test data are employed to measure the model's performance. In order to ensure a rigorous analysis, stones from the test set that do not meet all the following criteria are excluded: 1) each measurement is examined independently from the others; 2) all possible relevant measurements are taken; 3) two expert gemologists independently reach the same conclusion via visual inspection in TD and the results obtained from ICP-MS and visual inspection match in OD. Additional details can be found in Supplementary Note 5.
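A minimal sketch of this evaluation loop, with placeholder arrays in place of the real records:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.random.rand(5500, 32)             # placeholder feature matrix, one row per record
y = np.random.randint(0, 4, size=5500)   # placeholder origin labels

splitter = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(splitter.split(X)):
    # train / validate / calibrate on train_idx, then measure performance on test_idx
    print(f"fold {fold}: {len(train_idx)} training records, {len(test_idx)} held-out records")
```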
### Performance Evaluation
AI-supported decision system.To begin our analysis, we compare the performance of Gemtelligence with that of human gemologists on various combinations of data sources and tasks. The gemologists follow a strict procedure throughout the process of analyzing the gemstones. First, each data source (e.g. XRF) is observed independently and without access to other information. Second, based solely on this data source, a preliminary conclusion is made regarding the gemstone's origin and any potential heat treatment it may have undergone. We use these sub-conclusions to create statistics and perform comparisons between Gemtelligence and human experts using individual and combined data sources.
Fig. 2 shows a comparison between Gemtelligence and human experts on the OD and TD tasks in terms of the number of stones they can confidently classify based on single or combined data sources 2 and the obtained levels of accuracy.
Footnote 2: For the sake of fairness, we conduct this comparison by considering only the data sources that were not used to determine the label of the test data. Specifically, we do not use ICP-MS data for OD as the test set was created such that the final conclusion had to match the ICP-MS sub-conclusion, which is regarded as the most reliable data source. For TD, the gemologists examine UV and FTIR spectra simultaneously, so we do not have access to separate statistics.
For Gemtelligence, we refer to a stone being confidently classified if the probability of the model associated with its final prediction exceeds the threshold value (see Fig. 1 c.). In this specific experiment, the threshold has been determined by calibrating the model on the training data to match or surpass the accuracy levels reached by human experts on the test data. For OD, human experts confidently classify a stone if they return a single prediction, rather than a list of possible candidates. In the case of TD, the expert is not confident if the uncertainty is too high to draw a final conclusion.
For all the considered combinations of data sources, Gemtelligence can provide confident predictions on substantially larger sets of stones (x-axis) than human experts, who are often unable to draw definitive conclusions due to uncertainty. Remarkably, Gemtelligence also achieves either comparable or significantly higher accuracy levels (y-axis) than human experts while delivering a final conclusion on much larger groups of stones. This experiment demonstrates that Gemtelligence improves the level of automation in gemstone analysis, significantly reducing the analysis time compared to human experts (who typically take several hours per stone) while achieving comparable or even higher levels of accuracy.
We proceed with our analysis by exploring how different threshold values yield different trade-offs between accuracy and automation. A higher threshold value results in more stones requiring further human expert analysis but potentially higher accuracy, while a lower threshold value may require less human expert analysis but potentially lower accuracy. Table 1 presents the performance of our model, in three operating setups, namely None, Mode 1 and Mode 2, each offering different trade-offs between automation and accuracy. For the last two modes, the threshold of Gemtelligence is chosen to ensure a specific accuracy rate (98% for Mode 1 and 99% for Mode 2) on the training stones that have a classification probability above this threshold.
For more details on the confidence-thresholding procedure, please refer to Methods section. For the None setup (second and fifth columns in Table 1), the model is not calibrated since the threshold is set to zero. This means that all predictions are accepted, resulting in the complete automation of the process (100% stones above the threshold) at the cost of higher error rates. However, in scenarios where the model's prediction uncertainty is higher, this configuration lacks preventive measures to mitigate the risk of potential errors.
Figure 2: Comparison between human experts (represented by crosses) and Gemtelligence (represented by circles) in terms of the size of the subset of stones that have been confidently classified (on the x-axis) and the corresponding level of accuracy achieved for this subset (on the y-axis). Each color corresponds to a different combination of data sources. All the combinations apart from the red one (UV+FTIR) are used for OD while UV+FTIR is used for TD. The dashed lines are used to highlight the performance change between humans and our model. The results in the plot are obtained by evaluating the performance of experts and Gemtelligence on test data.
For the other two setups, the threshold is non-zero, resulting in fewer accepted predictions and increased accuracy compared to the None setup. The lower threshold for Mode 1 compared to Mode 2 results in a significant increase in the number of confidently classified stones for both tasks. Compared to Mode 2, this setup greatly reduces the workload of gemologists, as the inference time of Gemtelligence is negligible (less than a second), whereas taking a final decision on a single stone can take several hours for human experts. Nevertheless, Mode 1 also leads to a slight reduction in test accuracy, although the results remain favorable and comparable to human capabilities.
In the field of gemstone analysis, the level of automation and accuracy of predictions can significantly impact the workload of gemologists and the value of stones. Depending on the specific use case, it may be advantageous to prioritize one over the other. As incorrect evaluations can significantly impact stone prices, Mode 2 mode represents a more conservative and low-risk configuration as it results in high accuracy levels, despite decreasing the level of automation of the model (number of stones confidently classified).
Influence of different data sources.Fig. 3 illustrates the relationship between Gemtelligence's accuracy and the number of confidently predicted stones for OD (left) and TD (right), where a stone is considered confidently classified by Gemtelligence if the model's probability associated with its final prediction exceeds the threshold value.
The figure reveals a consistent trend across all data sources and tasks: higher levels of confidence in
\begin{table}
\begin{tabular}{l|c c c|c c c} & \multicolumn{3}{c}{**Origin Determination**} & \multicolumn{3}{c}{**Heat Treatment Detection**} \\ \hline \hline
**Gemtelligence setup** & None & Mode 1 & Mode 2 & None & Mode 1 & Mode 2 \\ \hline
**Calibration accuracy** & None & 98\% & 99\% & None & 98\% & 99\% \\
**Stones above threshold** & 100.0\% & 74.2\% & 38.5\% & 100.0\% & 97.4\% & 95.5\% \\
**Test accuracy** & 90.69\% & 96.8\% & 99.1\% & 98.03\% & 98.7\% & 98.9\% \\ \end{tabular}
\end{table}
Table 1: Calibration accuracy used to determine the threshold (first row), the corresponding number of analyzed stones (second row), and test accuracy (third row) for the three considered operating setups both for OD and TD. OD was performed using UV and XRF and TD was done using UV and FTIR.
Figure 3: Accuracy (%) vs. stones above the threshold (%) for OD (Left) and TD (Right) with different data sources provided as input to the model. The x-axis represents the number of stones confidently classified by Gemtelligence with a probability greater than a fixed threshold value (see Fig. 1 (c)). Starting from the left and moving towards the right on the same axis, we gradually increase the threshold and indicate the number of stones that the model confidently classifies for each resulting subset, along with the corresponding accuracy (y-axis). Data sources not present in the legend are masked.
Gemtelligence's predictions lead to higher accuracy. In other words, when the model assigns a high probability to a certain class, it is more likely to be correct. The observed strong correlation between accuracy and confidence in this experiment validates the effectiveness of our confidence-thresholding procedure.
The results indicate that, for OD, using ICP-MS data leads to \(\sim\)\(4\%\) higher accuracy compared to the next best single data source, UV, across the entire range of values on the x-axis. This highlights the high-quality information provided by ICP-MS data for this task. Moreover, combining UV and XRF data sources yields a model with comparable performance to that obtained with ICP-MS data, despite the latter's higher complexity and cost. This suggests that combining standard and less expensive analytical data sources can be as effective as the more expensive ICP-MS method.
In TD, while the best results are achieved by integrating UV and FTIR data, Gemetelligence still reaches similar levels of accuracy when only utilizing FTIR data. This is noteworthy since experts typically rely on both data sources to reach a final conclusion in their analysis and suggests that Gemetelligence can serve as a valuable tool even in scenarios where using multiple data sources is not feasible.
Prediction consistency analysis.To evaluate the accuracy and reliability of Gemetelligence, we analyze if the network generates consistent results for the same gemstone when data is gathered from different instruments, at various times, and under varying conditions.
Since gemstones are often subject to multiple analyses during their lifespan, inconsistent evaluations can lead to doubts about the authenticity of the asset, as well as legal and financial complications. Thus, assessing whether the predictions have remained consistent over time is crucial.
Fig. 4 illustrates the predictions of Gemetelligence on the OD (left columns) and TD (right column) tasks for only those gemstones that underwent multiple evaluations over the years, for the None (first row), Mode 1 (second row), and Mode 2 (third row) model setups. Each stone in our collection that was assessed more than once is represented in each of the six panels by a line connecting the different evaluations. A black line is drawn if the model's predictions are consistent across evaluations, while a red line is drawn if they are not. Dots located at the extremes of the line indicate the predictions of Gemetelligence: the absence of a dot implies that Gemetelligence's confidence did not exceed the threshold and hence, a decisive conclusion could not be drawn. It is worth emphasizing that scenarios, where uncertainty prevents a change in prediction (indicated by black lines and no dots), are generally more desirable than inconsistent predictions (shown in red lines). Uncertainty prompts experts to carry out additional analyses to reduce the chances of making mistakes.
The results displayed in Fig. 4 demonstrate that even without a threshold (upper row), Gemtelligence exhibits a good level of consistency in its predictions, with only 23 inconsistent predictions out of 148 for OD and zero out of 62 for TD. However, when Gemtelligence is employed in the Mode 1 and Mode 2 setups (middle and bottom rows), all inconsistent predictions disappear as the model's outputs for such cases fall below the threshold, reflecting its uncertainty for those particularly challenging and ambiguous samples.
This experiment highlights the value of our confidence-thresholding methodology, which enables the user to disregard predictions that the model is not highly confident about, reducing the risk of incorrect predictions.
Figure 4: Gemtelligence predictions over time for OD (Left column) and TD (Right column). Each horizontal line is used to indicate a different stone that is analyzed multiple times over the years. (Upper row) None setup; (Middle row) Mode 1 setup; (Bottom row) Mode 2 setup.
## Conclusion
This study introduces Gemtelligence, a novel deep-learning approach for automated origin determination and heat treatment detection of gemstones. Gemtelligence is capable of handling complex and varied data structures and can enhance prediction accuracy by capturing correlations between different data modalities. Its architecture, based on transformers and convolutional neural networks, enables flexible gemstone classification using any combination of diverse data sources and allows for simultaneous end-to-end processing of tabular and spectral data. Gemtelligence provides numerous benefits. Firstly, its predictions are well calibrated as it outputs correct predictions with high confidence on a large percentage of test samples. This is in contrast to the expert-based evaluation, which provides confident predictions on a significantly smaller subset of stones. Secondly, Gemtelligence provides excellent results by taking as input inexpensive data sources only, hence limiting the reliance on more costly analytic methods, like ICP-MS.
Overall, Gemtelligence has the potential to drastically impact the gemstone industry. Its application can result in significant cost savings and can allow human experts to focus on more value-adding activities in the area of research and development. The deployment of Gemtelligence would be crucial in standardizing the gemstone analysis process, significantly reducing the incidence of ambiguities and increasing trust levels in the entire marketplace. In conclusion, we hope that our results, together with the code and data we will make publicly available, will stimulate more investigations in this domain and advance the creation of novel techniques and tools for gemstone analysis automation.
## Methods
### Data Sources
The following devices and methods were utilized to collect the data used in this study:
ICP-MS.For ICP-MS data, we used an Elemental Scientific (ESI) 193 nm excimer laser ablation system3 with a large-format sample chamber and a small-volume, flexible cup that collected the ablated material. Three ablations were created for each stone, having a 50-micrometer diameter spot size, 15 Hz repetition rate, and 6 J/cm\({}^{2}\) fluence. For each ablation, the materials were conveyed to the ICP via a blend of He (1000 ml/min) and Ar (700 ml/min) gases, where the material got ionized. Finally the ions were transported to an Agilent 8800 mass spectrometer, which measured the following elements / isotopes: \({}^{7}\)Li, \({}^{9}\)Be, \({}^{25}\)Mg, \({}^{27}\)Al, \({}^{29}\)Si, \({}^{45}\)Sc, \({}^{47}\)Ti, \({}^{49}\)Ti, \({}^{51}\)V, \({}^{52}\)Cr, \({}^{53}\)Cr, \({}^{55}\)Mn, \({}^{56}\)Fe, \({}^{57}\)Fe, \({}^{59}\)Co, \({}^{62}\)Ni, \({}^{71}\)Ga, \({}^{89}\)Y, \({}^{90}\)Zr, \({}^{93}\)Nb, \({}^{118}\)Sn, \({}^{140}\)Ce, \({}^{146}\)Nd, \({}^{176}\)Hf, \({}^{181}\)Ta, \({}^{193}\)Ir, \({}^{195}\)Pt. The acquired data was then processed from counts/second to concentration using Glitter [42], with NIST 612 [43] as the primary calibration standard and BHVO-2G [44] and ATHO-G [45] as secondary standards. A value of 99 wt% \(\rm Al_{2}O_{3}\) was used as an internal standard for all corundum. Following gemological-driven analysis, we focussed for our study on the following entries: \({}^{9}\)Be, \({}^{25}\)Mg, \({}^{27}\)Al, \({}^{45}\)Sc, \({}^{49}\)Ti, \({}^{51}\)V, \({}^{53}\)Cr, \({}^{57}\)Fe, \({}^{62}\)Ni, \({}^{71}\)Ga, \({}^{90}\)Zr, \({}^{118}\)Sn, \({}^{140}\)Ce, \({}^{146}\)Nd, \({}^{176}\)Hf, \({}^{181}\)Ta.
FTIR. Non-polarized FTIR spectra were collected in air using a Varian 640 FTIR spectrometer equipped with a KBr beam splitter and a deuterated triglycine sulfate (DTGS) detector. For each sample, three measurements in perpendicular directions were conducted either using diffuse reflectance (DRIFT) or with transmitted light. For each measurement, a total of 64 scans with a resolution of 1 cm\({}^{-1}\) to 4 cm\({}^{-1}\) were collected and averaged. This was done for the wavenumber range of 200 cm\({}^{-1}\) to 7000 cm\({}^{-1}\), with a background collected at regular intervals. As the measurements had different intervals and offsets due to differences in software version and settings, we homogenized the data so that every spectrum had a step size
of 1 cm\({}^{-1}\). This was done by a cubic spline interpolation on the available data. As not all data were collected over the range of 200 cm\({}^{-1}\) to 7000 cm\({}^{-1}\), we padded the missing values with zeroes. Further, any spectra which had a measurement with a value smaller than -5 or greater than 10 were dropped as these values were extreme outliers and not in the expected range of the measurement. This filtering reduced the data set by less than 1%. The resulting spectral data consisted of 6801 data points per measurement.
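For illustration, a minimal Python sketch of this homogenization step is given below; the function name, the handling of the common grid, and the rejection logic are our own assumptions, not the published preprocessing pipeline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

TARGET_GRID = np.arange(200, 7001)  # 1 cm^-1 steps over 200-7000 cm^-1 (6801 points)

def homogenize_ftir(wavenumbers, absorbance):
    """Resample one FTIR spectrum onto the common 1 cm^-1 grid.

    Regions outside the measured range are zero-padded; spectra containing
    values below -5 or above 10 are rejected as extreme outliers (None).
    """
    if absorbance.min() < -5 or absorbance.max() > 10:
        return None  # removes less than 1% of the data set
    order = np.argsort(wavenumbers)
    spline = CubicSpline(wavenumbers[order], absorbance[order])
    spectrum = np.zeros(TARGET_GRID.size)
    inside = (TARGET_GRID >= wavenumbers.min()) & (TARGET_GRID <= wavenumbers.max())
    spectrum[inside] = spline(TARGET_GRID[inside])
    return spectrum
```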
XRF. XRF (ED-XRF) measurements of major, minor, and trace elements were conducted using a Thermo Fisher Scientific QUANTX, with a silicon drift detector (SDD), a 1 mm collimator, and an applied energy range of 4-50 kV, with a variety of filters used to reduce spectral interferences on critical elements. For blue sapphires, the only minor and trace elements consistently detectable are Ti, Cr, V, Fe, and Ga, along with the major element Al. In order to identify treatments or synthetic samples, Pb, W, and Pt were included during most blue sapphire measurement routines. For blue sapphires, we discarded an ED-XRF measurement if any of the following conditions was met:
* The \(\mathrm{Fe_{2}O_{3}}\) value is above 40'000 ppm
* The \(\mathrm{Al_{2}O_{3}}\) value is under 850'000 ppm
* The \(\mathrm{Cr_{2}O_{3}}\) value is above 10'000 ppm
* The \(\mathrm{TiO_{2}}\) value is above 6'000 ppm
Such outliers do occur occasionally in XRF measurements due to various reasons such as diffraction peaks induced by the crystal structure of the minerals. ED-XRF data is tabular in nature, having 26 entries describing the concentration of certain chemical compounds.
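A small sketch of this screening step is shown below; the dictionary keys and the function name are illustrative assumptions and may not match the compound names stored in the actual data set:

```python
# Bounds (in ppm) beyond which a blue-sapphire ED-XRF measurement is discarded.
XRF_LIMITS = {
    "Fe2O3": ("max", 40_000),
    "Al2O3": ("min", 850_000),
    "Cr2O3": ("max", 10_000),
    "TiO2": ("max", 6_000),
}

def keep_xrf_measurement(row):
    """Return True if a measurement (dict of oxide -> concentration in ppm) passes all checks."""
    for oxide, (kind, limit) in XRF_LIMITS.items():
        value = row.get(oxide)
        if value is None:
            continue
        if kind == "max" and value > limit:
            return False
        if kind == "min" and value < limit:
            return False
    return True
```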
UV. Polarised UV (UV-Vis-NIR) spectra were collected using a Varian (now Agilent) Cary 5000, using deuterium and tungsten halogen light sources and an indium gallium arsenide (InGaAs) detector. Measurements were performed over the wavelength range 280-880 nm with a step size of 0.5 nm, using both a reference and a sample line equipped with polarisers and beam condensers. In most cases, two measurements in perpendicular polarisations were taken on each sample. In the case of a single measurement, the measurement was duplicated to be consistent with the two-polarisation measurements. As the absorbance cannot be negative, any spectra with negative values, which could occur due to faulty measurements, were discarded. The resulting final data sample consisted of 2 x 1201 entries.
During the time period from which the data included in this study were obtained, several variants of these instruments were used and a minority of data were collected on other instruments, not detailed here. Data consistency between these different models was maintained through standardized acquisition protocols and the use of identical calibration and secondary reference materials.
### Gemtelligence Architecture
Gemtelligence is an artificial neural network created to process multi-modal data from gemological laboratories. It is composed of a UV encoder, an FTIR encoder, and a single elemental analysis encoder that processes XRF (and optionally ICP-MS jointly) data. The encoders generate embeddings which are then combined by the network's head. This head comprises a concatenation layer to combine the encoders' outputs, batch normalization, and a final linear classification layer.
The UV and FTIR encoders are strided convolutional neural networks with skip connections as proposed in [33]. At the core of their architecture, there are six residual connection layers, each with a hidden dimension of 128, kernel size of 17, and strides of 2. These blocks are preceded by a first convolution layer of kernel size
59. For UV measurements, which involve two spectra taken in perpendicular directions, the input channel of the first convolution layer has a dimension of two, while for FTIR measurements, a single spectrum is used and the first convolution has a dimension of one. After the skip connection blocks, the FTIR and UV encoders employ a single convolution channel mapping the hidden dimension from 128 to 1, resulting in final embeddings of length 213 and 190 respectively. The parameter selection was based on a preliminary grid search. Particularly, we found that a smaller kernel size or fewer residual connection blocks caused a decrease in performance, while larger dimensions resulted in high memory usage and slow training with no increase in accuracy. The elemental analysis encoder is based on the SAINT framework introduced by [34], which was specifically designed to provide a sample-efficient deep learning method for tabular data. We opted to follow the Both configuration of the original paper, which deploys both intrasample and intersample attention mechanisms. Intrasample attention is a standard self-attention mechanism, operating on input features (rows), while intersample attention compares specific input features across different samples (columns). Our implementation follows the same hyper-parameters as described in the original paper. However, since our setting only has one sample at inference time rather than a batch, we decided to pre-append a series of reference stones to the batch for both training and testing to avoid a shift in the distribution caused by the intrasample mechanism. XRF and ICP-MS data are concatenated before the encoder allowing for the model to learn dependencies between both data types. The output tensor of the elemental analysis encoder has a single hidden dimension with a length of 32.
Finally, the head of the network is composed of a concatenation layer, followed by a batch normalization and a readout layer. The concatenation layer combines the one-dimensional embeddings from the UV, FTIR, and elemental analysis encoders into a single tensor by concatenating them along the time dimension. This tensor is then fed into the batch normalization and classification layers. The readout layer is composed of a linear layer with a softmax activation function, and its output is the final classification probability for each class.
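The layer structure described above can be sketched in PyTorch as follows. Activation functions, padding, and the exact placement of the strided convolutions are not specified in the text and are assumptions here, so this sketch will not necessarily reproduce the quoted embedding lengths (190, 213, and 32); it is meant only to illustrate the overall encoder/head layout:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Strided 1-D residual block (hidden dim 128, kernel 17, stride 2)."""
    def __init__(self, channels=128, kernel=17, stride=2):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel, stride=stride, padding=kernel // 2)
        self.skip = nn.Conv1d(channels, channels, 1, stride=stride)  # match lengths on the skip path
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x) + self.skip(x))

class SpectralEncoder(nn.Module):
    """UV (2 input channels) or FTIR (1 input channel) encoder."""
    def __init__(self, in_channels):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, 128, kernel_size=59, padding=29)  # first conv, kernel 59
        self.blocks = nn.Sequential(*[ResBlock() for _ in range(6)])         # six residual blocks
        self.readout = nn.Conv1d(128, 1, kernel_size=1)                      # map hidden dim 128 -> 1

    def forward(self, x):  # x: (batch, in_channels, n_points)
        return self.readout(self.blocks(self.stem(x))).flatten(1)

class GemtelligenceHead(nn.Module):
    """Concatenate encoder embeddings, batch-normalize, and classify."""
    def __init__(self, embedding_dim, n_classes):
        super().__init__()
        self.norm = nn.BatchNorm1d(embedding_dim)
        self.classifier = nn.Linear(embedding_dim, n_classes)

    def forward(self, uv_emb, ftir_emb, elem_emb):
        z = torch.cat([uv_emb, ftir_emb, elem_emb], dim=1)
        return torch.softmax(self.classifier(self.norm(z)), dim=1)
```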
### Training and Testing Gemtelligence
#### Training details
For training our model, we randomly partitioned the training data into 80% for training and 20% for validation, saving the model's weights every 5 epochs over the 250 training epochs. We then picked the best model in terms of accuracy from the saved weights. During training, the batch size was set to 16 and the learning rate was set to 0.0001, with an automatic decay by a factor of 10 if there was no improvement for more than 10 epochs. To allow the model to learn to handle missing data, we randomly masked one data source with a probability of 0.7 during training, replacing the values with the mean value across the dataset. We did not perform any data normalization or augmentation, as it was found to be detrimental in early experiments. We repeated the same procedure for each of the five folds in the cross-validation procedure. In each fold, the test data was used neither for training nor for validation.
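A possible implementation of this source-masking augmentation is sketched below; whether the masking is applied per sample or per batch, and the exact shape of the stored mean values, are assumptions:

```python
import random

MASK_PROB = 0.7  # probability of masking one data source per training sample

def mask_one_source(sample, source_means):
    """Randomly replace one data source with its dataset-wide mean values.

    `sample` and `source_means` are dicts keyed by data source
    (e.g. 'uv', 'ftir', 'elemental') holding tensors of matching shapes.
    """
    sample = dict(sample)
    if random.random() < MASK_PROB:
        source = random.choice(sorted(sample))
        sample[source] = source_means[source].clone()
    return sample
```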
In order to generate the final results from the folds, we followed the procedures laid out in [46]. Specifically, for all results apart from Fig. 3, we concatenated the predictions of Gemtelligence from each fold and then calculated the final statistic. For Fig. 3 we first computed the curve in each fold, and then obtained the final curve by computing the average of the curves from each fold. We conducted all the experiments on a machine equipped with an NVIDIA GeForce RTX 2080 Ti with 12 CPU cores.
#### Confidence-thresholding procedure
The purpose of our confidence-thresholding procedure is to determine the reliability of the model's prediction based on the associated level of confidence. More formally, we define the model's confidence \(c\) for a given prediction \(p\) as the maximum value of the last softmax layer. A reliable prediction is defined as a prediction
for which \(c\) is greater than some predefined threshold \(\hat{c}\) (e.g., 0.95). To determine the value of \(\hat{c}\), we perform the following steps: first, we compute the model's predictions \(\{p_{i}\}_{i=1}^{N}\) and associated confidence values \(\{c_{i}\}_{i=1}^{N}\) for each stone in the training set after training. Then, we sort the stones by confidence values from lowest to highest (i.e. \(c_{(1)}\leq c_{(2)}\leq...\leq c_{(N)}\)). Next, we iteratively compute the accuracy of the subset of stones with the least confidence removed until the subset accuracy is greater than a pre-specified value \(\epsilon\) (e.g. 95%). We define the accuracy of the subset of stones corresponding to the entire dataset minus the \(k\) stones with the smallest confidence values as \(\mathcal{A}_{N-k}\). At inference time, the threshold of the least confident stone in this subset is used to decide if a model prediction is reliable. Specifically, we set \(\hat{c}=c_{(k^{*})}\), where \(k^{*}\) is the smallest index \(k^{\prime}\in\{1,2,\ldots,N\}\) that satisfies \(\mathcal{A}_{N-k^{\prime}}\geq\epsilon\).
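The procedure can be summarised by the following sketch; the handling of the \(k=0\) case and of ties in the confidence values is an assumption:

```python
import numpy as np

def calibrate_threshold(confidences, predictions, labels, epsilon=0.95):
    """Return the confidence threshold c_hat estimated on the training set.

    Stones are sorted from least to most confident; the least-confident
    stones are dropped one at a time until the accuracy of the remaining
    subset first reaches `epsilon`. The threshold is the confidence of the
    least confident stone in that retained subset.
    """
    confidences = np.asarray(confidences)
    order = np.argsort(confidences)                       # c_(1) <= ... <= c_(N)
    correct = (np.asarray(predictions) == np.asarray(labels))[order].astype(float)
    conf_sorted = confidences[order]
    for k in range(conf_sorted.size):
        if correct[k:].mean() >= epsilon:                  # accuracy of the N-k retained stones
            return conf_sorted[k]
    return conf_sorted[-1]                                 # fallback: keep only the most confident stone

def is_reliable(confidence, c_hat):
    """A prediction is treated as reliable if its confidence reaches the threshold."""
    return confidence >= c_hat
```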
|
2309.11390 | TOI-858 B b: A hot Jupiter on a polar orbit in a loose binary | We report the discovery of a hot Jupiter on a 3.28-day orbit around a 1.08
M$_{Sun}$ G0 star that is the secondary component in a loose binary system.
Based on follow-up radial velocity observations of TOI-858 B with CORALIE on
the Swiss 1.2 m telescope and CHIRON on the 1.5 m telescope at the Cerro Tololo
Inter-American Observatory (CTIO), we measured the planet mass to be $1.10\pm
0.08$ M$_{J}$ . Two transits were further observed with CORALIE to determine
the alignment of TOI-858 B b with respect to its host star. Analysis of the
Rossiter-McLaughlin signal from the planet shows that the sky-projected
obliquity is $\lambda = 99.3\pm 3.8$. Numerical simulations show that the
neighbour star TOI-858 A is too distant to have trapped the planet in a
Kozai-Lidov resonance, suggesting a different dynamical evolution or a
primordial origin to explain this misalignment. The 1.15 Msun primary F9 star
of the system (TYC 8501-01597-1, at $\rho$ ~11") was also observed with CORALIE
in order to provide upper limits for the presence of a planetary companion
orbiting that star. | J. Hagelberg, L. D. Nielsen, O. Attia, V. Bourrier, L. Pearce, J. Venturini, J. N. Winn, F. Bouchy, L. G. Bouma, C. Briceño, K. A. Collins, A. B. Davis, J. D. Eastman, P. Evans, N. Grieves, N. M. Guerrero, C. Hellier, M. I. Jones, D. W. Latham, N. Law, A. W. Mann, M. Marmier, G. Ottoni, D. J. Radford, N. Restori, A. Rudat, L. Dos Santos, S. Seager, K. Stassun, C. Stockdale, S. Udry, Songhu Wang, C. Ziegler | 2023-09-20T15:15:42Z | http://arxiv.org/abs/2309.11390v1 | # TOI-858 B b: A hot Jupiter on a polar orbit in a loose binary+
###### Abstract
We report the discovery of a hot Jupiter on a 3.28-day orbit around a 1.08 M\({}_{\odot}\) G0 star that is the secondary component in a loose binary system. Based on follow-up radial velocity observations of TOI-858 B with CORALIE on the Swiss 1.2 m telescope and CHIRON on the 1.5 m telescope at the Cerro Tololo Inter-American Observatory (CTIO), we measured the planet mass to be \(1.10^{+0.08}_{-0.07}\) M\({}_{\rm J}\). Two transits were further observed with CORALIE to determine the alignment of TOI-858 B b with respect to its host star. Analysis of the Rossiter-McLaughlin signal from the planet shows that the sky-projected obliquity is \(\lambda=99.3^{+3.8}_{-3.7}\) deg. Numerical simulations show that the neighbour star TOI-858 A is too distant to have trapped the planet in a Kozai-Lidov resonance, suggesting a different dynamical evolution or a primordial origin to explain this misalignment. The 1.15 M\({}_{\odot}\) primary F9 star of the system (TYC 8501-01597-1, at \(\rho\sim\)11\({}^{\prime\prime}\)) was also observed with CORALIE in order to provide upper limits for the presence of a planetary companion orbiting that star.
## 1 Introduction
The thousands of exoplanets that have already been discovered show not only a wide variety in size, density, interior, and atmospheric structure but also in orbital configuration. With every survey based on new or improved detection and characterisation techniques, this variety grows (see reviews in Udry & Santos 2007; Bowler 2016; Zhu & Dong 2021, and references therein).
Such surveys have made it possible to highlight the strong connection between a host star's properties and those of its orbiting planets, which are often directly linked to their formation history, such as metallicity favouring giant planet formation (e.g., Santos et al. 2004; Fischer & Valenti 2005). These surveys also revealed that small-planet occurrence is not affected by the
stellar host metallicity (Sousa et al., 2008; Buchhave et al., 2012), and that late-type stars tend to host smaller planets (Bonfils et al., 2013; Mulders et al., 2015).
Another important link has been shown with the presence of a stellar companion, such as the suppression of planet formation in close binaries, with a limit found to be at 47 au or 58 au by Kraus et al. (2016) and Ziegler et al. (2021), respectively. Indeed, Hirsch et al. (2021) reported a strong drop in planet occurrence rate at binary separations of 100 au, with planet occurrence rates of \(\sim 20\%\) for binary separations larger than 100 au and of 4% within 100 au. Moreover, an overabundance, by a factor of about three, of hot Jupiters has also been noted when a stellar companion was found (Law et al., 2014; Ngo et al., 2016; Wang et al., 2017), as well as an increase in eccentricities (Moutou et al., 2017). To a wider extent, Fontanive & Bardalez Gagliuffi (2021) have shown that the influence of a stellar companion is strongest for high-mass planets and short orbital periods. However, these demographic results are mainly inferred from the planetary parameters accessible through transit and radial velocity (RV) observations, such as the orbital periods, masses, radii, eccentricities, stellar host properties, and dynamics in the case of systems with multiple planets and even multiple stars.
An important additional parameter that is difficult to obtain is the orbital obliquity, or spin-orbit angle, which is the angle between the stellar spin axis and the axis of the orbital plane. Various approaches have been developed to measure it, namely, the analysis of disc-integrated RVs (Queloz et al., 2010), Doppler tomography (Collier Cameron et al., 2010), and the reloaded Rossiter-McLaughlin effect (Rossiter, 1924; McLaughlin, 1924; Cegla et al., 2016), which requires performing high-resolution spectroscopy during a planet's transit.
The spin-orbit angle is a tracer of a planet's history, shedding light on its formation and evolution (see reviews by Triaud, 2018 and Albrecht et al., 2022). Measuring the spin-orbit angle can help one distinguish between a smooth dynamical history that keeps the system aligned, such as migration within the protoplanetary disc (e.g., Winn & Fabrycky, 2015), and more disruptive scenarios in which the planetary orbit is tilted through gravitational interactions with the star or with outer companions (e.g., Fabrycky & Tremaine, 2007, Teyssandier et al., 2013). Misaligned spin-orbit angles can have either a primordial (i.e., occurring during the star-planet formation) or a post-formation origin (taking place after disc dispersal). Primordial mechanisms, such as magnetic warping or chaotic accretion, tend to produce mild misalignments (Albrecht et al., 2022), unless a distant stellar companion exists, which can either enhance the magnetic warping of the inner disc (Foucart & Lai, 2011) or gravitationally tilt the disc (Batygin, 2012). While the latter process has recently been disfavoured due to planet-star coupling damping the effect (Zanazzi & Lai, 2018), the magnetic warping aided by a stellar companion could lead to a large distribution of obliquities, including retrograde planets (Foucart & Lai, 2011). Post-formation misalignments are driven by gravitational interactions involving a third body (planet or star). While planet-planet scattering can produce misalignments of up to 60 degrees (Chatterjee et al., 2008), a close encounter with another star would yield an isotropic distribution of obliquities, although this process is only expected in a very dense cluster environment (Hamers & Tremaine, 2017). The post-formation gravitational interactions can also lead to high-eccentricity tidal migration, where a planet that is originally far from its central star acquires high eccentricity, and its orbit is later circularised (and shrunk) due to the strong stellar tides suffered near periastron. In this process, the driver of the increase in eccentricity is the Kozai-Lidov effect induced by a third massive external companion, such as a star or a brown dwarf (Kozai, 1962). This mechanism has been proposed as a source of hot Jupiters (Dawson & Johnson, 2018) and can also lead to spin-orbit misalignment (Fabrycky & Tremaine, 2007). Recently, Vick et al. (2023) showed that if the system already has a primordial misalignment (from companion-disc interaction, stellar spin-disc interaction, and disc dispersal), Kozai-Lidov oscillations lead to predominantly retrograde stellar obliquities, with the distribution of angles peaking at a misalignment of around 90 degrees (i.e., polar orbits) for hot Jupiters.
In this work, we focused our exoplanet search around the wide stellar binary system TOI-858 A-B, which has an angular separation of \(\rho\sim\)11\({}^{\prime\prime}\) (\(\sim\)3000 au; Gaia Collaboration et al., 2021). Wide binaries (i.e., binary separations in the range of 300-20'000 au) are common in our galaxy (Offner et al., 2023), yet their origin remains a mystery because they easily become disrupted in dense star-forming regions (Offner et al., 2023). The most accepted formation channel for wide binaries is during the phase of star cluster dissolution (Kouwenhoven et al., 2010; Moeckel & Bate, 2010).
In this study, we apply the latest Rossiter-McLaughlin technique (i.e., Revolutions, or RMR; Bourrier et al., 2021) to two transits of TOI-858 B b observed with the CORALIE spectrograph on the 1.2 m Swiss telescope at La Silla Observatory (Sect. 4.1) in order to measure the spin-orbit angle between TOI-858 B b and TOI-858 B. The paper is structured as follows. We first describe the discovery and follow-up observations of the 1.08 M\({}_{\odot}\) G0 (Pecaut & Mamajek, 2013) star TOI-858 B and its nearby and similarly bright (\(\Delta\)mag\({}_{TESS}\) = 0.248) 1.15 M\({}_{\odot}\) F9 stellar companion TOI-858 A (Sect. 2). Then, we present the joint analysis of the acquired data leading to the confirmation of the planet (Sect. 3). Further investigation of the orbital architecture of the TOI-858 B system along with the link to the nearby star TOI-858 A is given in Sect. 4. Finally, the results are discussed in Sect. 5.
## 2 Observations
A summary of the photometry and high-resolution spectroscopy data used in the joint analysis of TOI-858 B b can be found in Tables 1 and 2. Additionally, SOAR speckle imaging was used to rule out nearby stellar companions, as described in Sect. 2.5.
### TESS discovery photometry
The star TOI-858 B (TIC 198008005) was observed by the Transiting Exoplanet Survey Satellite (_TESS_ - Ricker et al., 2015) in Camera 3 in sectors 3, 4, 29, 30, and 31. For the first two sectors, 30-min cadence full-frame images (FFIs) are available. TOI-858 B b was identified as a TESS Object of Interest (TOI) based on the MIT Quick Look Pipeline data products (QLP; Huang et al., 2020a,b). In our joint analysis, for sectors 3 and 4, we used the extracted photometry from the QLP pipeline. During the third year of the _TESS_ mission, TOI-858 B was observed with a 2-min cadence in sectors 29, 30, and 31. The light curves used in our joint analysis from these sectors are the publicly available Simple Aperture Photometry flux with Pre-search Data Conditioning (PDC-SAP; Stumpe et al., 2014, 2012; Smith et al., 2012; Jenkins et al., 2010) provided by the Science Processing Operations Center (SPOC; Jenkins et al., 2016).
Within the large 21\({}^{\prime\prime}\)_TESS_ pixels, TOI-858 B is completely blended with a slightly brighter star, TOI-858 A (TIC-198008002). The two sources are separated by 10.7\({}^{\prime\prime}\). Based on
the _TESS_ data alone, it is not possible to determine around which of the two stars the transits are taking place. A combination of seeing-limited ground-based photometry and RV measurements helped determine which of the two stars hosts the planet (discussed in Sects. 2.2 and 2.4).
### Ground-based follow-up photometry
The _TESS_ pixel scale is \(\sim 21\arcsec\) pixel\({}^{-1}\), and photometric apertures typically extend out to roughly 1 arcminute, generally causing multiple stars to blend in the _TESS_ photometric aperture. To determine the true source of the _TESS_ detection, we acquired ground-based time-series follow-up photometry of the field around TOI-858 B as part of the _TESS_ Follow-up Observing Program (TFOP; Collins et al. 2018).1 The follow-up light curves were also used to confirm the transit depth and thus the _TESS_ photometric de-blending factor as well as to refine the _TESS_ ephemeris and place constraints on transit depth differences across optical filter bands. We used the TESS Transit Finder, which is a customised version of the Tapir software package (Jensen 2013), to schedule our transit observations. The photometric data were extracted using the AstroImageJ (AI) software package (Collins et al. 2017). The observations are summarised in the following subsections and in Table 3. As shown in Fig. 1, they confirm that the _TESS_-detected transit-like event is occurring around TOI-858 B.
Footnote 1: [https://tess.mit.edu/followup](https://tess.mit.edu/followup)
#### 2.2.1 Hazelwood Observatory
We observed an ingress of TOI-858 B b in Sloan \(i^{\prime}\)-band on UTC 2019 August 28 from Hazelwood Observatory near Churchill, Victoria, Australia. The 0.32 m telescope is equipped with a \(2184\times 1472\) SBIG STT3200 camera. The image scale is 0.55\({}^{\prime\prime}\) pixel\({}^{-1}\), resulting in a \(20^{\prime}\times 14^{\prime}\) field of view. The photometric data were extracted using a circular 5.9\({}^{\prime\prime}\) photometric aperture.
#### 2.2.2 Brierfield Observatory
We observed a full transit in B-band on UTC 2019 November 5 from Brierfield Observatory near Bellingen, New South Wales, Australia. The 0.36 m telescope is equipped with a \(4096\times 4096\) Moravian 16803 camera. The image scale after binning 2\(\times\)2 is 1.47\({}^{\prime\prime}\) pixel\({}^{-1}\), resulting in a \(50^{\prime}\times 50^{\prime}\) field of view. The photometric data were extracted using a circular 4.4\({}^{\prime\prime}\) photometric aperture.
#### 2.2.3 Evans 0.36 m Telescope
We observed a full transit in B-band on UTC 2019 December 5 from the Evans 0.36 m telescope at El Sauce Observatory in Coquimbo Province, Chile. The telescope is equipped with a \(1536\times 1024\) SBIG STT-1603-3 camera. The image scale after binning 2\(\times\)2 is 1.47\({}^{\prime\prime}\) pixel\({}^{-1}\), resulting in an \(18.8^{\prime}\times 12.5^{\prime}\) field of view. The photometric data were extracted using a circular 5.9\({}^{\prime\prime}\) photometric aperture.
### WASP-South archival photometry
The TOI-858 B system was observed in 2010 and 2011 by the WASP-South survey when it was equipped with 200-mm, f/1.8 lenses observing with a 400 - 700 nm passband (Pollacco et al. 2006). The 48-arcsec extraction aperture encompasses both stars of the binary. A total of 6445 photometric data points were obtained, spanning 110 nights starting in 2010 August and then another 170 nights starting 2011 August. The standard WASP transit-search algorithms (Collier Cameron et al. 2006) detect the 3.27-d periodicity (though the object was never selected as a WASP candidate) and report an ephemeris of JD(TDB) = \(245\,5982.41565(13)+E\times 3.279765(13)\).
### High-resolution follow-up spectroscopy
The high-resolution spectrographs CORALIE, HARPS, and CHIRON were utilised to fully characterise the TOI-858 B system. Multi-epoch monitoring allowed us to measure the reflex motion of the star(s) induced by the planet. Furthermore, we observed two spectroscopic transits of TOI-858 B b with the aim of determining the spin-orbit angle of TOI-858 B b.
#### 2.4.1 CORALIE
Both TOI-858 B and TOI-858 A were observed with the CORALIE spectrograph mounted on the Swiss 1.2 m Euler telescope at La Silla Observatory, Chile (Queloz et al. 2001b).
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline \multicolumn{1}{c}{Obs date} & \multicolumn{1}{c}{Source} & Filter & lin. limb-dark. & quad. limb-dark. & Dilution factor \\ UT & & & \(u_{1}\) & \(u_{2}\) & \(A_{D}\) \\ \hline
2010-08-13 – 2012-01-22 & WASP-South 200 mm & R & \(0.367\pm 0.038\) & \(0.287\pm 0.036\) & \(0.646\pm 0.022\) \\
2012-09-02 – 2014-12-01 & WASP-South 85 mm & R & \(0.367\pm 0.038\) & \(0.287\pm 0.036\) & \(0.646\pm 0.022\) \\
2018-09-20 – 2018-11-13 & _TESS_ 30 min FFI s3 + s4 & TESS & \(0.315\pm 0.028\) & \(0.293\pm 0.034\) & \(-0.00001\pm 0.00030\) \\
2019-08-28 & Hazelwood & i’ & \(0.268\pm 0.050\) & \(0.263\pm 0.049\) & – \\
2019-11-05 & Brierfield & B & \(0.657\pm 0.042\) & \(0.159^{+0.040}_{-0.039}\) & – \\
2019-12-05 & Evans at El Sauce & B & \(0.657\pm 0.042\) & \(0.159^{+0.040}_{-0.039}\) & – \\
2020-08-26 – 2020-11-13 & _TESS_ 2’ SPOC s29, s30, s31 & TESS & \(0.315\pm 0.028\) & \(0.293\pm 0.034\) & \(-0.00001\pm 0.00030\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the discovery _TESS_ photometry, archival WASP-South photometry, and ground-based follow-up photometry of TOI-858 B.
\begin{table}
\begin{tabular}{l r r} \hline \hline & CORALIE & CHIRON \\ \hline Obs date (UT) & 2019.08.13 & 2019.08.13 \\ & \(-\)2021.01.18 & \(-\)2019.09.02 \\ No. of observations & 33 RVs & 7 RVs \\ Relative RV Offset (m/s) & \(64361\pm 7\) & \(86^{+24}_{-24}\) \\ RV Jitter (m/s) & \(3^{+12}_{-0}\) & \(55^{+23}_{-20}\) \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the RV observations of TOI-858 B.
CORALIE has a resolution of \(R=60,000\) and is fed by a 2\({}^{\prime\prime}\) on-sky A fibre. An additional B fibre can be used to either provide simultaneous Fabry-Perot (FP) RV drift monitoring or on-sky monitoring of the background contamination.
We obtained 22 spectra between 2019 August 13 and 2021 January 18 UT with simultaneous FP to monitor the RV of TOI-858 B as it was orbited by TOI-858 B b. The exposure times ranged between 1200 and 1800 s, depending on the observing schedule, resulting in an average S/N per pixel of 15 at 5500 A.
Two spectroscopic transits were also observed on 2019 December 5 and 2021 January 18 UT. On both nights, we obtained 11 spectra with individual exposure times of 1800 s. During the first visit (2019-12-05), one spectrum was obtained before transit, seven spectra during transit, and three after transit. We observed without any simultaneous wavelength calibration in order to enable correction for sky contamination monitored with the B fibre. In this mode, the wavelength solution originates from the calibration acquired during daytime. Instrumental drift during the night is not taken into account but was expected to be smaller than or on par with the expected RV uncertainty for this target. After receiving ambiguous results, we repeated the observations on the second night (2021-01-18) with simultaneous FP. During this visit, we took one spectrum before transit, six spectra during transit, and four after transit.
For TOI-858 A, we acquired nine CORALIE spectra between 2019 October 17 and 2020 March 12 UT in order to check for additional stars and giant planets in the system. All spectra were taken with simultaneous FP, and nearly all had exposure times of 1800 s, while one was set to 1200 s. The average S/N per pixel was 19 at 5500 A.
All spectra were reduced with the standard CORALIE Data Reduction Software (DRS). Spectra for both stars were cross-correlated with a G2 binary mask (Baranne et al., 1996) to extract RV measurements as well as cross-correlation function (CCF) line diagnostics, such as bisector-span (Queloz et al., 2001) and full width at half maximum (FWHM). We also derived H-\(\alpha\) activity indicators for each spectrum. Two spectra taken on 2019 September 29 and 2019 September 30 UT were rejected by the automatic quality control of the DRS due to a large instrument drift of more than 150 m s\({}^{-1}\), which can lead to less precise drift correction of a few m s\({}^{-1}\). For the analysis of TOI-858 B b, we deemed these drift-corrected RVs to still be useful, as the obtained RV uncertainties are larger than the error on the drift correction. Furthermore, the measured RV semi-amplitude is more than 100 m s\({}^{-1}\)(Tab. A.1 and A.3).
#### 2.4.2 HARPS
With the HARPS spectrograph on the 3.6 m ESO telescope at the La Silla Observatory, Chile (Pepe et al., 2002), half a spectroscopic transit was observed on 2019 December 5. This took place during technical time, when the new HARPS+NIRPS front end (Bouchy et al., 2017) was being commissioned. In total, six spectra were obtained in high efficiency mode (EGGS), which trades spectral resolution for up to twice the throughput by using a slightly larger on-sky fibre (1.4 \({}^{\prime\prime}\)) than the high accuracy mode (HAM, 1\({}^{\prime\prime}\)) but is still small enough to prevent contamination from the secondary star. The first two spectra have exposure times of 900 s, which was then decreased to 600 s to get a better time resolution during transit. The spectra have S/N 40-30 per pixel at 5500 A, and RV uncertainties of \(3.5-5\) m s\({}^{-1}\). Only one spectrum was taken out of transit.
All HARPS spectra were reduced with the offline HARPS DRS hosted at the Geneva Observatory. Using a sufficiently wide velocity window of 60 km s\({}^{-1}\), CCFs were derived with a G2 binary mask.
#### 2.4.3 CHIRON
We obtained spectroscopic data of TOI-858 B using the CHIRON spectrograph (Tokovinin et al., 2013), a high-resolution fibre-fed spectrograph mounted on the 1.5 m telescope at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. We obtained in total seven different spectra between 2019 August 9 and 2019 September 1. For these observations, we used the image slicer mode (resolving power \(\sim\) 80,000) and an exposure time of 600 s, leading to a relatively low S/N per pixel of \(\sim\) 8-10 at 5500 A and RV uncertainties of 20-40 m s\({}^{-1}\). In addition, a ThAr lamp was taken before each observation, from which a new wavelength solution was computed. The data were reduced using the Yale pipeline, and the RVs were computed following the method described in Jones et al. (2019), which has shown a long-term RV precision on \(\tau\) Ceti of \(\sim\) 10-15 m s\({}^{-1}\). We note that our method computes relative RVs with respect to a template that is built by stacking all of the individual spectra. Therefore, the systemic velocity is not included in the final RVs, which explains the large offset between the CHIRON and CORALIE velocities. The Barycentric Julian Date (BJD), RV, and the corresponding 1\(\sigma\) RV uncertainties are listed in Table A.2.
### Speckle imaging
Additional nearby companion stars not previously detected in seeing-limited imaging can result in photometric contamination, reducing the apparent transit depth. We searched for nearby sources of TOI-858 B with SOAR speckle imaging (Tokovinin, 2018) on 2019 August 8 UT in I-band, a visible bandpass similar to _TESS_. Further details of the observations from the SOAR TESS survey are available in Ziegler et al. (2020). We detected no nearby stars within 3\({}^{\prime\prime}\) of TOI-858 B within the 5\(\sigma\) detection sensitivity of the observation, which is plotted along with the speckle auto-correlation function (ACF) in Fig. 2.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Observatory & Aperture & Filter & Date & Start & End & Length & Exp. Time & Airmass & Comp. & Precision \\ & (m) & & (UTC) & (UTC) & (UTC) & (min.) & (sec.) & Range & Stars (n) & (ppt/10 min) \\ \hline Hazelwood & 0.32 & \(i^{\prime}\) & 2019-08-28 & 15:23 & 19:36 & 253 & 120 & 1.45 - 1.04 & 3 & 1.4 \\ Brierfield & 0.36 & B & 2019-11-05 & 11:07 & 18:01 & 414 & 180 & 1.46 - 1.10 - 1.33 & 4 & 2.6 \\ Evans & 0.36 & B & 2019-12-05 & 01:08 & 06:28 & 320 & 150 & 1.26 - 1.09 - 1.25 & 4 & 1.3 \\ \hline \end{tabular}
\end{table}
Table 3: TFOP photometric follow-up observation log.
## 3 Analysis
### Spectral analysis
Stellar atmospheric parameters for TOI-858 B and TOI-858 A were derived using SpecMatch-emp (Yee et al., 2017). For TOI-858 B, we stacked the six HARPS-EGGS spectra to get one high-fidelity spectrum for the analysis, and for TOI-858 A, we ran SpecMatch-emp on the stacked spectrum created from nine CORALIE spectra.
The SpecMatch-emp tool matches the input spectrum to a large spectral library of stars with well-determined parameters that have been derived with interferometry, optical and near-infrared (NIR) photometry, asteroseismology, and local thermodynamic equilibrium (LTE) analysis of high-resolution optical spectra. The wavelength range encompassing the Mg I b triplet (5100 - 5340 A) was utilised to match our spectra to the built-in SpecMatch-emp spectral library through \(\chi^{2}\) minimisation. A weighted linear combination of the five best matching spectra was used to extract \(T_{\rm eff}\), \(R_{\rm s}\), and \({\rm[Fe/H]}\). For TOI-858 B, we obtained \(T_{\rm eff}\) of 5948 \(\pm\) 110 K, \(R_{\rm s}=1.27\pm 0.18\)R\({}_{\odot}\), and \({\rm[Fe/H]}\)\(=0.17\pm 0.09\) (dex). For TOI-858 A, the spectral analysis yielded \(T_{\rm eff}\)\(=5911\pm 110\) K, \(R_{\rm s}=1.47\pm 0.18\)R\({}_{\odot}\), and \({\rm[Fe/H]}\)\(=0.21\pm 0.09\) (dex). The \(T_{\rm eff}\) and \({\rm[Fe/H]}\) were used as priors in the joint analysis detailed in Sects. 3.3 and 3.4, which models the system using broadband photometry, _Gaia_ information, stellar evolutionary models, and (when available) transit light curves. The final stellar parameters are listed in Tabs. 4 and 6.
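Conceptually, the matching step works as in the toy sketch below; note that SpecMatch-emp actually solves for optimal linear-combination coefficients, whereas inverse-\(\chi^{2}\) weights are used here only as a simple stand-in:

```python
import numpy as np

def match_parameters(target_flux, library_fluxes, library_params, n_best=5):
    """chi^2 match of a target spectrum against a library over a common grid.

    `library_fluxes`: (n_stars, n_pixels) spectra restricted to the Mg I b region;
    `library_params`: (n_stars, 3) columns = (Teff, Rstar, [Fe/H]).
    Returns a weighted combination of the parameters of the n_best matches.
    """
    chi2 = np.sum((library_fluxes - target_flux) ** 2, axis=1)
    best = np.argsort(chi2)[:n_best]
    weights = 1.0 / chi2[best]
    weights /= weights.sum()
    return weights @ library_params[best]
```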
### Stellar rotation
The projected rotational velocity, \(v\sin i\), was computed for each star using the calibration between \(v\sin i\) and the width of the CORALIE CCF. This calibration was first presented in Santos et al. (2002), and it has since been updated as CORALIE has undergone several updates (Raimbault, 2020). For TOI-858 B, we obtained \(v\sin i=5.80\pm 0.25\) km s\({}^{-1}\), and for TOI-858 A, we obtained \(v\sin i=6.40\pm 0.25\) km s\({}^{-1}\). Using the stellar radii listed in Tables 7 and 6, the projected rotational velocities correspond to \(P_{\rm rot}/\sin i\) of \(11.5\pm 0.7\) days and \(10.8\pm 0.7\) days, respectively.
Figure 1: Photometric transit observations of TOI-858 B b in 20 min bins. The vertical dashed lines represent meridian flips. The bottom panel shows all light curves phase folded and overplotted.
Figure 2: SOAR speckle imaging in Cousins I-band excluding nearby stars down to \(\Delta\)Imag \(\sim 5\) within 3′′ of TOI-858 B. The inset is the speckle ACF centred on the target star.
We observed clear rotational modulation in both the _TESS_ and WASP-South light curves. From top to bottom, Figure 3 shows the Lomb-Scargle periodograms computed for the WASP-South data before and after the change to the 85 mm lenses in 2010 and 2011, the TESS QLP light curve without de-trending (SAP), and the SPOC-de-trended (PDCSAP) light curve. The transits of TOI-858 B were masked to avoid picking up the planetary signal. A clear and persistent modulation can be seen at a period of 6.2 to 6.4 days in both the WASP-South and _TESS_ data, having an amplitude varying between 2 and 3 mmags. The TESS light curves cover only a few stellar rotations, whereas the two WASP-South data sets each cover multiple seasons.
Using the WASP-South data only and a modified Lomb-Scargle periodogram approach that is tailored to the noise characteristics of WASP data, as discussed in Maxted et al. (2011), we found a rotation period of \(P_{\rm rot}=6.42\pm 0.10\) d. Since TOI-858 B and TOI-858 A are fully blended in both WASP-South and _TESS_, we could not determine which of the stars is responsible for the rotational modulation. There are no signs of two distinct rotational signals in the photometry nor at the \(P_{\rm rot}\)/ sin \(i\) derived from the CORALIE CCFs. For TOI-858 B, an inclination of 34\({}^{\circ}\) is needed to align the \(P_{\rm rot}\) measured from the light curves with the spectroscopic \(P_{\rm rot}\)/ sin \(i\), and for TOI-858 A, it is 36\({}^{\circ}\).
For \(P_{\rm rot}=6.42\pm 0.02\) d, gyro-chronology yields an age of \(0.3-0.4\) Gyr (Barnes, 2007) for either of the two stars. This is not in agreement with the ages derived in Sect. 3.3 based on a spectral energy distribution (SED) fitting with the Mesa Isochrones and Stellar Tracks (MIST) evolutionary models. Moreover, following the approach described in Bouma et al. (2021), we found no sign of Li absorption in the high S/N stacked spectra for either star, which could otherwise support a hypothesis of a young age. The discrepancy between the stellar age derived with gyro-chronology and MIST could indicate that the planet-hosting star TOI-858 B has been spun up by the planet, as is the case for HAT-P-11b (Bakos et al., 2010), which shows evidence of tidal spin-up and high stellar activity (Morris et al., 2017; Tejada Arevalo et al., 2021). We note that the rotation period of \(6.42\pm 0.02\) d is close to twice the planetary orbital period, \(P_{\rm b}=3.28\) d. The similar \(v\sin i\) measured for the two stars could in this case be explained by differences in inclination.
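The numbers above follow from simple geometry, \(P_{\rm rot}/\sin i=2\pi R_{*}/(v\sin i)\) and \(\sin i=P_{\rm phot}/(P_{\rm rot}/\sin i)\), as in the short sketch below (the TOI-858 A radius comes from Table 6; the TOI-858 B radius is taken from Table 7 and therefore not repeated here):

```python
import numpy as np

R_SUN_KM = 6.957e5
DAY_S = 86400.0

def prot_over_sini(r_star_rsun, vsini_kms):
    """P_rot / sin i = 2*pi*R_star / (v sin i), returned in days."""
    return 2.0 * np.pi * r_star_rsun * R_SUN_KM / vsini_kms / DAY_S

def implied_inclination(p_phot_days, p_over_sini_days):
    """Stellar inclination (deg) reconciling the photometric rotation period
    with the spectroscopic P_rot / sin i."""
    return np.degrees(np.arcsin(p_phot_days / p_over_sini_days))

print(prot_over_sini(1.374, 6.4))        # ~10.9 d for TOI-858 A (10.8 d quoted above)
print(implied_inclination(6.42, 11.5))   # ~34 deg for TOI-858 B
print(implied_inclination(6.42, 10.8))   # ~36 deg for TOI-858 A
```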
### Joint modelling of radial velocities and transit light curves
The _TESS_ photometry, ground-based follow-up transit photometry, WASP-South archival light curves, and RV measurements from CORALIE and CHIRON were jointly modelled using _EXOFASTv2_(Eastman et al., 2013, 2019). In this approach, both stellar and planetary parameters are derived for any number of transits and RV instruments while exploring the large parameter space through a differential evolution Markov chain and Metropolis-Hastings Monte Carlo sampler (MCMC).
The transit model is based on the analytical expressions in Mandel & Agol (2002), and the RVs are modelled as a classic Keplerian orbit. The planet properties are described by seven free parameters: RV semi-amplitude (\(K\)), planet radius (R\({}_{p}\)), orbital inclination (\(i\)), orbital period (\(P\)), time of conjunction (\(T_{C}\)), eccentricity (\(e\)), and argument of periastron (\(\omega_{*}\)). Two additional RV terms, systemic velocity and RV jitter, are also fitted for each instrument (CORALIE & CHIRON). Because the CHIRON RVs are derived with respect to a median spectral template, the systemic velocity for that instrument is close to zero.
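For reference, a minimal sketch of such a Keplerian RV model is given below; EXOFASTv2 uses a different internal parametrisation (e.g., \(T_{C}\) rather than the time of periastron), so this is only an illustration of the underlying model, not the fitted code:

```python
import numpy as np

def solve_kepler(mean_anom, ecc, tol=1e-10):
    """Solve Kepler's equation E - e sin(E) = M by Newton iteration."""
    E = np.where(ecc < 0.8, mean_anom, np.pi * np.ones_like(mean_anom))
    for _ in range(100):
        dE = (E - ecc * np.sin(E) - mean_anom) / (1.0 - ecc * np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return E

def keplerian_rv(t, period, t_peri, ecc, omega, k, gamma):
    """Classic Keplerian RV curve: gamma + K [cos(nu + omega) + e cos(omega)]."""
    mean_anom = 2.0 * np.pi * (((t - t_peri) / period) % 1.0)
    E = solve_kepler(mean_anom, ecc)
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + ecc) * np.sin(E / 2.0),
                          np.sqrt(1.0 - ecc) * np.cos(E / 2.0))
    return gamma + k * (np.cos(nu + omega) + ecc * np.cos(omega))
```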
\begin{table}
\begin{tabular}{l c c} \hline \hline Property & Value & Source \\ \hline Other Names & & \\
2MASS ID & J04004794-5435342 & 2MASS \\ Gaia ID & 4683737294569921664 & Gaia EDR3 \\ TIC ID & 198008005 & _TESS_ \\ TOI & TOI-858 & _TESS_ \\ \multicolumn{3}{c}{Astrometric Properties} \\ R.A. & 04:00:47.96 & _TESS_ \\ Dec & -54:35:34.5 & _TESS_ \\ \(\mu_{\rm R.A.}\) (mas yr\({}^{-1}\)) & 11.036 \(\pm\)0.017 & Gaia EDR3 \\ \(\mu_{\rm Dec.}\) (mas yr\({}^{-1}\)) & -11.004 \(\pm\)0.018 & Gaia EDR3 \\ RV (km s\({}^{-1}\)) & 64.7 \(\pm\) 0.7 & Gaia EDR3 \\ Parallax (mas) & 3.9727 \(\pm\) 0.0134 & Gaia EDR3 \\ \multicolumn{3}{c}{Photometric Properties} \\ V (mag) & 11.18 \(\pm\) 0.07 & Tycho \\ G (mag) & 11.0695 \(\pm\) 0.0004 & Gaia \\ T (mag) & 10.6444 \(\pm\) 0.006 & _TESS_ \\ J (mag) & 10.06 \(\pm\) 0.02 & 2MASS \\ H (mag) & 9.79 \(\pm\) 0.03 & 2MASS \\ K\({}_{s}\) (mag) & 9.72\(\pm\) 0.02 & 2MASS \\ W1 (mag) & 9.57 \(\pm\) 0.03 & WISE \\ W2 (mag) & 9.58 \(\pm\) 0.03 & WISE \\ W3 (mag) & 9.57 \(\pm\) 0.03 & WISE \\ Spectroscopic Properties & \\ \(v\sin i\) (km s\({}^{-1}\)) & 5.8 \(\pm\) 0.25 & Sec. 3.2 \\ \hline \end{tabular}
\end{table}
Table 4: Stellar properties for TOI-858 B.
Figure 3: Periodograms showing the stellar rotational modulation in the WASP-South and TESS data, spanning 11 years in total. All data sets indicate a rotation period of 6.4 to 6.2 days. The vertical blue lines in all four panels indicate \(P_{\rm rot}=6.42\) d, determined using the WASP-South data (200 mm and 85 mm).
For the transit light curves, a set of two limb-darkening coefficients for each photometric band were evaluated by interpolating tables from Claret & Bloemen (2011); Claret (2017). This was done within _EXOFASTv2_ at each MCMC step. The limb-darkening coefficients are fitted along with the out-of-transit baseline flux and variance. The WASP-South data were heavily blended, with more than 50% of the flux in the aperture coming from TOI-858 A. To take this into account, we fitted a dilution parameter to the WASP-South light curves. A Gaussian prior on the dilution factor, based on the _Gaia_\(G_{RP}\) magnitudes of the two stars, was imposed. Similarly, we also included a dilution term for the TESS data to take imperfect de-blending into account and to propagate the error that might come with it. The light curves from Hazelwood and Brierfield include meridian flips, indicated by the dashed lines in Fig. 1. Any offset between the data obtained before and after a meridian flip was modelled as a multiplicative de-trending term that multiplies the flux after the meridian flip with a constant. Furthermore, the El Sauce light curve was multiplicatively de-trended with airmass within _EXOFASTv2_. The Brierfield time series was likewise detrended against total flux counts. (For more information on the de-trending, see Sect. 11 of Eastman et al. (2019).)
Along with the planetary properties, the stellar parameters were also modelled at each step in the MCMC. This allowed us to utilise the information on transit duration and orbital eccentricity embedded in the transit light curves and RVs to constrain the stellar density (Seager & Mallen-Ornelas, 2003; Kipping et al., 2012; Eastman et al., 2022). We imposed Gaussian priors on \(T_{\rm eff}\) and \(\rm[Fe/H]\) from the spectral analysis presented in Sect. 3.1 while fitting the SED based on archival broadband photometry presented in Table 4. When including the _Gaia_ DR3 parallax as a Gaussian prior, we obtained a tight constraint on the stellar radius. We also included an upper limit on the V-band extinction from Schlegel et al. (1998) and Schlafly & Finkbeiner (2011) to constrain line-of-sight reddening. Table 5 lists the informative priors described in this section and summarises the values applied. To improve the stellar mass we obtained from combining the stellar radius with the stellar density from the transit light curve, we queried the MIST models (Dotter, 2016; Choi et al., 2016). This meant comparing the fitted stellar model parameters to viable MIST values at each step of the MCMC. The joint model was penalised for the difference between the two. This method uses MIST to guide the stellar parameters rather than to define them and can help break degeneracies encountered when using only isochrone models. Despite _EXOFASTv2_ having the ability to include Doppler tomography and the Rossiter-McLaughlin effect in its joint model, we found that a more sophisticated analysis of the spectroscopic transit data was needed, as outlined in the Sect. 4.1.
### Stellar properties of TOI-858 A
We derived stellar parameters for TOI-858 A in a similar way as outlined for TOI-858 B in Sect. 3.3, using _EXOFASTv2_ to perform an SED fit combined with MIST, while no additional information on stellar density from a transit light curve was available. We used Gaussian priors on \(T_{\rm eff}=5911\pm 110\) K, \(\rm[Fe/H]\)\(=0.21\pm 0.09\) (dex) and parallax of \(4.0181\pm 0.0146\) (mas). For the V-band extinction, an upper limit from dust maps of 0.04 mag was used. The final stellar properties of TOI-858 A are listed in Table 6. The ages we got for TOI-858 B and TOI-858 A are in agreement with each other, though poorly constrained.
The CORALIE RVs of TOI-858 A showed no sign of planets, though only giant planets in relatively short-period orbits could be ruled out. Figure 5 shows the RV time series with a generalised Lomb-Scargle periodogram at the bottom. No periodic signals were detected down to a false alarm probability of 10%. We found the systemic velocity of TOI-858 A measured with CORALIE to be \(65.523\pm 0.006\) km s\({}^{-1}\), which is in agreement with the Gaia EDR3 RV of \(65.5\pm 0.6\) km s\({}^{-1}\) and similar to the \(64.7\pm 0.7\) km s\({}^{-1}\) of TOI-858 B found by Gaia (Gaia Collaboration et al., 2021).
## 4 Analysis of TOI-858 B orbital architecture
We further investigated the orbital architecture of the TOI-858 B, TOI-858 B b, and TOI-858 A ensemble. To carry out this study, we characterised the planet orbit through a Rossiter-McLaughlin observation (described in Sect. 4.1), and we analysed the link between the two stars and their possible impact on the planet orbit ( Sect. 4.2).
### Rossiter-McLaughlin Revolutions analysis
#### 4.1.1 Transit observations
We utilised the two data sets obtained with CORALIE during the transit of TOI-858 B b on 2019 December 5 (Visit 1) and 2021 January 18 (Visit 2). Spectra were extracted from the detector images and corrected and calibrated by version 3.8 of the DRS (Baranne et al., 1996; Bouchy et al., 2001) pipeline. One of the corrections concerns the colour effect caused by the variability of extinction induced by Earth's atmosphere (e.g., Bourrier & Hebrard, 2014; Bourrier et al., 2018; Wehbe et al., 2020). The flux balance of the TOI-858 B spectra was reset to a K1 stellar spectrum template before the spectra were passed through weighted
Figure 4: Phase folded CORALIE and CHIRON RV measurements for TOI-858 B.
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameter & Units & Prior \\ \hline \(T_{\rm eff}\) & Effective Temperature (K) & \(\mathcal{N}(5948,110)\) \\ \(\rm[Fe/H]\) & Metallicity (dex) & \(\mathcal{N}(0.17,0.09)\) \\ \(\varpi\) & Parallax (mas) & \(\mathcal{N}(3.9727,0.0134)\) \\ \(A_{V}\) & V-band extinction (mag) & \(\mathcal{U}(0,0.04)\) \\ \(A_{D}\) & Dilution _TESS_ & \(\mathcal{N}(0.0000,0.0003)\) \\ \(A_{D}\) & Dilution WASP (R) & \(\mathcal{N}(0.55,0.05)\) \\ \hline \end{tabular}
\end{table}
Table 5: Informative priors invoked in the _EXOFASTv2_ model for the TOI-858 B system. Fitted parameters not listed here use uninformed, uniform priors (apart from the limb-darkening parameters, which were tabulated within _EXOFASTv2_ using Claret & Bloemen (2011); Claret (2017) and are based on the fitted stellar parameters at each step in the MCMC).
cross-correlation (Baranne et al., 1996; Pepe et al., 2002a) with a G2 numerical mask to compute the CCFs. Since the CCFs are oversampled by the DRS with a step of 0.5 km s\({}^{-1}\), for a pixel width of about 1.7 km s\({}^{-1}\), we kept one in three points in all CCFs prior to their analysis. We analysed the two CORALIE visits using the RMR technique, which follows three successive steps that are described hereafter (a full description can be found in Bourrier et al., 2021).
#### 4.1.2 Extraction of the planet-occulted CCFs
In the first step of the RMR technique, the disc-integrated CCF\({}_{\rm DI}\) were aligned by shifting their velocity table with the Keplerian motion of the star, as calculated using the median values for the stellar and planet properties from the joint fit analysis done in EXOFASTv2, as given in Table 7. The continuum of the CCF\({}_{\rm DI}\) was then scaled to the same flux outside of the transit and to the flux simulated during transit with the batman package (Kreidberg, 2015). The transit depth was taken from Table 7, and limb-darkening coefficients were calculated with the EXOFAST calculator (Eastman et al., 2013) for the stellar properties listed in Table 7. This yielded u\({}_{1}\) = 0.45 and u\({}_{2}\) = 0.27 in the visible band, representative of the CORALIE spectral range. The CCF\({}_{\rm DI}\) outside of the transits were co-added to build master-out CCFs representative of the unocculted star. Gaussian profiles were fitted to the master-out CCF\({}_{\rm DI}\) to determine the RV zero points in Visit 1 (64.353\(\pm\)0.015 km s\({}^{-1}\)) and Visit 2 (64.358\(\pm\)0.013 km s\({}^{-1}\)), which were then used to shift all CCF\({}_{\rm DI}\) to the star rest frame. The CCFs from the planet-occulted regions were retrieved by subtracting the scaled CCF\({}_{\rm DI}\) from their corresponding master-out. They were finally normalised to a common flux level by dividing their continuum by the flux scaling applied to the CCF\({}_{\rm DI}\), yielding intrinsic CCF\({}_{\rm intr}\) that directly trace variations in the local stellar line profiles (Fig. 6). Flux errors were assigned to the CCF\({}_{\rm intr}\) as the standard deviation in their continuum flux.
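A schematic Python version of this extraction step is shown below; the normalisation convention, the use of the mean rather than a weighted co-addition for the master-out, and the function signature are our own assumptions, with batman used only to provide the model light curve:

```python
import numpy as np
import batman

def intrinsic_ccfs(ccf_di, times, in_transit, params):
    """Sketch of the planet-occulted CCF extraction (RMR step 1).

    `ccf_di`: (n_exposures, n_rv) disc-integrated CCFs, already shifted to the
    star rest frame; `in_transit`: boolean mask of in-transit exposures;
    `params`: a batman.TransitParams set up with the system ephemeris and
    limb-darkening coefficients (u1 = 0.45, u2 = 0.27 here).
    """
    flux = batman.TransitModel(params, times).light_curve(params)

    scaled = ccf_di * flux[:, None]                 # scale each CCF to the model transit flux
    master_out = scaled[~in_transit].mean(axis=0)   # master CCF of the unocculted star

    occulted = master_out - scaled[in_transit]      # planet-occulted line profiles
    depth = (1.0 - flux[in_transit])[:, None]       # flux fraction hidden by the planet (> 0 in transit)
    return occulted / depth                         # intrinsic CCFs (CCF_intr)
```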
#### 4.1.3 Analysis of individual exposures
In the second step, a Gaussian profile was fitted to the CCF\({}_{\rm intr}\) in each exposure, over [-30, 30] km s\({}^{-1}\) in the star rest frame. We sampled the posterior distributions of its RV centroid, FWHM, and contrast using _emcee_ MCMC (Foreman-Mackey et al., 2013). We set uniform priors on the RV centroid with a boundary between -5 and 10 km s\({}^{-1}\), on the FWHM with a boundary between 0 and 20 km s\({}^{-1}\), and on the contrast (bounded between -2 and 2). One hundred walkers were run for 2000 steps, with a burn-in
\begin{table}
\begin{tabular}{l c c} \hline \hline Property & Value & Source \\ \hline Other Names & & \\
2MASS ID & J04004794-5435342 & 2MASS \\ Gaia ID & 4683737294570307968 & Gaia EDR3 \\ TIC ID & 198008002 & _TESS_ \\ Astrometric Properties & & \\ R.A. & 04:00:47.96 & _TESS_ \\ Dec & -54:35:34.5 & _TESS_ \\ \(\mu_{\rm R.A.}\) (mas yr\({}^{-1}\)) & 8.5707 \(\pm\) 0.0187 & Gaia EDR3 \\ \(\mu_{\rm Dec.}\) (mas yr\({}^{-1}\)) & -12.6905 \(\pm\) 0.0197 & Gaia EDR3 \\ RV (km s\({}^{-1}\)) & 65.5 \(\pm\) 0.6 & Gaia EDR3 \\ Parallax (mas) & 4.0181 \(\pm\) 0.0146 & Gaia EDR3 \\ Photometric Properties & & \\ V (mag) & 11.66 \(\pm\) 0.08 & Tycho \\ B (mag) & 11.07 \(\pm\) 0.08 & Tycho \\ G (mag) & 10.7895 \(\pm\) 0.0008 & Gaia \\ T (mag) & 10.396 \(\pm\) 0.006 & _TESS_ \\ J (mag) & 9.88 \(\pm\) 0.02 & 2MASS \\ H (mag) & 9.61 \(\pm\) 0.02 & 2MASS \\ K\({}_{\rm s}\) (mag) & 9.56\(\pm\) 0.02 & 2MASS \\ W1 (mag) & 9.53 \(\pm\) 0.03 & WISE \\ W2 (mag) & 9.55 \(\pm\) 0.02 & WISE \\ W3 (mag) & 9.49 \(\pm\) 0.03 & WISE \\ W4 (mag) & 9.35 \(\pm\) 0.40 & WISE \\ Spectroscopic Properties & & \\ \(v\sin i\) (km s\({}^{-1}\)) & 6.4 \(\pm\) 0.25 & Sec. 3.2 \\ \(V_{sys}\) (km s\({}^{-1}\)) & 65.523 \(\pm\) 0.006 & Sec. 3.4 \\ \hline \end{tabular} This work, joint analysis of broadband photometry, and RVs
Parameter & Units & Values
\begin{tabular}{l l c} \hline \(M_{*}\) & Mass (M\({}_{\odot}\)) & 1.152\({}^{+0.074}_{-0.082}\) \\ \(R_{*}\) & Radius (R\({}_{\odot}\)) & 1.374\({}^{+0.040}_{-0.009}\) \\ \(L_{*}\) & Luminosity (L\({}_{\odot}\)) & 2.074\({}^{+0.040}_{-0.060}\) \\ \(\rho_{*}\) & Density (cgs) & 0.623\({}^{+0.081}_{-0.070}\) \\ \(\log g\) & Surface gravity (cgs) & 4.222\({}^{+0.042}_{-0.043}\) \\ \(T_{\rm eff}\) & Effective Temp. (K) & 5907\({}^{+8.0}_{-80}\) \\ [Fe/H] & Metallicity (dex) & 0.196 \(\pm\) 0.078 \\ \(Age\) & Age (Gyr) & 5.2\({}^{+2.8}_{-2.0}\) \\ \(Av\) & V-band extinction (mag) & 0.023\({}^{+0.012}_{-0.015}\) \\ \(\sigma_{SED}\) & SED error scaling & 0.80\({}^{+0.31}_{-0.171}\) \\ \(\varpi\) & Parallax (mas) & 4.018 \(\pm\) 0.014 \\ \(d\) & Distance (pc) & 248.88\({}^{+0.84}_{-0.85}\) \\ \hline Tycho (Høg et al., 2000); 2MASS (Skrutskie et al., 2006); WISE (Wright et al., 2010); Gaia (Gaia Collaboration et al., 2016, 2021) & \\ \hline \end{tabular}
\end{table}
Table 6: Stellar parameters for TOI-858 A.
Figure 5: CORALIE RV measurements of TOI-858 A. Top panel: RV time series showing no clear signs of a giant planet in a short period orbit. Lower panel: Lomb-Scargle periodogram for the RVs with no significant signals detected. The false alarm probability (FAP) levels of 1% and 10% are indicated as horizontal lines.
phase of 500 steps, to ensure that the resulting chains converged and were well mixed.
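For illustration, the per-exposure Gaussian fit described above can be sketched as follows; `rv_grid`, `ccf`, and `err` are synthetic placeholders standing in for one CCF\({}_{\rm intr}\), while the priors, number of walkers, chain length, and burn-in follow the values quoted above. This is a schematic of the procedure, not the actual fitting code.

```python
# Illustrative sketch of the per-exposure Gaussian fit with emcee
# (Foreman-Mackey et al. 2013). `rv_grid`, `ccf` and `err` are placeholders
# standing in for one CCF_intr; priors and chain settings follow the values
# quoted above.
import numpy as np
import emcee

def gaussian_line(rv, rv0, fwhm, contrast, continuum=1.0):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return continuum * (1.0 - contrast * np.exp(-0.5 * ((rv - rv0) / sigma) ** 2))

def log_prob(theta, rv, flux, err):
    rv0, fwhm, contrast = theta
    if not (-5.0 < rv0 < 10.0 and 0.0 < fwhm < 20.0 and -2.0 < contrast < 2.0):
        return -np.inf                                # uniform priors
    model = gaussian_line(rv, rv0, fwhm, contrast)
    return -0.5 * np.sum(((flux - model) / err) ** 2)

rv_grid = np.linspace(-30.0, 30.0, 121)               # star rest frame [km/s]
ccf = gaussian_line(rv_grid, 2.0, 7.0, 0.7) + np.random.normal(0.0, 0.02, rv_grid.size)
err = np.full_like(ccf, 0.02)

nwalkers, ndim = 100, 3
p0 = np.array([2.0, 7.0, 0.7]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(rv_grid, ccf, err))
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)   # burn-in of 500 steps
```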
As shown in Fig. 6, the stellar line is clearly detected in most individual CCF\({}_{\rm intr}\) with narrow and well-defined posterior distribution functions (PDFs) for their model parameters (Figs. B.2 and B.3), which allowed us to extract the time series of the local stellar line properties (Fig. 7). The only exception is the first exposure in Visit 2, which was excluded from further analysis. Surprisingly, the local RV series displays larger values overall in Visit 1 than in Visit 2. Nonetheless, both series are positive and constant to first order, showing that TOI-858 B b transits the stellar hemisphere rotating away from the observer, at about the same stellar longitude. There is no evidence for centre-to-limb variations in the local stellar line shape, as the local contrast and FWHM series remain roughly constant along the transit chord. The width of the local line, however, appears to be broader in the second visit, suggesting a possible change in the stellar surface properties.
#### 4.1.4 Joint transit analysis
In the third step, all CCF\({}_{\rm intr}\) were fitted together with a joint model. Based on step two, the local stellar line was modelled as a Gaussian profile with a constant contrast and FWHM along the transit chord but with values specific to each visit. The RV centroids of the theoretical lines were set by the surface RV model described in Cegla et al. (2016) and Bourrier et al. (2017), assuming solid-body rotation for the stellar photosphere and oversampling each exposure to account for the blur induced by the planet motion (a conservative oversampling factor of five was used). The time series of the theoretical stellar lines were convolved with the CORALIE instrumental response and then fitted to the CCF\({}_{\rm intr}\) in both visits using _emcee_ MCMC. The model parameters used as jump parameters for the MCMC were the line contrast and FWHM, the sky-projected obliquity \(\lambda\), and stellar rotational velocity \(v\sin i_{*}\). Uniform priors were set on all parameters: over the same range as in step two for the contrast and FWHM, between [0 - 30] km s\({}^{-1}\) for \(v\sin i_{*}\), and over its definition range ([-180, 180]\({}^{\circ}\)) for \(\lambda\).
Posterior distribution functions for the model parameters are shown in Fig. B.1. The best fit yielded a reduced \(\chi^{2}\) of 1.3 (\(\chi^{2}\) = 634 for 474 degrees of freedom) with models that reproduce the local stellar lines along the transit chord well, as can be seen in the residual maps shown in Fig. 8. Properties of the best-fit line model convolved with the CORALIE response are shown in Fig. 7. The local line has a similar contrast in the two visits (71.2\({}^{+4.8}_{-4.3}\)% for Visit 1 and 74.1\(\pm\)5.2% for Visit 2), but it is significantly broader in the second visit (6.26\({}^{+0.25}_{-0.37}\) km s\({}^{-1}\) for Visit 1 and 10.44\(\pm\)0.75 km s\({}^{-1}\) for Visit 2). Whether the origin of this variation is stellar or not, we note that \(\lambda\) and \(v\sin i_{*}\) are not correlated with the contrast and FWHM of the fitted lines (Fig. B.1). We derived \(v\sin i_{*}\) = 7.09\(\pm\)0.52 km s\({}^{-1}\), which is larger than the value of 5.80 \(\pm\) 0.25 km s\({}^{-1}\) obtained from a spectroscopic analysis of the CORALIE data. The main result from our analysis is the derivation of the projected spin-orbit angle, \(\lambda\) = 99.3\({}^{+3.8}_{-3.7}\), which shows that TOI-858 B b is on a polar orbit.
We performed two tests to assess the reliability of this conclusion. First, since the derived local line contrasts depend on the accuracy of the transit light curve used to scale the CCF\({}_{\rm DI}\) (Sect. 4.1.2), we varied the transit depth of TOI-858 B b within its 3\(\sigma\) uncertainties. We found that it changes \(\lambda\) by less than 0.5\({}^{\circ}\). Then, we fitted the two visits independently. We found significant differences between the derived \(v\sin i_{*}\) (8.2\(\pm\)0.6 km s\({}^{-1}\) in Visit 1 and 3.6\(\pm\)0.9 km s\({}^{-1}\) in Visit 2), neither of which is consistent with the value derived from the CORALIE CCF-width in Sect. 3.2 (5.80\(\pm\)0.25 km s\({}^{-1}\)). Interestingly, the latter is consistent with the weighted mean of the Visit 1 and 2 values (6.8\(\pm\)0.5 km s\({}^{-1}\)). In the present case of a polar orbit, the rotational velocity is directly constrained by the overall level of the surface RV series, which explains why we derived different values for the two visits (Fig. 7). The physical origin of this difference, however, is unclear. No detrimental contamination was identified in Fibre B of CORALIE, which was pointing on-sky in Visit 1. It is possible that an instrumental drift may have biased the RVs during the visit, which is why we put Fibre B on the FP simultaneous reference in Visit 2. The first half of the transit in Visit 2 was obtained with lower S/N (down to 12 in the first exposures compared to 17 afterwards, as measured in order
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & Units & Values & Parameter & Units & Values \\ \hline Stellar Parameters: & & & Planetary Parameters: & & b \\ \(M_{*}\) & Mass (M\({}_{\odot}\)) & 1.081\({}^{+0.076}_{-0.078}\) & \(M_{P}\) & Mass (M\({}_{\rm J}\)) & 1.10\({}^{+0.08}_{-0.07}\) \\ \(R_{*}\) & Radius (R\({}_{\odot}\)) & 1.308\({}^{+0.037}_{-0.038}\) & \(R_{P}\) & Radius (R\({}_{\rm J}\)) & 1.255 \(\pm\) 0.039 \\ \(L_{*}\) & Luminosity (L\({}_{\odot}\)) & 1.790\({}^{+0.080}_{-0.083}\) & \(P\) & Period (days) & 3.2797178 \(\pm\) 0.0000014 \\ \(\rho_{*}\) & Density (cgs) & 0.680\({}^{+0.064}_{-0.064}\) & \(T_{C}\) & Time of conjunction (BJD\({}_{\rm TDB}\) ) & 58386.45235 \(\pm\) 0.00028 \\ \(\log g_{*}\) & Surface gravity (cgs) & 4.238\({}^{+0.038}_{-0.035}\) & \(a_{*}\) & Semi-major axis (AU) & 0.04435 \({}^{+0.0001}_{-0.0098}\) \\ \(T_{\rm eff}\) & Effective Temperature (K) & 584\({}^{+2.79}_{-7.9}\) & \(i_{*}\) & Inclination (Degrees) & 86.80\({}^{+0.04}_{-0.41}\) \\ \([{\rm Fe/H}]\) & Metallicity (dex) & 0.153 \(\pm\) 0.091 & \(T_{eq}\) & Equilibrium temperature (K) & 1529\({}^{+0.24}_{-2.2}\) \\ \(Age_{*}\) & Age (Gyr) & 6.8\({}^{+2.5}_{-2.5}\) & \(K\) & RV semi-amplitude (m/s) & 143 \(\pm\) 7 \\ \(A_{V}\) & V-band extinction (mag) & 0.022\({}^{+0.013}_{-0.01}\) & \(\delta\) & (\(R_{P}/R_{*}\))\({}^{2}\) & 0.00974 \({}^{+0.00014}_{-0.0006}\) \\ \(\sigma_{SED}\) & SED photometry error scaling & 3.10\({}^{+3.5}_{-3.0}\) & \(T_{\rm 14}\) & Total transit duration (days) & 0.1510 \(\pm\) 0.00090 \\ \(\varpi\) & Parallax (mas) & 3.972 \(\pm\) 0.013 & \(T_{FWHM}\) & FWHM transit duration (days) & 0.13393 \(\pm\) 0.00060 \\ \(d\) & Distance (pc) & 251.76 \(\pm\) 0.85 & \(b\) & Transit Impact parameter & 0.419\({}^{+0.000}_{-0.000}\) \\ & & \(e\) & Eccentricity & \(<\) 0.15 \({}^{+0.47}_{-0.27}\) \\ & & \(\rho_{P}\) & Density (cgs) & 0.690\({}^{+0.07}_{-0.07}\) \\ & & \(logg_{*}\) & Surface gravity & 3.239\({}^{+0.013}_{-0.049}\) \\ & & \(\langle F\rangle\) & Incident Flux (10\({}^{9}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & 1.237\({}^{+0.076}_{-0.072}\) \\ \hline \end{tabular}
\end{table}
Table 7: Median values and 68% confidence intervals for the TOI-858 B system. The orbital eccentricity is fixed at zero in the final model, though we list the 3\(\sigma\) upper limit from models where the eccentricity was a fitted parameter.
46), possibly due to a change in sky conditions. This makes it difficult to determine which data set is the most accurate, considering that they mainly differ during the first half of the transit. The projected spin-orbit angle, however, is less affected by these variations than \(v\sin i_{\star}\) and remains consistent within \(1\sigma\) between the two visits (\(99.4^{+3.1^{\circ}}_{-3.0}\) in Visit 1 and \(80.9^{+17.0^{\circ}}_{-13.7}\) in Visit 2), confirming the misalignment of the TOI-858 B b orbital plane.
To calculate the actual obliquity (as opposed to its sky projection), we needed information about the inclination of the stellar rotation axis relative to the line of sight. Such information can be obtained from the combination of the stellar radius, rotation period, and projected rotation velocity (\(v\sin i_{\star}\)). In this case, the rotation period of the planet-hosting star, TOI-858 B, is uncertain. A photometric period of 6.4 days was detected in the TESS and WASP-South light curves, but the period might belong to TOI-858 A instead. Bearing this caveat in mind and assuming the period belongs to B, we derived constraints on the stellar inclination angle using an MCMC procedure, following Masuda and Winn (2020). The free parameters were \(\cos i_{\star}\), which was subject to a uniform prior; \(R_{\star}/R_{\odot}\), which was subject to a Gaussian prior with a mean of 1.308 and standard deviation of 0.038 (see Table 7); and \(P_{\rm rot}\), which was subject to a Gaussian prior with a mean of 6.42 days and a standard deviation of 0.64 days (enlarged to 10% to account for systematic effects such as differential rotation). The log-likelihood was taken to be
\[-\frac{1}{2}\left(\frac{\frac{2\pi R_{\star}}{P_{\rm rot}}\sqrt{1-\cos^{2}i_{ \star}}-5.80\,{\rm km/s}}{0.25\,{\rm km/s}}\right)^{2} \tag{1}\]
based on the CORALIE-based measurement of \(v\sin i_{\star}\). The result for \(\cos i_{\star}\) was \(0.82^{+0.04}_{-0.05}\). We combined this result with the measurements of \(\lambda=99.3\pm 3.8\) degrees and \(i_{\rm o}=86.8\pm 0.5\) degrees to arrive at two possibilities for the stellar obliquity: \(\psi=92.7\pm 2.5\) degrees or \(98.0\pm 2.5\) degrees (the discrete degeneracy arises because we do not know the relative signs of \(\cos i_{\star}\) and \(\cos i_{\rm o}\)). Thus, under these assumptions, the stellar spin axis and normal to the orbital plane are nearly perpendicular.
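A simplified sketch of this estimate is given below; it draws \(\cos i_{\star}\), \(R_{\star}\), and \(P_{\rm rot}\) from the stated priors, weights the samples with the Gaussian likelihood of Eq. (1), and converts (\(i_{\star}\), \(\lambda\), \(i_{\rm o}\)) to \(\psi\) through the standard relation \(\cos\psi=\cos i_{\star}\cos i_{\rm o}+\sin i_{\star}\sin i_{\rm o}\cos\lambda\). Only one branch of the sign degeneracy is sampled, and symmetric Gaussians are assumed for \(\lambda\) and \(i_{\rm o}\).

```python
# Simplified sketch (not the MCMC machinery used above) of the obliquity
# estimate: draw cos(i_star), R_star and P_rot from the stated priors, weight
# by the v sin(i_star) likelihood of Eq. (1), and convert (i_star, lambda, i_o)
# to psi. Only the cos(i_star) > 0 branch is sampled; the mirrored branch
# gives the second solution quoted above.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
cos_istar = rng.uniform(0.0, 1.0, n)              # uniform prior on cos(i_star)
rstar = rng.normal(1.308, 0.038, n)               # stellar radius [Rsun]
prot = rng.normal(6.42, 0.64, n)                  # rotation period [days]

veq = 2.0 * np.pi * rstar * 695_700.0 / (prot * 86_400.0)  # equatorial velocity [km/s]
vsini = veq * np.sqrt(1.0 - cos_istar**2)
weight = np.exp(-0.5 * ((vsini - 5.80) / 0.25) ** 2)       # Gaussian likelihood, Eq. (1)

lam = np.radians(rng.normal(99.3, 3.8, n))        # sky-projected obliquity [rad]
i_orb = np.radians(rng.normal(86.8, 0.5, n))      # orbital inclination [rad]
i_star = np.arccos(cos_istar)
cos_psi = np.cos(i_star) * np.cos(i_orb) + np.sin(i_star) * np.sin(i_orb) * np.cos(lam)
psi = np.degrees(np.arccos(cos_psi))

resampled = psi[rng.choice(n, n, p=weight / weight.sum())]  # importance resampling
print(np.percentile(resampled, [16, 50, 84]))     # close to ~93 deg for this branch
```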
This conclusion does not depend strongly on the exact constraints on \(v\sin i_{\star}\). For example, if we use the constraint \(v\sin i_{\star}=7.09\pm 0.52\) km/s, based on the RM analysis, the results for \(\psi\) are modified to \(94.5\pm 3.0\) and \(98.9\pm 3.0\) degrees. In fact, because \(\lambda\) is well constrained, the conclusion that the axes are nearly perpendicular does not even depend strongly on the assumption that the rotation period is 6.4 days. When we repeated the calculation for any choice of rotation period between
Figure 6: Maps of the CCF\({}_{\rm intr}\) during the transit of TOI-858 B b in Visit 1 (upper panel) and Visit 2 (lower panel). Transit contacts are shown as green horizontal dashed lines. Values are coloured as a function of their normalised flux and plotted as a function of RV in the stellar rest frame (in abscissa) and orbital phase (in ordinate). The stellar lines from the planet-occulted regions are clearly visible in both visits. The green solid lines show the best-fit model for the stellar surface RVs derived from a joint RMR fit to both data sets. The green vertical dashed lines show the spectroscopic, sky-projected stellar rotational velocity.
Figure 7: Properties of the stellar surface regions occulted by TOI-858 B b, in blue for Visit 1 and red for Visit 2. The dashed vertical lines are the transit contacts. The horizontal bars indicate the duration of each exposure. The vertical bars indicate the \(1\sigma\) HDI intervals. The solid curves are the best models to each property, derived from a joint RMR fitted to both visits (excluding the first exposure in Visit 2). The RV model is common to both visits. The model contrast and FWHM are specific to each visit and are shown here after convolution of the model line by the CORALIE line spread function (LSF).
0.1 and 11 days, the best-fit value of \(\psi\) varied between 87 and 100 degrees.
### Dynamical analysis
#### 4.2.1 The TOI-858 B - TOI-858 A system
The two stars, TOI-858 B and TOI-858 A, are separated by \(\rho\sim\)11\({}^{\prime\prime}\) (\(\sim\)3000 au; Gaia Collaboration et al.2021) and have similar proper motion and parallax values in Gaia EDR3, suggesting they represent a wide binary pair and are a good target for examination of angular momentum vector alignment between binary and transiting planet orbits. However, the escape velocity for a system of two stars with these given masses and uncertainties at a separation of \(\sim\)3000 AU is \(0.09\pm 0.06\) km s\({}^{-1}\), while the relative velocity vector given by EDR3 proper motions and RV for both stars is 3.8 \(\pm\) 0.2 km s\({}^{-1}\), nearly 18-\(\sigma\) higher than the escape velocity. Thus, only unbound, hyperbolic trajectories are consistent with these relative velocities.
Both have high quality EDR3 astrometric solutions as measured by the re-normalised unit weight error (RUWE): TOI-858 B RUWE = 1.009 and TOI-858 A RUWE = 1.0166, where RUWE\(\approx\)1 is a well-behaved solution (Lindegren, 2018).2 The TOI-858 A-B pair is not resolved in Hipparcos or Tycho. There are two additional astrometric measurements of the pair, both from 1894, in the Washington Double Star Catalogue (WDS; Mason et al.2001), with \(\rho\) = 11.5\({}^{\prime\prime}\) and position angle (PA) = 164\({}^{\circ}\) in 1894, compared to \(\rho\) = 10.94903 \(\pm\) 1 \(\times\) 10\({}^{-5}\)\({}^{\prime\prime}\), PA = 169.90649 \(\pm\) 6 \(\times\) 10\({}^{-5}\) deg in Gaia EDR3. The plane-of-sky relative velocity given by those measurements (\(\sim\)12 km s\({}^{-1}\)) is larger than the Gaia EDR3 relative velocity. Neither star is in the El-Badry et al. (2021) catalogue of binaries identified in Gaia EDR3. Following the method described in Pearce et al. (2021) Sect. 3.1.1, we determined the probability of a chance alignment to be \(\sim 1\times 10^{-6}\), given the density of all objects in Gaia EDR3 within a 10\({}^{\circ}\) radius and 1\(-\sigma\) of the TOI-858 B proper motion and parallax. We conclude that TOI-858 B and TOI-858 A either (1) are a formerly bound binary that recently became unbound, (2) have inaccurate solutions in Gaia EDR3 despite the low RUWE values, or (3) are a chance alignment, despite the low probability. We can thus conclude that the system is either a binary that recently became unbound (conclusion 1) or a binary that has inaccurate Gaia EDR3 solutions (conclusion 2).
Footnote 2: [https://www.cosmos.esa.int/web/gaia/dr2-known-issues#AstrometryConsiderations](https://www.cosmos.esa.int/web/gaia/dr2-known-issues#AstrometryConsiderations)
#### 4.2.2 Assessment of a Kozai-Lidov evolution
Given the polar orbit of TOI-858 B b, we examined the possibility of this architecture being caused by the action of the stellar binary companion. Under some circumstances, a distant third body can trigger the Kozai-Lidov effect (Kozai, 1962; Lidov, 1962; Naoz, 2016), a secular dynamical mechanism that makes the inner orbit's eccentricity and inclination oscillate. The Kozai-
Figure 8: Maps of the out-of-transit residuals and of the in-transit residuals between CCF\({}_{\rm intr}\) and their best-fit model in Visit 1 (top panel) and Visit 2 (bottom panel). Transit contacts are shown as green dashed lines. The green vertical dashed lines show the spectroscopic, sky-projected stellar rotational velocity.
Figure 9: Projection of TOI-858 B in the plane of sky for the best-fit orbital architecture. The black arrow shows the sky-projected stellar spin. The stellar disc is coloured as a function of its surface RV field. The normal to the orbital plane of TOI-858 B b is shown as a green arrow. The thick green solid curve represents the best-fit orbital trajectory. The thin lines surrounding it show orbits obtained for orbital inclination, semi-major axis, and sky-projected obliquity values drawn randomly within 1\(\sigma\) from their probability distributions. The star, planet (black disc), and orbits are to scale.
Lidov resonance has been invoked as a possible explanation for misaligned orbits (e.g., Fabrycky & Tremaine, 2007; Anderson et al., 2016; Bourrier et al., 2018), but in some cases it can be quenched due to short-range forces of the star (Liu et al., 2015).
Indeed, some short-range forces induce precession of the periapsis in the direction opposite of the Kozai-Lidov effect, possibly suppressing the resonance if they are strong enough (e.g., Wei et al., 2021). Thus, one necessary condition for the onset of the Kozai-Lidov mechanism is that its associated precession rate \(\dot{\omega}_{\rm Kozai}\) must be higher than the precession rate \(\dot{\omega}_{\rm SRF}\) of the short-range-forces. Using the parameters in Tables 4 and 6, we computed the \(\dot{\omega}_{\rm SRF}/\dot{\omega}_{\rm Kozai}\) ratio for a broad range of initial semi-major axes \(a_{0}\) and eccentricities \(e_{0}\) as well as four different values of the companion's eccentricity \(e_{c}\) so as to investigate in which region of the parameter space a Kozai-Lidov resonance could be triggered. The short-range forces we included in \(\dot{\omega}_{\rm SRF}\) are general relativity, static tides, and rotational forces (with the same formalism as in e.g., Eggleton & Kiseleva-Eggleton (2001)).
Figure 10 illustrates the results. We show the \(\dot{\omega}_{\rm SRF}/\dot{\omega}_{\rm Kozai}=0.1\) threshold as a typical value below which the Kozai-Lidov effect can take place as well as the co-rotation radius beyond (resp. inside) which tides widen (resp. shrink) the orbit of the planet. The parameter space regions where the Kozai-Lidov mechanism can be strongly active (i.e., regions where the \(\dot{\omega}_{\rm SRF}/\dot{\omega}_{\rm Kozai}\) ratio is low) are nearly all located beyond the co-rotation radius. Hence, even though TOI-858 B b formed with a favourable orbital configuration for the launch of the Kozai-Lidov effect, reaching the present-day close-in orbit would have been prevented by tidal forces. Indeed, they would always increase the semi-major axis, except for extremely high eccentricities of the binary companion (\(e_{\rm c}>0.99\)) when favourable architectures can be found inside the co-rotation radius. We checked the validity of these analytical results with comprehensive numerical simulations using the JADE code (Attia et al., 2021) for a representative subset of the parameter space, and they corroborated our conclusions.
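As a rough illustration of this type of comparison, the sketch below evaluates order-of-magnitude precession rates only: among the short-range forces it keeps just the general-relativistic term (the analysis above also includes static tides and rotational forces), uses a quadrupole-level Kozai-Lidov scaling, and drops prefactors of order unity, so it indicates the shape of the comparison in Fig. 10 rather than reproducing it.

```python
# Rough, order-of-magnitude sketch of the precession-rate comparison. Only the
# general-relativistic term is kept in omega_SRF, quadrupole-level Kozai-Lidov
# scaling is used, and order-unity prefactors are dropped.
import numpy as np

G, c = 6.674e-11, 2.998e8
Msun, au = 1.989e30, 1.496e11
M_star, m_comp = 1.081 * Msun, 1.152 * Msun       # TOI-858 B and its companion A
a_comp = 3000.0 * au                              # companion separation, taken as a_c

def mean_motion(a):
    return np.sqrt(G * M_star / a**3)

def omega_gr(a, e):                               # GR periapsis precession rate
    return 3.0 * mean_motion(a) * G * M_star / (c**2 * a * (1.0 - e**2))

def omega_kozai(a, e_c):                          # quadrupole Kozai-Lidov rate
    return mean_motion(a) * (m_comp / M_star) * (a / a_comp)**3 / (1.0 - e_c**2)**1.5

a0 = np.logspace(-1.5, 1.0, 200) * au             # initial semi-major axes of the planet
for e_c in (0.0, 0.5, 0.9, 0.99):
    ratio = omega_gr(a0, 0.5) / omega_kozai(a0, e_c)
    print(f"e_c={e_c}: minimum omega_SRF/omega_Kozai = {ratio.min():.2e}")
```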
In summary, if the binary companion's eccentricity is not implausibly high, the Kozai-Lidov scenario can be excluded as a possible explanation for the polar orbit of TOI-858 B b. As TOI-858 A is far away, compatible Kozai-Lidov effects would be quenched by short-range forces generated by TOI-858 B.
## 5 Discussion and conclusions
The planet TOI-858 B b may represent another case of a "perpendicular planet," adding statistical weight to the trend identified by Albrecht et al. (2021). Even though we lack an unambiguous measurement of the stellar inclination in order to disentangle the sky-projected and the true spin-orbit angle, we can still assert the polar nature of the orbit, as \(\lambda\) is very well constrained (Sect. 4.1.4). Moreover, the sky-projected and the 3D spin-orbit angle tend to be close when the former is near 90\({}^{\circ}\)(Fabrycky & Winn, 2009). In any case, because of TOI-858 A's wide separation, a Kozai-Lidov mechanism is unlikely to be the origin of TOI-858 B b's highly misaligned orbit. With the current orbital parameters, such an effect raised by the binary companion would be cancelled out by short-range forces between TOI-858 B and its orbiting planet. Other explanations for a polar orbit include stellar flybys (e.g., Rodet et al., 2021), secular resonance crossings (Petrovich et al., 2020), and magnetic warping (e.g., Romanova et al., 2021). The flyby scenario would be compatible with the orbit of the companion star TOI-858 A; however, more precise astrometry would be needed to determine the exact nature of its orbit. Alternatively, the present wide binary stars TOI-858 A and B could have been closer together in the past, allowing for the high-eccentricity tidal migration to take place. This could explain the origin of the discovered hot Jupiter TOI-858 B b as well as the polar spin-angle misalignment that we measured in this study (Vick et al., 2023). Theoretical work is needed to assess the feasibility of such a scenario. Future observations will reveal if hot and misaligned giant planets correlate with the presence of a distant stellar companion.
In this publication, we have reported the discovery of a Jovian planet transiting TOI-858 B on a polar orbit. The combined analysis of photometric, high-resolution spectroscopic, astrometric, and imaging observations has led to the following main results:
* From the joint transit photometry and RV analysis of planet TOI-858 B b, we find that it is on a \(3.2797178\pm 0.0000014\) day orbit around its \(1.08\,\mathrm{M}_{\odot}\) G0 host star. The planet has a mass of \(1.10^{+0.08}_{-0.07}\,\mathrm{M}_{\rm J}\) and a radius of \(1.255\pm 0.039\,\mathrm{R}_{\rm J}\).
* The Rossiter-McLaughlin Revolutions analysis leads to the conclusion that the planet is on a polar orbit with a sky-projected obliquity of \(\lambda=99.3^{+3.8}_{-3.7}\).
* Assuming that the photometric periodicity is from the host star, we find that the stellar spin axis and normal to the orbital plane are nearly perpendicular, and due to the fact that \(\lambda\) is so well constrained, this result does not strongly depend on the assumed rotation period.
* From our combined RV-astrometry analysis, we conclude that TOI-858 B and TOI-858 A are indeed the two components of a binary system, and if the Gaia EDR3 solutions are accurate, they recently became unbound.
* From our dynamical study, we conclude that Kozai-Lidov can be excluded as a possible explanation for the polar orbit of TOI-858 B b. However, a stellar flyby would be compatible with the current astrometry of the companion star TOI-858 A.
Figure 10: Ratio of short-range forces to Kozai–Lidov precession rate as a function of the initial semi-major axis and eccentricity of TOI-858 B b’s orbit for four different values of TOI-858 A’s orbital eccentricity. The white dashed lines show \(\dot{\omega}_{\rm SRF}/\dot{\omega}_{\rm Kozai}=0.1\). The red lines indicate the co-rotation radius.
###### Acknowledgements.
We would like to thank the anonymous referee for the very constructive comments which significantly improved the scientific quality of the article. J.H. is supported by the Swiss National Science Foundation (SNSF) through the Ambizione grant PP20072-180098. L.D.N thanks the SNSF for support under Early Postdoc Mobility grant #P26PE2.20044. V.B. is supported by the National Centre for Competence in Research "Planets" from the SNSF. V.B. and O.A. are funded by the ERC under the European Union's Horizon 2020 research and innovation programme (project SPICE DUNE, grant agreement No 947634). A.B.D. was supported by the National Science Foundation Graduate Research Fellowship Program under Grant Number DGE-1122492. J.V. acknowledges support from the Swiss National Science Foundation (SNSF) under the Ambizione grant #PZ0072_208945. Funding for the TESS mission is provided by NASA's Science Mission Directorate. We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. This research has made use of the Exoplanet Follow-up Observation Program website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research has made use of the Washington Double Star Catalog maintained at the U.S. Naval Observatory, of the SIMBAD database, operated at CDS, Strasbourg, France, and of NASA's Astrophysics Data System Bibliographic Services.
|
2307.16635 | Large amplitude dust-acoustic solitary waves and double layers in
nonthermal warm complex plasmas | Using a Sagdeev pseudopotential approach where the nonlinear structures are
stationary in a comoving frame, the arbitrary or large amplitude dust-acoustic
solitary waves and double layers have been studied in dusty plasmas containing
warm positively charged dust and nonthermal distributed electrons and ions.
Depending on the values of the critical Mach number, which varies with the
plasma parameter, both supersonic and subsonic dust-acoustic solitary waves are
found. It is found that our plasma system under consideration supports both
positive and negative supersonic solitary waves, and only positive subsonic
solitary waves and negative double layers. The parametric regimes for the
existence of subsonic and supersonic dust-acoustic waves and how the polarity
of solitary waves changes with plasma parameters are shown. It is observed that
the solitary waves and double layers solution exist at the values of Mach
number around its critical Mach number. The basic properties (amplitude, width,
speed, etc.) of the solitary pulses and double layers are significantly
modified by the plasma parameters (viz. ion to positive dust number density
ratio, ion to electron temperature ratio, nonthermal parameter, positive dust
temperature to ion temperature ratio, etc.). The applications of our present
work in space environments (viz. cometary tails, Earth's mesosphere, Jupiter's
magnetosphere, etc.) and laboratory devices, where nonthermal ions and
electrons species along with positively charged dust species have been
observed, are briefly discussed. | N. Alam, A. Mannan, A. A. Mamun | 2023-07-31T13:13:54Z | http://arxiv.org/abs/2307.16635v1 | # Large amplitude dust-acoustic solitary waves and double layers in nonthermal warm complex plasmas
###### Abstract
Using a Sagdeev pseudopotential approach where the nonlinear structures are stationary in a comoving frame, the arbitrary or large amplitude dust-acoustic solitary waves and double layers have been studied in dusty plasmas containing warm positively charged dust and nonthermal distributed electrons and ions. Depending on the values of the critical Mach number, which varies with the plasma parameter, both supersonic and subsonic dust-acoustic solitary waves are found. It is found that our plasma system under consideration supports both positive and negative supersonic solitary waves, and only positive subsonic solitary waves and negative double layers. The parametric regimes for the existence of subsonic and supersonic dust-acoustic waves and how the polarity of solitary waves changes with plasma parameters are shown. It is observed that the solitary waves and double layers solution exist at the values of Mach number around its critical Mach number. The basic properties (amplitude, width, speed, etc.) of the solitary pulses and double layers are significantly modified by the plasma parameters (viz. ion to positive dust number density ratio, ion to electron temperature ratio, nonthermal parameter, positive dust temperature to ion temperature ratio, etc.). The applications of our present work in space environments (viz. cometary tails, Earth's mesosphere, Jupiter's magnetosphere, etc.) and laboratory devices, where nonthermal ions and electrons species along with positively charged dust species have been observed, are briefly discussed.
## I Introduction
The physics of dusty plasmas has recently attracted more attention because of the large number of dust particles in our universe and their significance in understanding numerous collective processes in astrophysical and space environments [1; 2; 3; 4]. The most recent discoveries in the study of such complex plasma systems concern the remarkable capacity of small particles to emerge from molecular or radical precursors in reactive plasma environments, and grow into larger particles and incredibly small crystallites [5]. The existing plasma wave spectra are significantly modified when charged dust particles are introduced. Consideration of charged dust grains in plasma not only modifies the existing plasma wave spectra but also gives rise to a variety of new eigenmodes, such as dust-acoustic (DA) waves [6], dust-ion-acoustic (DIA) waves [7], dust-lattice waves, etc. Rao _et al._[6] first predicted the existence of these unique, extremely low-phase-velocity (as compared to the thermal velocities of electrons and ions) DA waves, in which the dust mass provides the inertia and the electron and ion thermal pressures produce the restoring force. Afterward, many laboratory investigations have thoroughly confirmed Rao _et al._'s [6] prediction [8; 9].
Over the last two decades, several authors have investigated nonlinear DA waves in various dusty plasma environments, both theoretically [10; 11; 12; 13; 14] and experimentally [15; 16]. Usually, dust particles are thought to be massive, negatively charged objects because of the accumulation of electrons from the background plasma species [17]. However, there are a number of mechanisms (such as photoemission in the presence of ultraviolet light or thermionic emission from grains heated by radiation) by which a dust particle can acquire a positive charge and coexist with negatively charged dust particles, ions, and electrons in a variety of dusty plasmas (DP), such as the Earth's mesosphere [18; 19; 20], cometary tails [21; 22], Jupiter's surroundings [23], Jupiter's magnetosphere [24], etc., and laboratory equipment [25; 26; 27]. As a consequence of the following three key mechanisms, the dust species become positively charged [28; 29; 30; 31].
1. The collision of highly energetic plasma particles like electrons or ions results in the secondary emission of electrons from the surface of the dust grain [28].
2. Severe radiative or thermal heating induces the thermionic emission of electrons from the surface of the dust grain [29].
3. Electron photoemission from the dust grain surface is caused by a flow of high-energy photons [30].
The existence of ions and electrons that are not in thermodynamic equilibrium is exposed by space plasma observations [32; 33; 34; 35]. A defining aspect of space plasmas is the nonthermal or superthermal distribution functions that the electrons and ions follow. Such distribution functions are, in particular, a well-known characteristic of the auroral zone [36]. It is becoming recognized that nonthermal electron/ion distributions constitute a distinctive aspect of space plasmas. Nonthermal velocity distributions are typically simple to evaluate using the Cairns distribution function [37]. The study of nonthermal ions and electrons in dusty plasma is therefore becoming more and more relevant. The Cairns velocity distribution function may be represented in 1D normalized form as [37]:
\[f(v)=\frac{1+\alpha(v^{2}-2\phi)^{2}}{(1+3\alpha)\sqrt{2\pi}}\exp\left[-\frac{1}{2 }(v^{2}-2\phi)\right]\,. \tag{1}\]
Here, \(\phi\) is the electrostatic wave potential, while \(\alpha\) is a parameter determining the number of fast (energetic) particles in the plasma system under study. Taibany and Sabry [38] studied small amplitude dust-acoustic solitary waves and double layers with the help of the Zakharov-Kuznetsov equation. It is observed that, due to the nonthermal ions, both compressive and rarefactive solitary waves exist. Kian and Mahdieh [39] have shown that both nonthermally distributed electrons and ions significantly modify the properties of large amplitude dust-acoustic waves. Verheest [40] considered both nonthermal electrons and ions to study large amplitude dust-acoustic solitary waves and double layers in an opposite-polarity dusty plasma system. The effect of the Cairns nonthermal distribution of the ion species on ion-acoustic solitary waves has been studied by Mamun and Mannan [41].
The existence of positively charged dust species in electron-ion plasmas has attracted many plasma physicists to explore the new features of linear and nonlinear ion-acoustic waves. The presence of positively charged dust species plays a vital role in modifying the salient features of these kinds of waves [41; 42; 43; 44]. Mamun [43] found that the stationary positively charged dust species plays a vital role in the formation of ion-acoustic subsonic solitary waves in electron-ion-positively charged dust plasma medium. Mamun and Mannan [41] studied the effects of static positively charged dust species in a complex plasma medium. They observed that the presence of static positively charged dust species in complex plasma systems supports the ion-acoustic solitary waves and double layers. Mushinzimana _et al._[44] studied the propagation of dust-ion-acoustic solitary waves and double layers in a dusty plasma with the presence of adiabatic positively charged dust grains. Recently, Susmita _et al._[42] investigated the propagation of three-dimensional cylindrical dust-acoustic solitary waves in an unmagnetized dusty plasma environment. The latter is composed of positively charged adiabatic dust grains, nonthermal ions, and electrons. They have used the reductive perturbation method which is valid for small amplitude limits.
In this paper, we study large or arbitrary amplitude dust-acoustic waves in a warm nonthermal plasma medium containing warm adiabatic positively charged dust species and Cairns nonthermally distributed electrons and ions. To reduce the set of governing equations that describe our plasma system to an energy integral equation, we use the well-known and widely used pseudo-potential approach. We then present the formation of solitary waves and double layers with the help of the Sagdeev potential. We also present the formation of subsonic and supersonic solitary waves, which depends on the critical Mach number. It is observed that the presence of warm adiabatic positively charged dust species significantly modifies the formation of the potential wells and the basic features (viz. amplitude, width, speed, etc.) of positive and negative solitary waves and double layers.
The manuscript is organized as follows. In Section II, we present the model equations that describe our plasma system under consideration. The formation and properties of solitary waves and double layers by using the pseudo-potential technique are discussed in Section III. Finally, the results of our present theoretical investigation are reported in Section IV.
## II Governing equations
To study the nonlinear propagation of DA waves we consider a collisionless, unmagnetized nonthermal dusty plasma medium containing adiabatic positively charged dust species and nonthermally distributed electrons and ions. For our present plasma system, the charge neutrality condition at equilibrium can be written as \(n_{e0}=Z_{d}n_{d0}+n_{i0}\), where \(n_{e0}\), \(n_{d0}\), and \(n_{i0}\) are the unperturbed number densities of the nonthermal electrons, positively charged dust species, and nonthermal ions, respectively, and \(Z_{d}\) is the charge state (number of elementary charges) of the positively charged dust grains. The positively charged dust grain dynamics are governed by the following dimensionless equations:
\[\frac{\partial n_{d}}{\partial t}+\frac{\partial}{\partial x}(n_{ d}u_{d})=0, \tag{2}\] \[\frac{\partial u_{d}}{\partial t}+u_{d}\frac{\partial u_{d}}{ \partial x}=-\frac{\partial\phi}{\partial x}-\frac{\sigma_{d}}{n_{d}}\frac{ \partial{n_{d}}^{\gamma}}{\partial x},\] (3) \[\frac{\partial^{2}\phi}{\partial x^{2}}=(1+\mu)n_{e}-\mu n_{i}-n_ {d}. \tag{4}\]
For convenience, we use here dimensionless equations by introducing the normalizing factors as follows: \(n_{e}\), \(n_{i}\), and \(n_{d}\) are normalized by their unperturbed number density \(n_{e0}\), \(n_{i0}\), and \(n_{d0}\), respectively; dust fluid velocity \(u_{d}\) is normalized by the DA speed \(C_{d}=(Z_{d}k_{B}T_{i}/m_{d})^{1/2}\) with \(k_{B}\), \(T_{i}\), and \(m_{d}\) being the Boltzmann constant, ion temperature, and dust grain mass, respectively; the electrostatic wave potential \(\phi\) is normalized by \(k_{B}T_{i}/e\); the space \(x\) and time \(t\) variables are normalized by Debye radius \(\lambda_{d}=(k_{B}T_{i}/4\pi n_{d0}Z_{d}e^{2})^{1/2}\) and dust plasma period \(\omega_{pd}^{-1}=(m_{d}/4\pi n_{d0}Z_{d}^{2}e^{2})^{1/2}\), respectively; the dust adiabatic index \(\gamma=(2+N)/N\) is equal to 3 for one-dimensional cases in our present study, where \(N\) is the number of degrees of freedom; \(\sigma_{d}=T_{d}/Z_{d}T_{i}\) with \(T_{d}\) being the dust temperature; and \(\mu=n_{i0}/Z_{d}n_{d0}\).
Besides the warm adiabatic dust, the number densities of the Cairns nonthermally distributed electrons and ions take the following dimensionless forms:
\[n_{e}=(1-\sigma_{i}\beta\phi+\beta(\sigma_{i}\phi)^{2})\exp( \sigma_{i}\phi)\,, \tag{5}\] \[n_{i}=(1+\beta\phi+\beta\phi^{2})\exp(-\phi)\,, \tag{6}\]
where \(\beta=4\alpha/(1+3\alpha)\) is the nonthermal parameter and \(\sigma_{i}=T_{i}/T_{e}\).
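As a consistency check of these expressions, the velocity moment of the Cairns distribution (1) can be integrated numerically and compared with the closed-form density factor of the type appearing in Eqs. (5) and (6); a minimal sketch, integrating over all velocities for illustration, is given below.

```python
# Minimal numerical check: the velocity moment of the Cairns distribution in
# Eq. (1), integrated here over all v for illustration, reproduces a density
# factor of the form (1 - beta*phi + beta*phi^2) exp(phi), i.e. the structure
# appearing in Eqs. (5)-(6), with beta = 4*alpha/(1 + 3*alpha).
import numpy as np
from scipy.integrate import quad

def cairns_f(v, phi, alpha):
    w = v**2 - 2.0 * phi
    norm = (1.0 + 3.0 * alpha) * np.sqrt(2.0 * np.pi)
    return (1.0 + alpha * w**2) / norm * np.exp(-0.5 * w)

alpha, phi = 0.2, 0.3
beta = 4.0 * alpha / (1.0 + 3.0 * alpha)

n_numeric, _ = quad(cairns_f, -20.0, 20.0, args=(phi, alpha))
n_closed = (1.0 - beta * phi + beta * phi**2) * np.exp(phi)
print(n_numeric, n_closed)    # the two agree to integration accuracy
```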
## III DA solitary waves and double layers
To analyze the fully nonlinear DA solitary waves and double layers by using the pseudo-potential method [45; 46; 47; 48], we assume a comoving frame in which the nonlinear structure is stationary (\(\partial/\partial t=0\)) and require all the variables to be undisturbed at \(|\xi|\rightarrow\infty\). So, all the dependent variables in the equations depend only on the single variable \(\xi=x-Mt\), where \(M\) is the Mach number. Therefore, we write our dimensionless equations (2)-(4) in terms of the new variable as
\[M\frac{\partial n_{d}}{\partial\xi}-\frac{\partial}{\partial\xi }(n_{d}u_{d})=0, \tag{7}\] \[M\frac{\partial u_{d}}{\partial\xi}-u_{d}\frac{\partial u_{d}}{ \partial\xi}=\frac{\partial\phi}{\partial\xi}+\frac{\sigma_{d}}{n_{d}}\frac{ \partial{n_{d}}^{3}}{\partial\xi},\] (8) \[\frac{\partial^{2}\phi}{\partial\xi^{2}}=(1+\mu)n_{e}-\mu n_{i}- n_{d}. \tag{9}\]
Integrating equation (7) with respect to \(\xi\) and using the appropriate boundary conditions, \(n_{d}=1\), \(u_{d}=0\), \(\phi(\xi)=0\), \(d\phi/d\xi=0\) at \(|\xi|\rightarrow\infty\) we obtain the normalized dust fluid velocity
\[u_{d}=M-\frac{M}{n_{d}}. \tag{10}\]
By using the above-mentioned boundary conditions and (10) into equation (8) one obtains
\[3\sigma_{d}n_{d}^{4}-(M^{2}+3\sigma_{d}-2\phi)n_{d}^{2}+M^{2}=0\,. \tag{11}\]
Thus the expression for normalized dust number density \(n_{d}\) from (11) can be expressed as
\[n_{d}=\frac{1}{\sqrt{6\sigma_{d}}}\left(\psi-\sqrt{\psi^{2}-12\sigma_{d}M^{2}} \right)^{1/2}, \tag{12}\]
where \(\psi=(M^{2}+3\sigma_{d}-2\phi)\). By inserting equations (5), (6), and (12) into (9), multiplying both sides of the resulting equation by \(d\phi/d\xi\), and integrating once with respect to \(\xi\) with the above-mentioned boundary conditions, we obtain the following differential equation
\[\frac{1}{2}\left(\frac{d\phi}{d\xi}\right)^{2}+V(\phi,M)=0\,. \tag{13}\]
This equation represents the energy integral of a pseudo-particle of unit mass, with pseudo-time \(\xi\), pseudo-position \(\phi\), and pseudo-potential \(V(\phi,M)\) defined by
\[V(\phi,M)=C-\frac{e^{\sigma_{i}\phi}(1+\mu)}{\sigma_{i}}\left(1+ 3\beta-3\beta\sigma_{i}\phi+\beta\sigma_{i}^{2}\phi^{2}\right)\] \[-\frac{1}{3}\sqrt{\frac{2}{3\sigma_{d}}}\left(\psi+\frac{1}{2} \sqrt{\psi_{1}}\right)\left(\psi-\sqrt{\psi_{1}}\right)^{1/2}\] \[-\mu e^{-\phi}\left(1+3\beta+3\beta\phi+\beta\phi^{2}\right)\,, \tag{14}\]
where \(\psi_{1}=\psi^{2}-12\sigma_{d}M^{2}\) and the integration constant
\[C=(1+3\beta)\left(\frac{1+\mu}{\sigma_{i}}+\mu\right)+M^{2}+\sigma_{d}\,. \tag{15}\]
Note that \(C\) is determined in such a way that \(V(\phi,M)=0\) at \(\phi=0\); that is, the condition \(V(0,M)=0\) is satisfied because of our choice of \(C\). Due to the equilibrium charge neutrality condition, \(V^{\prime}(0,M)=0\) is also satisfied. Here the prime (\({}^{\prime}\)) denotes the derivative of \(V(\phi,M)\) with respect to \(\phi\). To obtain solitary wave and double layer solutions, the origin must be an unstable maximum, i.e. \(V^{\prime\prime}(0,M)<0\). The latter condition defines the critical Mach number \(M_{c}\), which is the solution of \(V^{\prime\prime}(0,M)=0\) and is given by
\[M_{c}=\left(\frac{1}{(1-\beta)(\mu+\sigma_{i}+\mu\sigma_{i})}+3\sigma_{d} \right)^{1/2}\,. \tag{16}\]
It is worth mentioning that \(V^{\prime\prime\prime}(0,M_{c})=0\) determines the values of the parameters at which the polarity of the solitary waves changes sign. Thus, we can write [49; 37]:
\[V^{\prime\prime\prime}(0,M_{c})=\mu-(1+\mu)\sigma_{i}^{2}+(3+12\sigma_{d}\rho) \rho^{2}\,, \tag{17}\]
where \(\rho=(1-\beta)(\mu+\sigma_{i}+\mu\sigma_{i})\). Note also that the conditions \(V^{\prime}(\phi_{m},M)>0\) and \(V^{\prime}(\phi_{m},M)<0\) determine the solitary wave with positive potential (\(\phi>0\)) and negative potential (\(\phi<0\)), respectively. At the same time, \(V^{\prime}(\phi_{m},M)=0\) determines the double layers. Here \(\phi_{m}\) denotes the amplitude of the solitary waves or double layers. Finally, it is concluded that the solitary waves and double layers exist if and only if \(V^{\prime\prime}(0,M)<0\), i.e. \(M>M_{c}\), where \(M_{c}\) is defined in (16).
Figures 1 and 2 visualize how the Mach number or phase speed of DA waves changes with different values of nonthermal parameter \(\alpha\) and ion to positively charged dust number density ratio \(\mu\). It is found that \(M_{c}\) decreases as \(\mu\) increases. Note that as we increase the values of \(\sigma_{d}\), \(M_{c}\) also increases but \(M_{c}\) decreases with \(\sigma_{i}\). It is worth mentioning that the effects of nonthermality \(\alpha\) and \(\mu\) determine the formation of subsonic and supersonic DA waves. The conditions \(M_{c}<M<1\) and \(1<M_{c}<M\) determine the subsonic and supersonic DA waves, respectively. It is observed that the region of formation of subsonic DA waves becomes broadened when the nonthermal parameter \(\alpha\) decreases. On the contrary, the possibility of the formation of supersonic DA waves becomes very high with the increasing values of ion number density and nonthermality. The red color shaded area (as shown in figure 2) represents the region of the formation of subsonic DA waves.
The expression in (17) determines the polarity of subsonic and supersonic DA solitary waves and double layers. Figure 4 shows how the regions of \(V^{\prime\prime\prime}(0,M_{c})>0\), \(V^{\prime\prime\prime}(0,M_{c})=0\), and \(V^{\prime\prime\prime}(0,M_{c})<0\) change with \(\mu\) and \(\alpha\) for different values of \(\sigma_{i}\). Note that \(V^{\prime\prime\prime}(0,M_{c})>0\) (\(V^{\prime\prime\prime}(0,M_{c})<0\)) determines positive (negative) solitary wave and double layer potentials. Increasing the value of \(\sigma_{i}\) reduces the parametric regime for the formation of positive solitary waves and double layers.
The Sagdeev potential \(V(\phi)\) wells (given by (14)) are plotted against the pseudo-position \(\phi\). The formation of Sagdeev potential wells corresponding to the positive (\(\phi>0\)) and negative (\(\phi<0\)) solitary waves and double layers is shown in figures 4-9. The distance between the intercept on the positive or negative \(\phi\)-axis and the origin is the amplitude \(\phi_{m}\) of the solitary waves or double layers. The width is defined by \((\phi_{m}/\sqrt{|V_{m}|})\), where \(|V_{m}|\) is the highest possible value of \(V(\phi)\) in the pseudo-potential wells. The Sagdeev potential wells on the positive \(\phi\)-axis correspond to the formation of the positive subsonic solitary waves (as displayed in figures 4 and 5). The amplitude (width) of the positive subsonic solitary waves increases (decreases) with the increasing value of \(\mu\). But the effect of the nonthermal parameter \(\alpha\) reduces (enhances) the amplitude (width) of subsonic solitary waves with \(\phi>0\). The positive and negative supersonic solitary waves correspond to the formation of the pseudo-potential wells on the positive and negative \(\phi\)-axes (as displayed in figures 6-8). The effects of the nonthermal parameter \(\alpha\) and \(\mu\) on the positive and negative supersonic solitary waves are found to be similar to those in figures 4 and 5. The amplitude (width) of both subsonic and supersonic solitary waves decreases (increases) with \(\sigma_{d}\). For \(M>M_{c}\), the negative double layer is formed with negative potential \(\phi<0\) only (as displayed in figure 9). Figure 9 shows that a solitary wave solution is formed at Mach number \(M=3.35\) (solid curve), a double layer solution is found at \(M=3.372\) (dashed curve), and no solitary wave structure exists at \(M=3.386\) (dot-dashed curve). It is important to mention that no subsonic solitary waves with negative potential \(\phi<0\) are observed for our choice of plasma parameters.
acoustic waves increases. The presence of a positively charged dust particle also increases the phase speed of DA waves. On the contrary, the phase speed of DA waves reduces as we increase the ion number density. Because of adiabatic warm positively charged dust species, the Mach number increases, but increasing the ion temperature reduces the value of the Mach number. Depending on the value of the phase speed of DA waves, the formation of subsonic and supersonic DA waves is discussed.
2. The parametric regimes of how the polarity of DA solitary waves and double layers changes with plasma parameters are shown.
3. The pseudo-potential wells are formed on both the positive and negative \(\phi\)-axes. The height and thickness of the potential wells formed on both the positive and negative \(\phi\)-axes are discussed. The solitary wave and double layer solutions are found at values of the Mach number around its critical value (\(M_{c}\)), i.e. \(M>M_{c}\). Depending on the value of the Mach number, both subsonic and supersonic DA solitary waves are observed. It is seen that the salient features (the amplitude, the width, the speed, etc.) of DA solitary waves are significantly modified by the plasma parameters. The positive dust temperature reduces the amplitude of the solitary waves but increases their width. Increasing the ion number density increases (decreases) the amplitude (width) of the solitary waves. On the other hand, the amplitude (width) of the solitary waves decreases (increases) with the nonthermal parameter.
4. For fixed plasma parameters, the solitary wave, the double layer, and no solitary wave solution are found with different values of Mach number.
5. No negative subsonic solitary structures and positive double layers are observed in our present plasma system.
Finally, it is concluded that the results of our theoretical and numerical investigations will help in understanding the fundamental characteristics of DA supersonic and subsonic solitary waves and double layers that are found in space environments, such as the Earth's mesosphere or ionosphere [18; 19; 20], cometary tails [21], Jupiter's surroundings [23] and magnetosphere [24], as well as laboratory devices [25; 26; 27].
## Declarations
**Disclosure of potential conflict of interest:** The authors declare that there is no conflict of interest.
**Funding:** This study was not supported by any funding.
**Data availability:** The data that support the findings of this study are available within the article.
|
2309.10930 | Test-Time Training for Speech | In this paper, we study the application of Test-Time Training (TTT) as a
solution to handling distribution shifts in speech applications. In particular,
we introduce distribution-shifts to the test datasets of standard
speech-classification tasks -- for example, speaker-identification and
emotion-detection -- and explore how Test-Time Training (TTT) can help adjust
to the distribution-shift. In our experiments that include distribution shifts
due to background noise and natural variations in speech such as gender and
age, we identify some key-challenges with TTT including sensitivity to
optimization hyperparameters (e.g., number of optimization steps and subset of
parameters chosen for TTT) and scalability (e.g., as each example gets its own
set of parameters, TTT is not scalable). Finally, we propose using BitFit -- a
parameter-efficient fine-tuning algorithm proposed for text applications that
only considers the bias parameters for fine-tuning -- as a solution to the
aforementioned challenges and demonstrate that it is consistently more stable
than fine-tuning all the parameters of the model. | Sri Harsha Dumpala, Chandramouli Sastry, Sageev Oore | 2023-09-19T21:06:22Z | http://arxiv.org/abs/2309.10930v2 | # Test-Time Training for Speech
###### Abstract
In this paper, we study the application of Test-Time Training (TTT) as a solution to handling distribution shifts in speech applications. In particular, we introduce distribution-shifts to the test datasets of standard speech-classification tasks--for example, speaker-identification and emotion-detection--and explore how Test-Time Training (TTT) can help adjust to the distribution-shift. In our experiments that include distribution shifts due to background noise and natural variations in speech such as gender and age, we identify some key-challenges with TTT including sensitivity to optimization hyperparameters (e.g., number of optimization steps and subset of parameters chosen for TTT) and scalability (e.g., as each example gets its own set of parameters, TTT is not scalable). Finally, we propose using BitFit - a parameter-efficient fine-tuning algorithm proposed for text applications that only considers the bias parameters for fine-tuning - as a solution to the aforementioned challenges and demonstrate that it is consistently more stable than fine-tuning all the parameters of the model.
## 1 Introduction
Deep learning methods achieve impressive results in a variety of speech-based downstream tasks when the train and test data are in-distribution (Gulati et al., 2020; Snyder et al., 2017; Zou et al., 2022). In practice, however, the train and test distributions are usually different, i.e., there exists a distributional shift between the train and test data. In speech, such distributional shifts can be introduced due to inter-speaker variations such as speaking style, gender, age, etc., or due to background induced noises such as babble, living room, traffic, etc. These distributional shifts significantly degrade the performance of the deep learning models (Likhomanenko et al., 2021; Garcia-Romero et al., 2019; Parry et al., 2019). In real-world applications, some form of distributional shifts often occurs in the test data, making it of vital importance for deep learning models to be robust to these shifts.
One approach to handling distributional shift is with _train-time_ techniques (Khurana et al., 2021; Li et al., 2020), in which we need to anticipate the type of distributional shifts that can occur during testing, and then train the model on data collected with this anticipated list of distributional shifts. In practice, the anticipated list of distributional shifts is non-exhaustive, and there is no guarantee that the trained model can generalize well to an unseen domain at test-time.
Another interesting approach which attained significant improvements in performance for imaging tasks is test-time training (TTT) (Sun et al., 2020; Liu et al., 2021; Gandelsman et al., 2022). In TTT, we update the model at inference using the test-sample. As the test sample does not have a label, a self-supervised learning task is used for this update.
The efficacy of TTT is impacted by the choice of the self-supervised learning task (Liu et al., 2021). Gandelsman et al. (2022) shows that masked auto-encoding on images is a suitable task for TTT. Motivated by the success of the transformer-based masked autoencoders (MAE) for speech (Huang
et al., 2022), we extend a test-time training approach based on MAE (Gandelsman et al., 2022) to speech in this work. To the best of our knowledge, this is the first work to adapt TTT to the speech domain. We show that TTT-MAE for speech shows significant improvements on three different downstream tasks under a variety of distributional shifts.
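A schematic of the per-sample TTT-MAE procedure is sketched below; `mae` (a pretrained speech MAE assumed to return its reconstruction loss for masked spectrogram patches and to expose an `encode` method), `head`, and the `mask_ratio` argument are placeholders rather than the exact interfaces used in our implementation.

```python
# Schematic of single-sample TTT with a masked autoencoder, in the spirit of
# TTT-MAE. `mae` and `head` are placeholders for a pretrained speech MAE and
# task classifier; their interfaces are assumptions, not this paper's code.
import copy
import torch

def test_time_train(mae, head, spectrogram, steps=10, lr=1e-4, mask_ratio=0.75):
    model = copy.deepcopy(mae)                      # fresh copy per test sample
    model.train()
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    x = spectrogram.unsqueeze(0)                    # (time, mel) -> batch of one
    for _ in range(steps):
        loss = model(x, mask_ratio=mask_ratio)      # self-supervised MAE reconstruction loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        feats = model.encode(x)                     # adapted encoder features
        logits = head(feats.mean(dim=1))            # mean-pool over time, then classify
    return logits.argmax(dim=-1)
```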
Two major challenges of using TTT during inference are maintaining stable performance robust to (reasonable) ranges of hyperparameters, and the potential high computational cost, which is due to both (a) increased memory requirements for updating all parameters of the model, and (b) inability to process a batch of samples if the individual samples in the batch are associated with different distributional shifts. We show that, for speech, it is possible to make significant improvements with respect to all of these issues, i.e. we improve stability, reduce memory requirements, and allow batch processing, all by using parameter-efficient training. Specifically, we show that using bias fine-tuning (Zaken et al., 2022), we can process a batch of test samples, even under the condition that each test sample has a different distributional shift.
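A sketch of the bias-only (BitFit-style) variant is shown below; only parameters whose names end in "bias" receive gradients and optimizer state, which keeps the adapted state small. As above, the model's self-supervised loss interface is a placeholder.

```python
# Sketch of BitFit-style test-time training: only parameters whose names end
# in "bias" receive gradients and optimizer state, so the adapted state per
# test sample (or per batch) stays tiny compared with full fine-tuning.
import torch

def select_bias_parameters(model):
    bias_params = []
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
        if param.requires_grad:
            bias_params.append(param)
    return bias_params

def ttt_bitfit(model, batch, steps=10, lr=1e-3, mask_ratio=0.75):
    bias_params = select_bias_parameters(model)     # roughly 0.1% of all parameters
    opt = torch.optim.AdamW(bias_params, lr=lr)
    model.train()
    for _ in range(steps):
        loss = model(batch, mask_ratio=mask_ratio)  # self-supervised MAE loss (placeholder API)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```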
## 2 Related work
**Domain adaptation and generalization.** These methods are based on the assumption that models will have access to labelled data from the train distribution and unlabelled (labelled) data from the test distribution. A common strategy in domain adaptation is to learn domain invariant features between train and test data distributions (Sun and Saenko, 2016; Tzeng et al., 2017; Long et al., 2018). Another approach is to perform self-training on the test distribution by generating pseudo-labels for the unlabelled data (Xie et al., 2020). Domain generalization techniques mainly resort to adversarial training, meta learning or adversarial data augmentation (Yang et al., 2021; Balaji et al., 2018; Dou et al., 2019; Volpi et al., 2018). In speech, domain adaptation and generalization techniques have been applied to tasks such as automatic speech recognition, emotion recognition and speaker classification (Khurana et al., 2021; Hu et al., 2021; Song et al., 2017; Li et al., 2020). All these methods assume information about the test domain is available during training.
**Learning at inference.** In the above techniques, the model is trained to generalize to all possible distributional shifts. But anticipating every possible distributional shift at train time is not feasible, particularly in real world applications. Most models trained using domain generalization techniques are fixed during inference even when the test distribution changes. To alleviate this problem, another line of work is to adapt the model to test samples at inference. Methods to update the model at inference can be classified into two types: test-time adaptation and test-time training.
_Test-time Adaptation (TTA):_ These methods allow using off-the-shelf models without any additional training. In general terms, test-time adaptation focuses on adapting models that were not trained with a special configuration prior to being used at inference (Goyal et al., 2022; Boudiaf et al., 2023). One of the first approaches in this category, called TENT (Wang et al., 2021), requires the model and target data. It then updates the model layers by minimizing the Shannon entropy of predictions. Mummadi et al. (2021) improves TENT by using a log-likelihood ratio instead of entropy, and by estimating target batch statistics. Another approach is to update batch normalization (BN) statistics using a large number of test samples (Nado et al., 2020; Schneider et al., 2020). SITA (Khurana et al., 2022) is one such approach which can be used on a single test data example. SITA generates a pseudo-batch by randomly augmenting this example and then computes statistics on this pseudo-batch. TTA in computer vision heavily targets adaptation of the BN layers by re-estimating batch statistics on target data. In this work we use transformer models, which have achieved state-of-the-art performance on many speech-based downstream tasks. Since transformer models are not equipped with BN layers (in part because the lengths of batched input sequences differ), these TTA techniques cannot be applied directly to speech. Only one work has applied TTA to speech (Lin et al., 2022); this approach appears limited to ASR models trained with CTC loss.
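For contrast with TTT, a minimal sketch of a TENT-style adaptation step is given below; it follows the image-domain recipe summarized above (entropy minimization while updating only normalization-layer affine parameters, selected here by a simple name match) and is not something we apply to the speech models in this work.

```python
# Minimal TENT-style adaptation step (image-domain recipe, shown only for
# contrast): minimize the entropy of predictions while updating only affine
# parameters of normalization layers, selected by a heuristic name match.
import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-12)).sum(dim=-1).mean()

def tent_step(model, batch, lr=1e-3):
    affine = [p for n, p in model.named_parameters()
              if "norm" in n.lower() and (n.endswith("weight") or n.endswith("bias"))]
    opt = torch.optim.SGD(affine, lr=lr, momentum=0.9)
    loss = prediction_entropy(model(batch))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```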
_Test-time training (TTT):_ The basic paradigm in TTT (Sun et al., 2020) is to use a test-time task (usually a self-supervised learning task) besides the main task during training, and update the pre-trained model using test data with the (self-supervised) test-time objective before the final prediction. Sun et al. (2020) uses rotation prediction as the self-supervised task. Later, TTT++ (Liu et al., 2021) considers contrastive loss as the self-supervised task in addition to aligning the features by comparing the statistics of the source data with those of the current test batch. Recently, Osowiechi et al. (2023)
minimizes the distributional shift between the train and test distributions, estimated using normalizing flows. Gandelsman et al. (2022) shows that using masked autoencoding (MAE) (He et al., 2022) as the self-supervised task for TTT achieves substantial improvements in image recognition under various distributional shifts. TTT-based techniques have been applied to other domains such as videos (Azimi et al., 2022; Wang et al., 2023), natural language processing (NLP) (Banerjee et al., 2021) and compressed sensing of medical images (Darstani et al., 2022), where the self-supervised task varies across domains. In this work, we extend the TTT framework to the speech domain for the first time. Our work extends the TTT-MAE framework (Gandelsman et al., 2022) to speech by building on audio MAEs trained with spectrograms (Huang et al., 2022; Chong et al., 2022).
**Parameter efficient fine-tuning (PEFT).** Fine-tuning entire pre-trained models achieves state-of-the-art performance for various downstream tasks (Kenton and Toutanova, 2019; Raffel et al., 2020; Mohamed et al., 2022). However, as the size of these models increases rapidly, updating the models in parameter-efficient ways becomes crucial (Yang et al., 2022; Chen et al., 2022). In the NLP domain, PEFT techniques typically refer to either: a) insertion of new learnable modules with fewer parameters as compared to the whole model; or b) modifying carefully selected parameters of the model. In Houlsby et al. (2019); Pfeiffer et al. (2020); Sung et al. (2022); Li and Liang (2021), only the additional parameters added to the pre-trained models are fine-tuned for the downstream tasks while Yang et al. (2022); Zaken et al. (2022) fine-tune a subset of parameters for downstream tasks without inserting any new modules. Zaken et al. (2022) show that just fine-tuning the bias parameters, which constitute about 0.1% of the overall parameters, can outperform full fine-tuning, especially for small datasets. Motivated from NLP, PEFT techniques are also applied to pre-trained models trained using images (Jia et al., 2022; Chen et al., 2022; Sung et al., 2022; Lee et al., 2022), and speech (Li et al., 2021; Huo et al., 2022). In this work, we study PEFT techniques applied to speech in the context of TTT, focusing on the following questions: 1) Can PEFT techniques achieve comparable or improved performance compared to full fine-tuning in TTT? 2) Can PEFT be more stable than full fine-tuning in TTT?
**Self-Supervision using masking in speech.** The denoising autoencoder (Vincent et al., 2008) is one of the earliest forms of self-supervision in speech (Lu et al., 2012). Subsequent advancements in self-supervision have predominantly focused on masked language modeling (MLM) (Liu et al., 2020; Baevski et al., 2020; Hsu et al., 2021; Gong et al., 2022; Huang et al., 2022). Many of these works employ transformer networks trained with MLM frameworks to achieve state-of-the-art performance on various speech-related downstream tasks. More recently, the concept of a masked autoencoder (MAE) based on the Vision Transformer (ViT) has been extended to the audio and speech domain (Huang et al., 2022; Baade et al., 2022; Chong et al., 2022; Nizumi et al., 2022). In MAE, only the non-masked spectrogram patches are encoded, distinguishing it from other approaches that encode both masked and non-masked input (wave/spectrogram) segments for self-supervised pre-training. This distinction makes MAE-based models computationally appealing. In this work, we analyze the effect of distributional shift on the performance of MAE for speech and use a TTT-based approach to enhance the robustness of MAE in the presence of such distributional shifts.
## 3 Method
**Pre-training MAE.** Our masked autoencoder (MAE) for speech, following Huang et al. (2022); Chong et al. (2022), aims to reconstruct the masked patches of the speech Mel-spectrogram with an asymmetrical encoder-decoder architecture. Below, we provide a brief overview of the MAE.
First, we transform the input speech waveform into 128-dimensional Mel-spectrograms using a Hanning window of size 25ms for every 10ms. Next, we divide the spectrogram into a sequence of non-overlapping patches, each patch sized 16 \(\times\) 16. These patches are then flattened and embedded with a linear projection layer. To provide positional information, fixed sinusoidal positional embeddings are added to the embedded patches. Afterward, we randomly mask 75% of the patches while preserving positional indices of all the patches. This enables the decoder to reconstruct the spectrogram. For the encoder, only unmasked patches are used to generate latent representations. The decoder then tries to reconstruct the original spectrogram, given latent representations of the encoder and masked patches as input. The latent representations and masked patches are organized in the initial order before being provided as input to the decoder. During training, the objective is to minimize mean squared error (MSE) between reconstructed and input spectrograms, averaged over the masked patches.
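To make the masking and reconstruction objective concrete, the following is a minimal PyTorch sketch of per-batch random masking and the MSE-on-masked-patches loss described above; the function names and the (batch, patches, features) tensor layout are illustrative assumptions rather than the exact implementation.

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Randomly hide a fraction of patches while remembering their positions.

    patches: (B, N, D) embedded spectrogram patches.
    Returns the visible patches, the permutation needed to restore the original
    order, and a boolean mask (True = masked) used later by the loss.
    """
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=patches.device)      # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)             # random permutation of patches
    ids_restore = torch.argsort(ids_shuffle, dim=1)       # inverse permutation
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N, dtype=torch.bool, device=patches.device)
    mask.scatter_(1, ids_keep, False)                      # False = visible, True = masked
    return visible, ids_restore, mask

def mae_loss(pred, target, mask):
    """Mean squared error averaged over the masked patches only."""
    per_patch = ((pred - target) ** 2).mean(dim=-1)        # (B, N)
    mask = mask.float()
    return (per_patch * mask).sum() / mask.sum()
```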
**Train-time training.** For the downstream tasks, we only use the encoder and discard the decoder. The latent representations generated by the encoder are provided as input to the task-specific classifier head. In this paper, we consider three different ways to use the pre-trained encoder for downstream tasks: 1) Linear probing: freeze the encoder to use it as a feature extractor and train only the classifier head; 2) Fine-tuning: train both encoder and classifier head, end-to-end, for the downstream task; 3) Linear-probing and fine-tuning (LP-FT): first train only the classifier head using linear probing, and then fine-tune both encoder and classifier head end-to-end, as explained in Kumar et al. (2022).
**Test-time training.** Similar to Sun et al. (2020) and Gandelsman et al. (2022), we use a Y-shaped architecture: a shared encoder network \(e\) followed by two heads, a self-supervised head \(g\) and a task-specific classifier head \(h\). Here, \(e\) and \(g\) are the encoder and decoder networks of the pre-trained MAE, respectively. The classifier head \(h\) uses a linear projection from the dimension of the encoder features to the number of classes, depending on the downstream task.
When using TTT, we use linear-probing to update the weights of the classifier head \(h\) and freeze the weights of the shared encoder \(e\) during train-time training. As explained in Gandelsman et al. (2022), we found that linear-probing is more suitable for TTT as compared to full-model fine-tuning. At test-time, the parameters of the shared encoder are updated to minimize the self-supervised loss. We explore the following approaches to update the weights of the encoder during test-time:
1) **Full fine-tuning**: In full fine-tuning, all parameters of the shared encoder are updated to minimize the self-supervised loss across various augmentations of a single test sample. However, the large size of pre-trained models, such as the MAE used in this study with 75M parameters, makes full fine-tuning computationally expensive during test time. Moreover, extensive steps of full fine-tuning during test time, as shown in Figure 1, can result in performance degradation. Since there is no validation data available at test time, early stopping is not an option.
Therefore, it is desirable to maintain stability in performance during Test-Time Training (TTT). However, even when using the SGD optimizer as suggested in (Gandelsman et al., 2022), full fine-tuning still exhibits performance degradation in speech-related tasks. Furthermore, TTT techniques entail a higher computational cost as TTT is approached as a one-sample learning problem. This means that the model can only be adapted to one test sample at a time, and batch processing is not feasible under the assumption that each test sample is subject to a different distribution shift.
To address these challenges, we investigate parameter-efficient fine-tuning techniques (PEFT) that have proven to be highly effective in supervised learning tasks within the field of NLP. However, their application in the context of TTT has not previously been explored.
2) **Parameter-efficient fine-tuning (PEFT)**: Here we explore different PEFT techniques to adapt the encoder to the test sample during inference. We conduct experiments using four different PEFT techniques for TTT: (i) First block: updating only the first block of the encoder, (ii) Last block: updating only the final/last block of the encoder, (iii) Middle block: updating only the middle block of the encoder, and (iv) Bias: updating only the bias parameters of the encoder, in each case using the self-supervised task on the test sample. As shown in Table 1, the number of trainable parameters for updating any one block (first, middle, or last) accounts for 9.48% of the total parameters, while updating the bias parameters involves just 0.1% of the total parameters. In this study, we primarily focus on bias fine-tuning for TTT for the following reasons:
* Bias fine-tuning is much more lightweight than full fine-tuning, training about 830 times fewer parameters (0.08M vs. 64M).
* Zaken et al. (2022) demonstrated that, in supervised learning with limited training data, fine-tuning only bias parameters yields superior performance to full fine-tuning. Since TTT-MAE can be regarded as a one-sample unsupervised domain adaptation technique, we investigate the effectiveness of updating only the bias parameters during test-time training.
* TTT techniques incur higher computational costs as TTT is approached as a one-sample learning problem. This means that the model can only be adapted to one test sample at a time, and batch processing is not feasible when each test sample is drawn from a different distribution. We show that bias fine-tuning allows processing an entire batch of test samples.
| | #parameters (Proportion %) |
| --- | --- |
| MAE | 74751488 (100.00) |
| Encoder | 64457216 (86.23) |
| Decoder | 10294272 (13.77) |
| One Encoder block | 7087872 (9.48) |
| Bias - MAE | 96000 (0.13) |
| Bias - Encoder | 77568 (0.10) |

Table 1: Trainable parameters in different blocks of our MAE for speech. The number of parameters in one encoder block is the same as in each of the first, middle, or last blocks of the encoder.
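To illustrate how lightweight the PEFT variants introduced above are in practice, below is a minimal sketch of a per-sample test-time training loop with a selectable parameter subset (BitFit-style bias-only adaptation shown); the `mae` interface (exposing `.encoder`, `.decoder`, and a forward pass returning the masked-patch reconstruction loss) is a hypothetical stand-in, and the optimizer settings only loosely mirror those reported in Section 4.

```python
import copy
import torch

def select_params(encoder, mode="bias"):
    """Choose which encoder parameters are adapted at test time."""
    if mode == "full":
        return list(encoder.parameters())
    if mode == "bias":  # BitFit-style: only bias vectors (~0.1% of parameters)
        return [p for name, p in encoder.named_parameters() if name.endswith("bias")]
    raise ValueError(f"unknown mode: {mode}")

def ttt_adapt(mae, x, mode="bias", steps=20, lr=2.5e-3):
    """Adapt a copy of the pre-trained MAE to a single test sample x."""
    model = copy.deepcopy(mae)            # each test sample gets its own copy
    model.decoder.requires_grad_(False)   # only (part of) the encoder is updated
    trainable = select_params(model.encoder, mode)
    for p in model.encoder.parameters():
        p.requires_grad_(False)
    for p in trainable:
        p.requires_grad_(True)
    opt = torch.optim.SGD(trainable, lr=lr, momentum=0.9, weight_decay=0.2)
    for _ in range(steps):
        loss = model(x)                   # MSE on randomly masked patches of x
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.encoder                  # fed to the frozen classifier head h
```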
**Distributional shifts in speech.** In this work, we identify and examine two types of distributional shifts: (1) those resulting from background noise that degrades/distorts speech quality, and (2) natural distributional shifts resulting from inter-speaker variations, including gender, age, and speaking style.
To generate degraded speech, we introduce background noise to clean speech signals at a specified signal-to-noise ratio (SNR) (see Figure 2). We identify two categories of background noise: (1) _Time-invariant_, where noise characteristics remain constant over time. Examples include additive white Gaussian noise (AWGN) and air conditioner (AC). (2) _Time-varying_, where noise characteristics change over time. Examples include background babble, living room, restaurant, reverberation, and traffic. Distributional shifts introduced by these types of noise are generally difficult to learn, even with adversarial training. Furthermore, some noises (e.g. babble, restaurant, and reverberation) exhibit patterns similar to speech, which can corrupt and contaminate information contained in the original signal, such as linguistic content, speaker characteristics, emotions, etc.
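As an illustration, degraded test speech can be generated by scaling a noise clip so that it mixes with the clean signal at the desired SNR; the helper below uses the standard power-ratio definition of SNR and simple noise tiling, which are assumptions rather than a description of the exact recipe used here.

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Mix a noise clip into a speech clip at a target signal-to-noise ratio.

    speech, noise: 1-D float arrays at the same sampling rate.
    snr_db: desired SNR in decibels (0 dB means equal speech and noise power).
    """
    # Tile or trim the noise so that it covers the whole utterance.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]

    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that p_speech / p_noise_scaled equals 10^(snr_db / 10).
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise
```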
In this study, we explore the significance of TTT in handling distributional shifts introduced by both background noise and natural variations in speech. We show that TTT-based methods consistently outperform non-TTT techniques by significant margins. The use of PEFT techniques, particularly BitFit, further improves the performance and stability of TTT under different distributional shifts. To show the effectiveness of TTT under different distributional shifts, we conduct the following experiments: 1) Training with clean speech and testing with speech corrupted by various background noises. 2) Training models on one speaking style and testing with another speaking style. 3) Training with speech from one gender and testing with the other gender. 4) Training with either younger or older speakers and testing with the other.
Figure 1: We compare the accuracy (speaker identification) across TTT steps between three different variants of TTT (full, last-layer and bias fine-tuning). For most distributional shifts, full fine-tuning shows degradation in performance with longer test-time training (after 20 steps) whereas bias and last layer fine-tuning show relatively stable performance even after 25 steps
## 4 Experiments and Results
**Implementation Details** In all experiments, we use a 9-layer ViT by default as the MAE encoder. For the decoder, we use a 3-layer Transformer. We use the VoxCeleb2 dataset [17] for pre-training the MAE. We pre-train the MAE for 120K steps with a batch size of 392, which takes about 7 days using 8 Nvidia RTX6000 24GB GPUs. We apply the AdamW optimizer with an initial learning rate of 0.001 and a weight decay of 0.05. The learning rate follows a cosine decay schedule [13] with 20K warmup steps. We transform the raw waveform (mono-channel, sampled at 16 kHz) into 128 Mel-frequency bands extracted with a 20 ms Hanning window and 10 ms stride. We apply no augmentations to the spectrograms other than random masking, with a masking ratio of 0.75, for pre-training.
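For reference, a front end matching this description can be sketched with torchaudio as follows; the FFT size and the log compression are assumptions, since only the number of Mel bands, the window length, and the stride are stated.

```python
import torch
import torchaudio

# 128 Mel bands from 16 kHz mono audio with a 20 ms Hann(ing) window and 10 ms stride.
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000,
    n_fft=512,            # assumption: next power of two above the window length
    win_length=320,       # 20 ms at 16 kHz
    hop_length=160,       # 10 ms at 16 kHz
    n_mels=128,
    window_fn=torch.hann_window,
)

waveform, sr = torchaudio.load("utterance.wav")   # (channels, samples); assumes 16 kHz input
waveform = waveform.mean(dim=0, keepdim=True)     # force mono
spec = torch.log(mel(waveform) + 1e-6)            # (1, 128, frames), log-compressed
```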
During TTT, for each test sample, we train only the encoder (freezing the decoder weights) for \(20\) steps using the SGD optimizer with a fixed learning rate of 2.5e-3, a batch size of 128, momentum of 0.9 and weight decay of 0.2. We also show performance plots for \(25\) steps of TTT (see Figure 1). During TTT, we follow the same procedure as pre-training: mask 75% of the input patches and provide the unmasked patches as input to the encoder, whereas all the patches are provided to the decoder. We then update the encoder weights using the reconstruction loss (MSE) on the masked patches as the objective function. We follow the same procedure for full, bias, first-layer, middle-layer and last-layer fine-tuning. Moreover, we do not use any augmentation on top of random masking for TTT. We performed most of these experiments using a single Nvidia A40 48GB GPU. Unless otherwise specified, we report results after 20 TTT steps.
**Dataset details.** Table 2 provides details of the datasets used in this work. We perform speaker identification using VCTK [16]; emotion recognition using the CREMA-D [14], IEMOCAP [12], RAVDESS [15] and TESS [16] datasets; and low-vocabulary speech recognition using the Speech Commands [21] dataset. We use the original speech samples from each of the datasets in the pre-training and train-time training phases. To evaluate the models' performance under different distributional shifts, we introduce diverse background noises sourced from Microsoft's Scalable Noisy Speech Dataset (MS-SNSD) (Reddy et al., 2019). These noises are added exclusively during the testing phase and are not used in pre-training or train-time training of the models.
Figure 2: Mel-spectrograms of speech with different distributional shifts due to background noises added at 0 dB SNR. (c) shows the distributional shift in the Mel-spectrogram of clean speech (see (a)) when AWGN ((b)) is added. (d)-(i) show the Mel-spectrograms of speech with different background noises added. Characteristics of background noises (e)-(i) vary with time, thus distorting the characteristics of speech critical for speech-based applications. For instance, panel (e) shows how babble noise, which has similar characteristics to speech, introduces patterns similar to clean speech along both the time and frequency dimensions, and thus distorts the patterns in clean speech. Similarly, living room (see (f)) and restaurant (see (g)) noises distort speech patterns in the time and frequency dimensions.
**Test-time Training vs No Test-time Training.** In Figure 3, we compare TTT for speech with different non-TTT techniques (linear probing, fine-tuning, LP-FT (Kumar et al., 2022) and TENT (Wang et al., 2021)) for different background noises unseen during training. TTT outperforms the non-TTT techniques for every unseen background noise condition. Among the non-TTT techniques, simple linear probing performs better than fine-tuning for most of the background noises. As for images in Kumar et al. (2022), LP-FT performs better than both linear probing and fine-tuning. TENT, a test-time adaptation technique, performs better than or comparably to LP-FT but worse than TTT. This can be attributed to the fact that TENT requires a large set of test samples to learn the test distribution and performs poorly under the one-sample testing condition, as illustrated in Khurana et al. (2022).
Even though TTT achieves significant improvements in performance under different distributional shifts, it has a few shortcomings when applied to speech, such as high memory requirements during TTT, degradation in performance across TTT steps, and the inability to process a batch of test samples, as shown in Figure 1 and explained in Section 3.
We overcome these issues by incorporating PEFT techniques into TTT. PEFT techniques are lightweight, as we need to fine-tune fewer parameters than in full fine-tuning, thus requiring less memory (see Table 1). We also find that, for speech, PEFT techniques achieve better consistency in performance than full fine-tuning across the TTT steps (see Figure 1). In Table 3, we compare the performance of different PEFT techniques used during TTT under different distributional shifts. Fine-tuning a specific block performs comparably to full fine-tuning. Similar to Lee et al. (2022), we find that certain layers are more suitable for a specific set of background noises, and there is no single block that is optimal for every background noise.
| Dataset | Task | #Classes | #Speakers | #Total Samples | Total Duration | Average Length |
| --- | --- | --- | --- | --- | --- | --- |
| VoxCeleb2 | Pre-training | – | 5994 | 770000 | 2300 | 8.6 |
| VCTK | Speaker ID | 109 | 109 | 42075 | 44 | 3.8 |
| CREMA-D | Emotion Recog. | 4 | 91 | 4397 | 3.04 | 2.5 |
| IEMOCAP | Emotion Recog. | 4 | 10 | 5531 | 7.00 | 4.6 |
| RAVDESS | Emotion Recog. | 4 | 24 | 672 | 0.70 | 3.7 |
| TESS | Emotion Recog. | 4 | 2 | 1600 | 0.92 | 2.1 |
| Speech Commands | Ltd. Vocab. ASR | 12 | 2618 | 105829 | 29.5 | 1.0 |

Table 2: Details of the datasets used for pre-training and downstream tasks. Total Duration is in hours and Average Length is in seconds. The tasks Speaker ID, Emotion Recog., and Ltd. Vocab. ASR refer to speaker identification, emotion recognition and limited-vocabulary ASR, respectively.
Figure 3: Comparison of TTT with non-TTT approaches (linear probing, fine-tuning, LP-FT, TENT) under different distributional shifts due to background noises (added at 0 dB SNR). TTT significantly outperforms the non-TTT approaches across all the distributional shifts. Results are averaged over 3 runs. Emotion classification is reported in terms of unweighted average recall (UAR, %).
Selecting a layer to fine-tune during TTT is not feasible, as we only have a single test sample. To overcome this issue, we use BitFit (Zaken et al., 2022) for TTT, where we fine-tune only the bias parameters of the encoder. BitFit, a lightweight fine-tuning approach, consistently performs better than (or comparably to) full fine-tuning across all the distributional shifts.
**Evaluation under natural distributional shifts.** We evaluate the performance of TTT techniques across different natural distributional shifts caused by inter-speaker variations, e.g., speaking style (Table 4a), gender (Table 4b), and age (Table 5).
In Table 4a, we compare the performance of non-TTT (linear probing and fine-tuning) with TTT (full fine-tuning (Full) and Bias) techniques for the distributional shift due to speaking-style variation. Here we train on CREMA-D (emotively acted utterances in American English) and test with samples from IEMOCAP (emotive conversations enacted in American English), RAVDESS (emotively acted utterances in North American English) and TESS (emotively acted utterances in Canadian English).
Table 4: Emotion recognition under natural distributional shifts caused by (a) **speaking style variations**: train the model using the CREMA-D (CRE) dataset and test with other datasets, i.e., IEMOCAP (IEM), RAVDESS (RAV) and TESS. Column CRE: matched condition – train and test on CREMA-D. (b) **Gender variations**: train the model using speech data from speakers of one gender and test with speakers of the other gender. F-M: train on female and test on male; M-F: train on male and test on female; F-F and M-M: train and test on speakers of the same gender. We use the CREMA-D dataset for these experiments. TTT variants (Full and Bias fine-tuning) outperform non-TTT methods (linear probing and fine-tuning) across the different natural distributional shifts.
Table 3: Performance of different variants of TTT under distributional shifts due to background noises added at 0 dB SNR. Different variants of TTT: Full refers to full fine-tuning; First, Middle, Last and Bias refer to fine-tuning only the first layer, middle layer, last layer and bias parameters, respectively. AWGN \(\rightarrow\) additive white Gaussian noise. Bias fine-tuning performs better than the other variants of TTT across most of the distributional shifts due to background noise.
Table 5: Emotion classification under age variation. We use the TESS dataset, consisting of two female speakers: one younger (Y) and one older (O). We train on one speaker and test using the other speaker's speech. Y-O \(\rightarrow\) train on Y and test with O; O-Y \(\rightarrow\) train on O and test with Y.
TTT-based techniques achieve significantly better performance compared to non-TTT methods for speaking style variations.
To evaluate the performance of TTT in addressing distributional shifts caused by gender variations for the task of emotion classification (CREMA-D dataset), we trained using speech data from one gender and tested using speech data from the other gender. Table 4b provides a comparison between non-TTT and TTT-based techniques. For gender variations, non-TTT approaches exhibit a steep decline in performance between matched (F-F and M-M) and mismatched (F-M and M-F) conditions. In contrast, TTT approaches (Full and Bias) show very little degradation in performance between matched and mismatched conditions. For both matched and mismatched conditions, TTT with bias fine-tuning performs better than full fine-tuning.
We evaluate the performance of TTT under age variations using the TESS dataset for the task of emotion recognition (refer to Table 5). The TESS dataset was collected from two female speakers: a younger (Y) speaker aged 26 years and an older (O) speaker aged 64 years. In the Y-O scenario, we trained the model using the younger (Y) speaker's speech and tested it with the older (O) speaker's speech. Similarly, in the O-Y scenario, we trained with the older (O) speaker and tested with the younger (Y) speaker. For mismatched train-test conditions (Y-O and O-Y), TTT techniques outperformed non-TTT techniques, with bias fine-tuning performing better than full fine-tuning. Interestingly, even in matched conditions (Y-Y and O-O), TTT-based techniques performed comparably to or better than non-TTT techniques for all the natural distributional shifts.
**Improved utilization of computational resources.** We discuss an approach to process a batch of test samples using TTT in the real-world scenario where each test sample is associated with a different distributional shift. Since each test sample gets its own copy of the parameters, TTT can normally be applied to only one test sample at a time. However, bias fine-tuning gives us a unique opportunity to apply TTT to a batch of examples while still ensuring that each sample gets its own copy of the (trainable) parameters. For example, consider a simple linear layer with weight matrix \(W\) and bias vector \(b\); given a batch \(\mathbf{x}\) containing \(B\) samples, the output of this layer can be written as \(\mathbf{y}=W\mathbf{x}+b\). In TTT-bias, each example in this batch gets its own \(b\) while sharing \(W\). Now suppose that the linear layer gets an additional input \(\Delta b\) (initialized to zeros) that contains \(B\) learnable vectors and is used to compute the output \(\mathbf{y}=W\mathbf{x}+b+\Delta b\): since \(\Delta b\) contains one learnable vector per sample, each can be optimized independently while taking advantage of GPU batch processing. We illustrate this in Figure 4; in practice, this can be implemented with forward hooks in PyTorch without changing the model code. Refer to the supplementary material for further analysis and results.
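A minimal sketch of this batched TTT-bias trick using PyTorch forward hooks is given below; it covers only `nn.Linear` modules and assumes batch-first activations, both simplifying assumptions, and the model/loss names in the usage comments are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

def attach_batched_bias(model, batch_size):
    """Give every Linear layer a per-sample bias offset that can be trained independently.

    Returns the list of offset parameters (one (B, out_features) tensor per layer);
    the shared weights and original biases are left untouched (and should stay frozen).
    """
    deltas = []
    for module in model.modules():
        if isinstance(module, nn.Linear):
            delta = nn.Parameter(torch.zeros(batch_size, module.out_features))
            deltas.append(delta)

            def hook(mod, inputs, output, delta=delta):
                # output: (B, ..., out_features); broadcast the per-sample offset over
                # any intermediate (e.g. sequence) dimensions.
                extra = output.dim() - 2
                return output + delta.view(delta.shape[0], *([1] * extra), delta.shape[1])

            module.register_forward_hook(hook)
    return deltas

# Usage sketch: adapt a batch of B test samples jointly, one bias copy per sample.
# `model_with_decoder` and `masked_reconstruction_loss` are hypothetical names.
# deltas = attach_batched_bias(model_with_decoder, batch_size=B)
# opt = torch.optim.SGD(deltas, lr=2.5e-3, momentum=0.9)
# for _ in range(num_ttt_steps):           # keep the batch order fixed across steps
#     loss = masked_reconstruction_loss(model_with_decoder(batch)).sum()
#     opt.zero_grad(); loss.backward(); opt.step()
```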
## 5 Conclusion
In this work, we extend test-time training to speech-related tasks such as speaker identification and emotion recognition. We extend TTT-MAE, proposed in Gandelsman et al. (2022) for image recognition, to improve the performance of speech-related downstream tasks under a variety of distributional shifts. In our application of TTT-MAE to speech, we observed that TTT is sensitive to hyperparameters such as the number of training steps and the subset of parameters considered for optimization.
Figure 4: Illustration of TTT-Bias Fine-tuning on Batch of Test Samples: for each example in a batch containing \(B\) samples, we create a learnable parameter corresponding to each trainable bias as shown in green boxes and the outputs of the modules are adjusted accordingly as shown. Only the parameters in green boxes need to be fine-tuned and since these parameters are not shared across examples in a batch, TTT-bias can take advantage of GPU batch processing to improve throughput.
To overcome these issues, we applied PEFT techniques to make TTT more stable and scalable. We find that PEFT techniques, being lightweight, achieve better or comparable performance relative to full fine-tuning. Specifically, we find that bias fine-tuning, motivated by BitFit, improves both performance and stability. Further, we propose a new approach to process batches of test samples using bias fine-tuning for TTT.
|
2309.16399 | Atmospheric loss in giant impacts depends on pre-impact surface
conditions | Earth likely acquired much of its inventory of volatile elements during the
main stage of its formation. Some of Earth's proto-atmosphere must therefore
have survived the giant impacts, collisions between planet-sized bodies, that
dominate the latter phases of accretion. Here we use a suite of 1D hydrodynamic
simulations and impedance match calculations to quantify the effect that
pre-impact surface conditions (such as atmospheric pressure and presence of an
ocean) have on the efficiency of atmospheric and ocean loss from proto-planets
during giant impacts. We find that -- in the absence of an ocean -- lighter,
hotter, and lower-pressure atmospheres are more easily lost. The presence of an
ocean can significantly increase the efficiency of atmospheric loss compared to
the no-ocean case, with a rapid transition between low and high loss regimes as
the mass ratio of atmosphere to ocean decreases. However, contrary to previous
thinking, the presence of an ocean can also reduce atmospheric loss if the
ocean is not sufficiently massive, typically less than a few times the
atmospheric mass. Volatile loss due to giant impacts is thus highly sensitive
to the surface conditions on the colliding bodies. To allow our results to be
combined with 3D impact simulations, we have developed scaling laws that relate
loss to the ground velocity and surface conditions. Our results demonstrate
that the final volatile budgets of planets are critically dependent on the
exact timing and sequence of impacts experienced by their precursor planetary
embryos, making atmospheric properties a highly stochastic outcome of
accretion. | Simon J. Lock, Sarah T. Stewart | 2023-09-28T12:46:02Z | http://arxiv.org/abs/2309.16399v1 | # Atmospheric loss in giant impacts depends on pre-impact surface conditions
###### Abstract
Earth likely acquired much of its inventory of volatile elements during the main stage of its formation. Some of Earth's proto-atmosphere must therefore have survived the giant impacts, collisions between planet-sized bodies, that dominate the latter phases of accretion. Here we use a suite of 1D hydrodynamic simulations and impedance match calculations to quantify the effect that pre-impact surface conditions (such as atmospheric pressure and presence of an ocean) have on the efficiency of atmospheric and ocean loss from proto-planets during giant impacts. We find that - in the absence of an ocean - lighter, hotter, and lower-pressure atmospheres are more easily lost. The presence of an ocean can significantly increase the efficiency of atmospheric loss compared to the no-ocean case, with a rapid transition between low and high loss regimes as the mass ratio of atmosphere to ocean decreases. However, contrary to previous thinking, the presence of an ocean can also reduce atmospheric loss if the ocean is not sufficiently massive, typically less than a few times the atmospheric mass. Volatile loss due to giant impacts is thus highly sensitive to the surface conditions on the colliding bodies. To allow our results to be combined with 3D impact simulations, we have developed scaling laws that relate loss to the ground velocity and surface conditions. Our results demonstrate that the final volatile budgets of planets are critically dependent on the exact timing and sequence of impacts experienced by their precursor planetary embryos, making atmospheric properties a highly stochastic outcome of accretion.
## 1 Introduction
How Earth acquired its unique atmosphere and ocean is a fundamental, unanswered question. Earth is thought to have gained a large fraction of its current budget of highly volatile elements (e.g., N, C, H, noble gases) during the main stages of accretion (e.g., Halliday, 2013; Mukhopadhyay & Parai, 2019). Accretion is a violent, stochastic process, and there are many mechanisms by which planets and their building blocks can gain and lose volatiles (e.g., O'Brien et al., 2014; Marty et al., 2016; Olson & Sharp, 2019; Raymond & Izidoro, 2017; Schlichting et al., 2015; Schlichting & Mukhopadhyay, 2018; Young et al., 2019; Odert et al., 2018). Determining how each potential mechanism works is vital for understanding the origin of Earth's volatile budget and that of other planets.
Giant impacts, collisions between planet-sized bodies, likely play a significant role in the chemical evolution of terrestrial planets (Genda & Abe, 2005, 2003; Kegerreis et al., 2020, 2020; Denman et al., 2020, 2022; Carter et al., 2018). Most terrestrial planets experience several giant impacts during their formation (e.g., Raymond et al., 2007; Quintana et al., 2016). Such collisions are incredibly dramatic events with large fractions of the mantles of the colliding bodies being melted or vaporized, variable amounts of crust, mantle, and core being ejected, and the post-impact body often left rapidly rotating (Canup, 2004; Lock & Stewart, 2017; Nakajima & Stevenson, 2015; Carter et al., 2020, 2018; Rufu et al., 2017; Lock et al., 2020). Giant impacts have a particular significance for Earth as it is thought that the last giant impact (or potentially the last few impacts: Rufu et al., 2017; Asphaug et al., 2021) onto the proto-Earth injected material into orbit out of which our Moon formed (Cameron & Ward, 1976; Hartmann & Davis, 1975). The exact scenario for the so-called Moon-forming giant impact and the mechanisms by which the Moon formed in the aftermath are
highly debated (Canup and Asphaug, 2001; Reufer et al., 2012; Canup, 2012; Cuk and Stewart, 2012; Lock et al., 2018; Rufu et al., 2017), but the event marks the end of the main stage of Earth's accretion.
In giant impacts, only a proportion of the volatiles on the colliding bodies is inherited by the final post-impact body, with the fraction of volatiles retained varying substantially between impacts. Volatiles are carried away dissolved or trapped in the ejected silicate and metal mass, and the atmospheres and oceans of the colliding bodies can also be directly ejected from the system. The latter process will be the focus of this paper.
There are two principal mechanisms by which atmosphere and ocean are ejected during giant impacts (Figure 1). First, close to the initial contact point between the colliding bodies, the crust and upper mantle of both bodies is ejected as melted and vaporized plumes (Figure 1B, e.g., Carter et al., 2018). Some of this material remains bound to the system, but a large fraction is typically ejected. What happens to any atmosphere or ocean close to the contact point during this process has not been studied in detail. At such high temperatures it is likely that the volatiles are fully soluble in the silicate (Lock et al., 2018; Fegley et al., 2023) and, if there is efficient mechanical mixing and chemical equilibration between the volatiles and silicate, the volatiles could share the same fate as the crust and upper mantle. Alternatively, the ocean and atmosphere may constitute a separate part of the escaping plumes and be lost at an efficiency dictated by the thermodynamics of shock and release of water and gas mixtures (e.g., Kegerreis et al., 2018, 2020), which could be somewhat different from that of the silicates. The reality is likely somewhere between these two extremes but, in either case, most accretionary giant impacts would drive loss of atmosphere and ocean from near the contact point (e.g., Carter et al., 2018; Kegerreis et al., 2018, 2020).
Away from the contact point, ocean and atmosphere can be lost through breakout of the impact shock wave from the surface of the planet (Figure 1C, Chen and Ahrens, 1997; Genda and Abe, 2003, 2005; Schlichting et al., 2015). The strong shock wave generated by the impact travels through each of the colliding bodies until it reaches the surface, where the shock wave is transmitted to the atmosphere or ocean. The transmission of the shock wave from the planet to the atmosphere or ocean is known as the breakout of the shock wave, and leads to acceleration of the planet's surface to velocities above the particle velocity of the shock within the planet (see Section 2). The acceleration of the planet's surface upon breakout means that transmission of the shock into the atmosphere/ocean is sometimes described as the atmosphere/ocean being 'kicked' by the silicate surface. The shock accelerates up the strong hydrostatic density gradient of the atmosphere and some fraction of the top of the atmosphere reaches escape velocity and is lost from the system (see Section 2 for a more extensive description of this process). The efficiency of loss due to the breakout of the shock wave from the impact has been quantified using 1D hydrodynamic simulations (Chen and Ahrens, 1997; Genda and Abe, 2003, 2005) and semi-analytical calculations (Schlichting et al., 2015) of the shock driven in the atmosphere for a given ground motion. 1D calculations have the advantage of being mathematically tractable or numerically inexpensive but, in order to determine the total loss from a given impact, knowledge of the ground velocity around the planet is required. Genda and Abe (2003) used a single value for the average ground velocity from numerical simulations of the canonical Moon-forming giant impact (a grazing collision by a Mars-mass impactor at near the escape velocity onto the proto-Earth, Canup and Asphaug, 2001) and concluded that only about 20% of the atmosphere would be lost from a planet with no ocean. Schlichting et al. (2015) used a simple 2D shock propagation model to calculate the ground velocity distribution across the surface and showed that, due to the highly non-linear relation between ground velocity and loss, the efficiency of loss was strongly sensitive to the distribution of ground motion and that using an average value for ground velocity underestimates the total loss by a factor of two. Yalinewich and Schlichting (2018) took such calculations to their logical conclusion by using 3D hydrodynamic giant-impact simulations to determine the ground velocity distribution and hence the fraction of atmosphere that would be lost due to ground motion as a function of the impact velocity and the ratio of the sizes of the two colliding bodies.
Figure 1: Atmospheric loss from giant impacts occurs through ejection in impact plumes or from ground motion further from the impact site. A: A schematic of a giant impact a few minutes before first contact with different material layers indicated by different colors. B: The same collision a few minutes after initial contact. Melted and partially vaporized plumes are extruded from the impact site, carrying away a fraction of the atmosphere and ocean near the contact site. A shock wave (white line) propogates away from the impact site into the rest of the impactor and target. C: A schematic of a giant impact approximately 20 minutes after first contact. The shock wave travels through the planet and breaks out at the surface. The resulting acceleration of the ground drives a shock into the ocean and atmosphere, driving loss. D: A schematic of the 1D simulations performed for this study which is similar to that used in previous work (Chen and Ahrens, 1997; Genda and Abe, 2003, 2005). A hydrostatic ocean and atmosphere are initialized at a radius equal to the planetary radius in a spherical geometry. The mantle and core of the planet are not modelled directly and the breakout of the shock from the planet is mimicked by giving the lower boundary of the domain an initial velocity, \(u_{\rm G}\).
It is possible to simultaneously capture both near and far-field loss by explicitly including atmospheres in 3D numerical simulations of giant impacts (Kegerreis et al., 2020, 2020, 2022). However, accurately resolving the thin atmospheres expected on many terrestrial planets, such as Earth, during the giant-impact phase requires extremely high resolution simulations. Advances in the scalability of hydrodynamic codes and the expansion in high performance computing resources have recently allowed direct simulation of atmospheres with surface pressures as low as 3.2 kbar (Kegerreis et al., 2020, 2020). Kegerreis and coworkers (Kegerreis et al., 2020, 2020) used their simulations to develop a scaling law that relates the loss due to a given impact to the parameters of the impact (impact velocity, impactor mass, etc.). Encouragingly, Kegerreis et al. (2020) found broad agreement between their results and those calculated by convolving 1D models of atmospheric loss with the ground velocity distributions from their impact simulations (as in Yalinewich and Schlichting, 2018). The efficiency of atmospheric loss from giant impacts varies widely, but most impacts only lead to the loss of a few tens of percent of the atmosphere and near-total loss is only achieved in high-velocity, near head-on impacts.
So far, the studies we have discussed considered atmospheric loss from planets that do not have oceans. However, during the giant impact phase of accretion, the time between impacts is long enough that the atmosphere of proto-planets would cool sufficiently between impacts for condensation of a surface ocean (Abe and Matsui, 1988). Genda and Abe (2005) explored the effect of a surface ocean on atmospheric loss and concluded that the presence of an ocean can significantly increase the efficiency of loss (a full explanation of this phenomena is given in Section 4.2). So far, oceans have not been included in 3D simulations and so the quantitative effect of the presence of an ocean on total loss is not known. However, the results of Genda and Abe (2005) suggest that the thermal state of a planet's surface could make the difference between a proto-planet losing or retaining its atmosphere during a giant impact.
In this paper, we explore the effect that the surface conditions (e.g., atmospheric pressure, temperature, and composition, and the depth of an ocean) on the colliding bodies have on the efficiency of atmospheric and ocean loss from planets with small to modest atmospheric mass fractions. Previous studies have considered only a limited range of surface conditions and it is not well known how parameters such as planetary mass, ocean depth, surface pressure, surface temperature and atmospheric composition affect the efficiency of loss. Full 3D impact simulations are limited by their resolution to calculating atmospheric loss from bodies with thick atmospheres, on the order of several kbar for an Earth-mass body (Kegerreis et al., 2020). Similarly, the highest resolution simulations currently would only be able to resolve oceans deeper than \(\sim 40\) km. For the formation of planets, at least in our own solar system, it is important to understand the loss of much thinner atmospheres and shallower oceans. For example, it is typically thought that ancient Earth and Venus had atmospheres of a few hundred bar (Kasting, 1988; Marty, 2012; Halliday, 2013; Sossi et al., 2020). To resolve such thin atmospheres, we take a hybrid approach, using 1D hydrodynamic simulations of loss due to a given ground motion to relate the surface properties to the efficiency of loss. These results can then be convolved with ground velocity distributions calculated from 3D giant-impact simulations to quantify the efficiency of loss from any given impact. In this paper we describe our 1D simulations which we will combine with 3D giant impact simulations in future work.
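As a sketch of this hybrid approach, the global loss for a given impact can be estimated by mass-weighting a 1D loss curve over the distribution of ground velocities taken from a 3D simulation; the function names, the placeholder loss curve, and the synthetic velocity samples below are purely illustrative assumptions.

```python
import numpy as np

def total_loss_fraction(loss_of_ug, ug_samples, atm_column_mass=None):
    """Mass-weighted global atmospheric loss built from local 1D results.

    loss_of_ug: callable mapping ground velocity (km/s) to a local loss fraction,
        e.g. an interpolant fitted to 1D hydrodynamic simulations.
    ug_samples: ground velocities over the planet's surface from a 3D impact
        simulation (one value per surface element).
    atm_column_mass: optional weights (atmospheric mass above each element);
        equal-area elements under a uniform atmosphere are assumed if None.
    """
    ug = np.asarray(ug_samples, dtype=float)
    w = np.ones_like(ug) if atm_column_mass is None else np.asarray(atm_column_mass, float)
    local = np.clip(loss_of_ug(ug), 0.0, 1.0)
    return np.sum(w * local) / np.sum(w)

# Example with a placeholder loss curve and synthetic velocities (not fitted results).
example_curve = lambda ug: np.clip((ug / 11.2) ** 2, 0.0, 1.0)
ug_surface = np.abs(np.random.normal(2.0, 1.5, size=10000))   # km/s, synthetic
print(total_loss_fraction(example_curve, ug_surface))
```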
We begin by providing an overview of the processes controlling the breakout of the shock wave from the planet and impedance match calculations (Section 2). We will then describe our methods (Section 3) and report our results for the relationship between surface conditions, ground motion, and loss without (Section 4.1) and with an ocean (Section 4.2). Section C outlines a parameterization that describes the efficiency of atmospheric and ocean loss as a function of ground velocity, planetary mass and the ratio of atmospheric to ocean mass. In Sections 5 and 6 we explore the relationship between the strength of the shock in the planet and the ground motion and bound the effect of more complicated ground motions on the efficiency of loss. In Section 7 we discuss the implications of our results and conclude in Section 8. A full description of the numerical methods is contained in the appendix.
## 2 The physics of loss due to breakout of the impact shock wave
In this section, we give an overview of the physical processes that occur when the impact shock wave reaches the surface of the planet (Figure 1C) and how this results in loss of atmosphere/ocean. Here we consider only the period immediately upon release of the shock; the processes that complicate this simple picture later in the evolution are discussed in Sections 4 and 7.
Figure 2: Caption on next page.
Figure 2 illustrates the stages in the breakout of an impact shock wave from the surface of a planet with an atmosphere but no ocean. The left hand column shows a schematic of the physical location and velocity of the material at different stages, and the right hand column shows the dynamics and thermodynamic states of material in pressure - particle velocity space. At the time illustrated in Figure 2A the shock in the planet is approaching the surface. The shock accelerates and compresses the rock to a point along its Hugoniot, the locus of thermodynamic states and velocities that can be reached by shock compression (black solid line in the right column of Figure 2). The point on the Hugoniot to which the rock is shocked gives the strength of the shock, which we quantify by the particle velocity of the shock in the planet, \(u_{p}^{G}\).
When the shock wave reaches the surface of the planet (Figure 2B), the pressure differential between the shocked rock and the atmosphere causes the surface to accelerate and the shock wave propagates into the atmosphere. However, the Hugoniot of a typical atmosphere (e.g., solid orange line in Figure 2B, right) is shallower than that of rocks, due to the lower shock impedance (i.e., resistance to compression) of gases compared to rocks. As a result, the pressure in the shocked atmosphere is much lower than in the shocked rock for a given particle velocity. The ground must release to a lower pressure, following an isentrope (black dashed line), until it intersects the gas Hugoniot to achieve both pressure and particle velocity continuity with the atmosphere (black and orange pentagon). The particle velocity and thermodynamic properties at which the rock release curve and gas Hugoniot intersect is called the impedance match solution. The impedance-match velocity of the surface is greater than the particle velocity of the shock within the planet before release. The acceleration of the surface leads to a release wave that propagates back into the planet (white dashed line in Figure 2B, left) and more and more of the rock accelerates to the impedance match velocity. Meanwhile, as the shock propagates up the atmosphere, the density of atmosphere in front of the shock falls, and so a higher particle velocity is required to achieve pressure continuity. The shock therefore accelerates as it moves upwards in the atmosphere. When the shock reaches the top of the atmosphere, a release wave, analogous to that in the rock, propagates downwards in the atmosphere (upper white dashed line in Figure 2C, left), causing the atmosphere to accelerate to even higher velocity (orange dashed line in Figure 2C, right). The process of acceleration of the shock wave upwards in the atmosphere and the subsequent release of the atmospheric shock drives a portion of the atmosphere to velocities above the escape velocity. Therefore, even when the initial ground-atmosphere impedance match velocity is much lower than escape, a portion of the atmosphere can be lost.
The presence of an ocean changes the efficiency of loss as the ocean has a different shock impedance than the rocky surface or atmosphere. Figure 3 shows the equivalent cartoon to that in Figure 2 for the same strength of impact shock, but when there is an ocean present. Before the shock wave breaks out from the surface of the planet (Figure 3A) the situation is much the same as in the no-ocean case, except that the initial pressure of the rock is increased due to the mass of the ocean. The low compressibility of rocks means that the increased pre-shock pressure has little effect on the Hugoniot. When the shock wave from the impact reaches the surface of the planet (Figure 3B), the shocked rock releases. In a similar manner to in the no-ocean case, the water is shocked to a point on its Hugoniot (blue solid line in Figure 3B, right) that intersects the release curve for the rock. The impedance match between the rock and water (blue and black pentagon in Figure 3B, right) is at a higher pressure and lower particle velocity than the impedance match between the rock and atmosphere as the compressibility of the ocean is lower than that of the
Figure 2: The efficiency of loss is strongly influenced by the relative impedance of the atmosphere, ocean, and ground. Shown is a schematic that shows the relative position (left column) and the thermodynamic state (right) of the different material layers at different stages (rows) in the passage of the shock from the planet into the atmosphere. Left: Colors indicate different materials with rock in black, and atmosphere in orange. In the left column, darker shades of these colors indicate material under compression in the shock. Boundaries between materials are shown as thin black lines with their velocities shown as black arrows. The shock wave is indicated by a thick white line. Release waves are shown as white dashed lines and their velocities as white arrows. Where relevant, key dynamic and thermodynamic variables are noted. Right: Schematic particle velocity - pressure plots for the impedance match calculation between rock and atmosphere. Key pressures and particle velocities corresponding to different stages of the thermodynamic evolution of material are given on the axis, as labelled in the left column. Solid lines are shock Hugoniots, the locus of points that a shocked material can reach from an initial starting position. Hugoniots are not thermodynamic paths and the point reached material on each Hugoniot is indicated by a filled symbol. Dashed lines are release curves followed by material decompressing from a shocked states. Release curves are thermodynamic paths and the material moves along these lines. As in the left column, colors indicate different materials. A similar schematic for a planet with an ocean is shown in Figure 3
gas. When the shock reaches the surface of the ocean, the water itself releases, accelerating the atmosphere to the ocean-atmosphere impedance match velocity. The release curve for water (blue dashed line in Figure 3C, right) is shallower than that of rocks, and the impedance match velocity of the ocean surface with the atmosphere is higher than that of the ground in the no-ocean case. In the ocean case, the surface driving the loss of the atmosphere is effectively the surface of the ocean, not the ground, and the higher-velocity of the ocean surface has the potential to drive greater loss than in the no-ocean case. After release from the surface of the ocean, the shock propagates up the atmosphere in the same way as the no-ocean case (Figure 3C).
In both the ocean and no-ocean cases, how a given strength of shock in the planet translates to the velocity of the ocean surface or ground depends on the slope of the atmospheric Hugoniot, and therefore on the properties of the atmosphere. We will discuss these effects in Section 5.
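As an illustration of the impedance-match construction described in this section, the sketch below solves for the breakout (ground) velocity of a rocky surface into an ideal-gas atmosphere, using a linear \(U_{s}\)–\(u_{p}\) Hugoniot for the rock with a mirrored-Hugoniot approximation for its release path and the ideal-gas Hugoniot for the atmosphere; the material constants and the mirrored-release simplification are assumptions for illustration, not values or methods taken from this work.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative material constants (assumptions, not values from this study), SI units.
RHO_ROCK = 3300.0        # unshocked rock density, kg m^-3
C0, S = 6.6e3, 1.4       # linear Hugoniot Us = C0 + S*up for rock
GAMMA = 1.4              # ratio of specific heats of the atmosphere
P0 = 1.0e5               # pre-impact surface pressure, Pa
RHO_ATM = 1.2            # atmospheric density at the surface, kg m^-3

def rock_shock_pressure(up):
    """Pressure on the rock Hugoniot at particle velocity up (strong-shock limit)."""
    return RHO_ROCK * (C0 + S * up) * up

def rock_release_velocity(p, up_shock):
    """Particle velocity on the rock release path at pressure p
    (mirrored-Hugoniot approximation to the release isentrope)."""
    w = (-C0 + np.sqrt(C0**2 + 4.0 * S * p / RHO_ROCK)) / (2.0 * S)
    return 2.0 * up_shock - w

def gas_hugoniot_velocity(p):
    """Particle velocity on the ideal-gas Hugoniot at shock pressure p."""
    v0 = 1.0 / RHO_ATM
    return (p - P0) * np.sqrt(2.0 * v0 / ((GAMMA + 1.0) * p + (GAMMA - 1.0) * P0))

def ground_velocity(up_shock):
    """Impedance-match (breakout) velocity u_G for a shock of strength up_shock."""
    p_shock = rock_shock_pressure(up_shock)
    f = lambda p: rock_release_velocity(p, up_shock) - gas_hugoniot_velocity(p)
    p_match = brentq(f, 1.001 * P0, p_shock)
    return gas_hugoniot_velocity(p_match)

print(ground_velocity(2.0e3))   # u_G for a 2 km/s shock: close to free-surface doubling
```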
## 3 Methods
### 1D hydrodynamic calculations
To calculate atmospheric and ocean loss due to ground motion we follow a similar approach to that of Genda & Abe (2003, 2005). A hydrostatic atmosphere, and in most cases a hydrostatic ocean, is initialized at the radius of the planet's surface in a 1D spherical geometry (Figure 1D). The breakout of the shock wave is then simulated by giving the lower boundary a vertical velocity that generates a shock wave in the (ocean and) atmosphere that accelerates a fraction of the (ocean and) atmosphere to escape.
We adapted the 1D WONDY hydrodynamic code (Kipp & Lawrence, 1982) to calculate the evolution of the atmosphere (and ocean) in response to the motion of the ground. WONDY solves the Lagrangian 1D mass, momentum, and energy equations using a finite difference method. Artificial viscosity is used to resolve shocks by spreading the shock front over several Lagrangian cells. We have expanded the capabilities of the WONDY code by adding options for radial gravity, three additional equations of state (EOSs), and hydrostatic initialisation of an atmosphere and ocean. A full description of the adapted code and sensitivity tests are given in Appendix A, but we provide a summary here.
The atmosphere and ocean were both modeled using 500 Lagrangian cells (or zones) each, and were initialized as stationary, hydrostatic, and adiabatic, isothermal or isoenergetic depending on the equation of state (EOS) used. By assuming that the atmosphere and ocean are stationary and hydrostatic, we neglect the influence of the gravity of the other colliding body which could deform the surface and disturb the atmospheric structure. We will address the effect of pre-impact redistribution of the atmosphere and ocean in future work. The properties of the atmosphere were set by defining a surface temperature (\(T_{0}\)) and pressure (\(p_{0}\)) and the structure of the atmosphere was determined by integrating upwards from the surface. The top of the atmosphere is treated as a stress-free boundary, implemented by using an additional massless, zero-pressure cell at the top of the atmosphere. The atmosphere was modelled as an ideal gas with a constant molar mass (\(m_{\rm a}\)) and ratio of specific heat capacities (\(\gamma\)). When present, the ocean was initialized with a given depth (\(H_{\rm oc}\)) and the initial structure of the ocean was calculated by integrating downwards from the ocean surface, assuming thermal equilibrium with the atmosphere at the base of the atmosphere. In most simulations, we used the water EOS of Senft & Stewart (2008) to describe the thermodynamic properties of the ocean. In order to compare our results to those of Genda & Abe (2005), we also ran simulations using the International Association for the Properties of Water and Steam (IAPWS) EOS (Wagner, 2002) and the Tillotson EOS (Tillotson, 1962) using the parameters from O'Keefe & Ahrens (1982). We find good agreement between our results using the Senft & Stewart tabulated EOS and the IAPWS EOS, which is not surprising as the Senft & Stewart EOS was constructed partly using the IAPWS EOS, but find significant differences when using the Tillotson EOS, which we discuss in Section 4.2.1.
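A simplified sketch of the hydrostatic initialization described above, for an adiabatic ideal-gas atmosphere integrated upward from the surface under radial gravity, is given below; the explicit Euler integration, fixed step size, and example planetary values are assumptions for illustration and differ from the Lagrangian zoning used in the actual code.

```python
import numpy as np

G = 6.674e-11    # gravitational constant, SI
R_GAS = 8.314    # universal gas constant, J mol^-1 K^-1

def hydrostatic_atmosphere(M_planet, R_planet, p0, T0, m_a, gamma,
                           dr=100.0, p_top_fraction=1e-6):
    """Integrate dP/dr = -rho * G*M/r**2 upward from the surface for an
    adiabatic ideal-gas atmosphere; returns radius, pressure and density arrays.

    m_a is the molar mass (kg/mol); integration stops once the pressure drops
    below p_top_fraction * p0 (a crude proxy for the top of the atmosphere).
    """
    r, p = [R_planet], [p0]
    while p[-1] > p_top_fraction * p0:
        T = T0 * (p[-1] / p0) ** ((gamma - 1.0) / gamma)   # adiabatic temperature profile
        rho = p[-1] * m_a / (R_GAS * T)                    # ideal-gas density
        g = G * M_planet / r[-1] ** 2                      # radial gravity
        p_next = p[-1] - rho * g * dr
        if p_next <= 0.0:
            break
        r.append(r[-1] + dr)
        p.append(p_next)
    p = np.array(p)
    T = T0 * (p / p0) ** ((gamma - 1.0) / gamma)
    rho = p * m_a / (R_GAS * T)
    return np.array(r), p, rho

# Example: an Earth-mass planet with a 100 bar, 1000 K atmosphere (placeholder values).
r, p, rho = hydrostatic_atmosphere(5.97e24, 6.37e6, 1.0e7, 1000.0, 0.029, 1.4)
print(len(r), p[-1])
```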
As in previous work (Genda & Abe, 2003, 2005), we do not directly model the shock in the planet. Instead, the propagation of the shock from the planet into the ocean/atmosphere is simulated by imposing the velocity of the lower boundary of the domain, i.e., the ground motion. The boundary is given an initial velocity (\(u_{\rm G}\)) and then allowed to follow a ballistic trajectory, ignoring the influence of any forces other than gravity (see Section A.3 for more details). Imposing a ballistic boundary condition assumes that the mass of the ground is much greater than the mass of any ocean and/or atmosphere and thus the ground is not slowed significantly by transferring momentum to the ocean and/or atmosphere. For the range of oceans and atmospheres we consider in this work, this is a good approximation. Even for the most massive atmosphere and ocean combination we simulated, the mass of the ocean and atmosphere combined is equivalent to a surface layer of rock only \(\sim 10\) km thick, and is typically much less. Using the ballistic boundary condition, if the ground velocity is below the escape velocity, the boundary eventually stops and then accelerates downwards towards its initial position. When the
boundary approaches its initial position, it is gradually brought to a stop. The prescription of this later-stage evolution of the boundary rarely has an effect on the amount of loss. We discuss the effect of non-ballistic motion of the boundary in Section 6. It is important to note that \(u_{\rm G}\) is the velocity of the ground upon breakout of the shock into the atmosphere or ocean (i.e., the impedance-match velocity; see Section 2), and is not the particle velocity of the shock in the planet. The relationship between the strength of the shock in the planet and \(u_{\rm G}\) can vary depending on the properties of the atmosphere or ocean, and we discuss this in Section 5. Furthermore, prescribing a ballistic trajectory ignores any further positive acceleration of the ground by decompression of the surface to pressures below the impedance match solution. We discuss the implications of this simplification in Section 6.
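The idealised boundary prescription can be summarised in a few lines. The sketch below assumes an instantaneous jump to \(u_{\rm G}\) followed by purely ballistic motion in radial gravity, with the boundary simply halted when it returns to its starting radius; the gradual stopping used in the actual runs is omitted and all names and step sizes are illustrative.

```python
G = 6.674e-11      # m^3 kg^-1 s^-2

def boundary_trajectory(u_G, M_p, R_p, t_end=5000.0, dt=0.1):
    """Radius and velocity of the lower boundary, given an initial velocity
    u_G [m s^-1] and then evolving ballistically under the planet's gravity."""
    r, u, t = R_p, u_G, 0.0
    while t < t_end:
        u -= (G * M_p / r**2) * dt   # gravity is the only force acting on the boundary
        r += u * dt
        t += dt
        if r <= R_p:                 # boundary has descended back to its starting radius
            return t, R_p, 0.0       # here it is simply halted
    return t, r, u
```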
Simulations were run for 5000 s to ensure that a plateau in atmospheric/ocean loss was achieved. A small number of runs failed before completion. Failed runs were typically for either particularly high or low ground velocities. Failure was generally due to either insufficient numerical viscosity in the first few time steps or due to complications with stopping of the boundary upon its ballistic descent late in time. If a plateau in loss had been reached prior to failure this value was taken as the final loss, otherwise the result was discounted and not considered in our results. In another subset of cases, the stopping of the boundary upon descent caused a secondary shock into the atmosphere leading to additional loss. This secondary shock is likely unrealistic as the surface would have spalled or vaporized shortly after release. Generally, the amount of additional loss is small as the initial hydrostatic structure of the atmosphere that allowed for the acceleration of the initial shock has been disrupted. To account for this effect, we identified the plateau in loss due to the original shock and took the loss just before the passage of the second wave as the final value for loss. We tested the sensitivity of our results to the intrinsic parameters of the code and found no variation within the range of reasonable values (Appendix B.1).
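For reference, a simple diagnostic for the loss fraction is sketched below: a Lagrangian cell is counted as lost once its outward velocity exceeds the local escape velocity, i.e., once its specific kinetic plus gravitational potential energy is positive. This criterion and the variable names are assumptions for illustration, not necessarily the exact bookkeeping used in our runs.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def lost_mass_fraction(m_cell, r, u, M_p):
    """Fraction of the (ocean and) atmosphere mass that is gravitationally unbound.

    m_cell : mass of each Lagrangian cell [kg]
    r, u   : radius [m] and radial velocity [m s^-1] of each cell
    M_p    : planet mass [kg]
    """
    v_esc = np.sqrt(2.0 * G * M_p / r)
    return m_cell[u > v_esc].sum() / m_cell.sum()

# Evaluated at successive output times this fraction rises and then flattens;
# the plateau value is what is reported as the final loss.
```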
We ran simulations for a wide variety of surface pressures, surface temperatures, atmospheric compositions, ocean depths, planetary masses, and ground velocities, and using the three different EOS for water, to explore the dependence of the efficiency of loss on each of these parameters. The details of the surface conditions used in each set of calculations are described at the relevant point in Section 4.
### 3.2 Impedance match calculations
The impedance match velocities and pressures between different layers were found numerically, by solving for the intersection between the relevant release curve (of the ground/ocean) with the Hugoniot of the layer above. Hugoniots were calculated by iteratively finding the particle velocity, \(u_{\rm p}\), that satisfied the Rankine-Hugoniot equations, or using the analytical expressions for shocks in an ideal gas (Zel'Dovich and Raizer, 2002; Melosh, 1989). Release curves were calculated as isentropes through the relevant EOS with the solutions found iteratively, if necessary. Jupyter notebooks, python scripts, and a widget that can calculate the impedance match between different materials will be made available on publication of this work.
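A minimal numerical sketch of such an impedance-match calculation is shown below for the case of an ideal-gas layer shocked by a decompressing layer beneath it. The exact ideal-gas Rankine-Hugoniot solution is used for the gas, while the release curve of the lower layer is passed in as a callable (in practice an isentrope tabulated from the water or forsterite EOS); the bracketing interval and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def gas_hugoniot_pressure(u_p, p0, rho0, gamma):
    """Shock pressure in an ideal gas initially at (p0, rho0) as a function of
    particle velocity u_p (exact ideal-gas Rankine-Hugoniot solution)."""
    c0 = np.sqrt(gamma * p0 / rho0)          # initial sound speed
    a = 0.25 * (gamma + 1.0) * u_p
    u_s = a + np.sqrt(a**2 + c0**2)          # shock velocity
    return p0 + rho0 * u_s * u_p             # momentum jump condition

def impedance_match(release_curve, p0, rho0, gamma, u_lo, u_hi):
    """Intersection of the lower layer's release curve with the Hugoniot of the
    gas above it. `release_curve(u_p)` must return the pressure of the
    decompressing lower layer, and [u_lo, u_hi] must bracket the intersection
    (e.g. u_lo equal to the particle velocity of the shocked lower layer)."""
    f = lambda u: release_curve(u) - gas_hugoniot_pressure(u, p0, rho0, gamma)
    u_match = brentq(f, u_lo, u_hi)
    return u_match, gas_hugoniot_pressure(u_match, p0, rho0, gamma)
```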
The EOS used for water and atmospheres were the same as for the 1D hydrodynamics simulations (Section 3.1), and the ground was modelled as forsterite (Stewart et al., 2019). To allow discussion of previous work (Kegerreis et al., 2020, 2018), we also calculated impedance matches using the EOS for H\({}_{2}\)-He mixtures from Hubbard and MacFarlane (1980). An early version of the Hubbard and MacFarlane (1980) EOS that was included in the SWIFT hydrodynamics code (Schaller et al., 2018; Kegerreis et al., 2019) and used in previous work (Kegerreis et al., 2020, 2018; Kegerreis et al., 2020) contained an error in the calculation of internal energy. The Hubbard and MacFarlane (1980) EOS is defined by expressions for
Figure 3: The presence of an ocean can strongly influence the efficiency of loss. Shown is a schematic of the relative position (left column) and the thermodynamic state (right) of the different material layers at different stages (rows) in the passage of the shock from the planet, through the ocean, and into the atmosphere. Left: Different materials are indicated by different colors: rock - black, water - blue, and atmosphere - orange. In the left column, darker shades of these colors indicate material under maximum compression in the shock. Boundaries between materials are shown as thin black lines with their velocities shown as black arrows. The shock wave is indicated by a thick white line. Release waves are shown as white dashed lines and their velocities as white arrows. The locations where key dynamic and thermodynamic variables apply are noted. Right: Schematic particle velocity - pressure plots for the impedance match calculation between rock, water and atmosphere. Key pressures and particle velocities corresponding to different stages of the thermodynamic evolution of material are given on the axes, as labelled in the left column. Solid lines are shock Hugoniots, the locus of points that a shocked material can reach from an initial starting position. Hugoniots are not thermodynamic paths, and the point reached by material on each Hugoniot is indicated by a filled symbol. Dashed lines are release curves followed by material decompressing from a shocked state. Release curves are thermodynamic paths and the material moves along these lines. As in the left column, colors indicate different materials. A similar schematic for the case with no ocean is shown in Figure 2.
pressure and heat capacity as functions of density and temperature. Previous work calculated the specific internal energy as
\[\epsilon=c_{V}(\rho,T)\,T \tag{1}\]
where \(c_{V}\) is the specific heat capacity, \(\rho\) is the density, and \(T\) is temperature, but this expression neglects the change in heat capacity as a function of temperature and density. Here, we calculate internal energy as an integration first along the \(T=0\) K isotherm, and then along an isochore:
\[\epsilon(\rho,T)=\int_{\rho_{0}}^{\rho}\frac{\partial\epsilon}{\partial\rho^{ \prime}}\bigg{|}_{T=0}d\rho^{\prime}+\int_{0}^{T}\frac{\partial\epsilon}{ \partial T^{\prime}}\bigg{|}_{\rho^{\prime}=\rho}dT^{\prime}\;, \tag{2}\]
where primes indicate integration variables. In the formulation of Hubbard & MacFarlane (1980) the first term, the integral along the isotherm, is zero and so
\[\epsilon(\rho,T)=\int_{0}^{T}\frac{\partial\epsilon}{\partial T^{\prime}} \bigg{|}_{\rho^{\prime}=\rho}dT^{\prime}=\int_{0}^{T}c_{V}(\rho,T^{\prime})dT ^{\prime}\;, \tag{3}\]
which can be solved analytically. This is now the method used for calculating internal energy in the Hubbard & MacFarlane (1980) EOS table included in the current and future releases of SWIFT. It is important to note that an EOS defined by expressions for heat capacity and pressure alone, such as the Hubbard & MacFarlane (1980) EOS, is non-conservative. In other words, integration of thermodynamic variables along different paths between points in phase space can give different values. There is thus no single definition of internal energy, and our choice of energy calculation only improves on previous work in that it gives a value that is consistent with the defined EOS.
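The integral in Eq. (3) has an analytic form for the published heat-capacity expressions, as noted above; for a generic \(c_{V}(\rho,T)\) the equivalent numerical evaluation is a one-liner. The interface below is an illustrative sketch only.

```python
from scipy.integrate import quad

def internal_energy(c_V, rho, T):
    """Specific internal energy consistent with an EOS defined by c_V(rho, T),
    obtained by integrating the heat capacity along an isochore from T = 0
    (Eq. 3). In the Hubbard & MacFarlane (1980) formulation the T = 0 K
    isotherm contributes nothing, so this single integral is sufficient."""
    eps, _ = quad(lambda Tp: c_V(rho, Tp), 0.0, T)
    return eps
```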
## 4 Results of 1D Hydrocode Simulations
In this section we discuss the results of our numerical calculations. First, we will consider the efficiency of loss as a function of ground velocity without (Section 4.1) and with an ocean (Section 4.2), including comparisons to previous results. We present a parameterization of the relationship between ground velocity and loss in both cases in Section C.
### 4.1 The dependence of loss on ground velocity in the absence of an ocean
Atmospheric loss in the no-ocean case has been considered in a number of previous studies (e.g., Chen & Ahrens, 1997; Genda & Abe, 2003; Schlichting et al., 2015). Genda & Abe (2003) explored seven cases of ideal, adiabatic atmospheres with different compositions, surface temperatures, and pressures, and conducted simulations using different prescriptions for \(\gamma\). They found that the degree of loss is minimal unless the ground velocity exceeds \(\sim 0.5v_{\rm esc}\) and observed relatively little variation in the efficiency of loss between different atmospheric properties and prescriptions for \(\gamma\). Schlichting et al. (2015) calculated loss for both isothermal and adiabatic atmospheres with \(\gamma=4/3\) and \(\gamma=5/3\). In agreement with Genda & Abe (2003) they found little difference in the efficiency of loss between atmospheres with different \(\gamma\), but found that loss from isothermal atmospheres was somewhat less efficient than for adiabatic atmospheres at the same ground velocity (a difference in loss of up to \(\sim 10\%\)). Here we revisit atmospheric loss in the absence of an ocean to ground-truth our numerical model and to further explore the effect of atmospheric properties and planetary mass on the efficiency of loss.
Figure 4 shows the evolution of an atmosphere upon breakout of a shock from a planet's surface as calculated using our 1D hydrocode. The evolution follows that expected based on the physics described in Section 2. The pressure of the shock at the base of the atmosphere is set by the impedance-match solution at the particle velocity of the ground imposed in the simulation. The shock wave accelerates up the strong adiabatic density gradient of
Figure 4: Atmospheric loss is driven by acceleration of the impact shock wave as it travels up the hydrostatic pressure profile. Shown are the velocity, pressure, density and temperature profiles of the atmosphere (with each row a different variable) at different times after break out of the shock wave from the planet in a 1D simulation. Lines of the same color in each panel show the atmospheric structure at the same time after initial breakout (see legends in the second row). Each profile is plotted as a function of position relative to the initial location of the ground (x-axis), and hence the location of the bottom of the atmosphere moves to higher values with time. Each column shows a different set of time steps with the axis scales altered to be appropriate for each set of time steps. The sharp increase in all parameters in the first column is the shock wave, which moves upwards in the atmosphere over time, i.e., as line colors become lighter. At about \(\sim 3\) s, between the first and second columns, the shock reaches the top of the atmosphere and release to vacuum rapidly accelerates the top of the atmosphere to escape. For this simulation, there was no ocean and the atmosphere was Earth-like (\(m_{\rm a}=29\) g mol\({}^{-1}\), \(\gamma=1.4\): Genda & Abe, 2003) with a surface pressure and temperature of \(p_{0}=100\) bar and \(T_{0}=283\) K, respectively. The initial ground velocity was \(u_{\rm G}=5.5\) km s\({}^{-1}\) and the final loss fraction was 0.25. Gray dashed lines give the escape velocity as a function of distance from the center of the planet. To avoid showing known numerical artifacts near the center of the calculation (see Section A.3), the density and temperature of the zones closest to the lower boundary are not plotted. This figure is comparable to Figure 3 of Genda & Abe (2003). An animated version of this figure is available lasting 11 s.
the atmosphere, heating and compressing the gas, until the shock front reaches the top of the atmosphere (at \(\sim\)3.2 s in Figure 4). When the shock reaches the low-density edge of the atmosphere the compressed gas expands rapidly, reaching speeds far in excess of the escape velocity, and the top of the atmosphere is lost from the gravitational well of the planet (Figure 4E, I). Momentum transfer to the portion of the atmosphere that is lost occurs in the first few seconds to tens of seconds after the release of the shock into the atmosphere. After this point the lost portion of the atmosphere behaves almost ballistically and its velocity begins to fall as it moves further from the planet. In our simulations, the ground eventually stops (at 830 s in Figure 4) and the remaining bound atmosphere begins to fall back down to the planet. In reality, before this point the release wave from other parts of the surface could have slowed the ground motion and the rock surface could have spalled, melted, or vaporized. However, as the momentum is transferred to the lost portion of the atmosphere very early, these complications likely do not affect the efficiency of loss from the initial shock wave (see Kegerreis et al., 2019, and Section 6).
We find good agreement between our results and those of previous studies, and demonstrate more completely that the relationship between ground velocity and atmospheric loss in the absence of an ocean is relatively insensitive to atmospheric composition, surface temperature and pressure, and planetary mass. Figure 5A shows the efficiency of atmospheric loss for H\({}_{2}\) (\(m_{\rm a}=2\) g mol\({}^{-1}\), \(\gamma=1.4\)), H\({}_{2}\)O (\(m_{\rm a}=18\) g mol\({}^{-1}\), \(\gamma=1.25\)), CO\({}_{2}\) (\(m_{\rm a}=44\) g mol\({}^{-1}\), \(\gamma=1.29\)) and an approximation to an Earth-like N\({}_{2}\) and O\({}_{2}\)-dominated atmosphere (\(m_{\rm a}=29\) g mol\({}^{-1}\), \(\gamma=1.4\); Genda & Abe, 2003) with surface temperatures of 283 and 3000 K (H\({}_{2}\), CO\({}_{2}\) and Earth-like) or 300 K (H\({}_{2}\)O) and surface pressures of 100 bar. The black dashed line is the result from Genda & Abe (2003) for a 1 bar atmosphere of Earth-like composition and a surface temperature of 288 K. With the exception of the 3000 K, H\({}_{2}\) atmosphere, all of the simulations gave very similar results and were in good agreement with those of Genda & Abe (2003) and Schlichting et al. (2015). Loss of the high-temperature H\({}_{2}\) atmosphere is slightly less efficient for a given ground velocity (at most a few percent), which may be surprising given that the high-\(T\) H\({}_{2}\) atmosphere is much more extended, and so more loosely bound, than the other example atmospheres. The high-\(T\) H\({}_{2}\) atmosphere has a height of 2.3 \(R_{\rm Earth}\) (Earth radii), compared to a maximum height of 0.07 \(R_{\rm Earth}\) for the other examples. The pressure and density gradients are much lower, affecting how the shock wave accelerates through the atmosphere, and more of the mass of the atmosphere is at lower pressure. In addition, the hot H\({}_{2}\) atmosphere is so extended that for ground velocities less than \(\sim 0.6\)\(v_{\rm esc}\) the ground has stopped and begun to fall back to its original position even before the initial shock has reached the top of the atmosphere. The release wave from the reversal of the ground velocity propagates upwards in the atmosphere and may play a role in reducing the efficiency of loss compared to other cases where the shock wave is supported as the atmosphere is accelerated to escape.
Figure 5B shows the efficiency of atmospheric loss for atmospheres of varying surface pressures on planets between Mars and Earth mass. Points are for all combinations of atmospheric pressures of 0.1, 0.5, 1, 5, 10, 50, 100, and 500 bar and planetary masses of 0.107 (\(M_{\rm Mars}\)), 0.3, 0.5, 0.7, 0.9, and 1 \(M_{\rm Earth}\) with mass indicated by color. The black dashed line is the result from Genda & Abe (2003) for an Earth-mass planet and the solid black line is a fit to our simulation results (see Section C). There is very little variation in the efficiency of loss as a function of ground velocity with atmospheric pressure and planetary mass, when ground velocity is normalized to the escape velocity. This confirms what has been assumed in other studies (Genda & Abe, 2003, 2005; Schlichting et al., 2015) that the effect of planetary mass in the absence of an ocean is almost entirely accounted for by normalization to the escape velocity.
### 4.2 Dependence of loss on ground velocity in the presence of an ocean
The efficiency of atmospheric loss for a given ground velocity can be significantly enhanced if the colliding bodies have part or all of their surfaces covered by water (Genda & Abe, 2005). In the following sections, we compare our results to those of Genda & Abe (2005), and explore how the efficiency of loss is dependent on the initial surface conditions (e.g., atmospheric pressure, ocean depth, etc.), and the mass of the planet.
#### 4.2.1 Comparison to previous results
Figure 6A shows the efficiency of loss as a function of ground velocity for H\({}_{2}\) atmospheres of 300, 30, and 1 bar (dotted lines) above 3 km deep oceans on Earth-mass planets with surface temperatures of 300 K. The pressures at the base of the ocean were approximately 600, 330, and 300 bar, respectively. For reference, loss in the no-ocean case is shown in black. The examples shown in Figure 6A were also explored by Genda & Abe (2005) and their results are shown as dashed lines. In their study, Genda & Abe (2005) considered H\({}_{2}\) atmospheres with six different atmospheric pressures but we have chosen to only show three here to allow clear comparison with our results. At ground velocities less than
\(\sim 6\) km s\({}^{-1}\) (\(0.54\)\(v_{\rm esc}\)) we find relatively good agreement between our results and those of Genda & Abe (2005), but at higher velocities our results diverge, particularly for higher pressure atmospheres. This difference is due to the EOS for water used at high ground velocities in each study. Genda & Abe (2005) ran simulations using both the IAPWS EOS (Wagner, 2002) and the Tillotson EOS (Tillotson, 1962). At lower ground velocities the two EOS gave relatively similar results, but the maximum pressure limit of the IAPWS EOS precluded its use for ground velocities above 6 km s\({}^{-1}\) where the pressure in the shocked state exceeded the range of the EOS (marked by crosses on each loss curve in Figure 6A). It is therefore only in the regime in which Genda & Abe (2005) performed calculations using the Tillotson EOS that our results significantly differ from theirs.
To confirm that the only difference between our results and those of Genda & Abe (2005) is the water EOS used, we ran additional simulations using the Tillotson EOS for the ocean and found good agreement between our results and theirs when using the same EOS. Figure 6B shows an example set of simulations for an Earth-mass planet with a 300 bar, H\({}_{2}\) atmosphere over a 3 km ocean with our simulations using the Senft & Stewart (2008) EOS shown by the dotted line, our results using the Tillotson EOS as a dash-dash-dot line, and the results of Genda & Abe (2005) as a dashed line. The treatment of expanded states in the Tillotson EOS often leads to unphysical solutions at low densities, causing simulations to fail. As a result, very few of our calculations using the Tillotson EOS reached the prescribed run-time of 5000 s, which likely accounts for the slightly lower loss we calculate in some cases.
The Tillotson EOS is designed to model the behaviour of material in the shocked state but does not provide a good description of material properties in lower-density, expanded states. This is particularly an issue in the multi-phase liquid and vapor region where a minimum density cutoff is imposed which is typically a sizeable fraction of the reference density. In contrast, the water EOS from Senft & Stewart (2008)
Figure 5: The relationship between ground velocity and loss in the no-ocean case is insensitive to atmospheric composition, surface temperature and pressure, and planetary mass. A: Fraction of atmosphere lost from an Earth-mass (\(M_{\rm Earth}\)) planet as a function of ground velocity for 100 bar atmospheres of different compositions (line styles) and surface temperatures (line thicknesses). The black dashed line is the result from Genda & Abe (2003) for a 1 bar atmosphere of Earth-like composition and a surface temperature of 288 K. B: The fraction of atmosphere lost from bodies of different masses with surface pressures of 0.1, 0.5, 1, 5, 10, 50, 100, and 500 bar (colored symbols). Ground velocity is normalized to the escape velocity, \(v_{\rm esc}\), of each body. Atmospheres were CO\({}_{2}\) with a surface temperature of 300 K. The solid black line was calculated using the parameterization described in Section C. C: The misfit of the results in B from the parameterization described in Section C.
was developed for modeling planetary collisions and includes a liquid-vapor phase region to more accurately describe expanded states. The efficiency of loss in the ocean case is dictated by the complex combination of multiple waves (see discussion below) and so the use of a high-quality EOS for water is critical to accurately determine loss. At high ground velocities, more of the water reaches expanded states and hits the minimum density cutoff, likely accounting for the lower loss at high ground velocities when using the Tillotson EOS. Given that the Senft & Stewart (2008) EOS provides a more accurate description of expanded states, our results are likely more realistic than those of Genda and Abe (2005) using the Tillotson EOS. For a discussion of the comparison between the Tillotson and more advanced EOS see Stewart et al. (2020).
#### 4.2.2 Dependence on ocean depth and atmospheric pressure
To explore the effect of surface pressure and ocean depth, as well as planetary mass (Section 4.2.3), on the efficiency of loss as a function of ground velocity, we conducted simulations with every combination of six different planetary masses (0.107, 0.3, 0.5, 0.7, 0.9, and 1 \(M_{\rm Earth}\)), seven surface pressures (\(p_{0}\) = 1, 5, 10, 50, 100, 300, and 500 bar), and nine ocean depths (0.1, 0.5, 1, 2, 3, 5, 10, 20, and 30 km). We also conducted additional simulations for each planetary mass with 900 bar atmospheres and oceans of 0.1 km depth. Atmospheres were CO\({}_{2}\) (\(m_{\rm a}\) = 44 g mol\({}^{-1}\), \(\gamma\) = 1.29) with surface temperatures of 300 K. Simulations were performed for ground velocities at 0.05 \(v_{\rm esc}\) intervals between 0.05 and 0.95 \(v_{\rm esc}\).
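For bookkeeping, the core grid described above amounts to the combinations enumerated below; this is purely an illustrative sketch (the additional 900 bar cases and the water-EOS comparison runs are handled separately).

```python
from itertools import product

masses = [0.107, 0.3, 0.5, 0.7, 0.9, 1.0]          # planetary mass [M_Earth]
pressures = [1, 5, 10, 50, 100, 300, 500]           # surface pressure [bar]
depths = [0.1, 0.5, 1, 2, 3, 5, 10, 20, 30]         # ocean depth [km]
velocities = [0.05 * i for i in range(1, 20)]       # ground velocity [v_esc]

runs = [dict(M_p=m, p0=p, H_oc=h, u_G=v)
        for m, p, h, v in product(masses, pressures, depths, velocities)]
print(len(runs))   # 6 x 7 x 9 x 19 = 7182 core simulations
```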
Figure 7 shows the efficiency of loss from an Earth-mass planet for six example surface velocities for different atmospheric pressures (symbols) and ocean depths (x-axis). The amount of atmosphere and ocean lost is presented as the sum of the atmospheric and ocean loss (which we refer to as combined loss) with one being total loss of atmosphere and two being the total loss of both ocean and atmosphere. For reference, the lower open grey triangle to the left of each panel shows the loss expected in the absence of an ocean. Note that the x-axis is the inverse of ocean height (\(1/H_{\rm oc}\)).
Variation in the atmospheric pressure and ocean depth can make the difference between almost zero and total
Figure 6: Our results agree well with those of Genda and Abe (2005) at low ground velocities but deviate at high ground velocities due to using an improved equation of state (EOS) for water. A: Atmosphere and ocean loss from an Earth-mass (\(M_{\rm Earth}\)) body as a function of ground velocity for H\({}_{2}\) atmospheres of different surface pressures (colored lines) above 3 km oceans. Dotted lines are the results of our calculations and dashed lines are the results of Genda and Abe (2005). The ocean surface temperature in each case was 300 K. Black lines are for the case of no ocean from Genda and Abe (2003) (dashed line) or our parameterization described in Section C (solid line). Crosses indicate the maximum velocity at which Genda and Abe (2005) calculated loss using the IAPWS water EOS, due to reaching the maximum pressure of validity for that EOS. Cumulative loss fraction is the sum of atmospheric and ocean loss. Grey dashed lines indicate total atmospheric loss but zero ocean loss. B: Fraction of atmosphere and ocean lost for a 300 bar, H\({}_{2}\) atmosphere over a 3 km ocean, calculated using different water EOS in our study and in Genda and Abe (2005).
loss of an atmosphere, and between zero and almost total loss of the ocean. Loss is more efficient from planets that initially have deeper oceans and/or lower-pressure atmospheres. For shallow oceans and high-pressure atmospheres, the efficiency of loss tends towards a low-loss limit equal to the efficiency of loss in the no-ocean case (open grey triangle to the left of each panel in Figure 7). Increasing the ocean depth while keeping the atmospheric pressure constant leads to an increase in the efficiency of loss until loss plateaus at a high-loss limit for very deep oceans. Over the range of conditions we have considered, only the lowest pressure atmospheres plateau in the high-loss limit. At lower ground velocities, where only the atmosphere is being lost, the value of the plateau is dependent on the initial atmospheric pressure. As we will describe below, this phenomenon is due to the fact that the lower the initial atmospheric pressure, the higher the velocity of the ocean surface upon release to the atmosphere and so the stronger the driver for atmospheric loss. When the atmosphere is totally lost, ocean loss in the high-loss limit is almost invariant to the initial atmospheric pressure.
It is evident from Figure 7 that the physics of atmospheric loss in the presence of an ocean is more complicated than a simple impedance match calculation (Section 2). Filled symbols to the left of each panel in Figure 7 show the loss determined by convolving the velocity of the ocean surface determined from an impedance match solution (see Figure 12) with the parameterization for atmospheric loss in the case of no ocean (see Section C), i.e., the loss that would be expected if loss of the atmosphere was only controlled by the ocean surface driving a shock at the impedance-match velocity. The dark blue line on the left of each panel in Figure 7 shows a similar calculation except using the velocity of the ocean surface expected upon release of the ocean to very low pressure (1 Pa). The plateau in loss seen in our simulations is typically higher than that calculated assuming the impedance match velocity as the driving velocity for loss, and additional processes must be at play.
To explain the dependence of loss on atmospheric pressure and the depth of the ocean, it is necessary to understand the dynamics of the system beyond the initial breakout of the shock (Section 2). Figure 8 shows examples of the early evolution of the ocean and atmosphere after breakout of the impact shock from the ground. Each column shows the evolution for planets with the same depth of ocean (3 km) and the same ground velocity (4 km s\({}^{-1}\)), but with increasing initial atmospheric pressures going from left to right (1, 50, and 500 bar). Upon breakout from the planet, the system evolves as dictated by the impedance-match between the different layers, as described in Section 2. The shock propagates through the ocean, compressing the water and accelerating it to the ground velocity. When the shock front reaches the surface of the ocean the water releases to the impedance match pressure between the water and the atmosphere (grey dashed lines in Figure 8), driving a shock wave into the atmosphere. The shock accelerates as it travels up the atmosphere and the initial evolution is similar to that seen in Figure 4 for the no-ocean case, but with the ocean surface taking the role of the ground.
shock is no longer fully supported in the atmosphere and a release wave from the bottom of the atmosphere can retard the acceleration of the upper fraction of the atmosphere, leading to decreased loss. In cases with either very thin oceans or very high-pressure atmospheres (e.g., Figure 8E and F), pressure waves can equalize the pressure throughout the ocean before the shock wave has propagated far into the atmosphere. The velocity of the ocean surface slows to that of the ground, and the evolution of the atmosphere is very similar to that in the no-ocean case. This explains why the atmospheric loss tends to that in the no-ocean case in the low-loss limit (Figure 9).
In all cases, for sufficiently high ground velocities continued expansion of the ocean, contributed to by a release wave from the top of the atmosphere, leads to a slow acceleration of the ocean surface and loss of the top of the ocean (e.g., Figure 8A, top of the ocean on the right side of the yellow solid line). When the whole atmosphere is lost the ocean effectively releases to zero pressure and the initial pressure of the atmosphere becomes almost irrelevant. The high-loss limit for ocean loss is therefore relatively insensitive to atmospheric pressure.
The time at which the pressure in the ocean becomes less than that at the base of the atmosphere is critical in governing the efficiency of loss. The earlier in time this transition occurs, the earlier the ocean surface and atmosphere are slowed, and the lower the degree of atmospheric/ocean loss. The timing of this transition depends on two factors: the depth of the ocean; and the initial atmospheric pressure. For deeper oceans, the increase in the depth of the ocean layer as the ocean surface expands is a smaller fraction of the total depth of the ocean. The density, and hence pressure, of the ocean therefore falls more slowly. The dependence of loss on atmospheric pressure is more complicated as there are two competing effects. The atmospheric pressure dictates the impedance match pressure and velocity of the ocean surface and base of the atmosphere. The lower the initial atmospheric pressure, the higher the impedance-match velocity of the ocean surface and the greater the driver for atmospheric loss. In addition, the lower the initial atmospheric pressure, the lower the pressure in the atmospheric shock. All else being the same, as the ocean expands it takes longer for the pressure in the ocean to fall below the pressure in the shocked lower atmosphere, leading to slowing of the ocean surface later in time. However, working against this effect is that the higher the surface velocity of the ocean, the more rapidly the ocean expands. The pressure in the ocean decreases more rapidly and so falls below the pressure in the lower atmosphere earlier in time, leading to an earlier slowing of the ocean surface by pressure waves. Determining the balance between these two effects of atmospheric pressure is non-trivial.
The efficiency of atmospheric loss is therefore controlled by the initial depth of the ocean and atmospheric pressure in two principal ways: the atmospheric pressure controls the impedance-match velocity between the ocean and atmosphere that drives enhanced loss; and a combination of the ocean depth and initial atmospheric pressure determines when the ocean surface begins to slow. Genda & Abe (2005) previously suggested that loss was dependent on the ratio of the mass of the atmosphere to the mass of the ocean (\(\mathcal{R}=M_{\rm atm}/M_{\rm oc}\)). For a planet of a given mass, this mass ratio is proportional to the ratio of initial atmospheric pressure to ocean depth (\(p_{0}/H_{\rm oc}\)):
\[\mathcal{R}\sim\frac{p_{0}}{H_{\rm oc}}\frac{1}{g}\;, \tag{4}\]
where \(g\) is the gravitational acceleration at the base of the atmosphere. We find that loss correlates well with both \(\mathcal{R}\) (Figure 9) and \(p_{0}/H_{\rm oc}\) over a wide range of atmospheric pressures and ocean depths, including across the transition between the low and high loss regimes. This demonstrates the key role that atmospheric pressure and ocean depth play in loss.
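One way to see Eq. (4) is to treat both layers as thin spherical shells, with \(M_{\rm atm}\approx 4\pi R_{\rm p}^{2}p_{0}/g\) and \(M_{\rm oc}\approx 4\pi R_{\rm p}^{2}\rho_{\rm w}H_{\rm oc}\). The sketch below makes this concrete; the constant water density and the example numbers are assumptions for illustration.

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
RHO_W = 1000.0         # kg m^-3, ocean treated as constant density

def mass_ratio(p0, H_oc, M_p, R_p):
    """Approximate atmosphere-to-ocean mass ratio R = M_atm / M_oc for thin
    surface layers: M_atm ~ 4 pi R_p^2 p0 / g and M_oc ~ 4 pi R_p^2 rho_w H_oc."""
    g = G * M_p / R_p**2
    area = 4.0 * np.pi * R_p**2
    return (area * p0 / g) / (area * RHO_W * H_oc)

# Example: a 100 bar atmosphere over a 3 km ocean on an Earth-mass planet
print(mass_ratio(100e5, 3.0e3, 5.97e24, 6.37e6))   # ~0.34, within the R ~ 1 transition region
```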
The correlation of loss with \(\mathcal{R}\) provides an alternative way of understanding the limits on loss. In the high-\(\mathcal{R}\) limit, the ocean is much less massive than the atmosphere and so the release of the ocean cannot provide sufficient momentum to drive any enhancement in atmospheric loss. The efficiency of loss is then the same as if it was just driven by the ground motion alone, and the whole atmosphere, or any amount of ocean, is not lost until the ground velocity is very close to the escape velocity. In the low-\(\mathcal{R}\) limit, the ocean is much more massive than the atmosphere and so the atmosphere
Figure 8: Acceleration of the ocean surface upon release of the shock can lead to a significant increase in the efficiency of atmospheric loss for a given ground velocity. Shown are velocity (top row) and pressure (bottom row) profiles of the ocean (solid lines) and atmosphere (dotted lines) at different times (colors) for simulations with a ground velocity of 4 km s\({}^{-1}\). Columns show the evolution for planets with the same ocean depth (3 km) but different atmospheric pressures, resulting in different ratios of the mass of the atmosphere to the mass of the ocean. The same time steps are shown in each column. The atmosphere in all cases was CO\({}_{2}\) (\(m_{\rm a}=44\) g mol\({}^{-1}\), \(\gamma=1.29\)) with a surface temperature of 300 K, and the planet was Earth-mass with an escape velocity of 11.2 km s\({}^{-1}\). Gray dashed lines show the impedance match solution for the release of the ocean to the corresponding atmosphere. An animated version of this figure accompanies this manuscript.
Figure 9: The transition between the high and low loss regimes scales well with the ratio of atmospheric to ocean mass (\(\mathcal{R}\)). Each panel shows the combined loss (sum of atmosphere and ocean fraction lost) as a function of \(\mathcal{R}\). Symbols and lines are the same as in Figure 7.
offers little impediment to the release of the ocean to very low pressures. The transition between the high and low loss regimes occurs when the mass of the atmosphere is comparable to that of the ocean (i.e., \(\mathcal{R}\sim 1\)) and takes place over one to two orders of magnitude in \(\mathcal{R}\), dependent on the ground velocity. The transition occurs at higher \(\mathcal{R}\) as ground velocity increases, because the ocean itself begins to be lost and the influence of the atmosphere diminishes.
We find that considering loss as a function of \(\mathcal{R}\) is more useful than \(p_{0}/H_{\rm oc}\) when considering planets with different masses (Section 4.2.3). We therefore use \(\mathcal{R}\) as the principal control on loss from here on out. However, it is important to bear in mind the close relationship between the two ratios.
Scaling with \(\mathcal{R}\) does not capture all the factors that influence the relationship between ground velocity and loss. In Figure 9A and B, the dependence of the upper limit of loss on atmospheric pressure is noticeable, but the scaling with \(M_{\rm atm}\sim p_{0}\) of the x-axis means that the offset in loss at a given \(\mathcal{R}\) is smaller than at the same \(H_{\rm oc}\). In addition, loss in the transition between the high and low \(\mathcal{R}\) regimes does not scale perfectly with \(\mathcal{R}\), particularly in the regime where ocean is being lost, with loss being higher in cases with initially higher pressure atmospheres at the same \(\mathcal{R}\). In such cases, the scaling with \(M_{\rm atm}\sim p_{0}\) over-corrects for the effect of atmospheric pressure as the ocean is able to decompress to lower pressures with limited restriction from the atmosphere. Over the range of parameters we have considered, variation in atmospheric pressure leads to differences in loss that are much smaller than the overall \(\mathcal{R}\) effect, except in the ocean-loss transition region, where the variation due to atmospheric pressure can be \(\sim 1/3\) of the total variation in loss. If we were to consider cases with high pressure atmospheres but much larger ocean depths the simple scaling with \(\mathcal{R}\) would not well describe the value to which loss plateaus in the high-loss regime. The lower impedance-match velocity of higher pressure atmospheres would result in loss plateauing at lower values than for lower pressure atmospheres (Figure 7). This effect would cause deviation on the order of the total variation in loss for cases with a similar \(\mathcal{R}\). Considering lower pressure atmospheres than those we examine here could compound this effect as they could plateau at higher loss fractions than the 1 bar minimum pressure we simulated. A wider range of parameters will be considered in future work but, for now, we advise caution when using the results of this work beyond the parameter regime simulated.
#### 4.2.3 Dependence on planetary mass
The scaling of loss with \(\mathcal{R}\) holds well, with some caveats, when considering planets of different masses. Figure 10 shows the combined loss as a function of \(\mathcal{R}\) for six example velocities (normalized to \(v_{\rm esc}\)) and six different mass planets (colors) between the mass of Mars (\(M_{\rm Mars}\)) and the mass of Earth (\(M_{\rm Earth}\)). The symbols are the same as in Figures 7 and 9, and colored lines show the results of our parameterization for atmospheric loss for each mass planet (Section C). The dependence of loss is generally well captured by scaling with \(\mathcal{R}\), with the exception that, in the low-\(\mathcal{R}\) regime when there is only partial atmospheric loss, the plateau in loss is lower for lower mass planets. For a ground velocity which is a given fraction of \(v_{\rm esc}\), the absolute ground velocity, and hence the strength of the shock, for a lower mass planet is lower. The resulting impedance-match velocity for the ocean-atmosphere interface is a lower fraction of the escape velocity of the smaller planet, leading to less efficient loss (filled symbols to the left of each panel in Figure 10). The influence on loss is compounded by the fact that the atmospheric loss function is highly non-linear at low velocities (Figure 5) while the impedance match velocity is relatively linear with respect to absolute velocity. In the regime in which the entire atmosphere is lost, the loss of ocean is controlled by expansion of the ocean to low pressure and the maximum loss is relatively insensitive to planetary mass.
There is also substantial deviation from a simple \(\mathcal{R}\) scaling in the transition region between the low and high loss regimes, with loss from more massive planets being more efficient. This variation is likely largely due to the sensitivity of the dynamics of loss to the shock and release path of water which itself is a function of the absolute strength of the shock. The variation due to planetary mass is compounded by the variation due to initial atmospheric pressure/ocean depth (Section 4.2.2) and the difference in combined loss can be as much as 50% at the same \(\mathcal{R}\) in regions where the loss fraction is varying rapidly with \(\mathcal{R}\). These variations are the main cause of error in our parameterization of loss (Section C).
#### 4.2.4 Effect of atmospheric composition
In the presence of an ocean, the effect of atmospheric composition on the relationship between ground velocity and loss is complex as the composition of the atmosphere can affect the efficiency of loss in a number of different ways. First, the composition of the atmosphere controls the compressibility of the shocked gas, and the impedance match velocity and pressure of the
ocean surface. For an ideal gas in the strong-shock limit
\[\frac{\rho_{\rm s}}{\rho_{0}}\approx\frac{\gamma+1}{\gamma-1}\;, \tag{5}\]
where \(\rho_{\rm s}\) and \(\rho_{0}\) are the density of the shocked and unshocked material, respectively, and \(\gamma\) is the ratio of specific heat capacities of the gas. The compression of material due to the shock is entirely dependent on \(\gamma\) which varies from 1.25 to 1.4 for the gases we consider, resulting in a variation of density in the shocked gas of 6 to 9. The particle velocity, \(u_{\rm p}\), at a given shock pressure, \(p_{\rm s}\), in the strong shock limit is
\[u_{\rm p}\approx\sqrt{\frac{p_{\rm s}}{\rho_{0}}\left(\frac{2}{\gamma+1} \right)}\;. \tag{6}\]
The lower the initial density of the gas, the higher the particle velocity required to reach a given shock pressure. As a result, the impedance match velocity at the ocean surface is higher for lighter gases while the impedance match pressure is lower. In particular, H\({}_{2}\) is much less dense than gases such as N\({}_{2}\) and CO\({}_{2}\) and the initial velocity of the ocean surface can be tens of percent larger for H\({}_{2}\) than for the heavier gases. The lower pressure of the impedance match solution in such cases means that the pressure at the base of the ocean can remain above that at the base of the atmosphere for longer, helping sustain the velocity of the ocean surface. Second, the height of the atmosphere, and hence the time it takes for the shock to reach the top of the atmosphere, can vary significantly depending on its composition. For example, it can take the shock more than ten times longer to reach the top of an H\({}_{2}\) atmosphere than a CO\({}_{2}\) atmosphere. As a result, the surface of the ocean, and the ground, slow earlier in the evolution relative to the progress of the shock through the atmosphere. So, although the velocity of the ocean surface is sustained for longer for H\({}_{2}\) atmospheres in absolute time, the ocean surface is typically slowed earlier relative to the evolution of the atmosphere, acting to reduce the efficiency of loss. Finally, the release wave from the top of the atmosphere reaches the ocean surface earlier during the loss of CO\({}_{2}\) atmospheres compared to H\({}_{2}\) atmospheres and so the pressure in the ocean drops more rapidly and the ocean reaches higher velocities earlier in the evolution.
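As a rough numerical illustration of Eqs. (5) and (6), the sketch below evaluates the strong-shock compression and particle velocity for H\({}_{2}\) and CO\({}_{2}\) at the same shock pressure; the chosen initial conditions and shock pressure are arbitrary example values.

```python
import numpy as np

R_GAS = 8.314   # J mol^-1 K^-1

def strong_shock(p_s, p0, T0, m_a, gamma):
    """Compression ratio (Eq. 5) and particle velocity (Eq. 6) in the
    strong-shock limit for an ideal gas initially at (p0, T0)."""
    rho0 = p0 * m_a / (R_GAS * T0)
    compression = (gamma + 1.0) / (gamma - 1.0)
    u_p = np.sqrt((p_s / rho0) * 2.0 / (gamma + 1.0))
    return compression, u_p

# At the same shock pressure the particle velocity scales roughly as 1/sqrt(rho0),
# so it is several times higher in H2 than in CO2 for these example conditions.
for gas, m_a, gamma in [("H2", 2e-3, 1.4), ("CO2", 44e-3, 1.29)]:
    print(gas, strong_shock(1e9, 1e5, 300.0, m_a, gamma))
```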
The net effect of these competing factors depends on the initial conditions and the regime of loss, and can be difficult to isolate. For example, Figure 11A shows the same cases as in Figure 6A for both H\({}_{2}\) and CO\({}_{2}\) atmospheres which show some of the common tradeoffs. The loss for different heavy atmospheres (e.g., N\({}_{2}\) and CO\({}_{2}\)) is very similar, but the loss of H\({}_{2}\) atmospheres can be significantly different. In the high loss limit (e.g., yellow curves in Figure 11A), the shock in the atmosphere is supported late into the evolution in both the CO\({}_{2}\) and H\({}_{2}\) cases and the efficiency of loss is very similar. In the regime where only part of the atmosphere is lost, loss in the H\({}_{2}\) case is slightly greater due to the larger impedance match velocity. However, this effect is muted by the continued decompression of the ocean surface and the ocean surface in the CO\({}_{2}\) case reaches greater velocities than in the H\({}_{2}\) cases within a few seconds. In the high-loss regime when the whole atmosphere is lost, the loss of ocean in the CO\({}_{2}\) and H\({}_{2}\) cases are nearly identical as the low-mass atmosphere provides little impediment to the loss of the ocean, regardless of atmospheric composition.
In the transition between the high and low loss limits (e.g., teal and blue lines in Figure 11A) the differences between CO\({}_{2}\) and H\({}_{2}\) atmospheres are more complex. At low ground velocities, loss of CO\({}_{2}\) atmospheres is more efficient largely because the ocean surface in the H\({}_{2}\) cases slows significantly before the shock reaches the top of the atmosphere. There is then an intermediate ground velocity regime where the situation is similar to that in the high loss limit and loss of H\({}_{2}\) atmospheres is more efficient. Close to near total loss of the atmosphere, and continuing into the ocean-loss regime, loss
Figure 10: The efficiency of atmospheric loss depends strongly on the ratio of atmospheric to ocean mass (\(\mathcal{R}\)). Each panel shows the combined (atmosphere and ocean) loss for a different ground velocity, normalized to the escape velocity of each planet, as a function of \(\mathcal{R}\). Points are the results of simulations for a range of planetary masses (colors), initial atmospheric pressures (symbols) and initial ocean depths (decreasing left to right). Colored solid lines show a parameterized fit of the simulation results (Section C) for each planetary mass. Grey dashed lines indicate total atmospheric loss (a combined loss of one). The open grey triangle to the left of each panel shows the loss expected in the absence of an ocean. Filled colored circles to the left of each panel show the loss determined by convolving the velocity of the ocean surface determined from an impedance match solution for an atmosphere of 1 bar (the lowest pressure considered in this work) with a parameterization for atmospheric loss in the case of no ocean (Section C). Colored dash marks to the left of each panel show a similar calculation except using the velocity of the ocean surface expected upon release of the ocean to 1 Pa. In both cases, the color of the symbols indicates the planetary mass. The mass of the planet affects the degree of loss as it determines the absolute velocity that corresponds to the given fraction of the escape velocity, which dictates the absolute impedance match velocity. The lower axis in each panel shows the misfit between the simulations and the parameterized fit. Indicated is the root mean squared (RMS) misfit at each velocity.
of CO\({}_{2}\) atmospheres and their oceans once again becomes more efficient. This effect is likely a result of acceleration caused by the more rapid drop in the pressure of the ocean in the CO\({}_{2}\) cases allowing the ocean to reach higher velocities. In cases where the ocean is almost entirely lost, the situation can reverse (e.g., teal line in Figure 11A). This may indicate that the higher impedance match velocity in the H\({}_{2}\) case controls the loss of the deep ocean.
The difference in loss between CO\({}_{2}\) and H\({}_{2}\) atmospheres can be tens of percent, particularly in the ocean-loss regime. However, for the range of parameters we have explored, the loss of H\({}_{2}\) atmospheres is within, if sometimes at the extreme of, the variation we see between different \(H_{\rm oc}\), \(p_{0}\) and \(M_{\rm p}\) combinations for CO\({}_{2}\) atmospheres with similar \(\mathcal{R}\) (Figure 10). Therefore, although the effect of atmospheric composition can be significant, it is a second-order effect compared to the dominant \(\mathcal{R}\) scaling.
#### 4.2.5 Effect of surface temperature
The range of possible surface temperatures in the ocean case is limited to the range over which liquid water is stable on the surface (we do not consider an ice-covered surface in this work). The exact temperature range depends on the initial atmospheric pressure but, over the range considered here, the effect of atmospheric temperature on loss is minor. Figure 11B shows loss for an example case with H\({}_{2}\), N\({}_{2}\), and CO\({}_{2}\) atmospheres and surface temperatures between 283 and 583 K. Higher temperature atmospheres typically lead to slightly less efficient loss, likely because the greater atmospheric height means that the atmospheric shock becomes unsupported earlier in the evolution (see Section 4.2.4). This effect is only really noticeable for H\({}_{2}\) atmospheres in the atmospheric loss regime (e.g., see thin dotted lines in Figure 11B) where the efficiency of loss can be a few percent lower than in colder cases.
## 5 The relationship between shock strength and ground velocity and the implications for the efficiency of loss
So far we have considered the effect of the surface conditions on the efficiency of atmospheric and ocean loss for a specified ground velocity. However, as discussed in Section 2, the impedance-match ground velocity that results from a shock wave of a given strength within a planet depends strongly on parameters such as the initial surface pressure and temperature. We now turn to consider how the surface conditions influence the ground velocity due to a given impact and the effect on atmospheric and ocean loss. In this section we will only consider the magnitude of the initial ground velocity and return to discuss non-ballistic effects on the ground velocity in Section 6.
For a given impact, the strength of the shock inside the planet before breakout is insensitive to the
Figure 11: Loss in the ocean case is somewhat sensitive to the composition of the atmosphere. A: Atmosphere and ocean loss from an Earth-mass body as a function of ground velocity for H\({}_{2}\) (dotted lines) and CO\({}_{2}\) (solid lines) atmospheres of different surface pressures (colors) above 3 km oceans. B: Loss for atmospheres of different compositions (line styles) and ocean surface temperatures (line thicknesses) for a 100 bar atmosphere over a 3 km ocean.
properties of a thin atmosphere and ocean. In this section, we will therefore use the particle velocity in the shock in the planet, prior to breakout into the atmosphere/ocean, as the independent parameter when comparing loss from planets with different surface conditions. In other words, we will consider how much of the atmosphere/ocean would be lost from a given impact if the only parameters that changed were the initial surface conditions.
### 5.1 The no-ocean case
As discussed in Section 2, in the no-ocean case, the ground velocity is set by an impedance match between the rock surface and the atmosphere. Figure 12A-C shows the ground velocity as a function of the particle velocity in the shock in the planet before breakout for atmospheres with different pressures, compositions, and temperatures (colored lines). For reference, the particle velocity of the ground on release to very low pressures (1 Pa: black dashed line), and the impedance match velocity for release of the ground into an ocean (deeper blue line) are also shown. For high-temperature, low-pressure atmospheres the ground velocity is close to the low-pressure release velocity, as the impedance match pressure is low enough that the forsterite release curve is very steep in \(u_{p}\)-\(p\) space by the time it intersects the shock Hugoniot of the gas. This can be seen in the example impedance-match calculations shown in \(u_{p}\)-\(p\) space in Figure 13A (similar to the schematics in Figures 2 and 3) where the black dashed lines show rock release curves and black symbols show the impedance-match solutions. For the same strength of shock in the planet, the impedance match velocity decreases with increasing atmospheric pressure and decreasing temperature, and is also lower for heavier atmospheres (Figure 12B). Therefore, even though the dependence of loss on the ground velocity is largely insensitive to the atmospheric properties (Section 4.1 and Figure 5), it is easier to lose hotter, lighter, and lower pressure atmospheres from terrestrial planets due to the higher ground velocity resulting from any given impact.
The effect of the varying impedance-match velocities on the magnitude of loss is shown in Figure 12. Panels E-G show the loss calculated by inputting the impedance-match velocities shown in Figure 12A-C into our parameterization of the relationship between ground velocity and atmospheric loss (Section C). Panel H shows similar results but for the four atmospheres considered in the 3D loss simulations of Kegerreis et al. (2019) utilizing the EOS of Hubbard and MacFarlane (1980). Panels I-L show the difference between the calculated loss from each of the atmospheres and the loss that would be driven by release of the surface to very low pressures (1 Pa, black dashed line in panels A-H). Figure 12 also shows an ocean line which we will discuss in Section 5.2. The efficiency of loss for a given strength of impact shock can vary by tens of percent between different atmospheres (Figure 12I-L), purely as a function of the difference in the impedance match velocity between the ground and the atmosphere. The ability of giant impacts to remove volatiles from planets is therefore dictated, in part, by the surface conditions on the colliding bodies before the impact.
As a result of the dependence of ground velocity on atmospheric properties, care needs to be taken when combining \(u_{\rm G}\)-loss scalings (Section C) with the results of 3D impact simulations and when applying the results of 3D impact loss simulations to accretion models. The peak ground motion in a 3D impact simulation will be dictated by the pressure of the atmosphere, or the lack of atmosphere, used in that simulation. When combining 1D simulations with the ground velocity from 3D impact simulations (see e.g., Kegerreis et al., 2019) to calculate the loss expected from a planet with a different atmosphere than that used in the 3D simulation, the peak ground motion must be corrected to that dictated by the impedance match with the alternative atmosphere. Furthermore, results of directly calculated 3D loss simulations from a planet with an atmosphere of one composition, pressure, and temperature cannot necessarily be applied to loss from a planet with a different atmosphere without significant error. For example, the difference in the loss efficiency curves shown in Figure 12 H and L may be at least partly responsible for the difference in atmospheric loss between different mass atmospheres observed by Kegerreis et al. (2019) (e.g., their Figure 7). The dependence of ground velocity on atmospheric properties complicates efforts to develop universal scaling laws for atmospheric loss.
### 5.2 The ocean case
Figure 14A shows the impedance match velocity between the ocean and the atmosphere for a given strength of shock in the planet. The impedance match velocities are much larger than that between the ground and atmosphere in the no-ocean case (Figure 12A-D), explaining the significant increase of loss achieved in the high-loss regime (Figure 10). However, as we discussed in Section 4.2, the loss efficiency can be significantly perturbed from that expected from an impedance-match calculation.
When there is an ocean on the pre-impact body, the ground velocity is set, not by release of the ground to the atmosphere, but by an impedance match between the rock surface and the bottom of the ocean. The
darker blue line in Figure 12A-D shows the ground velocity upon release to the ocean as a function of the shock particle velocity in the planet before breakout. Note that, although the Hugoniot depends on the initial pressure/density of the material, over the range of ocean depths and atmospheric pressures considered in this paper, the forsterite and water Hugoniots are similar and there is little variation in the ground velocity upon release. The ground velocity in the ocean case can be tens of percent lower than the ground velocity in the no-ocean case for the same impact (Figure 12I-L).
The reduced ground velocity in the ocean case can lead to a previously unrecognized phenomenon: the presence of an ocean can actually reduce the efficiency of loss (cf. Genda & Abe, 2005). In the large atmosphere/ocean mass-ratio, low-loss regime, the loss efficiency is the same as the loss in the no-ocean case for the same ground velocity. However, the ground velocity in the ocean case, for a given impact, is lower than in the no-ocean case. Therefore, the amount of atmospheric loss in a given impact will be lower if an ocean is present than if it were absent. Figure 14B shows the fraction of atmosphere lost due to a given particle velocity depending on the ocean to atmosphere mass ratio. The black dashed line shows the equivalent relationship for low-pressure atmospheres in the absence of an ocean. The atmosphere to ocean mass ratio needs to be sufficiently small in order for the loss efficiency in the ocean case to exceed that in the no-ocean case. Figure 14C shows the critical mass ratio required for loss in the ocean case to equal that in the no-ocean case, assuming release of the ground to very low pressure (\(\sim 1\) Pa). The mass of the ocean must be at least comparable to the mass of the atmosphere in order for the presence of an ocean to enhance loss over that in the no-ocean case. The critical mass ratio is lower for higher pressure atmospheres as the release velocity is comparatively lower in the no-ocean case.
## 6 Non-ballistic boundary conditions
As in previous work (Genda & Abe, 2003, 2005), we have not directly simulated the shock in the planet. Instead, in most of our simulations, we have modeled the breakout of the shock from the planet to the atmosphere/ocean by prescribing the motion of the ground, assuming that the ground reaches its peak velocity instantaneously and then evolving ballistically. However, there are multiple processes that could perturb the motion of the ground from this ideal that we must consider.
Firstly, the acceleration of the ground due to the release of the shock to the atmosphere/ocean will occur over some finite rise time. That is, there is a finite rise time over which any point on the surface of the planet accelerates from stationary to the impedance-match velocity. Experimental studies have found that rise times for shocks in silicates are on the order of \(10^{-7}\) s (e.g., Grady et al., 1987). However, rise times for the shocks from nuclear explosions have been found to be much longer, up to several times \(10^{-2}\) s (Melosh, 2003). Genda & Abe (2003) investigated the effect of rise time on loss in the no-ocean case and found that rise times less than \(\sim 1\) s had only minimal effect on loss for a ground velocity of 5.6 km s\({}^{-1}\) on an Earth-mass planet with an atmosphere similar to the present-day Earth (\(m_{\rm a}=29\) g mol\({}^{-1}\), \(\gamma=1.4\)). To further examine the effect of rise time on loss, we ran simulations in which the boundary accelerated linearly over a given rise time, \(t_{\rm rise}\) (Appendix A.3). We confirm the result of Genda & Abe (2003) in the no-ocean case, but find there can still be a substantial decrease in loss at higher ground velocities for rise times \(>\sim 0.1\) s. The rise times required to affect loss in H\({}_{2}\) atmospheres are much longer (\(>\sim\)10 s),
Figure 12: The initial velocity of the ground or ocean surface depends on the properties of the atmosphere. A-C: Release velocity of the ground, as determined by an impedance match calculation, as a function of the particle velocity of the shock in the planet before release for atmospheres with different pressures (A, colors), compositions (B, line styles), and temperatures (C, line thicknesses). The particle velocity of the ground on release to very low pressures (1 Pa: black dashed line), and the impedance match velocity for release of the ground into a water ocean (deep blue line) are also shown. The surface was modelled as forsterite (Stewart et al., 2019), the atmospheres as ideal gases, and the ocean using the Senft & Stewart (2008) EOS. The sudden increase in the slope of the low-pressure release line is due to the onset of vaporization. D: As A-C but for atmospheres modelled using the H\({}_{2}\)-He EOS of Hubbard & MacFarlane (1980) with the initial pressures and temperatures (500 K) as used in the 3D loss simulations of Kegerreis et al. (2019). Note that the implementation of the Hubbard & MacFarlane (1980) EOS in Kegerreis et al. (2019) had an inconsistency in the calculation of internal energy which we have corrected here (see Section 3). E-H: The fraction of atmosphere lost due to the breakout of a shock as a function of particle velocity in the planet before release, for the atmospheres shown in A-D. Calculations are for loss from an Earth-mass planet. Loss was calculated by inputting the calculated impedance match velocity for each atmosphere and shock strength into our parameterization for loss in the no-ocean case (Section C). The loss that could be driven by release of the ground to very low pressures (1 Pa) is shown as a black dashed line. I-L: The fractional difference between the loss calculated for the atmospheres shown in E-H and the loss calculated assuming that the ground released to very low pressures (1 Pa, a theoretical upper limit on loss, black dashed line in A-D). For low-pressure, high-temperature, and lighter atmospheres the loss is close to maximal, but loss of heavier and higher-pressure atmospheres can be tens of percent less.
likely due to the larger scale height of the atmosphere and hence longer timescale for evolution. Regardless, we concur with Genda & Abe (2003) that the rise times typical of shocks are too short to significantly impact the efficiency of loss. Similarly, we find that in cases with an ocean, rise times typical of shocks have minimal effect on loss (Figure 15A).
The key limitation of our approach is that we prescribe that the maximum velocity of the ground is reached instantaneously and that any reduction in pressure at the base of the atmosphere/ocean below the impedance-match pressure does not lead to any additional acceleration of the ground. In effect, we are assuming that the release curve of the rock is vertical in \(u_{\rm p}\)-\(p\) space when the impedance match is reached (similar to the \(u_{\rm p}=1\) km s\({}^{-1}\) curve in Figure 13). This is a very good assumption for low pressure atmospheres and relatively weak shocks in the absence of an ocean, but for higher pressure atmospheres and stronger shocks, or for release into an ocean, the release curve of the rock intersects the air or water Hugoniot while it still has a significant slope in \(u_{\rm p}\)-\(p\) space (see e.g., the 5 km s\({}^{-1}\) example in Figure 13A, B). Therefore, when the base of the atmosphere/ocean decompresses as it expands outwards, the decrease in pressure at the base of the atmosphere/ocean will cause the ground to accelerate to velocities beyond the initial impedance match. The increase in ground velocity is particularly large for very strong shocks when the ground can vaporize upon release. This later acceleration has the potential to increase the efficiency of loss.
Direct simulation of the planet's surface, or parameterization of the ground motion, is beyond the scope of this paper. However, we can examine the potential effect on atmospheric loss of continued acceleration of the ground by calculating the anticipated velocity of the ground based on the forsterite release curve and the base ocean/atmosphere pressure from our simulations, and comparing this to the results of our simulations with finite rise times. It is important to note that our finite
Figure 13: The initial velocity of the surface that is driving atmospheric loss depends on the properties of the atmosphere. A: Impedance match solution in the no-ocean case. Solid or dotted lines show the Hugoniots (the locus of physical states produced by shock waves in a material) for forsterite (black, a proxy for the planet’s surface) and atmospheres of varying composition and initial pressure. Black dashed lines show the path for isentropic release of the forsterite shocked to particle velocities of 1, 3, and 5 km s\({}^{-1}\). The pressure and velocity of both the planet’s surface and the atmosphere upon breakout of the shock are set by the intersection of the forsterite release curve and the Hugoniot of the relevant atmosphere (black symbols). The impedance match ground velocity is higher for lower pressure, lower molecular weight, and hotter atmospheres (Figure 12A-C). B: Impedance match solution in the presence of an ocean. Lines and symbols are the same as in A with the addition of the shock Hugoniot of water (blue solid line), isentropic release curves for water (blue dashed lines), and blue symbols marking the impedance match solutions for the ocean with different atmospheres. The release curve of water is shallower than that of forsterite and intersects the atmosphere Hugoniots at higher particle velocities than the rock release curve. Similarly to the no-ocean case, lower pressure and lighter atmospheres lead to larger initial velocities of the ocean surface (Figure 14A).
rise rise-time simulations are not capturing the full conditions of continued acceleration of the boundary, as in the finite rise-time simulations the boundary is accelerating at a constant rate into an initially hydrostatic atmosphere/ocean rather than an atmosphere/ocean that has already been significantly perturbed, and accelerated, by the initial shock. The finite rise-time simulations are therefore likely to overestimate the loss for the same timing of late acceleration of the ground.
We find that the effect of rise time is intrinsically linked to the decompression of the base of the ocean/atmosphere, and hence the anticipated acceleration of the ground. Accelerations of the ground coincident with or after the release of the base of the ocean/atmosphere typically lead to much less additional loss than accelerations that occur earlier. This is likely due to the fact that any additional pressure/shock waves driven by the accelerating boundary do not travel as quickly in the decompressed ocean/atmosphere and so do not accelerate as quickly up the much shallower post-release pressure gradient in the atmosphere. It is therefore likely that additional acceleration of the boundary upon release would have only a modest effect on the efficiency of atmospheric loss.
The pressure evolution of the bottom of the ocean/atmosphere can vary significantly between different surface conditions and shock strengths, and including the effects of continued acceleration of the boundary
Figure 14: Caption opposite.
will likely add significant complexity to the parameterization of atmospheric and ocean loss. We intend to explore these effects in future work. In the meantime, loss from a given impact can be bounded by convolving our parameterization of loss with both the impedance match velocity and the ground velocity upon release to low pressure.
Finally, the ballistic assumption could be violated if the ground slows earlier than ballistic motion would dictate. This is expected due to release of the shock in the planet by release waves from elsewhere on the surface. Figure 15B shows loss as a function of ground velocity for an example case, where the motion of the ground has been stopped at different times after the start of the simulation (colors). In the regime of atmospheric loss, stopping the boundary after the initial release of the shock (\(10^{-2}\)-\(10^{2}\) s for the range of ocean depths considered here) causes very little change in loss efficiency, even though it can take tens of seconds for the fraction of the atmosphere that is lost to be accelerated to escape. This is because the acceleration of the atmosphere is supported by the expansion of the ocean. The passage time of the shock across the ocean in the example case shown in Figure 15B is \(<1\) s. If the ocean begins to expand before the boundary is stopped, and before the shock in the ocean is no longer supported by the ground, the expanding ocean continues to support the shock in the atmosphere until much later in evolution. In the ocean loss regime, stopping the boundary later in time (up to a few hundred seconds) can still have a significant effect on loss, with the effect greatest at the highest ground velocities (Figure 15B), despite the lost ocean fraction reaching escape within a few seconds. When the boundary slows, release waves propagate upwards, slowing the escaping ocean and reducing the fraction that is lost. In giant impacts, the time between the breakout of the shock and release of the impact shock varies across the planet. When determining the loss from 3D impact simulations using the results of 1D simulations, it is important to account for the fact that loss of ocean will be less efficient from areas where the shock in the planet is released in less than a few hundred seconds. Such points are, however, likely near the impact site where a lot of additional processes are occurring, and the 1D approximation is potentially invalid regardless.
## 7 Discussion
We now discuss the implications of our results for our understanding of planetary volatile and surface evolution (Section 7.1), and the potential for reproducing the observed fractionation of different volatile elements due to preferential loss of atmosphere from giant impacts (Section 7.2). We then discuss the potential impact of Rayleigh-Taylor instabilities on the efficiency of loss (Section 7.3), and considerations for when combining
Figure 15: Non-ballistic motion of the ground can affect the efficiency of loss. Shown is loss as a function of maximum ground velocity with different rise times (A) and stopping times (B) for the ground motion. Loss was calculated for an Earth-mass planet with a 3 km ocean and an atmospheric pressure of 100 bar. For reference, the instantaneous rise time solution is shown in black. Axis labels are the same as in Figure 6.
our 1D loss results with 3D impact simulations (Section 7.4).
### Sensitivity of loss to volatile budgets and pre-impact surface conditions
We have shown that the surface conditions on colliding bodies before giant impacts can have a strong influence on the efficiency of atmospheric and ocean loss. In the no-ocean case, hotter, lower pressure, and lower molecular weight atmospheres are more easily lost due to the effect of atmospheric properties on the impedance-match velocity between the ground and atmosphere. If terrestrial planets grow to a large enough size while the nebula is still present, they can accrete a primary H\({}_{2}\)-He atmosphere (e.g., Ikoma & Genda, 2006) as inferred in the case of many exoplanets (e.g., Jontof-Hutter, 2019). After the nebula dissipates, such a light atmosphere would be comparatively susceptible to loss by giant impacts, in addition to the more generally considered thermal or stellar-radiation-driven loss (e.g., Lammer et al., 2014). Over time, the atmospheres of planets can become dominated by heavier molecular weight species produced by degassing of accreted solids and from the planet's interior. These heavier molecular weight atmospheres, often called secondary atmospheres, would be less susceptible to loss by giant impacts compared to H\({}_{2}\)-He atmospheres. For atmospheres of all compositions, depending on the efficiency of volatile accretion, repeat impact events could reduce the atmospheric pressure, providing a positive feedback for increasingly efficient loss through accretion. The ability of giant impacts to remove atmospheric volatiles thus can evolve substantially during planet formation as the atmospheres of planets change composition, mass, and oxidation state.
Towards the end of planet formation, the cadence of giant impacts and the flux of planetesimal accretion in our solar system are both low enough to allow significant cooling between giant impacts. Thus, water can condense on proto-planets, given a suitable oxidation state, in the time between giant impacts (Abe & Matsui, 1988). This may also be the case in many exosystems, but will depend strongly on the timescale of accretion. Therefore, giant impacts that occur later in accretion are likely to be between planets with pre-impact oceans. We have shown that, over the range of atmospheres we considered, loss from planets with oceans is only weakly dependent on the atmospheric composition, temperature and pressure, and is largely dominated by the atmospheric to ocean mass ratio (as proposed by Genda & Abe, 2005). We find a relatively rapid transition between a high and a low loss regime, over one to two orders of magnitude in the atmosphere to ocean mass ratio. Contrary to previous thinking, the presence of an ocean can reduce the efficiency of loss in the low-loss regime. Condensation of an ocean could then actually protect the surface volatile inventory from loss by giant impacts. However, if the ocean is sufficiently massive, with the critical mass being typically within a factor of a few of the atmospheric mass, the presence of an ocean can significantly increase the efficiency of atmospheric loss, in agreement with Genda & Abe (2005). Therefore, a critical factor in understanding the evolution of planetary atmospheres during accretion is determining whether they are in the high or low loss regimes.
In our solar system, the compositions of chondritic meteorites are often used as proxies for the compositions of the building blocks of Earth. For most chondrites, if their full budget of H, C and N were converted into a CO\({}_{2}\) and N\({}_{2}\) atmosphere over a pure H\({}_{2}\)O ocean, the resulting atmosphere to ocean mass ratio would be of order 10\({}^{0}\)-10\({}^{1}\) (see Table 1). For all but the highest ground velocities, such an atmosphere and ocean would be at the edge of the low-loss efficiency regime and only high energy impacts would drive substantial atmospheric loss (Kegerreis et al., 2020, 2020). However, since loss of ocean is less efficient than loss of atmosphere (Section 7.2), there is a positive feedback where loss events push the atmosphere to ocean mass ratio lower, increasing the susceptibility of the remaining atmosphere to loss in subsequent giant impacts. The rapid transition in the efficiency of loss with decreasing atmosphere to ocean mass ratio could result in planets reaching a tipping point in volatile evolution once their atmosphere to ocean mass ratio becomes sufficiently small to be in the high-loss regime. Such an effect relies on a planet having a sufficient number of giant impacts with a relative timing that allows an ocean to condense between impacts. The strong sensitivity to the order and timing of giant impacts could result in planets with similar accretionary histories obtaining substantially different final volatile budgets.
Beyond just the total volatile budget, critical to the loss or retention of volatile elements is the partitioning of volatiles between reservoirs in forming planets. As an example, the present-day Earth has an atmosphere to ocean mass ratio of \(\sim 3.6\times 10^{-3}\), firmly in the high loss efficiency regime. However, if all the H, C and N in the present-day atmosphere, ocean and sedimentary rocks (exosphere) were present as a CO\({}_{2}\) and N\({}_{2}\) atmosphere and a H\({}_{2}\)O ocean, the atmosphere to ocean mass ratio would be \(\sim\)0.21 (Table 1), which is in the transition between the high and low loss regimes. Similar calculations using two different estimates for the volatile content of the entire BSE give atmosphere to ocean mass ratios of
\(0.7\pm 0.6\) (Marty, 2012) and \(0.3\pm 0.15\) (Halliday, 2013). The oxidation state of a planet's surface also plays a role in dictating the efficiency of atmospheric loss. For example, if CO were the dominant carbon species in the atmospheres calculated above, the atmospheric mass would be \(\sim 1/3\) lower (right column of Table 1).
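For reference, the mass ratios in Table 1 follow from simple stoichiometry: the C and N budgets are converted to a CO\({}_{2}\) (or CO) and N\({}_{2}\) atmosphere and the H budget to an H\({}_{2}\)O ocean. A minimal sketch of that arithmetic, with illustrative (not compiled) elemental mass fractions, is:

```python
def mass_ratio(m_H, m_C, m_N, reduced=False):
    """Atmosphere-to-ocean mass ratio R, assuming all C and N form a CO2
    (or CO, if reduced) and N2 atmosphere and all H forms an H2O ocean."""
    m_ocean = m_H * (18.015 / 2.016)                 # H2O built from the H budget
    carbon = (28.01 if reduced else 44.01) / 12.011  # CO or CO2 built from the C budget
    m_atm = m_C * carbon + m_N                       # N2 carries only the N mass
    return m_atm / m_ocean

# Illustrative (not compiled) elemental mass fractions for a hypothetical body
m_H, m_C, m_N = 1.1e-3, 3.5e-2, 1.5e-3
print(f"R (oxidised) = {mass_ratio(m_H, m_C, m_N):.2f}")
print(f"R (reduced)  = {mass_ratio(m_H, m_C, m_N, reduced=True):.2f}")
```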
How volatiles partition between reservoirs, and the oxidation state of the surface, in the period between giant impacts depends on a number of factors including: the separation of volatiles from the silicate during condensation of the silicate vapor produced in the giant impact (Stewart et al., 2018; Lock et al., 2020; Caracas & Stewart, 2023); degassing of the mantle during magma ocean solidification (see e.g.; Elkins-Tanton, 2012; Bower et al., 2019); dissolution of carbon in the ocean and potentially precipitation and storage of abiogenic (or even potentially biogenic) carbonates; weathering of crust transferring volatiles into the crust or sediments; and the thermal state of the atmosphere and surface (e.g., Zahnle et al., 2010). Given the difficulty in understanding each of these processes, and the interactions between them, it is likely not possible to precisely determine the mass and composition of ocean and atmosphere before each impact. We therefore advocate taking a statistical approach to exploring the effect of giant impacts on the volatile budgets of terrestrial planets, making use of the high and low-loss efficiency regimes described here to bound the evolution of planetary volatile budgets.
### Fractionation of hydrophilic and atmophile species
One of the main outstanding questions in regard to Earth's chemistry is that the C/N and H/N ratios of the BSE are significantly fractionated from those of known chondrites (Halliday, 2013). It has been proposed that preferential loss of atmosphere to ocean in impacts could fractionate N and C from H (Tucker & Mukhopadhyay, 2014). If C was dissolved in the ocean or stored in the crust or sediments, more C could be retained, leading to N/C fractionation.
Our results confirm that giant impacts drive significantly more loss of atmospheric species than loss of water. In the high-loss efficiency regime, total atmospheric loss can be achieved from sections of the colliding bodies where ground motions reach only \(\sim 0.35\)\(v_{\rm esc}\), for an Earth-mass planet, whereas substantial ocean loss requires ground velocities that are a large fraction of \(v_{\rm esc}\). The amount of ocean loss can also be reduced by slowing of the ground by release waves within the planet, a process which atmospheric loss is relatively insensitive to (Section 6). If the proto-Earth, or the planetary embryos that accreted to form it, underwent a giant impact when an ocean was present, there is significant potential for H/N and C/N fractionation. Such an effect would be particularly strong if the ratio of atmospheric to ocean mass was low enough to be in the high-loss efficiency regime. Smaller impacts can also lead to significant atmospheric loss (Schlichting et al., 2015), but the degree of ocean/atmosphere fractionation in such events has not been determined. If loss by giant impacts is required to explain the Earth's H/N and C/N ratios, this would imply that Earth experienced, potentially multiple, giant impacts late in formation when it had an ocean and potentially a low enough atmospheric to ocean mass ratio to make atmospheric loss efficient.
### Potential for Rayleigh-Taylor instabilities
Rayleigh-Taylor instabilities develop when a fluid attempts to accelerate (i.e., push) another fluid of greater density than itself. The growth of the instability disrupts the material interface and significantly reduces the acceleration of the denser fluid due to the forcing of the lighter fluid. If such instabilities were to develop at the ground/ocean, ground/atmosphere, or ocean/atmosphere boundaries during the process of atmospheric loss the efficiency of loss could be significantly
\begin{table}
\begin{tabular}{l c c}
Reservoir & \(\mathcal{R}\) (oxidised) & \(\mathcal{R}\) (reduced) \\
\hline
Earth’s surface today & 3.6\(\times 10^{-3}\) & - \\
Earth’s exosphere & 0.21 & 0.13 \\
BSE (Marty, 2012) & 0.7\(\pm 0.5\) & 0.4\(\pm 0.5\) \\
BSE (Halliday, 2013) & 0.3\(\pm 0.1\) & 0.2\(\pm 0.1\) \\
CC (Marty, 2012) & 1.1\(\pm 0.3\) & 0.7\(\pm 0.3\) \\
CI & 8.85 & 5.71 \\
H & 12.8 & 8.24 \\
L & 17.2 & 11.0 \\
LL & 13.6 & 8.75 \\
EH & 15.9 & 10.4 \\
\end{tabular}
\end{table}
Table 1: The likely volatile budgets of accreting protoplanets span from the high to the low loss regimes. Presented are the atmosphere to ocean mass ratios for the Earth today, and different reservoirs or meteorite groups assuming that all the carbon and nitrogen were partitioned into the atmosphere and the hydrogen was in the ocean. Estimates assuming an N\({}_{2}\) and CO\({}_{2}\) (oxidized) or N\({}_{2}\) and CO (reduced) atmosphere are given in different columns. Estimate of Earth’s exosphere volatile budget from Marty (2012). Bulk silicate Earth (BSE) estimates from Marty (2012) and Halliday (2013). Average carbonaceous chondrite (CC) estimated by Marty (2012) with data from Kerridge (1985) and Robert (2003). CI values based on Orgueil (Lodders, 2003). Average composition of ordinary chondrites (H, L, LL) taken from compilation by Schaefer & Fegley (2007). Enstatite (EH) chondrite values based on Indarch with data from Wiik (1956), Moore & Lewis (1966), Moore & Gibson (1969), and Grady et al. (1986) as compiled by Schaefer & Fegley (2017).
reduced. Our 1D simulations cannot capture instabilities and so it is necessary for us to compare the density of the different layers during our simulations to determine the likelihood for Rayleigh-Taylor instabilities.
The ground/ocean boundary is not at risk of experiencing Rayleigh-Taylor instabilities over the range of parameters considered in this work. As discussed in Section 6, we do not directly model the planet in our simulations, but we can calculate the expected density of the ground by using the pressure at the base of the ocean/atmosphere from our calculations combined with the calculated release curve of forsterite (Figure 14). We find that the density of the ground is never lower than the base of the ocean, even in cases where the ground begins to vaporize, and so we would expect the ground/ocean boundary to remain stable.
Determining the stability of the ground/atmosphere and ocean/atmosphere boundaries is more difficult due to the limitations of the available EOS. High quality EOS for heavy gases such as N\({}_{2}\), CO\({}_{2}\), and air mixtures that cover the range of conditions considered here are not publicly available or easily attainable. In this work we therefore decided to use an ideal gas EOS for atmospheres. For lower pressure atmospheres (\(<\sim\) 150 bar for CO\({}_{2}\), \(\sim\) 4 kbar for H\({}_{2}\)) the density of the ideal gas Hugoniot is lower than the density of forsterite and water on the release curve at the impedance match point, even in cases where the ground/atmosphere begins to vaporize. However, for higher pressures the density of the ideal gas EOS exceeds the density of the water and even forsterite. This is likely due to the fact that the ideal gas EOS significantly overestimates the density of gases at high pressure as it neglects the volumes and interactions of particles. More sophisticated EOS for gases (Span and Wagner, 1996; Lemmon et al., 2000; Span et al., 2000) predict a much lower density for the Hugoniot of atmospheric gases, below that of the forsterite and water at the impedance match pressure. However, the maximum pressures covered by these EOS preclude using them to calculate shocks over much of the range considered here. It is therefore likely that the ocean/atmosphere and ground/atmosphere boundaries are indeed stable to Rayleigh-Taylor instabilities but wider-ranging, high-quality gas EOS are required to confirm this.
### Combination of 1D and 3D simulations
We have produced scaling laws for loss as a function of ground velocity (Section C) and scripts to calculate impedance match velocities (available through GitHub and Zenodo), which allow for linking of our results with ground velocity distributions calculated from 3D impact simulations (e.g., Kegerreis et al., 2019; Yalinewich and Schlichting, 2018). We hope that this will allow for investigations of the effect of surface conditions on global atmospheric loss which are currently not possible due to computational limitations or expense.
When combining our results with 3D simulations there are two particularly important factors to consider. First, it is important to correct the ground motion from the 3D simulation to the impedance match velocity for the atmosphere/ocean under consideration from that used in the 3D simulations. This can be done with the scripts and functions made available with this manuscript.
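Schematically, the correction maps the ground velocity reported by the 3D simulation back to the shock particle velocity in the planet, and then forward to the impedance-match velocity for the surface conditions of interest (cf. Figure 12A-D). A sketch of that two-step mapping, using placeholder interpolation tables rather than the functions distributed with this manuscript, is:

```python
import numpy as np

# Hypothetical release-velocity curves u_release(u_shock) for two sets of
# surface conditions: the one used in the 3D impact simulation and the
# atmosphere/ocean actually under consideration. In practice these come from
# impedance-match calculations, not from the placeholder scalings below.
u_shock_grid = np.linspace(0.0, 10.0, 101)            # km/s
u_release_in_3d_sim = 1.9 * u_shock_grid              # e.g., near-vacuum release
u_release_target = 1.5 * u_shock_grid                 # e.g., a 100 bar atmosphere

def correct_ground_velocity(u_ground_3d):
    """Map a ground velocity from a 3D simulation to the impedance-match
    velocity appropriate for a different atmosphere/ocean."""
    # Step 1: invert to the shock particle velocity in the planet.
    u_shock = np.interp(u_ground_3d, u_release_in_3d_sim, u_shock_grid)
    # Step 2: evaluate the release velocity for the target surface conditions.
    return np.interp(u_shock, u_shock_grid, u_release_target)

print(correct_ground_velocity(4.0))                   # corrected velocity, km/s
```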
Second, by considering a 1D system in our simulations, we have inherently assumed that the shock in the planet breaks out perpendicular to the ground. That is, that the ground velocity is always vertical relative to the local surface. In reality, the shock wave will be travelling at an angle relative to the surface. On breakout of the shock, the wave will be refracted depending on the impedance of the materials on either side of the boundary (Henderson, 1989). It is not clear what the best approach is to account for a non-perpendicular incidence angle, but we suggest that a correction to the local escape velocity due to the component of the shock particle velocity parallel to the surface may be adequate. However, different methods will need to be tested using full 3D simulations as ground truth before proceeding with hybrid 1D-3D models.
## 8 Conclusions
We have conducted a large number of hydrodynamic simulations and impedance-match calculations to explore the effect of surface conditions on the efficiency of atmospheric loss from terrestrial planets due to giant impacts. We have found that pre-impact surface conditions can have a significant effect on the efficiency of loss driven by the impact shock wave, in the far-field away from the impact site. The higher ground release velocities, as given by impedance-match solutions, for lower molecular weight, hotter, and lower pressure atmospheres mean that such atmospheres are more efficiently lost.
The presence of an ocean can also substantially influence atmospheric loss. In the ocean case the loss efficiency is largely dictated by the atmospheric to ocean mass ratio, with a relatively rapid transition between a low loss and a high loss regime as the mass ratio of atmosphere to ocean decreases. If the ocean is above a critical mass, typically within a factor of a few of the atmospheric mass, the presence of an ocean can significantly increase the efficiency of loss. However, contrary to previous thinking, the presence of an ocean can reduce the efficiency of loss if the ocean is not sufficiently massive.
The efficiency of loss due to giant impacts is therefore highly dependent on the surface conditions on the colliding bodies before the impact. As the surface conditions on planets evolve during accretion so will the potential for substantial atmospheric loss as a result of giant impacts. In particular, later in accretion there is sufficient time for oceans to condense between most impact events (Abe & Matsui, 1988) potentially allowing for much more efficient atmospheric loss. Preferential loss of atmosphere over ocean, coupled with the fact that loss efficiency increases with decreasing atmosphere to ocean mass ratio, could generate a positive feedback that accelerates atmospheric loss from planets that experience multiple late giant impact events.
To allow our 1D results to be combined with 3D giant impact simulations (e.g., Kegerreis et al., 2020) to calculate the total loss from specific impacts, we have developed a scaling law that relates ground velocity due to an impact, the escape velocity of the planet, and the ratio of atmosphere to ocean mass, to the efficiency of loss. Future work will use this approach to develop scaling laws that approximate loss as a function of both impact parameters and surface conditions. Such laws will allow atmospheric loss from giant impacts to be included directly in combined dynamical and chemical models of planet formation.
The authors acknowledge the late, great Jay Melosh for useful discussions and stimulating questions on the topic of atmospheric loss. SJL thanks Erik Asphaug and Hidenori Genda for help in replicating the results of Genda & Abe (2005) using the Tillotson EOS, and Matthew Roche for his feedback on earlier versions of the manuscript. We also thank Hidenori Genda and an anonymous reviewer for their encouraging and constructive comments which helped improve the manuscript. SJL acknowledges the support of a NASA Earth and Space Science Fellowship (grant NNX13AO67H), NSF (awards EAR-1947614 and EAR-1725349), the Earth and Planetary Science Department of Harvard University, the Division of Geological and Planetary Sciences of the California Institute of Technology, and the UK Natural Environment Research Council (grant NE/V014129/1). STS acknowledges support from NASA through awards 80NSSC18K0828 and NNX15AH54G. This work was carried out using the FASRC Odyssey cluster, supported by the FAS Division of Science Research Computing Group at Harvard University, and the computational facilities of the Advanced Computing Research Centre, University of Bristol.
Our processed results, scripts for reproducing the figures in this manuscript, and the modifications made to the WONDY hydrodynamics code are available on GitHub (sjl499/Lock_Stewart_2023_PSJ_atmospheric_loss) and archived on Zenodo (Lock, 2023). A Python package, _SIMPLES_ (Shock Impedance Match Package and Loss Event Simulator) that can calculate impedance match velocities for real materials and the resulting atmospheric loss is also available on GitHub (sjl499/simples) and archived on Zenodo (Lock, 2023).
## Appendix A Description of 1D Hydrocode
For calculating atmospheric and ocean loss in 1D we follow an approach similar to that of Genda & Abe (2003, 2005). We used a modified version of the WONDY hydrocode (Kipp & Lawrence, 1982) which solves the Lagrangian 1D mass, momentum and energy equations using a finite difference method with artificial viscosity to allow accurate modelling of shock waves by spreading the shock front over several zones. We modified the code by adding radial gravity and several different equation of state (EOS) options as well as allowing for hydrostatic initialisation of an atmosphere and ocean. The details of our adapted code are described in the following sections.
### Finite Difference Implementation of the Fundamental Fluid Dynamics Equations
The core of the code is the solution of the fundamental fluid dynamics equations in 1D. Conservation of momentum can be written as
\[\rho a=-\frac{\partial p}{\partial x}-\frac{\partial q}{\partial x}+\rho g\;,\] (A1)
where \(\rho\) is density, \(a\) is acceleration, \(p\) is pressure, \(x\) is the Lagrangian spatial coordinate, \(g\) is gravitational acceleration and \(q\) is the viscous stress. For our purposes \(q\) is simply the artificial viscosity. The acceleration in a Lagrangian reference frame is given by
\[a=\frac{\partial u}{\partial t}\;,\] (A2)
where \(u\) is the velocity,
\[u=\frac{\partial x}{\partial t}\;.\] (A3)
Conservation of energy is the balance between the rate of change of internal energy and the rate at which work is being done,
\[\rho\frac{\partial\epsilon}{\partial t}=(p+q)\frac{1}{\rho}\frac{\partial \rho}{\partial t}\;,\] (A4)
where \(\epsilon\) is the specific internal energy.
In WONDY, the conservation equations are solved using a finite difference method. The fluid is divided into Lagrangian zones (or cells) and the parameters of each cell are advanced in incremental time steps using the conservation equations. The bulk fluid properties (\(m\), \(\rho\), \(q\), \(p\), \(\epsilon\), \(c\), \(T\)) are defined at the center of each zone and the kinematic variables (\(a\), \(u\), \(x\)) are defined at the boundaries of the zones. \(c\) is the adiabatic sound speed, \(T\) is the temperature, \(m\) is the mass of a given Lagrangian zone and the other variables are described above. For our purposes all kinematic variables are defined in reference to the center of the planet. Linear interpolation is used to find values between these points, e.g., to find the bulk fluid properties at the edge of a zone. Zones are labelled in order with an index, \(j\), starting at 1 with the lower boundary referred to with the index 0. The notation we employ here, after Kipp & Lawrence (1982), for the fluid properties is \(\psi_{j}^{n}\), which denotes an arbitrary quantity, \(\psi\), at the \(j^{\text{th}}\) zone and the \(n^{\text{th}}\) time step. Integer values of \(j\) denote values of the quantity at the upper boundary of a zone whereas half-integer values indicate quantities defined at the center of the zone. Similarly, integer values of \(n\) indicate values known at that time step and half-integer values indicate quantities defined halfway between time steps. Bulk fluid properties are defined at \(\psi_{j-\frac{1}{2}}^{n}\) and the kinematic variables at \(\psi_{j}^{n}\) with the exception of \(u\) which is defined at \(u_{j}^{n+\frac{1}{2}}\). These choices of when and where to define variables are for numerical convenience, as will become evident below. The second order centred difference method is used to calculate both time and spatial derivatives of all quantities.
Using this notation the fundamental equations can be defined in finite difference form. The finite difference momentum equation is
\[\begin{split} a_{j}^{n}&=2\left[\frac{(p_{j-\frac{1 }{2}}^{n}+q_{j-\frac{1}{2}}^{n})-(p_{j+\frac{1}{2}}^{n}+q_{j+\frac{1}{2}}^{n}) }{\rho_{j+\frac{1}{2}}^{n}(x_{j+1}^{n}-x_{j}^{n})+\rho_{j-\frac{1}{2}}^{n}(x_{ j}^{n}-x_{j-1}^{n})}\right]\\ &\qquad-\frac{GM_{p}}{\left(x_{j}^{n}\right)^{2}}\;,\end{split}\] (A5)
where \(g_{j}^{n}\) has been replaced by a radial gravity profile for a planet of mass \(M_{p}\). \(G\) is the universal gravitational constant. Equation A5 can be used to calculate the acceleration at the \(n^{\text{th}}\) time step from quantities that are already known at that time step. The velocity at the next half time step can then be determined from the finite difference form of Equation A2:
\[u_{j}^{n+\frac{1}{2}}=u_{j}^{n-\frac{1}{2}}+\frac{1}{2}\left(t^{n+1}-t^{n-1} \right)a_{j}^{n}\;.\] (A6)
Similarly from Equation A3 the position of zone \(j\) at the next full time step is
\[x_{j}^{n+1}=x_{j}^{n}+\left(t^{n+1}-t^{n}\right)u_{j}^{n+\frac{1}{2}}\;.\] (A7)
Having used Equations A5, A6 and A7 to determine the kinematic variables for the next time step the code
recalculates the thermodynamic properties for the zone. Conservation of mass in spherical coordinates provides the updated density,
\[\rho_{j-\frac{1}{2}}^{n+1}=\frac{3m_{j-\frac{1}{2}}}{4\pi\left[\left(x_{j}^{n+1} \right)^{3}-\left(x_{j-1}^{n+1}\right)^{3}\right]}\;.\] (A8)
To reduce round off error the code actually implements the following, equivalent, expression,
\[\begin{split}\rho_{j-\frac{1}{2}}^{n+1}=&\left\{ \frac{1}{\rho_{j-\frac{1}{2}}^{n}}+\right.\\ &\left.\frac{4\pi\Delta t^{n+\frac{1}{2}}}{3m_{j-\frac{1}{2}}} \left[\xi_{j}^{n+\frac{1}{2}}u_{j}^{n+\frac{1}{2}}-\xi_{j-1}^{n+\frac{1}{2}}u_ {j-1}^{n+\frac{1}{2}}\right]\right\}^{-1}\;,\end{split}\] (A9)
where
\[\xi_{j}^{n+\frac{1}{2}}=\left(x_{j}^{n+1}\right)^{2}+x_{j}^{n+1}x_{j}^{n}+ \left(x_{j}^{n}\right)^{2}\;,\] (A10)
and
\[\Delta t^{n+\frac{1}{2}}=t^{n+1}-t^{n}\;.\] (A11)
The internal energy and pressure for the next time step are found by a combination of the energy equation and the EOS of the material. The finite difference form of the energy equation is
\[\begin{split}\epsilon_{j-\frac{1}{2}}^{n+1}&=\epsilon_{j-\frac{1}{2}}^{n}+\\ &2\left(p_{j-\frac{1}{2}}^{n+1}+p_{j-\frac{1}{2}}^{n}+2q_{j-\frac{1}{2}}^{n+\frac{1}{2}}\right)\frac{\left(\rho_{j-\frac{1}{2}}^{n+1}-\rho_{j-\frac{1}{2}}^{n}\right)}{\left(\rho_{j-\frac{1}{2}}^{n+1}+\rho_{j-\frac{1}{2}}^{n}\right)^{2}}\;.\end{split}\] (A12)
We will examine the solution of this equation for each of the EOS used in this study in Appendix A.2.
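To make the staggered-grid bookkeeping concrete, a stripped-down sketch of the kinematic and density updates for one cycle (Equations A5-A9, omitting artificial viscosity, the energy/EOS update, and the boundary treatment described below) might look as follows; the variable names are ours, not those of the WONDY source.

```python
import numpy as np

G = 6.674e-11   # gravitational constant (SI)

def kinematic_update(x, u, rho, p, q, m, M_p, dt_half, dt):
    """One kinematic/density update on the staggered Lagrangian grid
    (Equations A5-A9). x and u live on zone boundaries (length J+1);
    rho, p, q, and m live on zone centres (length J). The energy/EOS
    update, artificial viscosity, and boundary prescriptions are omitted."""
    J = len(rho)
    a = np.empty(J + 1)
    for j in range(1, J):                       # interior boundaries, Equation A5
        num = (p[j - 1] + q[j - 1]) - (p[j] + q[j])
        den = rho[j] * (x[j + 1] - x[j]) + rho[j - 1] * (x[j] - x[j - 1])
        a[j] = 2.0 * num / den - G * M_p / x[j] ** 2
    a[0] = -G * M_p / x[0] ** 2                 # ballistic lower boundary (A38)
    a[J] = -G * M_p / x[J] ** 2                 # crude top boundary for this sketch
    u_new = u + dt_half * a                     # A6, with dt_half = (t^{n+1}-t^{n-1})/2
    x_new = x + dt * u_new                      # A7, with dt = t^{n+1}-t^{n}
    # Conservation of mass in spherical shells (A8)
    rho_new = 3.0 * m / (4.0 * np.pi * (x_new[1:] ** 3 - x_new[:-1] ** 3))
    return x_new, u_new, rho_new
```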
Resolving shocks in numerical codes is challenging due to the very rapid change in material properties and velocity across the shock front. The natural viscosity of the fluids we consider is low, leading to very thin shock fronts that cannot be feasibly resolved at planetary scales. Artificial viscosity is used in WONDY to smear the shock front over several zones and avoid discontinuities. WONDY includes both a quadratic viscosity,
\[q_{1}=\rho b_{1}^{2}\left(\frac{1}{\rho}\frac{\partial\rho}{\partial t} \right)^{2}\;,\] (A13)
and a linear viscosity,
\[q_{2}=b_{2}c\left(\frac{\partial\rho}{\partial t}\right)\;,\] (A14)
with the total viscosity given as a simple sum of \(q_{1}\) and \(q_{2}\) (i.e., \(q=q_{1}+q_{2}\)). \(b_{1}\) and \(b_{2}\) are parameters that control the strength of the artificial viscosity. To make the artificial viscosity insensitive to the absolute spatial scales, \(b_{i}\) are scaled to the size of the zone,
\[b_{i}=B_{i}\Delta x\;,\] (A15)
so the true constants, \(B_{i}\), are approximately equal to the number of zones that any disturbance will be smoothed over. The values for all constants used in the code are given in Table 4 and the sensitivity of our results to each parameter is examined in Appendix B.1.
\(q_{1}\) is large only when the rate of change is large (i.e. in the shock front) and small elsewhere so it is used mainly to control gradients in the shock front and avoid discontinuities. \(q_{2}\) is more effective at controlling spurious oscillations elsewhere where \(q_{1}\) is negligible. Both forms of the viscosity are only used when the material is in compression, i.e. where
\[\frac{\partial\rho}{\partial t}>0\;.\] (A16)
The finite difference form of the combined viscosity is
\[\begin{split}q_{j-\frac{1}{2}}^{n+\frac{1}{2}}&=\frac{\rho_{j-\frac{1}{2}}^{n+1}+\rho_{j-\frac{1}{2}}^{n}}{2}\\ &\quad\left[B_{2}\left(\frac{x_{j}^{n+1}-x_{j-1}^{n+1}+x_{j}^{n}-x_{j-1}^{n}}{2}\right)c_{j-\frac{1}{2}}^{n}\left(\frac{1}{\rho}\frac{\partial\rho}{\partial t}\right)\right.\\ &\quad\left.+B_{1}^{2}\left(\frac{x_{j}^{n+1}-x_{j-1}^{n+1}+x_{j}^{n}-x_{j-1}^{n}}{2}\right)^{2}\left(\frac{1}{\rho}\frac{\partial\rho}{\partial t}\right)\left|\frac{1}{\rho}\frac{\partial\rho}{\partial t}\right|\right]\;,\end{split}\] (A17)
where
\[\frac{1}{\rho}\frac{\partial\rho}{\partial t}=\frac{2\left(\rho_{j-\frac{1}{2} }^{n+1}-\rho_{j-\frac{1}{2}}^{n}\right)}{\Delta t^{n+\frac{1}{2}}\left(\rho_{j -\frac{1}{2}}^{n+1}+\rho_{j-\frac{1}{2}}^{n}\right)}\;,\] (A18)
and
\[\Delta t^{n+\frac{1}{2}}=t^{n+1}-t^{n}\;.\] (A19)
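A direct transcription of the combined artificial viscosity for a single zone (Equations A16-A18), applied only in compression, could be sketched as:

```python
def artificial_viscosity(rho_new, rho_old, x_new, x_old, c, dt, B1, B2):
    """Combined linear + quadratic artificial viscosity for a single zone
    (Equations A16-A18). x_new and x_old are the (lower, upper) boundary
    positions of the zone at the new and old time steps. Returns zero
    when the zone is not in compression."""
    if rho_new <= rho_old:                      # compression only (A16)
        return 0.0
    # (1/rho) d(rho)/dt evaluated as in Equation A18
    strain_rate = 2.0 * (rho_new - rho_old) / (dt * (rho_new + rho_old))
    dx = 0.5 * ((x_new[1] - x_new[0]) + (x_old[1] - x_old[0]))   # mean zone width
    rho_bar = 0.5 * (rho_new + rho_old)
    q_linear = B2 * dx * c * strain_rate
    q_quadratic = B1 ** 2 * dx ** 2 * strain_rate * abs(strain_rate)
    return rho_bar * (q_linear + q_quadratic)
```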
The time step is chosen to ensure numerical stability. The criterion for linear stability of the difference equations used in the WONDY code (Hicks, 1978) is
\[\begin{split}\Delta t&<\Delta x\left[B_{2}c+2B_{1}^{2}\left|\frac{\dot{\rho}}{\rho}\right|\Delta x\right.\\ &\left.+\sqrt{\left(B_{2}c+2B_{1}^{2}\left|\frac{\dot{\rho}}{\rho}\right|\Delta x\right)^{2}+c^{2}}\,\right]^{-1}\;.\end{split}\] (A20)
At the end of each cycle, the value of the right hand side of Equation A20 is evaluated for each cell to determine
the maximum time step that can be used to ensure stability. Since the criterion in Equation A20 is based on a linear stability analysis, and is then assumed to apply generally, the critical value for each cell is scaled by a constant, \(K_{\rm t_{1}}\), to account for any non-linear effects and ensure stability, such that
\[\begin{split}&\Delta t_{j-\frac{1}{2}}^{n+\frac{3}{2}}=K_{\rm t_{1}} \Delta x_{j-\frac{1}{2}}^{n+1}\left[B_{2}c_{j-\frac{1}{2}}^{n+1}+2B_{1}^{2} \left|\frac{\dot{\rho}}{\rho}\right|\Delta x_{j-\frac{1}{2}}^{n+1}\right.\\ &\left.+\sqrt{\left(B_{2}c_{j-\frac{1}{2}}^{n+1}+2B_{1}^{2}\left| \frac{\dot{\rho}}{\rho}\right|\Delta x_{j-\frac{1}{2}}^{n+1}\right)^{2}+\left( c_{j-\frac{1}{2}}^{n+1}\right)^{2}}\,\right]^{-1}\,.\end{split}\] (A21)
The next time step is then set as
\[\Delta t^{n+\frac{3}{2}}=\min\left(\Delta t_{j-\frac{1}{2}}^{n+\frac{3}{2}},K_ {\rm t_{2}}\Delta t^{n+\frac{1}{2}}\right)\,,\] (A22)
to ensure stability and to limit the increase in the time step to a factor \(K_{\rm t_{2}}\).
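The corresponding time-step control (Equations A21 and A22) amounts to the following; the \(K_{\rm t}\) values shown are placeholders rather than the constants of Table 4.

```python
import numpy as np

def next_time_step(dx, c, strain_rate, dt_prev, B1, B2, K_t1=0.5, K_t2=1.2):
    """Stable time step from the per-zone criterion of Equation A21, limited
    to grow by at most a factor K_t2 per cycle (Equation A22). dx, c, and
    strain_rate (= |rho_dot / rho|) are per-zone arrays; the K values here
    are placeholders, not the constants of Table 4."""
    visc = B2 * c + 2.0 * B1 ** 2 * np.abs(strain_rate) * dx
    dt_zone = K_t1 * dx / (visc + np.sqrt(visc ** 2 + c ** 2))
    return min(dt_zone.min(), K_t2 * dt_prev)
```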
Although we will discuss the code in dimensional parameters here, these equations are implemented in the code in a dimensionless form to reduce numerical errors. All the parameters, with the exception of temperature, are non-dimensionalised using combinations of: the ground velocity, \(u_{\rm G}\); the radius of the planet, \(R_{\rm p}\); and \(p_{0}\).
Note that the gravity field in the 1D calculation is assumed to be fixed and radial. Since the time for loss is small (\(\sim 100\) s) compared to the timescale of the impact, any change to the field over this period is minimal, but the field could still be distorted by the deformation of the target and the presence of the impactor. As long as the gravity field does not vary significantly from a \(1/r^{2}\) dependence, a local escape velocity in the direction of the ground motion should be sufficient to scale our results to calculate the loss in 3D impact simulations (Kegerreis et al., 2020, 2020).
### Implementation of Equations of State
In this work we have used a variety of equations of state, in particular for water. The EOS dictates the method used to solve the finite difference form of the energy equation (Equation A12). We will now outline the equations of state used and their implementation.
#### a.2.1 Ideal Gas Equation of State
The ideal gas EOS is the simplest EOS for gases and its implementation is straightforward. The EOS can be written in the form
\[p=(\gamma-1)\rho\epsilon\;,\] (A23)
where \(\gamma\) is the ratio \(c_{\rm p}/c_{\rm V}\), and \(c_{\rm p}\) and \(c_{\rm V}\) are the specific heat capacities at constant pressure and volume respectively. This linear relation between pressure and energy allows for an analytical solution to the energy equation,
\[\begin{split}\epsilon_{j-\frac{1}{2}}^{n+1}&=\left[\epsilon_{j-\frac{1}{2}}^{n}+2\left(p_{j-\frac{1}{2}}^{n}+2q_{j-\frac{1}{2}}^{n+\frac{1}{2}}\right)\frac{\left(\rho_{j-\frac{1}{2}}^{n+1}-\rho_{j-\frac{1}{2}}^{n}\right)}{\left(\rho_{j-\frac{1}{2}}^{n+1}+\rho_{j-\frac{1}{2}}^{n}\right)^{2}}\right]\\ &\quad\times\left[1-2(\gamma-1)\rho_{j-\frac{1}{2}}^{n+1}\frac{\left(\rho_{j-\frac{1}{2}}^{n+1}-\rho_{j-\frac{1}{2}}^{n}\right)}{\left(\rho_{j-\frac{1}{2}}^{n+1}+\rho_{j-\frac{1}{2}}^{n}\right)^{2}}\right]^{-1}\,,\end{split}\] (A24)
which exclusively uses quantities that are known from the previous time step. The pressure at the next time step is given by
\[p_{j-\frac{1}{2}}^{n+1}=(\gamma-1)\rho_{j-\frac{1}{2}}^{n+1}\epsilon_{j-\frac {1}{2}}^{n+1}\;.\] (A25)
The only parameter that needs to be specified for an ideal gas EOS is \(\gamma\), although other parameters are needed to initialise the atmosphere (see Appendix A.4).
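For concreteness, the analytic ideal-gas update (our reading of Equations A24 and A25, with the work term written as in Equation A12) can be sketched as follows, together with a quick adiabatic-compression check:

```python
def ideal_gas_energy_update(eps_old, p_old, q_half, rho_new, rho_old, gamma):
    """Analytic energy and pressure update for an ideal-gas zone, obtained by
    substituting p = (gamma - 1) * rho * eps into the discretized energy
    equation (our reading of Equations A12 and A24)."""
    work = 2.0 * (rho_new - rho_old) / (rho_new + rho_old) ** 2
    eps_new = (eps_old + (p_old + 2.0 * q_half) * work) / \
              (1.0 - (gamma - 1.0) * rho_new * work)
    p_new = (gamma - 1.0) * rho_new * eps_new          # Equation A25
    return eps_new, p_new

# Quick check: a small adiabatic compression should roughly follow p ~ rho**gamma.
gamma, rho0, rho1, eps0 = 1.4, 1.2, 1.25, 2.0e5
p0 = (gamma - 1.0) * rho0 * eps0
eps1, p1 = ideal_gas_energy_update(eps0, p0, 0.0, rho1, rho0, gamma)
print(p1 / p0, (rho1 / rho0) ** gamma)                 # ~1.059 in both cases
```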
#### a.2.2 Tabulated Equations of State
We have also added to WONDY the ability to use a tabulated EOS in the form of a \(T\)-\(\rho\) table. In this work we have used the water table from Senft & Stewart (2008), a high quality water EOS based partially on the IAPWS water EOS (see Appendix A.2.3) but extended to the high pressures relevant for planetary impacts.
Use of a tabulated EOS requires a numerical solution to the energy equation. We solve for the temperature at each time step that solves the energy equation using a Newton-Raphson method. The solution is found iteratively using
\[T_{i+1}=T_{i}-\frac{f(T_{i})}{f^{\prime}(T_{i})}\;,\] (A26)
where the subscripts mark the number of the iteration,
\[\begin{split}f(T_{i})&=\epsilon(\rho^{n+1},T_{i})-\epsilon^{n}\\ &\quad-\frac{2\left(\rho^{n+1}-\rho^{n}\right)}{\left(\rho^{n+1}+\rho^{n}\right)^{2}}\left(p(\rho^{n+1},T_{i})+p^{n}+2q^{n+\frac{1}{2}}\right)\;,\end{split}\] (A27)
and
\[\begin{split}f^{\prime}(T_{i})&=c_{\rm V}(\rho^{n+1},T_{i})\\ &\quad-\frac{2\left(\rho^{n+1}-\rho^{n}\right)}{\left(\rho^{n+1}+\rho^{n}\right)^{2}}\left[\frac{\partial\,p}{\partial T}\bigg{|}_{\rho}\,(\rho^{n+1},T_{i})\right]\;.\end{split}\] (A28)
Note that we have dropped the spatial subscripts for clarity as all values required are for the zone under consideration. The values of \(\epsilon\), \(p\), \(c_{V}\) and \(\partial\,p/\partial T|_{\rho}\) are found by linear interpolation of the tabulated values. The derivatives are pre-calculated at each point in the table using a second order modified central difference method for
unevenly spaced points. The initial value for the iteration is determined by assuming an isentropic volume change from the previous time step. The change in temperature, \(\Delta T\), for a given density change, \(\Delta\rho\), is given by
\[\Delta T=\frac{\partial\,T}{\partial\rho}\bigg{|}_{S}\Delta\rho\;,\] (A29)
which can be written
\[\Delta T=\frac{T}{c_{V}\rho^{2}}\frac{\partial\,p}{\partial T}\bigg{|}_{\rho} \Delta\rho\;,\] (A30)
which is straightforward to calculate from the tabulated values. The solution is considered converged when
\[\left|\frac{T_{i+1}-T_{i}}{T_{i+1}}\right|<10^{-10}\;.\] (A31)
A maximum of 1000 iterations are allowed but the routine typically converges within 2-3 iterations.
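A schematic of the temperature iteration (Equations A26-A31), with arbitrary callables standing in for interpolation of the EOS table, might read:

```python
def solve_temperature(T_init, rho_new, rho_old, eps_old, p_old, q_half,
                      eos_eps, eos_p, eos_cv, eos_dpdT,
                      tol=1e-10, max_iter=1000):
    """Newton-Raphson iteration for the zone temperature that satisfies the
    discretized energy equation (Equations A26-A28, with the work term taken
    as in our reading of Equation A12). The eos_* arguments are callables
    (rho, T) -> value that stand in for interpolation of the EOS table."""
    work = 2.0 * (rho_new - rho_old) / (rho_new + rho_old) ** 2
    T = T_init
    for _ in range(max_iter):
        f = (eos_eps(rho_new, T) - eps_old
             - work * (eos_p(rho_new, T) + p_old + 2.0 * q_half))
        f_prime = eos_cv(rho_new, T) - work * eos_dpdT(rho_new, T)
        T_next = T - f / f_prime
        if abs((T_next - T) / T_next) < tol:           # criterion of Equation A31
            return T_next
        T = T_next
    return T
```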
#### a.2.3 IAPWS Water Equation of State
The IAPWS water EOS (Wagner, 2002) is a high quality empirically fitted EOS widely used in industrial applications. The advantage of this EOS is that it provides a smooth form for the material properties; however, the EOS is limited in its pressure range and therefore it can only be used for lower values of \(u_{\rm G}\). The solution to the energy equation using this EOS is found in the same way as for the tabulated EOS (Appendix A.2.2), except with the thermodynamic parameters and their derivatives calculated using the Fortran subroutines provided by the IAPWS rather than by interpolation of an EOS table.
#### a.2.4 Tillotson Equation of State
The Tillotson EOS (Tillotson, 1962) has been widely used in shock physics and the study of planetary collisions. The EOS is based on empirical fits along the Hugoniot, forced to converge to the Thomas-Fermi limit at high pressures. Although the EOS is designed primarily for use in compression, it includes parameters to approximate the behaviour of material upon release. For this work we use the parameters from O'Keefe & Ahrens (1982) for water.
Following Melosh (1989) the EOS implemented here has four different regimes: compressed states (\(\rho/\rho_{0}\geq 1\), where \(\rho_{0}\) is the reference density), cold expanded states (\(\rho/\rho_{0}<1\) and \(\epsilon<\epsilon_{iv}\), where \(\epsilon_{iv}\) is the energy of incipient vaporisation), hot expanded states (\(\rho/\rho_{0}\leq 1\) and \(\epsilon>\epsilon_{cv}\), where \(\epsilon_{cv}\) is the energy of complete vaporisation), and a transition region between the hot and cold expanded states (where \(\rho/\rho_{0}<1\) and \(\epsilon_{iv}<\epsilon<\epsilon_{cv}\)). Firstly, for compressed states and for cold expanded states the pressure is given by
\[p=\left[a+\frac{b}{\frac{\epsilon}{\epsilon_{0}\eta^{2}}+1}\right]\rho\epsilon +A\mu+B\mu^{2}\;,\] (A32)
where: \(a\), \(b\), \(A\), \(B\) and \(\epsilon_{0}\) are fitted parameters; \(\eta=\rho/\rho_{0}\); \(\mu=\eta-1\); and \(\rho_{0}\) is the reference density of the material. In order to ensure numerical stability, a low density cutoff is applied in the cold expanded state when \(\rho/\rho_{0}<0.8\), where a nominal pressure of 10 Pa is given to a zone. In hot expanded states:
\[\begin{split}p&=a\rho\epsilon\\ &\quad+\left[\frac{b\rho\epsilon}{\frac{\epsilon}{\epsilon_{0}\eta^{2}}+1}+A\mu\exp\left\{-\beta\left(\frac{1}{\eta}-1\right)\right\}\right]\\ &\qquad\times\exp\left\{-\alpha\left(\frac{1}{\eta}-1\right)^{2}\right\}\;,\end{split}\] (A33)
where \(\alpha\) and \(\beta\) are two more fitted parameters that control the rate at which the EOS tends to the ideal gas law. For expanded states with internal energies between that of incipient and complete vaporization a hybrid formula is used to transition smoothly between the cold and hot expanded states,
\[p=\frac{\left(\epsilon-\epsilon_{iv}\right)p_{E}+\left(\epsilon_{cv}-\epsilon \right)p_{C}}{\epsilon_{cv}-\epsilon_{iv}}\;,\] (A34)
where \(p_{C}\) is the pressure calculated using Equation A32 and \(p_{E}\) is the pressure calculated using Equation A33.
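Collecting Equations A32-A34, the pressure evaluation (excluding the sound-speed calculation discussed below) can be sketched as follows; the parameter dictionary stands in for the O'Keefe & Ahrens (1982) water values, which are not reproduced here.

```python
import math

def tillotson_pressure(rho, eps, prm):
    """Tillotson pressure following Equations A32-A34, including the
    low-density cutoff described in the text. prm is a dictionary of EOS
    parameters (rho0, a, b, A, B, eps0, alpha, beta, eps_iv, eps_cv);
    values must be supplied by the user."""
    eta = rho / prm["rho0"]
    mu = eta - 1.0

    def p_cold():                        # A32: compressed and cold expanded states
        w = eps / (prm["eps0"] * eta ** 2) + 1.0
        return (prm["a"] + prm["b"] / w) * rho * eps + prm["A"] * mu + prm["B"] * mu ** 2

    def p_hot():                         # A33: hot expanded states
        w = eps / (prm["eps0"] * eta ** 2) + 1.0
        expo = math.exp(-prm["alpha"] * (1.0 / eta - 1.0) ** 2)
        return (prm["a"] * rho * eps
                + (prm["b"] * rho * eps / w
                   + prm["A"] * mu * math.exp(-prm["beta"] * (1.0 / eta - 1.0)))
                * expo)

    if eta >= 1.0:                       # compressed states
        return p_cold()
    if eps <= prm["eps_iv"]:             # cold expanded states
        return 10.0 if eta < 0.8 else p_cold()   # nominal low-density cutoff (Pa)
    if eps >= prm["eps_cv"]:             # hot expanded states
        return p_hot()
    # A34: smooth transition between cold and hot expanded states
    return ((eps - prm["eps_iv"]) * p_hot() + (prm["eps_cv"] - eps) * p_cold()) \
           / (prm["eps_cv"] - prm["eps_iv"])
```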
Using the Tillotson EOS the energy equation can be solved analytically. However, identifying the relevant regime to use in our code requires knowledge of the solution. We therefore find the possible solutions for \(\epsilon\) in all four regimes and use the physically meaningful solution. Substituting Equations A32, A33 and A34 into the energy equation in finite difference form (Equation A12) produces three separate quadratic equations. These can then be solved analytically giving two solutions for each equation. In the case that the material is in compression we choose the solution of the equation derived from Equation A32 that: 1) gives the correct sign of the change in \(\epsilon\) for the density change; and 2) is closest to the previous value of \(\epsilon\). If the material is in an expanded state, solutions from all three equations are potentially valid and so we choose the solution that: 1) is in the correct energy range for its regime; 2) gives the correct sign of the change in \(\epsilon\) for the density change; and 3) is closest to the previous value of \(\epsilon\). The relevant equation can then be used to find the pressure. Note that care must be taken in averaging properties in the cross-over region.
In order to calculate artificial viscosity we also need to calculate the adiabatic sound speed. The sound speed is defined as
\[c^{2}=\frac{\partial p}{\partial\rho}\bigg{|}_{S}\,\] (A35)
where \(S\) is specific entropy. Using standard differential relations this can be expressed as
\[c^{2}=\frac{\partial p}{\partial\rho}\bigg{|}_{\epsilon}+\left.\frac{\partial p }{\partial\epsilon}\right|_{\rho}\frac{\partial\epsilon}{\partial\rho}\bigg{|} _{S}\.\] (A36)
Substituting for the final term from the fundamental thermodynamic relation,
\[c^{2}=\frac{\partial p}{\partial\rho}\bigg{|}_{\epsilon}+\frac{p}{\rho^{2}} \frac{\partial p}{\partial\epsilon}\bigg{|}_{\rho}\.\] (A37)
All the terms in this equation are known or can be found by differentiating the relevant equation for pressure in each regime. Equation A37 is used to calculate the sound speed in each regime with the exception of the low density cutoff where a nominal sound speed of \(0.5u_{\rm G}\) is used.
Due to the limitations of the Tillotson EOS, WONDY can encounter unphysical solutions in expanded states. In such cases, the code is unable to continue and most of our calculations using Tillotson did not complete the full model run. However, unphysical states are usually encountered late in the simulation when the loss has plateaued and so the total loss calculated is only slightly underestimated.
### Boundary Conditions
The breakout of the shock at the base of the atmosphere/ocean is simulated by giving the lower boundary an initial velocity, \(u_{\rm G}\), and then allowing the boundary to evolve ballistically. The lower boundary evolves following the momentum equation (Equation A5) ignoring forces other than gravity, which in finite difference form is expressed as:
\[a_{0}^{n}=-\frac{GM_{p}}{\left(x_{0}^{n}\right)^{2}}\.\] (A38)
In this framework, \(u_{\rm G}\) is the particle velocity of the surface upon release of the shock in the planet to the base of the atmosphere or ocean. As for the case of the ocean/atmosphere interface, \(u_{\rm G}\) can be determined using an impedance-match calculation. Due to the lower impedance of gases as opposed to water, release of the shock directly to an atmosphere results in higher ground velocities than in the presence of an ocean and in both cases the ground velocity is lower than if the shock released to vacuum. When calculating loss by convolving 1D results with the ground velocity calculated in 2D or 3D impact simulations it is imperative that corrections are made to translate the ground velocity to the impedance-match velocity appropriate for the surface conditions being considered (see Section 7.4).
Driving the lower boundary in this way creates some numerical complications. The sudden acceleration of the boundary generates excessively high artificial viscosities in the lowermost zones. This causes these zones to have artificially high temperatures and low densities which can be outside the range of the EOS. We overcome this by limiting the artificial viscosity in the first four zones to be less than \(q_{max}\). The exact value of \(q_{max}\), within a reasonable range, has little effect on the amount of loss so we set it to 0.2 times the value calculated for the first time step for the first zone using Equation A17. We also initialise the first zone with \(q_{1}^{0}=q_{max}\) and the second and third zones with \(q_{j-\frac{1}{2}}^{0}=0.01q_{j-\frac{3}{2}}^{0}\). This helps to maintain reasonable values for variables in the lowermost zones with no effect on the final results.
In some simulations (see Section 6) the velocity of the ground is forced to increase linearly over some finite rise time, \(t_{\rm rise}\), until the specified ground velocity is reached. The specification of the artificial viscosity in cells just above the boundary is the same as in the standard case.
The lower boundary is assumed to stop when it returns to its initial position. However, it is necessary to slow the boundary gradually to avoid numerical errors. The boundary velocity is prescribed to decay exponentially after the boundary falls back to a position \(x_{0}<R_{t}+0.15\left(x_{0}^{max}-R_{t}\right)\), where \(x_{0}^{max}\) is the maximum height reached by the boundary. The rate is chosen so that the velocity of the boundary is continuous with its value before being slowed. In finite difference form
\[u_{0}^{n+\frac{1}{2}}=u_{0}^{n-\frac{1}{2}}\exp\left\{\frac{u_{slow}\Delta t^{n -\frac{1}{2}}}{0.15\left(x_{0}^{max}-R_{t}\right)}\right\}\,\] (A39)
where \(u_{slow}\) is the velocity of the boundary at the point the boundary starts to be slowed. Since the slow down happens late in the simulation the time step can be quite large by the time the slow down begins. We therefore artificially reduce the time step when the boundary is \(R_{t}+0.15\left(x_{0}^{max}-R_{t}\right)<x_{0}<R_{t}+0.3\left(x_{0}^{max}-R_{t}\right)\) to \(\leq 0.1\) s so that the slow down can be properly calculated. After slow down begins the time step is allowed to evolve normally.
In some simulations (see Section 6) the ground is stopped at some time, \(t_{\rm stop}\). This is achieved by immediately setting the velocity of the boundary to zero at the specified time. Once the boundary has stopped, it begins to fall under gravity and its evolution continues as described above.
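The various prescriptions for the lower boundary (instantaneous or finite rise time, ballistic flight under gravity, optional stopping time, and the exponential slow-down of Equation A39) can be summarised in a small driver; the structure below is a simplified sketch, not the WONDY implementation, and it approximates Equation A39 using the current velocity in place of \(u_{slow}\).

```python
import math

G = 6.674e-11   # gravitational constant (SI)

def boundary_velocity(u_prev, x, t, dt, u_G, R_t, M_p, x_max,
                      t_rise=0.0, t_stop=None):
    """Velocity of the lower boundary at the next half step: linear rise to
    u_G over t_rise (instantaneous if t_rise = 0), ballistic flight under
    gravity (Equation A38), an optional stop at t_stop (the subsequent fall
    is omitted in this sketch), and an exponential slow-down near the initial
    position, approximating Equation A39 with the current velocity."""
    if t_stop is not None and t >= t_stop:
        return 0.0
    if t < t_rise:
        return u_G * min(1.0, (t + dt) / t_rise)
    u = u_prev - G * M_p / x ** 2 * dt            # ballistic deceleration
    if u < 0.0 and x < R_t + 0.15 * (x_max - R_t):
        u = u_prev * math.exp(u * dt / (0.15 * (x_max - R_t)))
    return u
```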
The boundary at the top of the atmosphere is treated as a stress free boundary i.e. a vacuum. This is implemented by having an additional zone at the top of the
atmosphere with \((\rho,p)=0\) at all time steps. The finite difference equations are then applied as normal.
### Atmosphere and Ocean Initialisation
The initial conditions for our simulations are a hydrostatic atmosphere and, optionally, an ocean. The extent of the atmosphere is set by specifying the pressure and temperature at its base. The atmosphere is then assumed to be polytropic,
\[\left(\frac{p}{p_{0}}\right)^{\frac{\gamma-1}{\gamma}}=\left(\frac{\rho}{\rho_ {0}}\right)^{\gamma-1}=\frac{T}{T_{0}}\;,\] (A40)
where \(p_{0}\), \(\rho_{0}\) and \(T_{0}\) are the pressure, density and temperature at the base of the atmosphere. We then integrate up from the base of the atmosphere using a 4th order Runge-Kutta method to find the top of the atmosphere (in this model the pressure drops to zero at a finite height). The atmosphere is then divided into the specified number of zones, \(N_{\rm atm}\), each of equal thickness. For the ideal gas EOS the structure of the atmosphere is determined by \(p_{0}\), \(T_{0}\), \(\gamma\) and \(m_{\rm a}\) (the molar mass of the gas) which sets the density at a given \(p\), \(T\). The values for \(\gamma\) and \(m_{\rm a}\) for each of the atmospheres used here are given in Table 2.
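The initialisation described above could be sketched as follows for the ideal-gas case; the step size, function name, and example values are illustrative assumptions, and the integration stops at the finite height where the pressure drops to zero.

```python
import numpy as np

G, R_GAS = 6.674e-11, 8.314  # SI units

def init_polytropic_atmosphere(p0, T0, gamma, m_a, M_p, R_p, dr=100.0):
    """Integrate a hydrostatic, polytropic ideal-gas atmosphere (Equation A40)
    upwards from the surface using a 4th-order Runge-Kutta scheme.

    p0, T0   : basal pressure [Pa] and temperature [K]
    gamma    : ratio of specific heats
    m_a      : molar mass [kg/mol]
    M_p, R_p : planet mass [kg] and surface radius [m]
    dr       : integration step [m] (illustrative value)
    Returns arrays of radius and pressure up to the top of the atmosphere.
    """
    rho0 = p0 * m_a / (R_GAS * T0)                       # basal density

    def dpdr(r, p):
        rho = rho0 * (max(p, 0.0) / p0) ** (1.0 / gamma)  # polytropic relation
        return -rho * G * M_p / r**2                      # hydrostatic balance

    r, p = R_p, p0
    radii, pressures = [r], [p]
    while p > 0.0:
        k1 = dpdr(r, p)
        k2 = dpdr(r + 0.5 * dr, p + 0.5 * dr * k1)
        k3 = dpdr(r + 0.5 * dr, p + 0.5 * dr * k2)
        k4 = dpdr(r + dr, p + dr * k3)
        p_new = p + (dr / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if p_new <= 0.0:        # pressure reaches zero at a finite height
            break
        r, p = r + dr, p_new
        radii.append(r)
        pressures.append(p)
    return np.array(radii), np.array(pressures)

# Illustrative example: 100 bar CO2 atmosphere on an Earth-mass planet at 300 K.
radii, pressures = init_polytropic_atmosphere(p0=1.0e7, T0=300.0, gamma=1.29,
                                              m_a=0.044, M_p=5.97e24, R_p=6.371e6)
print(radii[-1] - 6.371e6)   # approximate atmospheric thickness [m]
```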
The extent of the ocean is set by specifying the depth of the ocean, \(H_{\rm oc}\). The temperature and pressure of the bottom of the atmosphere are assumed to be continuous with the ocean. The exact structure of the ocean however depends on the EOS being used. For the tabulated EOS an isothermal ocean is initialised with temperature \(T_{0}\). In the case of the IAPWS EOS an isentropic ocean is used with the entropy set by \(p_{0}\) and \(T_{0}\). In order to compare directly with the work of Genda and Abe (2005) when using the Tillotson EOS we initialise a constant specific internal energy ocean with \(\epsilon=120\) J kg\({}^{-1}\). In each case a 4th order Runge-Kutta method is used to integrate down from the surface to find the properties for each of the \(N_{\rm oc}\) ocean cells. Due to the limited compressibility of water, the ocean is close to isothermal in both the isentropic and isoenergetic initializations. Cells are of equal thickness.
The mass, radii, and escape velocity of the planets used in this study are given in Table 3. For Earth- and Mars-mass planets, the radii were taken as the present-day equatorial radii of those planets. For intermediate mass planets, the radii were calculated using the HERCULES planetary structure code (Lock and Stewart, 2017; Lock, 2019) using the present-day Earth's core mass fraction of 0.323 (Yoder, 1995). In HERCULES, a body is modeled as consisting of a series of nested concentric layers of constant density, and a potential field method is used to calculate the equilibrium structure of the body with a given thermal state, composition, mass and angular momentum, using realistic equations of state. For this work, the mantle and core of each planet were assumed to be forsterite and pure iron, respectively, and were modeled using the ANEOS equation of state (EOS) model (Thompson and Lauson, 1972; Canup, 2012; Melosh, 2007). The EOS are documented as 'aneos-T70' (iron) and 'aneos-gadget' (forsterite) respectively in two Zenodo repositories (Stewart, 2020; Stewart et al., 2019). The mantle was assumed to be isentropic with a specific entropy of 3.2 kJ K\({}^{-1}\) kg\({}^{-1}\), corresponding to a mantle potential temperature of around 1900 K. This thermal state approximates that of the Hadean mantle. The core was also assumed to be isentropic and to have a thermal state similar to the present day. The core specific entropy was set at 1.5 kJ K\({}^{-1}\) kg\({}^{-1}\), corresponding to a temperature of 3800 K at the pressure of the present-day core-mantle boundary. We used the same HERCULES parameters as in previous studies, which have been shown to accurately model the structure of Earth-like planets (Lock and Stewart, 2017, 2019; Lock, 2019; Lock et al., 2020). The planets were modeled by \(N_{\rm lay}^{\rm core}=20\) evenly spaced layers in the core and \(N_{\rm lay}^{\rm mantle}=80\) layers in the mantle, with \(N_{\mu}=1000\) points describing the shape of each layer. The expression for the gravity field was truncated at order \(2k_{\rm max}=12\). The minimum pressure at the surface of the planet was set to 10 bar. The tolerance for the convergence of the shape of equipotential layers was \(\xi_{\rm toll}^{\mu}=10^{-10}\) and the tolerance for the convergence of the mass of the planet was \(\xi_{\rm toll}=10^{-8}\). The step used for calculating gradients in the solution algorithm was \(\delta\xi=10^{-2}\). For further details of the definitions of these parameters the reader is referred to the HERCULES user manual (Lock, 2019).
## Appendix B Sensitivity Tests for 1D Model
### Sensitivity to Code Parameters
We have tested the sensitivity of our 1D models to the intrinsic code parameters (\(B_{1}\), \(B_{2}\), \(K_{\rm t_{1}}\) and \(K_{\rm t_{2}}\)) as well as the number of zones in the ocean and atmosphere (\(N_{\rm atm}\) and \(N_{\rm oc}\)). To do this we ran a series of calculations,
\begin{table}
\begin{tabular}{l c c} Atmosphere & \(\gamma\) & \(m_{\rm a}\) [g mol\({}^{-1}\)] \\ \hline H\({}_{2}\) & 1.4 & 2 \\ H\({}_{2}\)O & 1.25 & 18 \\ CO\({}_{2}\) & 1.29 & 44 \\ N\({}_{2}\) & 1.4 & 28 \\ Earth-like & 1.4 & 29 \\ \end{tabular}
\end{table}
Table 2: The ideal gas parameters (ratio of specific heat capacities, \(\gamma\), and molar mass, \(m_{\rm a}\)) for the atmospheres used in this paper.
varying each of the parameters in turn and comparing the calculated loss. The final results showed little variation with any of the parameters over the ranges we explored (e.g., Figure 16), although some runs with lower \(B_{1}\) values failed. For some combinations of parameters, ringing was observed around the shock front, indicating insufficient numerical viscosity, but such effects did not alter the final loss. We are therefore confident that our results are not significantly affected by our choice of these parameters.
## Appendix C Parameterization of Loss as a Function of Ground Velocity
In this section, we present a parameterization of the loss due to a given ground motion from a body with a given escape velocity and atmosphere to ocean mass ratio. This parameterization is not intended to give physical insight, but rather to provide expressions for calculating global loss from 3D impact simulations (e.g., Kegerreis et al., 2019; Denman et al., 2020, 2022). We will first describe the parameterization for the limiting case of no ocean and then for the general scenario including oceans and atmospheres of varying masses. Python functions and an interactive widget that implement this parameterization are available through GitHub.
### The no-ocean case
For the no-ocean case, we describe the loss using a modified logistic function,
\[f_{\rm NO}\left(\frac{u_{\rm G}}{v_{\rm esc}}\right)=\alpha_{1}\left[1+\exp \left\{\alpha_{2}\left(\frac{u_{\rm G}}{v_{\rm esc}}\right)+\alpha_{3} \right\}\right]^{\alpha_{4}}+\alpha_{5}\;,\] (C41)
where \(u_{\rm G}\) is the ground velocity, \(v_{\rm esc}\) is the escape velocity, and \(\alpha_{i}\) are constants. By requiring loss to be zero in the case of zero ground velocity, i.e., \(f_{\rm NO}=0\) when \(u_{\rm G}=0\), we find
\[\alpha_{5}=-\alpha_{1}[1+\exp\left(\alpha_{3}\right)]^{\alpha_{4}}\;.\] (C42)
Similarly, by requiring loss to be complete when \(u_{\rm G}=v_{\rm esc}\) we find
\[\alpha_{1}=\frac{1}{[1+\exp\left(\alpha_{2}+\alpha_{3}\right)]^{\alpha_{4}}- [1+\exp\left(\alpha_{3}\right)]^{\alpha_{4}}}\;.\] (C43)
In addition, we force \(f_{\rm NO}=0\) when \(u_{\rm G}/v_{\rm esc}<0\) and \(f_{\rm NO}=1\) when \(u_{\rm G}/v_{\rm esc}>1\).
In the no-ocean case we set \(\alpha_{3}=-1\) and performed a least squares fit to find \(\alpha_{2}=-4.32\) and \(\alpha_{4}=-3.77\). For the fit we used the calculations shown in Figure 5A: 0.1, 0.5, 1, 5, 10, 50, 100, and 500 bar atmospheres on planets
\begin{table}
\begin{tabular}{l c} Parameter & Standard Value \\ \hline \(B_{1}\) & 2 \\ \(B_{2}\) & 0.1 \\ \(K_{\rm t_{1}}\) & 0.75 \\ \(K_{\rm t_{2}}\) & 1.05 \\ \(N_{\rm oc}\) & 500 \\ \(N_{\rm atm}\) & 500 \\ \(t_{\rm final}\) & \(5\times 10^{4}\) s \\ \end{tabular}
\end{table}
Table 4: Values for constants used in all 1D model runs for which results are presented here (details in text). \(B_{1}\) and \(B_{2}\) are used to determine the strength of the artificial viscosity. \(K_{\rm t_{1}}\) and \(K_{\rm t_{2}}\) control the size of each time step. \(N_{\rm oc}\) and \(N_{\rm atm}\) are the number of zones used for the ocean and atmosphere respectively. \(t_{\rm final}\) is the total runtime for the simulations.
Figure 16: _The results of our 1D calculations are not sensitive to the intrinsic parameters in the code. Shown is the loss as a function of ground velocity for an example simulation (\(H_{\rm oc}=3\) km, \(p_{0}=100\) bar and a CO\({}_{2}\) atmosphere at 300 K) calculated varying run parameters \(B_{1}\) (1-10), \(B_{2}\) (0.1-5), \(K_{\rm t_{1}}\) (0.6-0.9), \(K_{\rm t_{2}}\) (1.05-1.2), \(N_{\rm oc}\) (250-750), and \(N_{\rm atm}\) (250-750). Each parameter was varied while holding all others constant at the values given in Table 4._
\begin{table}
\begin{tabular}{c c c} Mass [\(M_{\rm Earth}\)] & Radius [km] & \(v_{\rm esc}\) [km s\({}^{-1}\)] \\ \hline
0.107 & 3396 & 5.02 \\
0.3 & 4607 & 7.20 \\
0.5 & 5366 & 8.62 \\
0.7 & 5907 & 9.72 \\
0.9 & 6337 & 10.64 \\
1.0 & 6371 & 11.18 \\ \end{tabular}
\end{table}
Table 3: Properties of planets used for hydrodynamic simulations in this study
with mass 0.107 (\(M_{\rm Mars}\)), 0.3, 0.5, 0.7, 0.9 and 1 \(M_{\rm Earth}\). The surface temperature was 300 K, and the atmosphere was CO\({}_{2}\) with a mean molecular weight of \(m_{\rm a}=44\) and a ratio of specific heat capacities of \(\gamma=1.29\). Loss was calculated at 0.05 \(v_{\rm esc}\) increments from 0.05 to 0.95 \(v_{\rm esc}\).
This parameterization offers a very accurate fit to our model runs. Figure 5C shows the misfit between our parameterization and the fitted calculations. The root mean square (RMS) misfit for all runs is 0.0046 and the maximum misfit is 0.0089.
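As a check, the no-ocean parameterization (Equations C41-C43) can be evaluated directly; the sketch below uses the fitted constants quoted above and enforces zero loss below \(u_{\rm G}=0\) and complete loss above \(u_{\rm G}=v_{\rm esc}\). The function name is our own.

```python
import numpy as np

def loss_no_ocean(u_g, v_esc, a2=-4.32, a3=-1.0, a4=-3.77):
    """Atmospheric loss fraction for the no-ocean case (Equations C41-C43).
    u_g and v_esc must be in the same units; a2-a4 are the fitted constants
    quoted in the text."""
    x = np.clip(np.asarray(u_g, dtype=float) / v_esc, 0.0, 1.0)
    a1 = 1.0 / ((1.0 + np.exp(a2 + a3)) ** a4 - (1.0 + np.exp(a3)) ** a4)   # C43
    a5 = -a1 * (1.0 + np.exp(a3)) ** a4                                     # C42
    return a1 * (1.0 + np.exp(a2 * x + a3)) ** a4 + a5                      # C41

# By construction f_NO(0) = 0 and f_NO(v_esc) = 1:
print(loss_no_ocean(0.0, 11.2), loss_no_ocean(11.2, 11.2))
```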
### Loss in the presence of an ocean
We chose to parameterize loss in the \(\mathcal{R}\)-\(u_{\rm G}\)-loss space (Figure 10). Loss is described by a modified logistic function
\[f(u_{\rm G},v_{\rm esc},\mathcal{R})=\alpha_{1}[1+\exp{(\alpha_{2}\log_{10} \left(\mathcal{R}\right)+\alpha_{3})}]^{\alpha_{4}}+\alpha_{5}\;,\] (C44)
where \(\alpha_{i}\) are functions of the ground velocity and the escape velocity, \(v_{\rm esc}\). We find that loss due to a given ground velocity tends to the no-ocean case as \(\mathcal{R}\to\infty\) (Figure 10) and so we enforce \(f\to f_{\rm NO}\) as \(\mathcal{R}\to\infty\) by setting
\[\alpha_{1}=f_{\rm NO}-\alpha_{5}\;,\] (C45)
and requiring that \(\alpha_{2}\leq 0\).
The velocity dependences of \(\alpha_{i}\) are given by arbitrary functions, chosen because they provided a good fit to our simulation results. \(\alpha_{2}\) and \(\alpha_{3}\) are described by 3\({}^{\rm rd}\)-order polynomials:
\[\alpha_{i}=a_{1}^{i}+a_{2}^{i}\frac{u_{\rm G}}{v_{\rm esc}}+a_{3}^{i}\left( \frac{u_{\rm G}}{v_{\rm esc}}\right)^{2}+a_{4}^{i}\left(\frac{u_{\rm G}}{v_{ \rm esc}}\right)^{3}\;,\] (C46)
and \(\alpha_{4}\) is given by
\[\alpha_{4}= a_{1}^{4}\exp\left\{\sin{\left(\frac{u_{\rm G}}{v_{\rm esc}}\pi \right)}\right.\] \[\left.\times\left[a_{2}^{4}\frac{u_{\rm G}}{v_{\rm esc}}+a_{3}^{4 }\left(\frac{u_{\rm G}}{v_{\rm esc}}\right)^{2}+a_{4}^{4}\left(\frac{u_{\rm G }}{v_{\rm esc}}\right)^{3}\right]\right\}\;,\] (C47)
where \(a_{j}^{i}\) are coefficients that can depend on \(v_{\rm esc}\). The functional form for the ground velocity dependence of \(\alpha_{5}\) is more complex. The value of \(\alpha_{5}\) is the asymptote for loss as \(\mathcal{R}\to 0\) and is close to the lowest \(\mathcal{R}\) loss curve we calculate (e.g., the yellow line in Figure 6). To describe a loss curve in \(u_{\rm G}\)-loss space, we divide the functional form for \(\alpha_{5}\) into two parts for the atmospheric loss (\(g_{1}\)) and ocean loss (\(g_{2}\)) regimes respectively:
\[\alpha_{5}=\begin{cases}0&u_{\rm G}\leq 0\\ g_{1}&0<u_{\rm G}<u_{\rm tal}\\ 1&u_{\rm tal}\leq u_{\rm G}<v_{\rm esc}\quad\&\quad g_{2}\leq 1\\ g_{2}&u_{\rm tal}\leq u_{\rm G}<v_{\rm esc}\quad\&\quad g_{2}>1\\ 2&u_{\rm G}\geq v_{\rm esc}\quad,\end{cases}\] (C48)
where
\[g_{1}=a_{1}^{5}\left(\frac{u_{\rm G}}{v_{\rm esc}}\right)^{a_{3}^{5}}\exp \left(a_{2}^{5}\frac{u_{\rm G}}{v_{\rm esc}}\right)\,,\] (C49)
\[g_{2}=2 +a_{4}^{5}\left[\frac{u_{\rm G}-u_{\rm tal}}{v_{\rm esc}-u_{\rm tal }}-1\right]\] (C50) \[+a_{5}^{5}\left[\left(\frac{u_{\rm G}-u_{\rm tal}}{v_{\rm esc}-u _{\rm tal}}\right)^{2}-1\right]\;,\]
and \(u_{\rm tal}\) is the velocity at which total atmospheric loss is achieved (i.e., when \(g_{1}=1\)) given by
\[u_{\rm tal}=v_{\rm esc}\left(\frac{a_{3}^{5}}{a_{2}^{5}}\right)\min_{k=-1,0}W _{k}\left\{\frac{a_{3}^{5}}{a_{2}^{5}}\left(a_{1}^{5}\right)^{-1/a_{3}^{5}} \right\}\;,\] (C51)
where \(W_{k}\) is the Lambert \(W\) function of order \(k\). The velocity regime between the atmospheric and ocean loss regimes, where \(\alpha_{5}=1\), emulates a feature seen in our low \(\mathcal{R}\) simulations in which significant ocean loss is delayed for a short range of \(u_{\rm G}\) after total atmospheric loss is achieved (Figure 6).
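The following sketch evaluates \(\alpha_{5}\) from Equations C48-C50. Because the fitted coefficients \(a_{j}^{5}\) are tabulated elsewhere (Table 5), the values used in the example are placeholders only, and \(u_{\rm tal}\) is obtained here by numerical root finding rather than through the closed-form Lambert \(W\) expression of Equation C51.

```python
import numpy as np
from scipy.optimize import brentq

def alpha5(u_g, v_esc, coeffs):
    """Low-R asymptote alpha_5 of the loss parameterization (Equations C48-C50).
    coeffs = (a1, a2, a3, a4, a5) stands for the coefficients a^5_j; the values
    used in the example below are placeholders, not the fitted values of Table 5."""
    a1, a2, a3, a4, a5 = coeffs
    x = u_g / v_esc
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 2.0
    g1 = lambda y: a1 * y**a3 * np.exp(a2 * y)       # atmospheric-loss branch (C49)
    # u_tal / v_esc is the root of g1(y) = 1 (total atmospheric loss).  Equation
    # C51 expresses this root with the Lambert W function; here we simply
    # bracket it and solve numerically.
    y_tal = brentq(lambda y: g1(y) - 1.0, 1e-6, 1.0)
    if x < y_tal:
        return g1(x)
    z = (x - y_tal) / (1.0 - y_tal)
    g2 = 2.0 + a4 * (z - 1.0) + a5 * (z**2 - 1.0)    # ocean-loss branch (C50)
    return g2 if g2 > 1.0 else 1.0                   # plateau at 1 until g2 > 1

# Example with placeholder coefficients (illustrative only):
print(alpha5(0.5 * 11.2, 11.2, (1.5, 2.0, 1.2, 0.5, 0.3)))
```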
Dependence on planetary mass is captured by a linear dependence of the coefficients for \(\alpha_{2}\) (\(a_{j}^{2}\)), \(\alpha_{4}\) (\(a_{j}^{4}\)), and \(\alpha_{5}\) (\(a_{j}^{5}\)) on \(v_{\rm esc}\) such that
\[a_{j}^{i}=a_{j,1}^{i}+a_{j,2}^{i}\frac{v_{\rm esc}}{v_{\rm esc}^{\rm Earth}}\;,\] (C52)
where \(v_{\rm esc}^{\rm Earth}=11.2\) km s\({}^{-1}\) is the escape velocity of Earth. Note that it is possible to parameterize the \(v_{\rm esc}\) dependence of \(\alpha_{5}\) by using a function of the absolute value of \(u_{\rm G}\) convolved with the no-ocean loss function, but we found that, because of the complications of the varying pressure of release and the highly non-linear no-ocean loss function, a simple escape-velocity scaling of the parameters gave a more accurate fit.
To determine the parameters \(a_{j,n}^{i}\) we performed a least squares fit on the results of the simulations shown in Figure 10: all combinations of 1, 5, 10, 50, 100, 300 and 500 bar atmospheres and oceans of depths 0.1, 0.5, 1, 2, 3, 5, 10, 20 and 30 km, on planets with mass \(M_{\rm Mars}\) (0.107 \(M_{\rm Earth}\)) and 0.3, 0.5, 0.7, 0.9 and 1 \(M_{\rm Earth}\). Additional simulations with 900 bar atmospheres and oceans of 0.1 km depth were also included. The surface temperature was assumed to be 300 K, and the atmosphere was CO\({}_{2}\) (\(m_{\rm a}=44\), \(\gamma=1.29\)). Loss was calculated at 0.05 \(v_{\rm esc}\) increments from 0.05 to 0.95 \(v_{\rm esc}\). The best-fit parameters are given in Table 5.
We find that our parameterization offers an accurate representation of the dependence of loss on \(\mathcal{R}\), \(u_{\rm G}\) and \(M_{\rm p}\). Figure 10 shows the fit in \(\mathcal{R}\)-loss space for example ground velocities. The global RMS is 0.041, the RMS at any given ground velocity does not exceed 0.062, and the
maximum misfit of any simulation result is 0.25. Unsurprisingly, the misfit is largest in the transition between the low and high \(\mathcal{R}\) regimes, where the gradient of the function in \(\mathcal{R}\)-loss space is greatest and the slight sensitivity to parameters such as absolute velocity, atmospheric pressure, and ocean height is greatest.
The parameterization also reproduces realistic loss curves in \(u_{\mathrm{G}}\)-loss space. Figure 17 shows our simulation results for the same four example loss curves as in Figure 6A, but for CO\({}_{2}\) atmospheres, (solid lines), with corresponding curves calculated using our parameterization (dashed lines). The grey band gives the full range of possible loss for an Earth-mass planet for any value of \(\mathcal{R}\) using our parameterization.
### Connecting shock strength to ground velocity
As discussed in Section 5, the relationship between the strength of the shock in the planet and the ground velocity varies depending on the properties of the ocean and/or atmosphere in contact with the ground. It is also worth noting that the relationship could be affected somewhat by changing the shock properties of the ground. When calculating the loss due to a given shock, for example when combining 1D and 3D simulation (Section 7.4), it is therefore necessary to calculate the relationship between the properties of the shock in the planet (here we use the particle velocity, \(u_{p}\), as the reference variable) and the ground velocity for the specific combination of ground and ocean and/or atmosphere that is under consideration. As this relationship can vary substantially, we do not attempt to provide a comprehensive set of \(u_{p}\)-\(u_{\mathrm{G}}\) relations. Instead, we provide a python package and a widget that workers can use to calculate individual or sets of \(u_{p}\)-\(u_{\mathrm{G}}\) relations (see data availability) for different surface conditions and a range of EOS.
We do make one exception. The \(u_{p}\)-\(u_{\mathrm{G}}\) relationship for breakout of a shock into the base of the ocean from a forsterite mantle is relatively constant over the range of conditions we have considered (oceans of depths 0.1-30 km, and atmospheric pressures up to 900 bar). In this case, the \(u_{p}\)-\(u_{\mathrm{G}}\) relationship can be well described by
\[u_{\mathrm{G}}=\beta_{1}u_{p}^{\beta_{2}}\,\] (C53)
where \(\beta_{1}=2.6441\), and \(\beta_{2}=0.9252\).
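This power law is straightforward to apply; note that, since \(\beta_{2}\neq 1\), the relation is unit dependent. The sketch below assumes both velocities are expressed in km s\({}^{-1}\), which is our assumption since the units are not restated here.

```python
def ground_velocity_from_shock(u_p, beta1=2.6441, beta2=0.9252):
    """Ground velocity from the shock particle velocity in the mantle for
    breakout into the base of an ocean over a forsterite mantle (Equation C53).
    Units assumed to be km/s for both velocities (an assumption on our part)."""
    return beta1 * u_p**beta2

# Example: a 2 km/s particle velocity in the mantle (illustrative value)
print(ground_velocity_from_shock(2.0))
```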
Figure 17: Our parameterization can accurately determine loss as a function of ground velocity. Shown are our simulation results for the same example surface conditions as in Figure 6A but for CO\({}_{2}\) atmospheres (solid lines) and loss curves calculated using our parameterization (dashed lines). Grey band shows the full range of possible loss from Earth-mass bodies determined using our parameterization.
\begin{table}
\begin{tabular}{c c c c} \(i\) & \(j\) & \(a_{j,1}^{i}\) & \(a_{j,2}^{i}\) \\ \hline
[MISSING_PAGE_POST]
1.14723912 & 0.418100341 \\ \end{tabular}
\end{table}
Table 5: Fitted coefficients for each of the parameters \(a_{j,n}^{i}\) (see Equation C52).
2309.17087 | Data-Driven Mathematical Modeling Approaches for COVID-19: a survey | In this review, we successively present the methods for phenomenological
modeling of the evolution of reported and unreported cases of COVID-19, both in
the exponential phase of growth and then in a complete epidemic wave. After the
case of an isolated wave, we present the modeling of several successive waves
separated by endemic stationary periods. Then, we treat the case of
multi-compartmental models without or with age structure. Eventually, we review
the literature, based on 230 articles selected in 11 sections, ranging from the
medical survey of hospital cases to forecasting the dynamics of new cases in
the general population. | J. Demongeot, P. Magal | 2023-09-29T09:32:28Z | http://arxiv.org/abs/2309.17087v1 | # Data-Driven Mathematical Modeling Approaches for COVID-19: a survey
###### Abstract
In this review, we successively present the methods for phenomenological modeling of the evolution of reported and unreported cases of COVID-19, both in the exponential phase of growth and then in a complete epidemic wave. After the case of an isolated wave, we present the modeling of several successive waves separated by endemic stationary periods. Then, we treat the case of multi-compartmental models without or with age structure. Eventually, we review the literature, based on 230 articles selected in 11 sections, ranging from the medical survey of hospital cases to forecasting the dynamics of new cases in the general population.
**Keywords:**_COVID-19 epidemic wave prediction; Epidemic models; Time series; Phenomenological models; Social changes; Time dependent models; Contagious disease; Endemic phase; Epidemic wave; Endemic/epidemic; Reported and unreported cases; Parameters identification;_
_I simply wish that, in a matter which so closely concerns the well-being of mankind, no decision shall be made without all the knowledge which a little analysis and calculation can provide_, Daniel Bernoulli 1765._
###### Contents
* 1 Introduction
* 2 Reported and unreported data
* 2.1 What are the unreported cases?
* 2.2 Example of unreported cases
* 2.3 Testing data for New York state
* 3 Phenomenological models
* 4 Epidemic model with reported and unreported individuals
* 4.1 Mathematical model
* 4.2 Given Parameters
* 4.3 Computed parameters
* 5 Modeling the exponential phase
* 5.1 Initial number of infected and transmission rate
* 5.2 Application to COVID-19 in mainland China
* 5.3 Spectral method in epidemic time series
* 5.4 Monotone property of the cumulative distribution
* 6 Modeling a single epidemic wave
* 6.1 What factors govern the transmission of pathogens
* 6.2 More results and references about the time dependent transmission rate modeling
* 6.3 Why do we need a time-dependent transmission rate?
* 6.4 Theoretical formula for \(\tau(t)\)
* 6.5 Explicit formula for \(\tau(t)\) and \(I_{0}\)
* 6.6 Results
* 7 Modeling multiple epidemic waves
* 7.1 Phenomenological model used for multiple epidemic waves
* 7.2 Phenomenological Model apply to France
* 7.3 Phenomenological Model apply to several countries
* 7.4 Earlier results about transmission rate reconstructed from the data
* 7.5 Instantaneous reproduction number
* 7.6 Results
* 7.7 Consequences of the results
* 8 Exponential phase with more compartments
* 8.1 A model with transmission from the unreported infectious
* 8.2 The exponential phase approximation
* 8.3 Uncertainty due to the period chosen to fit the data
* 9 Modeling COVID-19 epidemic with age groups
* 9.1 Epidemic model with age groups
* 9.2 Cumulative reported cases with age structure in Japan
* 9.3 Method to Fit of the Age Structured Model to the Data
* 9.4 Rate of contact
* 10 A survey for COVID-19 mathematical modeling
* 10.1 Medical survey
* 10.2 Incubation, Infectiousness, and Recovery Period
* 10.3 Data
* 10.3.1 Contact tracing
* 10.3.2 Testing data
* 10.3.3 Unreported and uncertainty in the number of reported case data
* 10.3.4 Clusters
* 10.3.5 More phenomenological model to fit the data
* 10.3.6 Wasted water data
* 10.3.7 Discrete and random modeling
* 10.3.8 Time series and wavelet approaches
* 10.3.9 Transmission estimation and spatial modeling
* 10.3.10 Forecasting methods
* 10.4 SIR like models
* 10.4.1 Multigroups or multiscale models
* 10.4.2 Model with unreported or asymptomatic compartment
* 10.5 Connecting reported case data with SIR like model
* 10.6 Re-infections, natural and hybrid immunity
* 10.7 Mortality
* 10.8 Vaccination and mitigation measures
* 10.9 Chronological age
* 10.10 Basic reproduction number
* 10.11 Prediction of COVID-19 evolution
* A When the output is a single exponential function
## 1 Introduction
The COVID-19 outbreak has been the catalyst for increased scientific activity, particularly in data collection and modeling the dynamics of new cases and deaths due to the outbreak.
Such scientific excitement contemporary with a pandemic is not new. Several historical epidemic episodes have led to significant advances in public health, biostatistics, databases, and discrete or continuous mathematical modeling of disease evolution, considering the mechanisms of contagion, host resistance, and mutation of the infectious agent. Historically, we can thus distinguish several epidemic outbreaks followed by important scientific breakthroughs:
* The plague epidemic of 1348 saw the development of the beginnings of epidemiology with the recording of cases at the abbey of St Antoine (Isere in France) and in the network of hospitals managed by the Antonin monks;
* During the London cholera epidemic of 1854, John Snow discovered the waterborne transmission of cholera, which led to significant changes to improve public health, notably by constructing improved sanitation facilities. This epidemic and its resolution by Snow even before the discovery of the responsible germ was a founding event in intervention epidemiology, with the validation of methods that can be applied to all diseases, not
just contagious (infectious or social), in particular the principle of coupling the mapping of patients with that of sources of water for domestic consumption, which would later lead to the development of Geographic Information Systems (GIS) in epidemiology and to work such as the collection of water used as a COVID-19 tracer in the French Obepine project ([https://www.research-obepine.fr/](https://www.research-obepine.fr/));
* The smallpox epidemic of 1760 led to the importation into Europe of the inoculation practiced in Turkey (subsequently leading to vaccination by inert vaccine by Jenner) and to the creation of the first models for predicting epidemic waves by Bernoulli and d'Alembert.
In the tradition of these past discoveries, we will therefore present some recent progress in modeling the dynamics of infectious diseases and their transmission mechanisms in this article.
The plan of the paper is the following. Section 2 presents some background about the reported data. We explain some phenomena related to data collection, such as contact tracing, daily numbers of tests, and more. In section 3, we explain the main idea behind the notion of a phenomenological model. In section 4, we introduce an epidemic model with unreported cases and explain how to compare such a model with the data. In section 5, we consider the exponential phase of an epidemic, where the phenomenological model will be an exponential function. In section 6, we consider a single epidemic wave, where the phenomenological model will be the Bernoulli-Verhulst model. We consider several successive epidemic waves in section 7. In section 8, we present some new results to understand how to compare the data and the epidemic models with several compartments during the exponential phase. In section 9, we consider a model with age groups and explain how to deal with data in large systems. Section 10 is a survey section where we try to give some references for a selected number of important topics to model epidemic outbreaks.
## 2 Reported and unreported data
### What are the unreported cases?
The unreported cases correspond to mild symptoms because people will only get tested in case of severe symptoms. Unreported cases can result from a lack of tests or from asymptomatic patients [146], that is, infected patients who do not show symptoms. Unreported cases are partly due to a low daily number of tests.
### Example of unreported cases
A published study traced COVID-19 infections resulting from a business meeting in Germany attended by a person who was infected but had no symptoms at the time [174]. Four people were eventually infected from this single contact.
A team in Japan [141] reports that 13 people evacuated from _Diamond Princess_ were infected, 4 of whom, or 31 %, never developed symptoms.
On the French _aircraft carrier Charles de Gaulle_, clinical and biological data for all 1739 crew members were collected on arrival at the Toulon harbor and during quarantine: 1121 crew members (64%) tested positive for COVID-19 using RT-PCR, and among these, 24% were asymptomatic [36].
### Testing data for New York state
The goal of the figure below is to show that, due to the changes in the method of detecting the cases, a jump occurred on February 12 in Wuhan, China. The testing technology was not well developed at the early beginning of the epidemic, and such a problem also occurs in other countries.
The dynamic of the daily number of tests is connected to the dynamic of the daily number of reported cases in a complex way [70].
The large peak in the number of tests at the end of April 2020 shows that the number of cases was strongly underestimated during that period, because increasing the number of tests increases the number of positive tests. Later on,
Figure 1: _Timeline of Exposure to Index Patient with Asymptomatic 2019-CoV Infection in Germany._
Figure 2: _Cumulative number of cases in Wuhan China._
the epidemic wave passed, and the changes in the number of tests had almost no influence on the number of positive tests.
The number of reported cases results from the combination of the dynamic of the number of tests (a complex dynamic which depends on human perceptions of the epidemic outbreak), the dynamic of the epidemic outbreak (which is also very complex, since the contact rate depends on human perceptions), and the dynamic of transmission (which can also be complex due to the changes of susceptibility in the population).
Figure 4 presents the flowchart of the model used in [70]. In Figure 5 (which was obtained in [70]), we use the daily number of tests as an input of the model, and we fit the output of the model to the cumulative number of cases.
Figure 3: _In this figure, we plot the daily number of tests for the New York State. The black curve, orange curve, and blue curve correspond respectively to the number of tests, the number of positive tests, and the number of negative tests._
In Figure 5, on the left-hand side, we consider the daily fluctuations of the number of reported cases (epidemic dynamic) and the daily number of tests (testing dynamics). Combining test dynamics and infection dynamics results in a complex time-parameterized curve. Nevertheless, we obtain a good correspondence between the top and the bottom left figures. The correspondence becomes excellent on the figures on the right, where we consider the cumulative
Figure 4: _Flow chart of the epidemic model. In this diagram, \(n(t)\), the daily number of tests at time \(t\), is an input of the model. We consider a fraction \((1-\sigma)\) of false negative tests and a fraction \(\sigma\) of true positive tests. The parameter \(g\) reflects the fact that the tests are devoted not only to the symptomatic patients but also to a large fraction of the population of New York state._
Figure 5: _The black curves are produced by using the New York state data only. The blue curves are constructed by using the model with the testing data as input of the model._
number of declared cases and the cumulative number of tests.
## 3 Phenomenological models
Along this note, we use phenomenological models to fit the data.
**Definition 3.1**.: _A phenomenological model is a mathematical model used to describe the data without a mechanistic description of the processes involved in the phenomenon._
In the next section, we will use exponential functions to get a continuous time representation of the data. This will be our first example of a phenomenological model. Our goal here is to replace the data by a function that captures the robust tendency of the phenomenon. In some sense, we are trying to get rid of the noise around the tendency.
By using, for example, spline functions, we can always fit the data perfectly. Then the fit is too precise to capture the significant information, and if we compute the derivatives of such a perfect fit, we will obtain a very noisy signal that is not meaningful.
Therefore the underlying idea of the phenomenological model is to derive a robust tendency with a limited number of parameters that will represent the data. Such a model is supposed to reduce the signal's noisy part and capture the robust part of the signal.
The phenomenological model can then replace the data, permitting analysis of some consequences when injected into the models. For example, we will obtain a meaningful range of parameters.
Figure 6: _We can apply statistical methods to estimate the parameters of the proposed phenomenological model and derive their average values with some confidence intervals. The phenomenological model is used at the first step of the modelling process, providing regularized data to the epidemic model and allowing the identification of its parameters._
## 4 Epidemic model with reported and unreported individuals
### Mathematical model
Transmissions between infectious and susceptible individuals are described by
\[\boxed{\left\{\begin{aligned} S^{\prime}(t)&=-\tau(t)\,S(t)\,I(t),\\ I^{\prime}(t)&=\tau(t)\,S(t)\,I(t)-\nu\,I(t),\end{aligned}\right.} \tag{4.1}\]
where \(S(t)\) is the number of susceptible and \(I(t)\) the number of infectious at time \(t\).
The system (4.1) is complemented with the initial data
\[\boxed{S(t_{0})=S_{0}\geq 0,\text{ and }I(t_{0})=I_{0}\geq 0,} \tag{4.2}\]
where \(t_{0}\) is a time from which the epidemic model (4.1) becomes applicable.
In this model, the rate of transmission \(\tau(t)\) combines the number of contacts per unit of time and the probability of transmission (see Section 6.1 for more information).
The number \(1/\nu\) is the average duration of the asymptomatic infectious period, \(\tau(t)\,S(t)\,I(t)\) is the flow of \(S\)-individuals becoming \(I\)-infected at time \(t\). That is,
\[\int_{t_{1}}^{t_{2}}\tau(\sigma)\,S(\sigma)\,I(\sigma)\mathrm{d}\sigma\]
is the number of individuals that became \(I\)-infected during the time interval \([t_{1},t_{2}]\).
Similarly, \(\nu\,I(t)\) is the flow of \(I\)-individuals leaving the \(I\)-compartment. That is
\[\int_{t_{1}}^{t_{2}}\nu\,I(\sigma)\mathrm{d}\sigma\]
is the number of individuals that left the \(I\)-compartment during the time interval \([t_{1},t_{2}]\).
The epidemic model associated with the flowchart in Figure 7 applies to the Hong Kong flu outbreak in New York City [127, 60].
We assume that the flow of reported individuals is a fraction \(0\leq f\leq 1\) of the flow of recovered individuals \(\nu\,I\). That is,
\[\boxed{\text{CR}^{\prime}(t)=f\,\nu\,I(t),\text{ for }t\geq t_{0},} \tag{4.3}\]
where \(\text{CR}(t)\) is the cumulative number of reported individuals, and \(f\) is the fraction of reported individuals. The fraction \(f\) is the fraction of patients with severe symptoms, and \(1-f\) the fraction of patients with mild symptoms.
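As a hedged illustration, the model (4.1)-(4.3) can be integrated numerically as sketched below. The values of \(S_{0}\), \(\nu\), and \(f\) echo those given below for France, while the transmission rate, the initial conditions, the function name, and the use of `scipy.integrate.solve_ivp` are our own illustrative choices.

```python
from scipy.integrate import solve_ivp

def run_si_model(tau, nu, f, S0, I0, CR0, t0, t_end):
    """Integrate the S-I model (4.1) together with the cumulative number of
    reported cases (4.3). tau may be a constant or a function of time."""
    tau_fun = tau if callable(tau) else (lambda t: tau)

    def rhs(t, y):
        S, I, CR = y
        new_infections = tau_fun(t) * S * I
        return [-new_infections,               # S' = -tau S I
                new_infections - nu * I,       # I' = tau S I - nu I
                f * nu * I]                    # CR' = f nu I

    return solve_ivp(rhs, (t0, t_end), [S0, I0, CR0], dense_output=True)

# Illustrative run (the transmission rate and initial conditions are not fitted values):
sol = run_si_model(tau=1.0e-8, nu=1.0/3.0, f=0.9, S0=6.7e7, I0=100.0,
                   CR0=0.0, t0=0.0, t_end=100.0)
print(sol.y[2, -1])   # cumulative reported cases at t_end
```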
### Given Parameters
In this study, the following parameters will be given:
* Number of susceptible individuals when the epidemic starts \[\boxed{\text{$S_{0}=67$ million for France}.}\]
* Time from which the epidemic model starts to be valid, also called initial time of the model \[\boxed{\text{$t_{0}$}.}\]
**Remark 4.1**.: _The time \(t_{0}\) is a time at which the epidemic phase has already started._
* The average duration of infectiousness \[\boxed{\text{$\frac{1}{\nu}=3$ days}.}\]
* The fraction of reported individuals \[\boxed{\text{$f=0.9$}.}\]
Figure 7: _Flowchart._
### Computed parameters
The following parameters will be obtained by comparing the output of the model and the data:
* \(I_{0}\) the number of asymptomatic infectious patients at the start of the epidemic.
* \(\tau(t)\) the rate of transmission.
## 5 Modeling the exponential phase
At the early stage of the epidemic, we can assume that \(S(t)\) is constant and equal to \(S_{0}\). We can also assume that \(\tau(t)\) remains constant, equal to \(\tau_{0}=\tau(t_{0})\). Therefore, substituting these values into the \(I\)-equation of system (4.1), we obtain
\[I^{\prime}(t)=(\tau_{0}S_{0}-\nu)I(t).\]
Therefore
\[I(t)=I_{0}e^{\chi_{2}(t-t_{0})}, \tag{5.1}\]
where
\[\chi_{2}=\tau_{0}S_{0}-\nu. \tag{5.2}\]
### Initial number of infected and transmission rate
By using (4.3) and (5.1), we obtain
\[\text{CR}(t)=\chi_{1}\left(e^{\chi_{2}(t-t_{0})}-1\right)+\chi_{3}. \tag{5.3}\]
We observe that
\[\text{CR}(t_{0})=\chi_{3},\]
then \(\chi_{3}\) is a parameter which must be estimated from the data.
By using (4.3) at \(t_{0}\), we obtain
\[I_{0}=\frac{\text{CR}^{\prime}(t_{0})}{\nu\,f}=\frac{\chi_{1}\,\chi_{2}}{\nu \,f}, \tag{5.4}\]
and by using (5.2)
\[\tau_{0}=\frac{\chi_{2}+\nu}{S_{0}}.\]
Note that the above estimations of \(I_{0}\) and \(\tau_{0}\) are robust since we used the data over a period (i.e., not only at \(t_{0}\)) to evaluate \(\chi_{1},\chi_{2}\).
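In practice, the fit of (5.3) and the identification (5.4) can be carried out with a standard nonlinear least-squares routine, as sketched below on synthetic data; the data, initial guesses, and fixed parameters are illustrative and are not the values used for mainland China.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit CR(t) = chi1 (exp(chi2 (t - t0)) - 1) + chi3 (Equation 5.3) to early
# cumulative reported-case data, then recover I0 and tau0 from (5.4) and (5.2).
t0, nu, f, S0 = 0.0, 0.2, 0.5, 1.4e9
days = np.arange(0.0, 12.0)
cum_cases = 600.0 + 4.0 * (np.exp(0.26 * days) - 1.0)      # synthetic placeholder data

def model(t, chi1, chi2, chi3):
    return chi1 * (np.exp(chi2 * (t - t0)) - 1.0) + chi3

(chi1, chi2, chi3), _ = curve_fit(model, days, cum_cases, p0=[1.0, 0.3, cum_cases[0]])

I0 = chi1 * chi2 / (nu * f)      # Equation (5.4)
tau0 = (chi2 + nu) / S0          # from Equation (5.2)
print(I0, tau0)
```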
### Application to COVID-19 in mainland China
The figures below are taken from [51] (see [120] for similar results).
**Remark 5.1**.: _Fixing \(f=0.5\) and \(\nu=0.2\), we obtain_
\[I_{0}=3.7366\times 0.2650\times\exp(0.2650\times 19)/(0.2\times 0.5)=1521,\]
_and_
\[\tau_{0}=\frac{0.2650+0.2}{1.4\times 10^{9}}=3.3214\times 10^{-10}.\]
One may compare Figure 2 with Figure 9 and realize that there is no more jump in Figure 9. Here, we canceled out the jump in Figure 2 due to a change of method in counting the number of cases. More precisely, on February 16, 2020, the cumulative data in Figure 2 jumps by 17409 cases (the original data are available in [118, Table 2]). From that day, public health authorities in China decided to include the patients showing symptoms.
Figure 8: _In this figure, we plot the best fit of the exponential model to the cumulative number of reported cases of COVID-19 in mainland China between February 19 and March 1. We obtain \(\chi_{1}=3.7366\), \(\chi_{2}=0.2650\) and \(\chi_{3}=615.41\) with \(t_{0}=19\) Feb. The parameter \(\chi_{3}\) is obtained by minimizing the error between the best exponential fit and the data._
**Remark 5.2**.: _It is important to understand that, throughout this article, we fit the cumulative reported data by using a phenomenological model. The reason is simple: the cumulative data are much smoother, while the daily numbers of reported cases fluctuate much more. Therefore, it is "in theory" much easier to fit the cumulative data with a phenomenological model. Unfortunately, the problem is not that simple. For example, in the exponential phase, we obtain the parameters_
\[\mathrm{CR}(t)=\chi_{1}\left(e^{\chi_{2}(t-t_{0})}-1\right)+\chi_{3}\]
_by using a best fit to the cumulative number of cases._
_Next, when we compute the first derivative of the above model to the daily number of cases, this gives a pretty reasonable approximation of the daily number of reported cases._
_Another way to avoid differentiating \(t\to\mathrm{CR}(t)\) is to use the following model_
\[D^{\prime}(t)=f\nu I(t)-D(t).\]
_In this model, we use the same input flow of infected individuals as in the model for the cumulative number of cases, but here we assume that reported individuals stay only one day in the \(D\) compartment. This model is also equivalent to_
\[D(t)=e^{-(t-t_{0})}D_{0}+\int_{t_{0}}^{t}e^{-(t-\sigma)}f\nu I(\sigma)d\sigma,\]
_and, since \(f\nu I(\sigma)=\mathrm{CR}^{\prime}(\sigma)\), by replacing \(f\nu I(\sigma)\) with the derivative of the phenomenological model fitted to the cumulative data, we obtain a formula for the daily number of cases._
_So, during the exponential phase, once we obtain the best fit of the model to the cumulative data, the daily number of cases is given by_
\[D(t)=e^{-(t-t_{0})}D_{0}+\int_{t_{0}}^{t}e^{-(t-\sigma)}\,\chi_{1}\chi_{2}\,e^{\chi_{2}(\sigma-t_{0})}\,\mathrm{d}\sigma.\]
_The model's advantage is that it avoids computing a derivative of the cumulative number of cases, which can be an issue._
### Spectral method in epidemic time series
During the COVID-19 pandemic, most people viewed the oscillations around the exponential growth at the beginning of an epidemic wave as a defect
Figure 9: _In this figure, the black dots represent the cumulative number of cases for China (with correction for the jump presented in Figure 2). The period marked in red corresponds to the period considered in Figure 8._
in the reporting of the data. The residual is probably partly due to the data-reporting process (random noise). Nevertheless, a significant remaining part of such oscillations could be connected to the infection dynamic at the level of a single average patient. Ultimately, the central question we try to address here is: Is there some hidden information in the signal around the exponential tendency for COVID-19 data? So we consider the early stage of an epidemic phase, and we try to exploit the oscillations around the tendency in order to reconstruct the infection dynamic at the level of a single average patient. We investigate this question in [53].
The figures below are taken from [53, see Figures 13 and 14].
Then in the figure below we plot the first residual. That is,
\[\mathrm{Residual}_{1}(t)=\mathrm{CR}(t)-\left[A_{1}e^{\alpha_{1}t}+C_{1}\right].\]
Figure 10: _In this figure, we plot the cumulative number of reported cases data for Japan between \(19\) October and \(19\) November \(2020\) (black dots). We plot the best fit of the model (7.1) to the cumulative data (red curve)._
### Monotone property of the cumulative distribution
The influence of the errors made in the estimations (at the early stage of the epidemic) has been considered in the recent article [171]. To understand this problem, let us first consider the case of the rate of transmission \(\tau(t)=\tau_{0}\) in the model (4.1).
**From the epidemic model to the data** Assume that the transmission rate \(\tau(t)\) is constant equal to \(\tau>0\) in the model (4.1). Then by integrating the \(S\)-equation in model (4.1) between \(t_{0}\) and \(t\), we obtain
\[S(t)=S_{0}e^{-\tau\mathrm{CI}(t)} \tag{5.5}\]
where
\[\mathrm{CI}(t)=\int_{t_{0}}^{t}I(\sigma)d\sigma.\]
Moreover
\[I^{\prime}(t)=\tau S(t)I(t)-\nu I(t).\]
Replacing \(S(t)\) by using (5.5), and integrating between \(t_{0}\) and \(t\), we obtain
\[I(t)=I_{0}+S_{0}\left(1-e^{-\tau\mathrm{CI}(t)}\right)-\nu\mathrm{CI}(t).\]
Remembering that \(\mathrm{CI}^{\prime}(t)=I(t)\), we conclude that the cumulative number of cases should follow a single ordinary differential equation
\[\boxed{\mathrm{CI}(t)^{\prime}=I_{0}+S_{0}\left(1-e^{-\tau\mathrm{CI}(t)} \right)-\nu\mathrm{CI}(t).} \tag{5.6}\]
Figure 11: _In this figure, we plot the first residual when subtracting the exponential tendency obtained in Figure 9 to the cumulative reported cases data between \(19\) October and \(19\) November \(2020\) (black dots). We plot the best fit of the model to the first residual (red curve)._
Equation (5.6) is complemented with the initial condition
\[\boxed{\mathrm{CI}(t_{0})=\mathrm{CI}_{0}\geq 0.}\]
This equation should be a good phenomenological model whenever \(t\mapsto\tau(t)\) is a constant function. We refer to [187], and [59, Chapter 8] for a comprehensive presentation on the monotone ordinary differential equations.
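A short sketch of how (5.6) can be integrated numerically is given below; the parameter values echo the estimates obtained earlier for mainland China, but the time horizon and solver choice are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cumulative_infected(tau, nu, S0, I0, CI0, t_span):
    """Solve the scalar equation (5.6) for the cumulative number of infectious
    CI(t) when the transmission rate tau is constant."""
    def rhs(t, CI):
        return I0 + S0 * (1.0 - np.exp(-tau * CI)) - nu * CI
    return solve_ivp(rhs, t_span, [CI0], dense_output=True)

sol = cumulative_infected(tau=3.32e-10, nu=0.2, S0=1.4e9, I0=1521.0,
                          CI0=0.0, t_span=(0.0, 120.0))
print(sol.y[0, -1])   # CI at the end of the run
```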
**Theorem 5.3**.: _Let \(t>t_{0}\) be fixed. The cumulative number of infectious \({\rm CI}(t)\) is strictly increasing with respect to the following quantities_
* \(I_{0}>0\) _the initial number of infectious individuals;_
* \(S_{0}>0\) _the initial number of susceptible individuals;_
* \(\tau>0\) _the transmission rate;_
* \(1/\nu>0\) _the average duration of the infectiousness period._
**Estimated initial number of infected and transmission rate.**
Assume that the parameters \(\chi_{1}\) and \(\chi_{2}\) are estimated with a 95% confidence interval
\[\chi_{1,95\%}^{-}\leq\chi_{1}\leq\chi_{1,95\%}^{+},\]
and
\[\chi_{2,95\%}^{-}\leq\chi_{2}\leq\chi_{2,95\%}^{+}.\]
We obtain
\[I_{0,95\%}^{-}:=\frac{\chi_{1,95\%}^{-}\,\chi_{2,95\%}^{-}\,e^{\chi_{2,95 \%}^{-}\,t_{0}}}{\nu\,f}\leq I_{0}\leq I_{0,95\%}^{+}:=\frac{\chi_{1,95\% }^{+}\,\chi_{2,95\%}^{+}e^{\chi_{2,95\%}^{+}\,t_{0}}}{\nu\,f},\]
and
\[\tau_{0,95\%}^{-}:=\frac{\chi_{2,95\%}^{-}+\nu}{S_{0}}\leq\tau_{0}\leq\tau_{ 0,95\%}^{+}:=\frac{\chi_{2,95\%}^{+}+\nu}{S_{0}}.\]
**Remark 5.4**.: _By using the data for mainland China we obtain_
\[\chi_{1,95\%}^{-}=1.57,\,\chi_{1,95\%}^{+}=5.89,\,\chi_{2,95\%}^{-}=0.24,\, \chi_{2,95\%}^{+}=0.28.\]
In Figure 12, we plot the upper and lower solutions \({\rm CR}^{+}(t)\) (obtained by using \(I_{0}=I_{0,95\%}^{+}\) and \(\tau_{0}=\tau_{0,95\%}^{+}\)) and \({\rm CR}^{-}(t)\) (obtained by using \(I_{0}=I_{0,95\%}^{-}\) and \(\tau_{0}=\tau_{0,95\%}^{-}\)) corresponding to the blue region and the black curve corresponds to the best estimated values \(I_{0}=1521\) and \(\tau_{0}=3.3214\times 10^{-10}\).
Recall that the final size of the epidemic corresponds to the positive equilibrium of (5.6)
\[0=I_{0}+S_{0}\left[1-\exp\left(-\tau_{0}{\rm CI}_{\infty}\right)\right]-\nu{ \rm CI}_{\infty}.\]
In Figure 12, the changes in the parameters \(I_{0}\) and \(\tau_{0}\) within their 95% confidence intervals do not significantly affect the final size of the epidemic.
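The final size itself can be obtained directly by root finding on the equilibrium equation above, as in the following sketch; the bracketing interval is our choice, and the parameter values are the estimates quoted in the text.

```python
from math import exp
from scipy.optimize import brentq

def final_size(tau0, nu, S0, I0):
    """Positive equilibrium CI_inf of Equation (5.6): the final size of the
    epidemic for a constant transmission rate."""
    F = lambda CI: I0 + S0 * (1.0 - exp(-tau0 * CI)) - nu * CI
    # F(1) > 0 and F((I0 + S0)/nu) = -S0 exp(-tau0 (I0 + S0)/nu) < 0,
    # so the positive root is bracketed.
    return brentq(F, 1.0, (I0 + S0) / nu)

print(final_size(tau0=3.32e-10, nu=0.2, S0=1.4e9, I0=1521.0))
```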
**Remark 5.5**.: _Theorem 5.3 can be used day by day to fit the cumulative number of infected \(\mathrm{CI}(t)\). Indeed, if we assume that \(\tau(t)\) is piecewise constant, day by day, we can use the monotone properties to find a unique daily value of \(\tau\) for which the model matches the cumulative data exactly. Such an algorithm was developed in [51]._
## 6 Modeling a single epidemic wave
### What factors govern the transmission of pathogens
Estimating the average transmission rate is one of the most crucial challenges in the epidemiology of communicable diseases. This rate conditions the entry into the epidemic phase of the disease and its return to the extinction phase if it has diminished sufficiently. It is the combination of three factors: the coefficient of virulence, linked to the infectious agent (in the case of transmissible infectious diseases); the coefficient of susceptibility, linked to the host (these two are summarized in the probability of transmission); and the number of contacts per unit of time between individuals (see [126]). The coefficient of virulence may change over time due to mutation over the course of the disease history. The second and third factors may also change if mitigation measures have been taken. This was the case in China from the start of the pandemic (see [162]). Monitoring the decrease in the average transmission rate is an excellent way to monitor the effectiveness of these mitigation measures. Estimating this rate is therefore a central problem in the fight against epidemics.
Figure 12: _In this figure, the black curve corresponds to the cumulative number of reported cases \(\mathrm{CR}(t)\) obtained from the model (5.4) with \(\mathrm{CR}^{\prime}(t)=\nu fI(t)\) by using the values \(I_{0}=1521\) and \(\tau_{0}=3.32\times 10^{-10}\) obtained from our method and the early data from February 19 to March 1. The blue region corresponds the \(95\%\) confidence interval when the rate of transmission \(\tau(t)\) is constant and equal to the estimated value \(\tau_{0}=3.32\times 10^{-10}\)._
The transmission rate may vary over time, and it may significantly impact epidemic outbreaks. As explained in [126], the transmission rate can be decomposed as follow
\[\tau(t)=\frac{\text{the probability of transmission}}{\text{the average duration of a contact}}.\]
In this formula, the transmission probability may depend on climatic changes (temperature, humidity, ultraviolet, and other external factors), and the average duration of contact depends on human social behavior. It can be noted that the transmission rate is proportional to the inverse of the average contact duration because the shorter the average contact duration, the greater the number of contacts per unit of time.
**Remark 6.1**.: _A model was proposed by [45] to describe the evolution of the transmission rate during a single epidemic wave. Namely, the model is the following_
\[\tau(t)=\left\{\begin{array}{ll}\tau_{0},&\text{ if }t_{0}\leq t\leq N,\\ \tau_{0}\left(p\,e^{-\mu(t-N)}+(1-p)\right),&\text{ if }t\geq N,\end{array}\right.\]
_where \(N\) corresponds to the day when the public measures take effect, and \(\mu\) is the rate at which they take effect (this parameter describes the speed at which the public measures are taking place). The fraction \(0\leq p\leq 1\) is the fraction by which the transmission rate is reduced when applying public measures. We can rewrite this model more compactly by using \(t^{+}=\max(t,0)\), the positive part of \(t\). That is,_
\[\tau(t)=\tau_{0}\left(p\,e^{-\mu(t-N)^{+}}+(1-p)\right),\]
_Such a model was successfully used by [118, 17, 119] and others._
_Nevertheless, the model for joining the end of an epidemic wave to the next epidemic wave is still unknown. A tentative model was proposed in [17]._
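A direct implementation of this transmission-rate model is straightforward; the parameter values in the example below are illustrative only.

```python
import numpy as np

def transmission_rate(t, tau0, p, mu, N):
    """Time-dependent transmission rate of Remark 6.1:
    tau(t) = tau0 * (p * exp(-mu * (t - N)^+) + (1 - p)).
    Works for scalar or array t."""
    t_plus = np.maximum(np.asarray(t, dtype=float) - N, 0.0)
    return tau0 * (p * np.exp(-mu * t_plus) + (1.0 - p))

# Before day N the rate equals tau0; long after N it settles at tau0 * (1 - p):
print(transmission_rate([0.0, 25.0, 200.0], tau0=3.3e-10, p=0.8, mu=0.1, N=25.0))
```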
Contact patterns are impacted by social distancing measures. The average number of contacts per unit of time depends on the population density [170, 182]. The probability of transmission depends on the virulence of the pathogen, which can depend on the temperature, the humidity, and ultraviolet radiation [49, 202]. In COVID-19, the level of susceptibility may depend on blood group and genetic lineage. It is indeed suspected that:
* Blood group [73] : Blood group O is associated with a lower susceptibility to SARS-CoV2;
* Genetic lineage [219]: A gene cluster inherited from Neanderthals has been identified as a risk factor for severe symptoms.
### More results and references about the time dependent transmission rate modeling
Throughout this section, the parameter \(S_{0}=1.4\times 10^{9}\) will be the entire population of mainland China (since COVID-19 is a newly emerging disease). The actual number of susceptibles \(S_{0}\) can be smaller, since some individuals can be partially (or totally) immunized by previous infections or other factors. This can be true even for Sars-CoV2, although COVID-19 is a newly emerging disease.
At the early beginning of the epidemic, the average duration of the infectious period \(1/\nu\) is unknown, since the virus had never been investigated in the past. Therefore, at the early beginning of the COVID-19 epidemic, medical doctors and public health scientists used previously estimated values of the average duration of the infectious period to make some public health recommendations. Here we show that the average infectious period is impossible to estimate by using only the time series of reported cases, and must therefore be identified by other means. Actually, with the data of Sars-CoV2 in mainland China, we will fit the cumulative number of reported cases almost perfectly for any non-negative value \(1/\nu<3.3\) days. In the literature, several estimations were obtained: 11 days in [226], 9.5 days in [87], 8 days in [125], and 3.5 days in [114]. The recent survey by Byrne et al. [37] focuses on this subject.
**Result**
In Section 6.4, our analysis shows that
* It is hopeless to estimate the exact value of the duration of infectiousness by using SI models. Several values of the average duration of the infectious period give the exact same fit to the data.
* We can estimate an upper bound for the duration of infectiousness by using SI models. In the case of Sars-CoV2 in mainland China, this upper bound is 3.3 days.
In [174], it is reported that transmission of COVID-19 infection may occur from an infectious individual who is not yet symptomatic. In [207], it is reported that COVID-19 infected individuals generally develop symptoms, including mild respiratory symptoms and fever, on average \(5-6\) days after the infection date (with a confidence of 95%, range \(1-14\) days). In [215], it is reported that the median time prior to symptom onset is 3 days, the shortest 1 day, and the longest 24 days. It is evident that these time periods play an important role in understanding COVID-19 transmission dynamics. Here the fraction of reported individuals \(f\) is unknown as well.
### Theoretical formula for \(\tau(t)\)
By using the S-equation of model (4.1) we obtain
\[S(t)=S_{0}\exp\left(-\int_{t_{0}}^{t}\tau(\sigma)\,I(\sigma)\mathrm{d}\sigma \right),\]
next by using the I-equation of model (4.1) we obtain
\[I^{\prime}(t)=S_{0}\exp\left(-\int_{t_{0}}^{t}\tau(\sigma)\,I(\sigma)\mathrm{d }\sigma\right)\tau(t)\,I(t)-\nu I(t),\]
and by taking the integral between \(t\) and \(t_{0}\) we obtain a Volterra integral equation for the cumulative number of infectious
\[\mathrm{CI}^{\prime}(t)=I_{0}+S_{0}\left[1-\exp\left(-\int_{t_{0}}^{t}\tau( \sigma)\,I(\sigma)\mathrm{d}\sigma\right)\right]-\nu\mathrm{CI}(t), \tag{6.1}\]
which is equivalent to (by using (4.3))
\[\mathrm{CR}^{\prime}(t)=\nu\,f\left(I_{0}+S_{0}\left[1-\exp\left(-\frac{1}{ \nu\,f}\int_{t_{0}}^{t}\tau(\sigma)\,\mathrm{CR}^{\prime}(\sigma)\mathrm{d} \sigma\right)\right]\right)+\nu\,\mathrm{CR}_{0}-\nu\mathrm{CR}(t). \tag{6.2}\]
Figure 13: _In this figure, the black dots represent the cumulative number of cases for China (with a correction for the jump presented in Figure 2). The period marked in red corresponds to the period considered in Figure 8. The yellow curve corresponds to the number of infected obtained using model (4.1) with a constant rate of transmission \(\tau(t)\). We observe a rapid divergence between the epidemic model and the data whenever the transmission rate is constant with time._
The following result makes it possible to obtain a perfect match between the SI model and the data, through a suitable choice of the time-dependent rate of transmission \(\tau(t)\).
**Theorem 6.2**.: _Let \(S_{0}\), \(\nu\), \(f\), \(I_{0}>0\) and \(\mathrm{CR}_{0}\geq 0\) be given. Let \(t\to I(t)\) be the second component of system (4.1). Let \(\widehat{\mathrm{CR}}:[t_{0},\infty)\to\mathbb{R}\) be a twice continuously differentiable function satisfying_
\[\widehat{\mathrm{CR}}(t_{0})=\mathrm{CR}_{0}, \tag{6.3}\]
\[\widehat{\mathrm{CR}}^{{}^{\prime}}(t_{0})=\nu\,f\,I_{0}, \tag{6.4}\]
\[\widehat{\mathrm{CR}}^{{}^{\prime}}(t)>0,\forall t\geq t_{0}, \tag{6.5}\]
_and_
\[\nu f\left(I_{0}+S_{0}\right)-\widehat{\mathrm{CR}}^{{}^{\prime}}(t)-\nu \left(\widehat{\mathrm{CR}}(t)-\mathrm{CR}_{0}\right)>0,\forall t\geq t_{0}. \tag{6.6}\]
_Then_
\[\widehat{\mathrm{CR}}(t)=\mathrm{CR}_{0}+\nu f\int_{t_{0}}^{t}I\left(s\right) ds,\forall t\geq t_{0}, \tag{6.7}\]
_if and only if_
\[\tau(t)=\frac{\nu f\left(\frac{\widehat{\mathrm{CR}}^{{}^{\prime\prime}}(t)}{ \widehat{\mathrm{CR}}^{{}^{\prime}}(t)}+\nu\right)}{\nu f\left(I_{0}+S_{0} \right)-\widehat{\mathrm{CR}}^{{}^{\prime}}(t)-\nu\left(\widehat{\mathrm{CR}} (t)-\mathrm{CR}_{0}\right)}. \tag{6.8}\]
Proof.: Assume first (6.7) is satisfied. Then by using equation (6.1) we deduce that
\[S_{0}\exp\left(-\int_{t_{0}}^{t}\tau(\sigma)I(\sigma)d\sigma\right)=I_{0}+S_{ 0}-I(t)-\nu\mathrm{CI}(t).\]
Therefore
\[\int_{t_{0}}^{t}\tau(\sigma)I(\sigma)d\sigma=\ln\left[\frac{S_{0}}{I_{0}+S_{0} -I(t)-\nu\mathrm{CI}(t)}\right]=\ln\left(S_{0}\right)-\ln\left[I_{0}+S_{0}-I(t )-\nu\mathrm{CI}(t)\right]\]
therefore by taking the derivative on both sides
\[\tau(t)I(t)=\frac{I^{\prime}(t)+\nu I(t)}{I_{0}+S_{0}-I(t)-\nu\mathrm{CI}(t) }\Leftrightarrow\tau(t)=\frac{\frac{I^{\prime}(t)}{I(t)}+\nu}{I_{0}+S_{0}-I( t)-\nu\mathrm{CI}(t)} \tag{6.9}\]
and by using the fact that \(\mathrm{CR}(t)-\mathrm{CR}_{0}=\nu f\mathrm{CI}(t)\) we obtain (6.8).
Conversely, assume that \(\tau(t)\) is given by (6.8). Then if we define \(\widetilde{I}(t)=\widehat{\mathrm{CR}}^{{}^{\prime}}(t)/\nu f\) and \(\widetilde{\mathrm{CI}}(t)=\left(\widehat{\mathrm{CR}}(t)-\mathrm{CR}_{0} \right)/\nu f\), by using (6.3) we deduce that
\[\widetilde{\mathrm{CI}}(t)=\int_{t_{0}}^{t}\widetilde{I}(\sigma)d\sigma,\]
and by using (6.4)
\[\widetilde{I}(t_{0})=I_{0}. \tag{6.10}\]
Moreover from (6.8) we deduce that \(\widetilde{I}(t)\) satisfies (6.9). By using (6.10) we deduce that \(t\to\widetilde{\mathrm{CI}}(t)\) is a solution of (6.1). By uniqueness of the solution of (6.1), we deduce that \(\widetilde{\mathrm{CI}}(t)=\mathrm{CI}(t),\forall t\geq t_{0}\) or equivalently \(\mathrm{CR}(t)=\mathrm{CR}_{0}+\nu f\int_{t_{0}}^{t}I\left(s\right)ds,\forall t \geq t_{0}\). The proof is completed.
The formula (6.8) was already obtained by Hadeler [74, see Corollary 2].
### Explicit formula for \(\tau(t)\) and \(I_{0}\)
In 1766, Bernoulli [23] investigated an epidemic phase followed by an endemic phase. This appears clearly in Figures 9 and 10 of [57], which revisited the original article of Bernoulli. We also refer to [24] for another article revisiting the original work of Bernoulli. A similar article has been re-written in French as well by [25]. In 1838, Verhulst [199] introduced the same equation to describe population growth. Several works comparing cumulative reported cases data and the Bernoulli-Verhulst model appear in the literature (see [86, 204, 227]). The Bernoulli-Verhulst model is sometimes called Richards' model, although Richards' work came much later, in 1959.
Many phenomenological models have been compared to the data during the first phase of the COVID-19 outbreak. We refer to the paper of [197] for a nice survey on the generalized logistic equations. Let us consider here for example, the Bernoulli-Verhulst equation
\[\mathrm{CR}^{\prime}(t)=\chi_{2}\,\mathrm{CR}(t)\left(1-\left(\frac{\mathrm{ CR}(t)}{\mathrm{CR}_{\infty}}\right)^{\theta}\right),\forall t\geq t_{0}, \tag{6.11}\]
supplemented with the initial data
\[\mathrm{CR}(t_{0})=\mathrm{CR}_{0}\geq 0.\]
Let us recall the explicit formula for the solution of (6.11)
\[\mathrm{CR}(t)=\frac{e^{\chi_{2}(t-t_{0})}\mathrm{CR}_{0}}{\left[1+\frac{\chi _{2}\theta}{\mathrm{CR}_{\infty}^{\theta}}\int_{t_{0}}^{t}\left(e^{\chi_{2}( \sigma-t_{0})}\mathrm{CR}_{0}\right)^{\theta}d\sigma\right]^{1/\theta}}=\frac{ e^{\chi_{2}(t-t_{0})}\mathrm{CR}_{0}}{\left[1+\frac{\mathrm{CR}_{0}^{\theta}}{ \mathrm{CR}_{\infty}^{\theta}}\left(e^{\chi_{2}\theta(t-t_{0})}-1\right) \right]^{1/\theta}}. \tag{6.12}\]
The model's main advantage is that it is rich enough to fit the data while involving only a limited number of parameters: to fit this model to the data, we only need to estimate the four parameters \(\chi_{2},\theta,\mathrm{CR}_{0},\) and \(\mathrm{CR}_{\infty}\).
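As an illustration, a minimal Python sketch of this fitting step might look as follows. It is not the procedure used to produce Figure 14: the data array is replaced by synthetic values, the initial guesses are assumptions, and a standard nonlinear least-squares routine is used.

```python
# A minimal sketch (not the authors' original code): fitting the explicit
# Bernoulli-Verhulst solution (6.12) to cumulative reported-case data.
import numpy as np
from scipy.optimize import curve_fit

def bernoulli_verhulst(t, chi2, theta, CR0, CRinf, t0=0.0):
    """Explicit solution (6.12) of CR'(t) = chi2*CR*(1 - (CR/CRinf)**theta)."""
    num = np.exp(chi2 * (t - t0)) * CR0
    den = (1.0 + (CR0 / CRinf) ** theta
           * (np.exp(chi2 * theta * (t - t0)) - 1.0)) ** (1.0 / theta)
    return num / den

# days: time in days since t0; cr_data: cumulative reported cases (synthetic placeholder)
days = np.arange(0, 60, dtype=float)
cr_data = bernoulli_verhulst(days, 0.66, 0.22, 198.0, 67102.0)

# Fit the four parameters chi2, theta, CR0, CRinf (initial guesses are assumptions).
popt, _ = curve_fit(bernoulli_verhulst, days, cr_data,
                    p0=[0.5, 0.3, cr_data[0], cr_data[-1] * 1.2], maxfev=20000)
chi2_hat, theta_hat, CR0_hat, CRinf_hat = popt
print(chi2_hat, theta_hat, CR0_hat, CRinf_hat)
```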
**Remark 6.3**.: _Plenty of possibilities exist to fit the data, including split functions (irregular functions with many parameters) and others. In [35], the authors proposed several possible alternatives, including a generalized logistic equation of the form_
\[\mathrm{CR}^{\prime}(t)=\chi_{2}\,\mathrm{CR}(t)^{\theta}\left(1-\left(\frac{ \mathrm{CR}(t)}{\mathrm{CR}_{\infty}}\right)\right),\forall t\geq t_{0}.\]
_The above equation has no explicit solution. Therefore it is more difficult to use than the Bernoulli-Verhulst model. We also refer to [153, 154] for more phenomenological models to fit an epidemic wave._
By combining (6.1) and the Bernoulli-Verhulst equation (6.11) for \(t\to\operatorname{CR}(t)\), we deduce the initial number of infected
\[I_{0}=\frac{\operatorname{CR}^{\prime}(t_{0})}{\nu\,f}=\frac{\chi_{2} \operatorname{CR}_{0}\left(1-\left(\frac{\operatorname{CR}_{0}}{\operatorname {CR}_{\infty}}\right)^{\theta}\right)}{\nu\,f}. \tag{6.13}\]
**Remark 6.4**.: _We fix \(f=0.5\). From the COVID-19 data in mainland China and formula (6.13) (with \(\operatorname{CR}_{0}=198\)), we obtain_
\[I_{0}=1909\text{ for }\nu=0.1,\]
_and_
\[I_{0}=954\text{ for }\nu=0.2.\]
By using (6.11) we deduce that
\[\operatorname{CR}^{\prime\prime}(t) =\chi_{2}\operatorname{CR}^{\prime}(t)\left(1-\left(\frac{ \operatorname{CR}(t)}{\operatorname{CR}_{\infty}}\right)^{\theta}\right)- \frac{\chi_{2}\theta}{\operatorname{CR}_{\infty}^{\theta}}\operatorname{CR}(t )\left(\operatorname{CR}(t)\right)^{\theta-1}\operatorname{CR}^{\prime}(t)\] \[=\chi_{2}\operatorname{CR}^{\prime}(t)\left(1-\left(\frac{ \operatorname{CR}(t)}{\operatorname{CR}_{\infty}}\right)^{\theta}\right)- \frac{\chi_{2}\theta}{\operatorname{CR}_{\infty}^{\theta}}\,\left( \operatorname{CR}(t)\right)^{\theta}\operatorname{CR}^{\prime}(t),\]
therefore
\[\operatorname{CR}^{\prime\prime}(t)=\chi_{2}\operatorname{CR}^{\prime}(t) \left(1-(1+\theta)\left(\frac{\operatorname{CR}(t)}{\operatorname{CR}_{ \infty}}\right)^{\theta}\right). \tag{6.14}\]
Figure 14: _In this figure, we plot the best fit of the Bernoulli-Verhulst model to the cumulative number of reported cases of COVID-19 in China. We obtain \(\chi_{2}=0.66\) and \(\theta=0.22\). The black dots correspond to data for the cumulative number of reported cases and the red curve corresponds to the model._
By using the Bernoulli-Verhulst equation (6.11) and substituting (6.14) in (6.8), we obtain
\[\tau(t)=\frac{\nu\,f\left(\chi_{2}\,\left(1-(1+\theta)\left(\frac{\mathrm{CR}(t)} {\mathrm{CR}_{\infty}}\right)^{\theta}\right)+\nu\right)}{\nu\,f\left(I_{0}+S_{0 }\right)+\nu\mathrm{CR}_{0}-\mathrm{CR}(t)\left(\chi_{2}\left(1-\left(\frac{ \mathrm{CR}(t)}{\mathrm{CR}_{\infty}}\right)^{\theta}\right)+\nu\right)}. \tag{6.15}\]
This formula (6.15) combined with (6.12) gives an explicit formula for the rate of transmission.
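A small numerical sketch of this computation is given below. The parameter values are the illustrative ones quoted in this section, and the helper functions are only an assumption about how one might evaluate (6.12) and (6.15) in practice.

```python
# A minimal sketch: the explicit transmission rate (6.15), evaluated along the
# Bernoulli-Verhulst curve (6.12); parameter values are illustrative.
import numpy as np

def CR(t, chi2=0.66, theta=0.22, CR0=198.0, CRinf=67102.0, t0=0.0):
    # explicit solution (6.12)
    return (np.exp(chi2*(t - t0))*CR0 /
            (1.0 + (CR0/CRinf)**theta*(np.exp(chi2*theta*(t - t0)) - 1.0))**(1.0/theta))

def tau(t, nu=0.2, f=0.5, S0=1.4e9, chi2=0.66, theta=0.22, CR0=198.0, CRinf=67102.0):
    # formula (6.15); the initial number of infected I0 comes from (6.13)
    I0 = chi2*CR0*(1.0 - (CR0/CRinf)**theta)/(nu*f)
    cr = CR(t, chi2, theta, CR0, CRinf)
    num = nu*f*(chi2*(1.0 - (1.0 + theta)*(cr/CRinf)**theta) + nu)
    den = nu*f*(I0 + S0) + nu*CR0 - cr*(chi2*(1.0 - (cr/CRinf)**theta) + nu)
    return num/den

t = np.linspace(0.0, 100.0, 400)
print(tau(t)[:5])   # transmission rate on the first few days
```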
Since \(\mathrm{CR}(t)<\mathrm{CR}_{\infty}\), by considering the sign of the numerator and the denominator of (6.15), we obtain the following proposition.
**Proposition 6.5**.: _The rate of transmission \(\tau(t)\) given by (6.15) is non negative for all \(t\geq t_{0}\) if_
\[\nu\geq\chi_{2}\,\theta, \tag{6.16}\]
_and_
\[f\left(I_{0}+S_{0}\right)+\nu\mathrm{CR}_{0}>\mathrm{CR}_{\infty}\left(\chi_{ 2}+\nu\right). \tag{6.17}\]
**Compatibility of the SI model with the COVID-19 data for mainland China**
The SI model is compatible with the data only when \(\tau(t)\) stays positive for all \(t\geq t_{0}\). From our estimation based on the Chinese COVID-19 data we obtain \(\chi_{2}\,\theta=0.14\). Therefore from (6.16) we deduce that the model is compatible with the data only when
\[1/\nu\leq 1/0.14\approx 7.1\ \mathrm{days}. \tag{6.18}\]
This means that the average duration of the infectious period \(1/\nu\) must be shorter than \(7.1\) days.
Similarly the condition (6.17) implies
\[f\geq\frac{\mathrm{CR}_{\infty}\chi_{2}+\left(\mathrm{CR}_{\infty}-\mathrm{CR }_{0}\right)\nu}{S_{0}+I_{0}}\geq\frac{\mathrm{CR}_{\infty}\chi_{2}+\left( \mathrm{CR}_{\infty}-\mathrm{CR}_{0}\right)\chi_{2}\,\theta}{S_{0}+I_{0}}\]
and since we have \(\mathrm{CR}_{0}=198\) and \(\mathrm{CR}_{\infty}=67102\), we obtain
\[f\geq\frac{67102\times 0.66+\left(67102-198\right)\times 0.14}{1.4\times 10^{9}} \geq 3.83\times 10^{-5}. \tag{6.19}\]
So according to this estimation, the reported fraction \(0<f\leq 1\) can be almost as small as we want.
Figure 15 illustrates the Proposition 6.5. We observe that the formula for the rate of transmission (6.15) becomes negative whenever \(\nu<\chi_{2}\theta\).
In Figure 16 we plot the numerical simulation obtained from (4.1)-(4.3) when \(t\to\tau(t)\) is replaced by the explicit formula (6.15). It is surprising that we can perfectly reproduce the original Bernoulli-Verhulst curve even when \(\tau(t)\) becomes negative. This was not guaranteed at first, since the I-class keeps losing individuals who recover.
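A minimal sketch of such a simulation, assuming the `tau` helper sketched above and a standard ODE integrator, could read as follows; parameter values are the illustrative ones used in Figure 16.

```python
# A minimal sketch: integrating the S-I system with the time-dependent tau(t)
# of (6.15), using the tau helper from the previous sketch.
import numpy as np
from scipy.integrate import solve_ivp

nu, f, S0 = 0.2, 0.5, 1.4e9
chi2, theta, CR0, CRinf = 0.66, 0.22, 198.0, 67102.0
I0 = chi2*CR0*(1.0 - (CR0/CRinf)**theta)/(nu*f)   # formula (6.13)

def rhs(t, y):
    S, I, CRm = y
    tr = tau(t, nu, f, S0, chi2, theta, CR0, CRinf)   # tau from the previous sketch
    return [-tr*S*I, tr*S*I - nu*I, nu*f*I]           # S', I', CR'

sol = solve_ivp(rhs, (0.0, 120.0), [S0, I0, CR0],
                t_eval=np.linspace(0.0, 120.0, 500), rtol=1e-6, atol=1.0)
S_t, I_t, CR_model = sol.y
print(CR_model[-1])   # should stay close to the Bernoulli-Verhulst curve
```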
Figure 15: _In this figure, we plot the rate of transmission obtained from formula (6.15) with \(f=0.5\), \(\chi_{2}\,\theta=0.14<\nu=0.2\) (in Figure (a)) and \(\nu=0.1<\chi_{2}\,\theta=0.14\) (in Figure (b)), \(\chi_{2}=0.66\) and \(\theta=0.22\) and \(\mathrm{CR}_{\infty}=67102\) which is the latest value obtained from the cumulative number of reported cases for China._
### Results
In [51], we designed an algorithm, based on the monotone property described in Theorem 5.3, to recover the transmission rate from the data. In this section, we reconsider the results presented in [51], where several methods were used to regularize the data.
In Figure 17 we plot several types of regularized cumulative data in figure (a) and several types of regularized daily data in figure (b). Among the different regularization methods, an important one is the Bernoulli-Verhulst best fit approximation.
Figure 16: _In this figure, we plot the number of reported cases by using model (4.1) and (6.1), and the rate of transmission is obtained in (6.15). The parameters values are \(f=0.5\), \(\nu=0.1\) or \(\nu=0.2\), \(\chi_{2}=0.66\) and \(\theta=0.22\) and \(\mathrm{CR}_{\infty}=67102\) is the latest value obtained from the cumulative number of reported cases for China. Furthermore, we use \(S_{0}=1.4\times 10^{9}\) for the total population of China and \(I_{0}=954\) which is obtained from formula (6.13). The black dots correspond to data for the cumulative number of reported cases observed and the blue curve corresponds to the model._
In Figure 18 we plot the rate of transmission \(t\rightarrow\tau(t)\) obtained by using Algorithm 2. We can see that the original data give a negative transmission rate, while at the other extreme the Bernoulli-Verhulst fit seems to give the most regularized transmission rate. In Figure 18-(a) we observe that we now recover almost perfectly the theoretical transmission rate obtained in (6.15). In Figure 18-(b) the rolling weekly average regularization and in Figure 18-(c) the Gaussian weekly average regularization still vary a lot, and in both cases the transmission rate becomes negative after some time. In Figure 18-(d) the original data give a transmission rate that is negative from the beginning. We conclude that it is crucial to find a "good" regularization of the daily number of cases. So far the best regularization method is obtained by using the best fit of the Bernoulli-Verhulst model.
Figure 17: _In this figure, we plot the cumulative number of reported cases (left) and the daily number of reported cases (right). The black curves are obtained by applying the MATLAB cubic spline function 'spline(Days,DATA)' to the cumulative data. The left-hand side is obtained by using the cubic spline function and the right-hand side by using the derivative of the cubic spline interpolation. The blue curves are obtained by applying the cubic spline function to the day-by-day values of the cumulative number of cases obtained from the best fit of the Bernoulli-Verhulst model. The orange curves are obtained by computing the rolling weekly average of the daily number of cases (we use the MATLAB function 'smoothdata' with the 'movmean' option and a weekly window) and then by applying the cubic spline function to the corresponding cumulative number of cases. The yellow curves are obtained by applying a Gaussian weekly average to the daily number of cases (we use the MATLAB function 'smoothdata' with the 'gaussian' option and a weekly window) and then by applying the cubic spline function to the corresponding cumulative number of cases._
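For readers who want to reproduce this comparison outside MATLAB, a rough Python transcription of the regularizations described in Figure 17 might look as follows. The placeholder data, the window-to-sigma mapping and the re-cumulation step are assumptions, not the original processing.

```python
# A minimal sketch of the regularizations compared in Figure 17, transposed to Python:
# cubic-spline interpolation of the raw data, rolling weekly mean, and Gaussian
# weekly smoothing of the daily counts.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.ndimage import gaussian_filter1d

days = np.arange(60, dtype=float)
cumulative = np.cumsum(np.random.poisson(50, size=60)).astype(float)   # placeholder data

raw_spline = CubicSpline(days, cumulative)                    # analogue of spline(Days,DATA)
daily = np.diff(cumulative, prepend=cumulative[0])
rolling_weekly = np.convolve(daily, np.ones(7)/7, mode="same")          # rolling weekly mean
gaussian_weekly = gaussian_filter1d(daily, sigma=2.0)   # approximate weekly Gaussian window

# Daily curves are re-cumulated and interpolated with a cubic spline,
# following the description in the caption of Figure 17.
rolling_cum = CubicSpline(days, cumulative[0] + np.cumsum(rolling_weekly))
gaussian_cum = CubicSpline(days, cumulative[0] + np.cumsum(gaussian_weekly))
print(raw_spline(10.5), rolling_cum(10.5), gaussian_cum(10.5))
```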
## 7 Modeling multiple epidemic waves
### Phenomenological model used for multiple epidemic waves
**Endemic phase:** During the endemic phase, the dynamics of new cases appears to fluctuate around an average value independently of the number of cases. Therefore the average cumulative number of cases is given by
\[\boxed{\text{CR}(t)=N_{0}+(t-t_{0})\times a,\text{ for }t\in[t_{0},t_{1}],} \tag{7.1}\]
where \(t_{0}\) denotes the beginning of the endemic phase, \(N_{0}\) is the number of new cases at time \(t_{0}\), and \(a\) is the average value of the daily number of new cases.
**Epidemic phase:** In the epidemic phase, the new cases are contributing to produce secondary cases. Therefore the daily number of new cases is no longer
Figure 18: _In this figure we plot the transmission rates \(t\to\tau(t)\) obtained by using Algorithm 2 with the parameters \(f=0.5\) and \(\nu=0.2\). In figure (a) we use the cumulative data obtained by using the Bernoulli-Verhulst regularization. In figure (b) we use the cumulative data obtained by using the rolling weekly average regularization. In figure (c) we use the cumulative data obtained by using the Gaussian weekly average regularization. In figure (d) we use the original cumulative data._
constant, but varies with time as follows
\[\boxed{\text{CR}(t)=N_{\text{base}}+\frac{\text{e}^{\chi(t-t_{0})}N_{0}}{\left[1+ \frac{N_{0}^{\theta}}{N_{\infty}^{\theta}}\left(\text{e}^{\chi\theta(t-t_{0})}- 1\right)\right]^{1/\theta}},\text{ for }t\in[t_{0},t_{1}].} \tag{7.2}\]
In other words, the daily number of new cases follows the Bernoulli-Verhulst equation. That is, setting
\[\boxed{\text{$N(t)=\text{CR}(t)-N_{\text{base}}$},} \tag{7.3}\]
we obtain
\[N^{\prime}(t)=\chi\,N(t)\,\left[1-\left(\frac{N(t)}{N_{\infty}}\right)^{ \theta}\right], \tag{7.4}\]
completed with the initial value
\[N(t_{0})=N_{0}.\]
In the model, \(N_{\text{base}}+N_{0}\) corresponds to the value \(\text{CR}(t_{0})\) of the cumulative number of cases at time \(t=t_{0}\). The parameter \(N_{\infty}+N_{\text{base}}\) is the maximal value of the cumulative reported cases after the time \(t=t_{0}\). \(\chi>0\) is a Malthusian growth parameter, and \(\theta\) regulates the speed at which \(\text{CR}(t)\) increases to \(N_{\infty}+N_{\text{base}}\).
**Regularize the junction between the epidemic phases:** Because the formula for \(\tau(t)\) involves derivatives of the phenomenological model regularizing \(\text{CR}(t)\) (see equations (5.5)), we need to connect the phenomenological models of the different phases (epidemic and endemic) as smoothly as possible. Let \(t_{0},\dots,t_{n}\) denote the \(n+1\) breaking points of the model, that is, the times at which there is a transition between one phase and the next one. We let \(\widetilde{\text{CR}}(t)\) be the global model obtained by placing the phenomenological models of the different phases side by side.
More precisely, \(\widetilde{\text{CR}}(t)\) is defined by (7.2) during an epidemic phase \([t_{i},t_{i+1}]\), or during the initial phase \((-\infty,t_{0}]\) or the last phase \([t_{n},+\infty)\). During an endemic phase, \(\widetilde{\text{CR}}(t)\) is defined by (7.1). The parameters are chosen so that the resulting global model \(\widetilde{\text{CR}}\) is continuous. We define the regularized model by using the convolution formula:
\[\text{CR}(t)=\int_{-\infty}^{+\infty}\mathcal{G}(t-s)\times\widetilde{\text {CR}}(s)\text{d}s=(\mathcal{G}*\widetilde{\text{CR}})(t), \tag{7.5}\]
where
\[\mathcal{G}(t):=\frac{1}{\sigma\sqrt{2\pi}}\text{e}^{-\frac{t^{2}}{2\sigma^{2 }}}\]
is the Gaussian function with mean \(0\) and variance \(\sigma^{2}\). The parameter \(\sigma\) controls the trade-off between smoothness and precision: increasing \(\sigma\) reduces the variations in \(\mathrm{CR}(t)\) and reducing \(\sigma\) reduces the distance between \(\mathrm{CR}(t)\) and \(\widetilde{\mathrm{CR}}(t)\). In any case the resulting function \(\mathrm{CR}(t)\) is very smooth (as well as its derivatives) and close to the original model \(\widetilde{\mathrm{CR}}(t)\) when \(\sigma\) is not too large. Here, we fix
\[\sigma=7\text{ days}.\]
Numerically, we will need to compute some \(t\to\mathrm{CR}(t)\) derivatives. Therefore it is convenient to take advantage of the convolution (7.5) and deduce that
\[\frac{\mathrm{d}^{n}\mathrm{CR}(t)}{\mathrm{d}t^{n}}=\int_{-\infty}^{+\infty} \frac{\mathrm{d}^{n}\mathcal{G}(t-s)}{\mathrm{d}t^{n}}\times\widetilde{ \mathrm{CR}}(s)\mathrm{d}s, \tag{7.6}\]
for \(n=1,2,3\).
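A minimal numerical sketch of the regularization (7.5)-(7.6) is given below, with \(\sigma=7\) days as in the text. The phase parameters are illustrative placeholders, and the piecewise model is reduced to one endemic phase followed by one epidemic phase.

```python
# A minimal sketch: a piecewise phenomenological model (one endemic phase (7.1)
# followed by one epidemic phase (7.2)) convolved with a Gaussian kernel as in (7.5);
# the first derivative is obtained by convolving with the kernel's derivative, as in (7.6).
import numpy as np

sigma = 7.0                         # smoothing parameter (days), as in the text
dt = 0.1
t = np.arange(-60.0, 200.0, dt)     # fine grid, padded so edge effects stay away from [t0, tn]

def cr_tilde(t, t0=0.0, a=50.0, N0_end=200.0, t1=60.0,
             Nbase=3000.0, N0=200.0, chi=0.2, theta=0.5, Ninf=5.0e4):
    """Piecewise model: endemic phase (7.1) on [t0,t1], epidemic phase (7.2) after t1.
    All numerical values are illustrative placeholders, chosen so the junction is continuous."""
    endemic = N0_end + (t - t0)*a
    epidemic = Nbase + (np.exp(chi*(t - t1))*N0 /
                        (1.0 + (N0/Ninf)**theta*(np.exp(chi*theta*(t - t1)) - 1.0))**(1.0/theta))
    return np.where(t < t1, endemic, epidemic)

def G(u):                           # Gaussian kernel with mean 0 and variance sigma**2
    return np.exp(-u**2/(2.0*sigma**2))/(sigma*np.sqrt(2.0*np.pi))

u = np.arange(-5*sigma, 5*sigma + dt, dt)
CR  = np.convolve(cr_tilde(t), G(u),               mode="same")*dt   # regularized model (7.5)
CRp = np.convolve(cr_tilde(t), -u/sigma**2*G(u),   mode="same")*dt   # its derivative, (7.6) with n = 1
```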
### Phenomenological model applied to France
Figures 19-20 below is taken from [68]. In Figure 19, we present the best fit of our phenomenological model for the cumulative reported case data of COVID-19 epidemic in France. The yellow regions correspond to the endemic phases and the blue regions correspond to the epidemic phases. Here we consider the two epidemic waves for France, and the chosen period, as well as the parameters values for each period.
Figure 20 shows the corresponding daily number of new reported cases data (black dots) and the first derivative of our phenomenological model (red curve).
Figure 19: _The red curve corresponds to the phenomenological model and the black dots correspond to the cumulative number of reported cases in France._
### Phenomenological model applied to several countries
Our method to regularize the data was applied to the eight geographic areas. The resulting curves are presented in Figure 21. The blue background color regions correspond to epidemic phases and the yellow background color regions to endemic phases. We added a plot of the daily number of cases (black dots) and the derivative of the regularized model for comparison, even though the daily number of cases is not used in the fitting procedure. The figures generally show an excellent agreement between the time series of reported cases (top row, black dots) and the regularized model (top row, blue curve). The match between the daily number of cases (bottom row, black dots) and the derivative of the regularized model (bottom row, blue curve) is also excellent, even though it is not a part of the optimization process. Of course, we lose some information, like the extreme values ("peaks") of the daily number of cases. This is because we focus on an averaged value of the number of cases. More information could be retrieved by statistically studying the variation around the phenomenological model. However, we leave such a study for future work. The relative error between the regularized curve and the data may be relatively high at the beginning of the epidemic because of the stochastic nature of the infection process and the small number of infected individuals but quickly drops below 1% (see the supplementary material in [69] for more details).
Figure 20: _The red curve corresponds to the first derivative of the phenomenological model and the black dots correspond to the daily number of reported cases in France._
### Earlier results about transmission rate reconstructed from the data
This problem has already been considered in several articles. In the early 1970s, London and Yorke [122, 216] discussed the time-dependent rate of transmission in the context of measles, chickenpox and mumps. Motivated by applications to COVID-19 data, the authors of [20] also obtained some new results about reconstructing the rate of transmission.
### Instantaneous reproduction number
We use the formula (5.5) to compute the transmission rate, and we consider the **instantaneous reproduction number**
\[\mathbf{R_{e}(t)}=\tau(t)\mathbf{S(t)}/\nu,\]
and the **quasi-instantaneous reproduction number**
\[\mathbf{R_{e}^{0}(t)}=\tau(t)\mathbf{S_{0}}/\nu.\]
We compare the above indicators with \(\mathbf{R_{e}^{C}(t)}\) the classical notion of **instantaneous reproduction number**[143, 48].
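A minimal sketch of how these two indicators can be evaluated from a reconstructed transmission rate is given below; the arrays are placeholders standing for the output of the simulations sketched in Section 6.

```python
# A minimal sketch: instantaneous and quasi-instantaneous reproduction numbers
# from arrays tau_t (transmission rate) and S_t (susceptibles).
import numpy as np

def reproduction_numbers(tau_t, S_t, S0, nu):
    Re  = tau_t * S_t / nu    # instantaneous reproduction number R_e(t)
    Re0 = tau_t * S0  / nu    # quasi-instantaneous reproduction number R_e^0(t)
    return Re, Re0

# example with placeholder arrays
tau_t = np.full(100, 3.0e-10)
S_t   = np.linspace(1.4e9, 1.39e9, 100)
Re, Re0 = reproduction_numbers(tau_t, S_t, S0=1.4e9, nu=0.2)
print(Re[0], Re0[0])   # both equal R_0 on the first day of the epidemic
```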
Figure 21: _In the top rows, we plot the cumulative number of reported cases (black dots) and the best fit of the phenomenological model (blue curve). In the bottom rows, we plot the daily number of reported cases (black dots) and the first derivative of the phenomenological model (blue curve). This figure is taken from [69]._
**Remark 7.1**.: _The standard method to compute \(\mathbf{R_{e}^{C}(t)}\) (see [48, 143]) proposes another form of regularization of the data, which consists of computing the instant of contamination backward in time. This instant is random and follows a standard exponential law._
### Results
In Figure 22, our analysis allows us to compute the transmission rate \(\tau(t)\). We use this transmission rate to calculate two different indicators of the epidemiological dynamics for each geographic area, the instantaneous reproduction number and the quasi-instantaneous reproduction number. Both coincide with the basic reproduction number \(R_{0}\) on the first day of the epidemic. The instantaneous reproduction number at time \(t\), \(R_{e}(t)\), is the basic reproduction number corresponding to an epidemic starting at time \(t\) with a constant transmission rate equal to \(\tau(t)\) and with an initial population of susceptibles composed of \(S(t)\) individuals (the number of susceptible individuals remaining in the population). The quasi-instantaneous reproduction number at time \(t\), \(R_{e}^{0}(t)\), is the basic reproduction number corresponding to an epidemic starting at time \(t\) with a constant transmission rate equal to \(\tau(t)\) and with an initial population of susceptibles composed of \(S_{0}\) individuals (the number of susceptible individuals at the start of the epidemic). The two indicators are represented for each geographic area in the top row of Figure 22 (black curve: instantaneous reproduction number; green curve: quasi-instantaneous reproduction number).
There is one interpretation for \(R_{e}(t)\) and another for \(R_{e}^{0}(t)\). The instantaneous reproduction number indicates if, given the current state of the population, the epidemic tends to persist or die out in the long term (note that our model assumes that recovered individuals are perfectly immunized). The quasi-instantaneous reproduction number indicates if the epidemic tends to persist or die out in the long term, provided the number of susceptible is the total population. In other words, we forget about the immunity already obtained by recovered individuals. Also, it is directly proportional to the transmission rate and therefore allows monitoring of its changes. Note that the value of \(R_{e}^{0}(t)\) changed drastically between epidemic phases, revealing that \(\tau(t)\) is far from constant. In any case, the difference between the two values starts to be visible in the figures one year after the start of the epidemic.
We also computed the reproduction number using the method described in [48], which we denote \(R_{e}^{C}(t)\). The precise implementation is described in the supplementary material in [69]. It is plotted in the bottom row of Figure 22 (green curve), along with the instantaneous reproduction number \(R_{e}(t)\) (black curve).
**Remark 7.2**.: _In the bottom of Figure 22, we compare the instantaneous reproduction numbers obtained by our method in black and the classical method in [48] in green. We observe that the two approaches are not the same at the beginning. This is because the method of [48] does not consider the initial values
\(I_{0}\) and \(E_{0}\) while we do. Indeed the method of [48] assumes that \(I_{0}\) and \(E_{0}\) are close to \(0\) at the beginning when it is viewed as a Volterra equation reformulation of the Bernoulli-Kermack-McKendrick model with the age of infection. On the other hand, our method does not require such an assumption since it provides a way to compute the initial states \(I_{0}\) and \(E_{0}\)._
It is essential to "regularize" the data to obtain a comprehensive outcome from SIR epidemic models. In general, without such regularization, the rate of transmission obtained from the SIR model (by applying identification methods) is very noisy and therefore meaningless. For example, at the beginning of the first epidemic wave, the transmission rate should be decreasing, since people tend to have less and less contact as the epidemic grows. The standard regularization methods (like, for example, the rolling weekly average method) have been tested for COVID-19 data in [51]. The outcome in terms of transmission rate is very noisy and even yields negative transmission rates (which is impossible). Regularizing the data is not an easy task, and the method used is very important in order to obtain a meaningful outcome for the models. Here, we tried several approaches to link an epidemic phase to the next endemic phase. So far, this regularization procedure is the best one.
Figure 23 illustrates why we need a phenomenological model to regularize the data. On the right-hand side, where the original cumulative data are used, we observe that \(\tau(t)\) becomes negative almost immediately. Therefore, without regularization, the fit may not make sense.
Figure 22: _In the top rows, we plot the instantaneous reproduction number \(R_{e}(t)\) (in black) and the quasi instantaneous reproduction number \(R_{e}^{0}(t)\) (in green). In the bottom rows, we plot the instantaneous reproduction number \(R_{e}(t)\) (in black) and the one obtained by the standard method [48, 143]\(R_{e}^{C}(t)\) (in green). This figure is taken from [69]._
### Consequences of the results
In Figure 22, we saw that the population of susceptible patients is almost unchanged after the epidemic passed. Therefore, the system behaves almost like the non-autonomous system
\[\boxed{I^{\prime}(t)=\tau(t)S_{0}I(t)-\nu I(t),\forall t\geq t_{0},\text{ and }I(t_{0})=I_{0},}\]
This means that \(I(t)\) depends linearly on \(I_{0}\). That is, if we multiply \(I_{0}\) by some number, the result \(I(t)\) will be multiply by the same number.
Figure 24 shows two things. The initial number of infected is crucial when we try to predict the number of infected. The average daily number of cases during the endemic phases have strong impact on the amplitude of the next epidemic waves [68].
In this section, we obtained a model that covers the changes of regime (from endemic to epidemic and conversely). Moreover, the change of
Figure 23: _In this figure, we plot the instantaneous \(R_{0}\). On the left-hand side, we use our smoothing method (with Bernoulli-Verhulst model (endemic) line (endemic) together with a convolution with a Gaussian). On the right-hand side, we use the original cumulative data and our algorithm to fit the cumulative number of cases._
regime between an epidemic wave and an endemic period is still difficult to detect. An attempt to study this question can be found in [54].
## 8 Exponential phase with more compartments
### A model with transmission from the unreported infectious
We consider a model with unreported infection individuals.
\[\left\{\begin{array}{l}S^{\prime}(t)=-\tau(t)S(t)\left(I(t)+U(t)\right),\\ I^{\prime}(t)=\tau(t)S(t)\left(I(t)+U(t)\right)-\nu I(t),\\ U^{\prime}(t)=\nu\ (1-f)\ I(t)-\eta U(t),\end{array}\right. \tag{8.1}\]
for \(t\geq t_{0}\), and with initial distribution
\[S(t_{0})=S_{0},\,I(t_{0})=I_{0},\ \mbox{and}\ U(t_{0})=U_{0}. \tag{8.2}\]
The epidemic model associated with the flowchart in Figure 25 applies to influenza outbreaks in [13], hepatitis A outbreaks in [166], and COVID-19 in [120].
### The exponential phase approximation
We assume that \(S(t)\) is constant and equal to \(S_{0}\), and that \(\tau(t)\) remains constant, equal to \(\tau_{0}=\tau(t_{0})\). Considering for example the case of a single age group, we obtain the following model, which was first considered for COVID-19
\[\left\{\begin{array}{l}I^{\prime}(t)=\tau S_{0}\left(I(t)+U(t)\right)-\nu I( t),\\ U^{\prime}(t)=\nu\ (1-f)\ I(t)-\eta U(t),\end{array}\right. \tag{8.3}\]
for \(t\geq t_{0}\), and with initial distribution
\[I(t_{0})=I_{0},\ \mbox{and}\ U(t_{0})=U_{0}. \tag{8.4}\]
Figure 25: _Flowchart._
We can reformulate this system using a matrix formulation
\[\left(\begin{array}{c}I^{\prime}(t)\\ U^{\prime}(t)\end{array}\right)=A\left(\begin{array}{c}I(t)\\ U(t)\end{array}\right),\forall t\in[t_{0},t_{1}],\]
where
\[A=\left(\begin{array}{cc}\tau\,S_{0}-\nu&\tau\,S_{0}\\ \nu\,(1-f)&-\eta\end{array}\right).\]
Then the matrix \(A\) is **irreducible** if and only if
\[\nu\,(1-f)>0\text{ and }\tau\,S_{0}>0.\]
Recall that the data and the epidemic model (8.3) are connected through
\[\operatorname{CR}^{\prime}(t)=f\,\nu\,I(t),\text{ for }t\geq t_{0}.\]
Consider the **exponential phase of the epidemic**. That is,
\[\operatorname{CR}^{\prime}(t)=\chi_{1}\chi_{2}e^{\chi_{2}t},\forall t\in[t_{0 },\tau+t_{0}],\]
for some \(\tau>0\). Combining the two previous equations, we obtain
\[f\,\nu\,I(t)=\chi_{1}\chi_{2}e^{\chi_{2}t},\forall t\in[t_{0},\tau+t_{0}].\]
Remember that \(\chi_{1}\) and \(\chi_{2}\) are computed by using the data. More precisely, these parameters are obtained by fitting \(t\to\chi_{1}e^{\chi_{2}t}-\chi_{3}\) to the cumulative number of cases data during a period of time \([t_{0},\tau+t_{0}]\).
We can rewrite \(f\,\nu\,I(t)=\chi_{1}\chi_{2}e^{\chi_{2}t}\) by using an inner product
\[\left\langle y_{0},\left(\begin{array}{c}I(t)\\ U(t)\end{array}\right)\right\rangle=\chi_{1}\chi_{2}e^{\chi_{2}t},\text{ with }y_{0}=\left( \begin{array}{c}\nu\,f\\ 0\end{array}\right),\]
where \(\left\langle.,.\right\rangle\) is the Euclidean inner product defined in dimension 2 as
\[\left\langle x,y\right\rangle=x_{1}y_{1}+x_{2}y_{2}.\]
The following theorem is proved in Appendix A.
**Theorem 8.1**.: _Let \(\chi_{1}>0\), \(\chi_{2}>0\), and \(\tau>0\). Let \(A\) be an \(n\) by \(n\) real matrix. Assume that the off-diagonal elements of \(A\) are non-negative, and \(A\) is irreducible. Assume that there exist two vectors \(y_{0}>0\) and \(x_{0}>0\) such that_
_(Linear model) \[\dot{x}(t)=Ax(t),\text{ and }x(0)=x_{0},\] satisfies_
_(Connection with the data) \[\left\langle y_{0},x(t)\right\rangle=\chi_{1}e^{\chi_{2}t},\forall t\in[0, \tau]\,,\]_
_where \(\left\langle x,y\right\rangle\) is the Euclidean inner product._
_Then \(\chi_{2}\) must be the dominant eigenvalue of \(A\) (i.e., the one with the largest real part). Moreover, we can choose a vector \(x_{0}\gg 0\) (i.e., with all its components strictly positive), satisfying_
\[Ax_{0}=\chi_{2}x_{0}.\]
_Multiplying \(x_{0}\) by a suitable positive constant, we obtain \(\left\langle y_{0},x_{0}\right\rangle=\chi_{1}\), and therefore_
\[\left\langle y_{0},x(t)\right\rangle=\chi_{1}e^{\chi_{2}t},\forall t\in\left[ 0,\tau\right].\]
Returning to the example of the epidemic model with unreported cases, we must find \(I(0)>0\) and \(U(0)>0\) such that
\[\left(\begin{array}{cc}\tau\,S_{0}-\nu&\tau\,S_{0}\\ \nu\,(1-f)&-\eta\end{array}\right)\left(\begin{array}{c}I(0)\\ U(0)\end{array}\right)=\chi_{2}\left(\begin{array}{c}I(0)\\ U(0)\end{array}\right).\]
After a few computations (see the supplementary in Liu et al. [120]), we obtain
\[\tau=\frac{\chi_{2}+\nu}{S_{0}}\frac{\eta+\chi_{2}}{\nu(1-f)+\eta+\chi_{2}}, \tag{8.5}\]
and
\[U_{0}=\frac{\nu(1-f)}{\eta+\chi_{2}}I_{0}. \tag{8.6}\]
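A small numerical sketch of these formulas, with illustrative parameter values, is given below; it also checks that \(\chi_{2}\) is the dominant eigenvalue of \(A\) and that \((I_{0},U_{0})\) is a corresponding positive eigenvector, as required by Theorem 8.1.

```python
# A minimal sketch: compute tau and U0 from (8.5)-(8.6) starting from chi2 fitted
# on the exponential phase, and verify the spectral property numerically.
# All parameter values below are illustrative placeholders.
import numpy as np

chi2, nu, eta, f, S0, I0 = 0.3, 0.2, 0.2, 0.5, 1.4e9, 1000.0

tau = (chi2 + nu)/S0 * (eta + chi2)/(nu*(1 - f) + eta + chi2)   # formula (8.5)
U0  = nu*(1 - f)/(eta + chi2) * I0                              # formula (8.6)

A = np.array([[tau*S0 - nu, tau*S0],
              [nu*(1 - f), -eta]])
eigvals, _ = np.linalg.eig(A)
print(eigvals[np.argmax(eigvals.real)].real)                # equals chi2 (up to rounding)
print(A @ np.array([I0, U0]) - chi2*np.array([I0, U0]))     # ~ 0: (I0, U0) is an eigenvector
```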
**Remark 8.2**.: _Let \(\chi_{1}>0\), \(\chi_{2}>0\), \(\phi_{1}>0\), \(\phi_{2}>0\), and \(\tau>0\). Assume that \(x_{0}>0\), \(y_{0}>0\) and \(z_{0}>0\) satisfy_
\[\dot{x}(t)=Ax(t),\text{ and }x(0)=x_{0},\]
_and_
\[\left\langle y_{0},x(t)\right\rangle=\chi_{1}e^{\chi_{2}t},\forall t\in\left[ 0,\tau\right],\]
\[\left\langle z_{0},x(t)\right\rangle=\phi_{1}e^{\phi_{2}t},\forall t\in\left[ 0,\tau\right].\]
_If \(\chi_{2}\neq\phi_{2}\) the matrix \(A\) must be reducible. That is, up to a re-indexation of the components of \(x(t)\), the matrix \(A\) reads as_
\[A=\left(\begin{array}{cc}A_{11}&0\\ A_{21}&A_{22}\end{array}\right)\]
_where \(A_{ij}\) are block matrices. The matrix \(A\) presents a weak coupling between the last block's components and the first block's components._
### Uncertainty due to the period chosen to fit the data
The principle of our method is the following. By using an exponential best fit method we obtain a best fit of (5.3) to the data over a time window \([t_{1},t_{2}]\) and we
derive the parameters \(\chi_{1}\) and \(\chi_{2}\). The values of \(I_{0}\), \(U_{0}\), and \(\tau_{0}\) are obtained by using (5.4), (8.2) and (8.3). Next, we use
\[\tau(t)=\tau_{0}e^{-\mu(t-N)^{+}},\]
we fix \(N\) (first day of public intervention) to some value and we obtain \(\mu\) by trying to get the best fit to the data.
In the method, the uncertainty in our prediction is due to the fact that several sets of parameters \((t_{1},t_{2},N,f)\) may give a good fit to the data. As a consequence, at the early stage of the epidemic (in particular before the turning point), the outcome of our method can be very different from one set of parameters to another. We try to address this uncertainty problem by using several choices of the period used to fit an exponential growth to the data (to determine \(\chi_{1}\) and \(\chi_{2}\)) and several choices for the first day of intervention \(N\). So in this section, we vary the time interval \([t_{1},t_{2}]\) during which we use the data to obtain \(\chi_{1}\) and \(\chi_{2}\) by an exponential fit. In the simulations below, the first day \(t_{1}\) and the last day \(t_{2}\) vary such that
\[\text{Earliest day }\leq t_{1}\leq t_{2}\leq\text{Latest day}.\]
We also vary the first day of public intervention:
Earliest first day of intervention \(\leq N\leq\text{Latest first day of intervention}\).
We vary \(f\) between 0.1 and 0.9. For each \((t_{1},t_{2},\nu,f,\eta,N)\) we evaluate \(\mu\) to obtain the best fit of the model to the data. We use the mean absolute deviation as the distance to the data to evaluate the best fit. We obtain a large number of best fits, depending on \((t_{1},t_{2},\nu,f,\eta,\mu,N)\), and we compute the smallest mean absolute deviation \(\text{MAD}_{\text{min}}\). Then we plot all the best-fit graphs with mean absolute deviation between \(\text{MAD}_{\text{min}}\) and \(\text{MAD}_{\text{min}}+40\).
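A rough sketch of this exploration is given below. The data array, the `fit_exponential` and `simulate_model` helpers and all parameter grids are crude placeholders (they are not specified in the text); only the overall structure of the grid search and the MAD-based selection follows the description above.

```python
# A minimal sketch of the grid search described above (placeholder data and helpers).
import itertools
import numpy as np
from scipy.optimize import minimize_scalar

data = np.cumsum(np.random.poisson(100, size=60)).astype(float)   # placeholder cumulative cases

def fit_exponential(data, t1, t2):
    """Crude placeholder: log-linear fit on [t1,t2], returning (chi1, chi2)."""
    t = np.arange(t1, t2)
    chi2, logchi1 = np.polyfit(t, np.log(data[t1:t2]), 1)
    return np.exp(logchi1), chi2

def simulate_model(chi1, chi2, N, f, mu):
    """Crude placeholder for the epidemic simulation with tau(t) = tau0*exp(-mu*(t-N)^+);
    the fraction f is ignored in this stub."""
    t = np.arange(len(data), dtype=float)
    return chi1*np.exp(chi2*np.minimum(t, N) - mu*np.maximum(t - N, 0.0))

def mad(model, obs):
    return np.mean(np.abs(model - obs))

results = []
for t1, t2, N, f in itertools.product(range(0, 10, 3), range(15, 30, 5),
                                      range(20, 35, 5), (0.1, 0.5, 0.9)):
    chi1, chi2 = fit_exponential(data, t1, t2)
    res = minimize_scalar(lambda mu: mad(simulate_model(chi1, chi2, N, f, mu), data),
                          bounds=(0.0, 1.0), method="bounded")
    results.append((res.fun, (t1, t2, N, f, res.x)))

best = min(r[0] for r in results)
kept = [r for r in results if r[0] <= best + 40]   # the curves retained, as in Figure 26
```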
The figure below is taken from Liu et al. [121].
## 9 Modeling COVID-19 epidemic with age groups
This section considers an epidemic whenever the population is divided into age groups. Here, age means the chronological age, which is nothing but the time since birth.
Figure 26: _In this figure, we consider the data for Germany. We plot the cumulative number of cases on the left-hand side and the daily number of cases on the right-hand side. In (a) and (b) we use the data until March \(22\). In (c) and (d) we use the data until April \(11\). In (e) and (f) we use the data until June \(10\)._
### Epidemic model with age groups
The epidemic model with age structure and unreported cases reads as follows, for each \(t\geq t_{0}\),
\[\left\{\begin{array}{l}S_{1}^{\prime}(t)=-\tau_{1}S_{1}(t)\bigg{[}\phi_{11} \frac{I_{1}(t)+U_{1}(t)}{N_{1}}+\ldots+\phi_{1n}\frac{I_{n}(t)+U_{n}(t)}{N_{n}} \bigg{]},\\ \vdots\\ S_{n}^{\prime}(t)=-\tau_{n}S_{n}(t)\bigg{[}\phi_{n1}\frac{I_{1}(t)+U_{1}(t)}{N _{1}}+\ldots+\phi_{nn}\frac{I_{n}(t)+U_{n}(t)}{N_{n}}\bigg{]},\\ \end{array}\right.\]
\[\left\{\begin{array}{l}I_{1}^{\prime}(t)=\tau_{1}S_{1}(t)\bigg{[}\phi_{11} \frac{I_{1}(t)+U_{1}(t)}{N_{1}}+\ldots+\phi_{1n}\frac{I_{n}(t)+U_{n}(t)}{N_{n} }\bigg{]}-\nu I_{1}(t),\\ \vdots\\ I_{n}^{\prime}(t)=\tau_{n}S_{n}(t)\bigg{[}\phi_{n1}\frac{I_{1}(t)+U_{1}(t)}{N _{1}}+\ldots+\phi_{nn}\frac{I_{n}(t)+U_{n}(t)}{N_{n}}\bigg{]}-\nu I_{n}(t),\\ \end{array}\right.\]
and
\[\left\{\begin{array}{l}U_{1}^{\prime}(t)=\nu_{2}^{1}\,I_{1}(t)-\eta U_{1}(t),\\ \vdots\\ U_{n}^{\prime}(t)=\nu_{2}^{n}\,I_{n}(t)-\eta U_{n}(t),\\ \end{array}\right.\]
with the initial values
\[S_{i}(t_{0})=S_{i}^{0},I_{i}(t_{0})=I_{i}^{0},\text{ and }U_{i}(t_{0})=U_{i}^{0}, \forall i=1,\ldots,n.\]
### Cumulative reported cases with age structure in Japan
We first choose two days \(d_{1}\) and \(d_{2}\) between which each cumulative age group grows like an exponential. By fitting the cumulative age classes \([0,10[\), \([10,20[\), ..., and \([90,100[\) between \(d_{1}\) and \(d_{2}\), for each age class \(j=1,\ldots,10\) we can find \(\chi_{1}^{j}\), \(\chi_{2}^{j}\) and \(\chi_{3}^{j}\) such that
\[\text{CR}_{j}^{data}(t)\simeq\chi_{1}^{j}\,e^{\chi_{2}^{j}t}-\chi_{3}^{j}.\]
We obtain
\[\left\{\begin{array}{l}\text{CR}_{1}(t)=\chi_{1}^{1}\,e^{\chi_{2}^{1}t}- \chi_{3}^{1},\\ \vdots\\ \text{CR}_{n}(t)=\chi_{1}^{n}\,e^{\chi_{2}^{n}t}-\chi_{3}^{n},\\ \end{array}\right. \tag{9.1}\]
where
\[\chi_{j}^{i}\geq 0,\forall i=1,\ldots,n,\,\forall j=1,2,3.\]
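A minimal sketch of this per-age-class fit is given below; the fitting window, the initial guesses and the synthetic `cr_by_age` dictionary (which would contain the cumulative reported cases of each age class) are placeholders.

```python
# A minimal sketch: fitting chi1^j, chi2^j, chi3^j for each age class between d1 and d2, as in (9.1).
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, chi1, chi2, chi3):
    return chi1*np.exp(chi2*t) - chi3

d1, d2 = 10, 40                                  # fitting window in days (illustrative)
t = np.arange(d1, d2, dtype=float)

# cr_by_age: cumulative reported cases per age class; synthetic placeholder here.
age_classes = [f"[{10*i},{10*(i+1)}[" for i in range(10)]
cr_by_age = {a: exp_model(np.arange(60.0), 5.0 + i, 0.08 + 0.01*i, 3.0)
             for i, a in enumerate(age_classes)}

fits = {}
for age_class, cr in cr_by_age.items():
    popt, _ = curve_fit(exp_model, t, cr[d1:d2], p0=[1.0, 0.1, 0.0], maxfev=20000)
    fits[age_class] = dict(zip(("chi1", "chi2", "chi3"), popt))
print(fits[age_classes[0]])
```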
In Figures 27-28, we see that the growth rate of the exponential fit depends on the age group [71]. We also see the similarity of the dynamical behavior of the two extreme age groups \([0,20[\) and \([70,100[\).
### Method to fit the age-structured model to the data
By assuming that the number of susceptible individuals remains constant we have for each \(t\geq t_{0}\),
\[\left\{\begin{array}{c}I_{1}^{\prime}(t)=\tau_{1}S_{1}\bigg{[}\phi_{11}\frac{I_ {1}(t)+U_{1}(t)}{N_{1}}+\ldots+\phi_{1n}\frac{I_{n}(t)+U_{n}(t)}{N_{n}}\bigg{]}- \nu I_{1}(t),\\ \vdots\\ I_{n}^{\prime}(t)=\tau_{n}S_{n}\bigg{[}\phi_{n1}\frac{I_{1}(t)+U_{1}(t)}{N_{1 }}+\ldots+\phi_{nn}\frac{I_{n}(t)+U_{n}(t)}{N_{n}}\bigg{]}-\nu I_{n}(t),\end{array}\right.\]
and
\[\left\{\begin{array}{c}U_{1}^{\prime}(t)=\nu_{2}^{1}\,I_{1}(t)-\eta U_{1}(t),\\ \vdots\\ U_{n}^{\prime}(t)=\nu_{2}^{n}\,I_{n}(t)-\eta U_{n}(t),\end{array}\right. \tag{9.2}\]
with the initial values
\[I_{i}(t_{0})=I_{i}^{0},\mbox{ and }U_{i}(t_{0})=U_{i}^{0},\forall i=1, \ldots,n.\]
Figure 27: _In this figure, we plot an exponential fit to the cumulative data for each age group \([0,10[\), \([10,20[\), ..., and \([90,100[\) in Japan._
Figure 28: _In this figure, we plot an exponential fit to the cumulative data for each age group \([0,10[\), \([10,20[\), ..., and \([90,100[\) in Japan._
### Rate of contact
The values in Figure 29 describe the contact rates between age groups. The values used are computed from the values obtained in [159].
We assume that
\[\left\{\begin{array}{c}\mathrm{CR}_{1}(t)^{\prime}=\nu_{1}^{1}I_{1}(t),\\ \vdots\\ \mathrm{CR}_{n}(t)^{\prime}=\nu_{1}^{n}I_{n}(t),\end{array}\right.\]
where
\[\nu_{1}^{i}=\nu\,f_{i},\text{ and }\nu_{2}^{i}=\nu\,(1-f_{i}),\forall i=1, \ldots,n.\]
Therefore, we obtain
\[I_{j}(t)=I_{j}^{\star}e^{\chi_{2}^{j}t},\]
where
\[I_{j}^{\star}:=\frac{\chi_{1}^{j}\chi_{2}^{j}}{\nu_{1}^{j}}.\]
If we assume that the \(U_{j}(t)\) have the following form
\[U_{j}(t)=U_{j}^{\star}e^{\chi_{2}^{j}t},\]
then by substituting in (9.2) we obtain
\[U_{j}^{\star}=\frac{\nu_{2}^{j}I_{j}^{\star}}{\eta+\chi_{2}^{j}}.\]
The cumulative number of unreported cases \(\mathrm{CU}_{j}(t)\) is computed as
\[\mathrm{CU}_{j}(t)^{\prime}=\nu_{2}^{j}I_{j}(t),\]
Figure 29: _For each age class in the \(y\)-axis we plot the rate of contacts between one individual of this age class and another individual of the age class indicated on the \(x\)-axis. The figure represents the rate of contacts before the start of public measures (April 11)._
and we used the following initial condition:
\[\mathrm{CU}_{j}(0)=\mathrm{CU}_{j}^{\star}=\int_{-\infty}^{0}\nu_{2}^{j}I_{j}^{ \star}e^{\chi_{2}^{j}s}ds=\frac{\nu_{2}^{j}I_{j}^{\star}}{\chi_{2}^{j}}.\]
We define the error between the data and the model as follows
\[\left\{\begin{array}{l}I_{1}^{\prime}(t)=\tau_{1}S_{1}\bigg{[}\phi_{11}\frac{ I_{1}(t)+U_{1}(t)}{N_{1}}+\ldots+\phi_{1n}\frac{I_{n}(t)+U_{n}(t)}{N_{n}} \bigg{]}-\nu I_{1}(t)+\varepsilon_{1}(t),\\ \vdots\\ I_{n}^{\prime}(t)=\tau_{n}S_{n}\bigg{[}\phi_{n1}\frac{I_{1}(t)+U_{1}(t)}{N_{1 }}+\ldots+\phi_{nn}\frac{I_{n}(t)+U_{n}(t)}{N_{n}}\bigg{]}-\nu I_{n}(t)+ \varepsilon_{n}(t),\end{array}\right.\]
or equivalently
\[\left\{\begin{array}{l}\varepsilon_{1}(t)=\left(\chi_{2}^{1}+\nu\right)I_{1 }^{\star}e^{\chi_{2}^{1}t}-\tau_{1}S_{1}\bigg{[}\phi_{11}\frac{I_{1}^{\star}+ U_{1}^{\star}}{N_{1}}e^{\chi_{2}^{1}t}+\ldots+\phi_{1n}\frac{I_{n}^{\star}+U_{n}^{ \star}}{N_{n}}e^{\chi_{2}^{n}t}\bigg{]},\\ \vdots\\ \varepsilon_{n}(t)=\left(\chi_{2}^{n}+\nu\right)I_{n}^{\star}e^{\chi_{2}^{n}t }-\tau_{n}S_{n}\bigg{[}\phi_{n1}\frac{I_{1}^{\star}+U_{1}^{\star}}{N_{1}}e^{ \chi_{2}^{1}t}+\ldots+\phi_{nn}\frac{I_{n}^{\star}+U_{n}^{\star}}{N_{n}}e^{ \chi_{2}^{n}t}\bigg{]}.\end{array}\right.\]
**Lemma 9.1**.: _Assume that the matrix \(\phi\) is fixed. If we consider the errors \(\varepsilon_{1}^{\tau}(t),\ldots,\varepsilon_{n}^{\tau}(t)\) as functions of \(\tau\), then we can find a unique value \(\tau^{\star}=(\tau_{1}^{\star},\ldots,\tau_{n}^{\star})\) which minimizes the \(L^{2}\) norm of the errors. That is,_
\[\sum_{j=1,\ldots,n}\int_{d_{1}}^{d_{2}}\varepsilon_{j}^{\tau^{\star}}(t)^{2}dt=\min_{\tau\in\mathbb{R}^{n}}\sum_{j=1,\ldots,n}\int_{d_{1}}^{d_{2}}\varepsilon_{j}^{\tau}(t)^{2}dt.\]
_Moreover,_
\[\tau_{j}^{\star}=\frac{\int_{d_{1}}^{d_{2}}K_{j}(t)H_{j}(t)dt}{\int_{d_{1}}^{d _{2}}H_{j}(t)^{2}dt},\]
_with_
\[K_{j}(t):=\left(\chi_{2}^{j}+\nu\right)I_{j}^{\star}e^{\chi_{2}^{j}t},\forall j =1,\ldots,n,\]
_and_
\[H_{j}(t):=S_{j}\bigg{[}\phi_{j1}\frac{I_{1}^{\star}+U_{1}^{\star}}{N_{1}}e^{ \chi_{2}^{1}t}+\ldots+\phi_{jn}\frac{I_{n}^{\star}+U_{n}^{\star}}{N_{n}}e^{ \chi_{2}^{n}t}\bigg{]},\forall j=1,\ldots,n.\]
Proof.: We look for the vector \(\tau=(\tau_{1},\ldots,\tau_{n})\) which minimizes
\[\min_{\tau\in\mathbb{R}^{n}}\sum_{j=1,\ldots,n}\int_{d_{1}}^{d_{2}}\varepsilon _{j}(t)^{2}dt.\]
Define for each \(j=1,\ldots,n\)
\[K_{j}(t):=\left(\chi_{2}^{j}+\nu\right)I_{j}^{\star}e^{\chi_{2}^{j}t}\]
and
\[H_{j}(t):=S_{j}\bigg{[}\phi_{j1}\frac{I_{1}^{\star}+U_{1}^{\star}}{N_{1}}e^{\chi_ {2}^{\star}t}+\ldots+\phi_{jn}\frac{I_{n}^{\star}+U_{n}^{\star}}{N_{n}}e^{\chi_ {2}^{\star}t}\bigg{]},\]
so that
\[\varepsilon_{j}(t)=K_{j}(t)-\tau_{j}H_{j}(t).\]
Hence for each \(j=1,\ldots,n\)
\[\int_{d_{1}}^{d_{2}}\varepsilon_{j}(t)^{2}dt=\int_{d_{1}}^{d_{2}}K_{j}(t)^{2} dt-2\tau_{j}\int_{d_{1}}^{d_{2}}K_{j}(t)H_{j}(t)dt+\tau_{j}^{2}\int_{d_{1}}^{d_{2}}H _{j}(t)^{2}dt,\]
and the minimum of \(\int_{d_{1}}^{d_{2}}\varepsilon_{j}(t)^{2}dt\) is obtained for \(\tau_{j}\) satisfying
\[0=\frac{\partial}{\partial\tau_{j}}\int_{d_{1}}^{d_{2}}\varepsilon_{j}(t)^{2} dt=-2\int_{d_{1}}^{d_{2}}K_{j}(t)H_{j}(t)dt+2\tau_{j}\int_{d_{1}}^{d_{2}}H _{j}(t)^{2}dt\]
whenever
\[\tau_{j}=\frac{\int_{d_{1}}^{d_{2}}K_{j}(t)H_{j}(t)dt}{\int_{d_{1}}^{d_{2}}H _{j}(t)^{2}dt}.\]
Under this condition, we obtain
\[\int_{d_{1}}^{d_{2}}\varepsilon_{j}(t)^{2}dt=\int_{d_{1}}^{d_{2}}K_{j}(t)^{2} dt-\tau_{j}^{2}\int_{d_{1}}^{d_{2}}H_{j}(t)^{2}dt.\]
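A minimal numerical sketch combining the formulas of this subsection is given below: it computes \(I_{j}^{\star}\) and \(U_{j}^{\star}\) from the exponential fits and then the explicit minimizer \(\tau_{j}^{\star}\) of Lemma 9.1 by numerical quadrature. The contact matrix, the group sizes, the susceptibles and the rates are assumed inputs (tiny placeholder values are used in the example).

```python
# A minimal sketch: I_j*, U_j* from the exponential fits, then tau_j* of Lemma 9.1.
import numpy as np
from scipy.integrate import quad

def tau_star(j, chi1, chi2, S, N, phi, nu, f, eta, d1, d2):
    """Explicit minimizer of Lemma 9.1 for age class j.
    chi1, chi2 : arrays of fitted chi_1^j, chi_2^j; S, N : susceptibles and population sizes;
    phi : contact matrix; nu, eta : rates; f : reported fractions per age class."""
    nu1, nu2 = nu*f, nu*(1.0 - f)
    I_star = chi1*chi2/nu1                     # I_j* = chi_1^j chi_2^j / nu_1^j
    U_star = nu2*I_star/(eta + chi2)           # U_j* = nu_2^j I_j* / (eta + chi_2^j)
    K = lambda t: (chi2[j] + nu)*I_star[j]*np.exp(chi2[j]*t)
    H = lambda t: S[j]*np.sum(phi[j, :]*(I_star + U_star)/N*np.exp(chi2*t))
    KH = quad(lambda t: K(t)*H(t), d1, d2)[0]
    HH = quad(lambda t: H(t)**2, d1, d2)[0]
    return KH/HH                               # formula of Lemma 9.1

# Tiny illustrative example with two age classes (all values are placeholders).
chi1, chi2 = np.array([5.0, 8.0]), np.array([0.10, 0.15])
S = N = np.array([6.0e7, 6.0e7])
phi = np.array([[0.6, 0.4], [0.4, 0.6]])
print([tau_star(j, chi1, chi2, S, N, phi, nu=0.2, f=np.array([0.5, 0.5]),
                eta=0.2, d1=10.0, d2=40.0) for j in (0, 1)])
```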
**Remark 9.2**.: _It does not seem possible to estimate the contact matrix \(\phi\) by using a similar optimization method. Indeed, if we look for a matrix \(\phi=(\phi_{ij})\) which minimizes_
\[\min_{\phi\in M_{n}(\mathbb{R})}\sum_{j=1,\ldots,n}\int_{d_{1}}^{d_{2}} \varepsilon_{j}(t)^{2}dt,\]
_it turns out that_
\[\sum_{j=1,\ldots,n}\int_{d_{1}}^{d_{2}}\varepsilon_{j}(t)^{2}dt=0\]
_whenever \(\phi\) is diagonal. Therefore the optimum is reached for any diagonal matrix. Moreover, by using similar considerations, if several \(\chi_{2}^{j}\) are equal, we can find a multiplicity of optima (possibly with \(\phi\) not diagonal). This means that trying to optimize by using the matrix \(\phi\) does not yield significant and reliable information._
In the Figure below, we present an example of application of our method to fit the Japanese data. We use the period going from 20 March to 15 April.
## 10 A survey for COVID-19 mathematical modeling
During the COVID-19 pandemic, scientific workforces in different fields published COVID-19-related papers. The number of articles published increased considerably during this period. For example, on August 23, 2023, the WHO COVID-19 Research Database [206] contains 724288 full texts of articles concerning the COVID-19 outbreak. Consequently, providing an extensive review on the subject is hopeless. Here, we make some arbitrary choices that can always be discussed. Our main goal is to give extra references on the topics
Figure 31: _We plot a comparison between the model (without public intervention) and the age structured data from Japan (black dots)._
Figure 30: _We plot a comparison between the model (without public intervention) and the age structured data from Japan (black dots)._
mentioned earlier and highlight topics not considered in the previous sections. Several articles have attempted to do systematic reviews on COVID-19. We refer to [98, 172] for more results and a broader overview of the subject.
The idea of this survey was mostly to collect references from the Infectious Disease Outbreak webinar, which took place from 2020 to 2022 [91].
### Medical survey
Mathematical models alone do not provide reliable information. In Figure 13, we show the divergence of the mathematical model from the data.
It is therefore fundamental to integrate medical facts into mathematical models. We have tried throughout this text to explain how to make maximum use of the data either as input (test data) or as output (reported case data). But the dynamics of infection can be understood much better by examining concrete case studies in hospitals. For example, modeling the dynamics of infectious clusters is crucial in preventing the spread of disease. We refer to [15, 18, 32, 42, 46, 84, 108, 123, 139, 149, 181, 183, 195, 201] for more results and references.
The early development of an epidemic is very important, and an interesting retrospective of the first weeks of COVID-19 in China was presented by Zhao in [221].
### Incubation, Infectiousness, and Recovery Period
The infectious dynamic has three phases:
1. The emission of the infectious agent, which depends on its concentration during its expulsion (remotely by air transportation or directly by secretion contact) from the contagious person;
2. Transmission of the infectious agent (through an intermediate fluid or on a contact surface);
3. The reception of the infectious agent by a future host who becomes infected and whose symptomatology and secondary emission capacities will depend on the infectious agent's pathogenic nature and the host's immune defenses.
These defenses are set up in two successive stages, corresponding to innate immunity, then to acquired immunity. It is, therefore, conceivable that the transmission capacity of an infectious person depends on the individual's infection age, that is, the time since this person was infected. We refer to [82, 163, 164, 166, 209, 230] for more results on the subject. In [50], we proposed a method to understand the average individual dynamics of infection from cluster data. When considering epidemic exponential phase data, a time series approach is proposed in [53]. We refer to [7] for more results on the subject.
### Data
An essential aspect of epidemic outbreaks is understanding the biases in the data, that is, the different causes, such as unreported case data, tests, false-positive PCR tests, and other factors that may bias our understanding of the data. Clusters of infected individuals provide another kind of data that may give another angle to examine the same problem. We should also mention the data provided by wastewater, which offers a helpful complement to the existing reported case data.
#### 10.3.1 Contact tracing
Contact tracing has been the main tool of public health authorities, for example, in South Korea when the COVID-19 pandemic started. In France, a dedicated digital tool called Stop-Covid was developed. In [175], the authors estimate that this digital approach was adopted not because digital solutions (to contact tracing) are superior to traditional ones, but by default, due to alienation and a lack of interdisciplinary cooperation. This could be because contact tracing must balance personal privacy and public health, causing significant biases in classical inquiries with questionnaires [106]. We refer to [7, 29, 30, 33, 43, 65, 78, 103, 106, 111, 129, 175, 193, 217] for more results on the subject.
#### 10.3.2 Testing data
A mathematical model to understand the bias in PCR tests was first proposed in [151, 152]. Diagnostic tests, particularly the PCR test, have been of considerable importance in most countries' follow-up of new cases. We refer to [12, 31, 83, 110, 112, 161]. Mathematical models including testing data as an input of the model were proposed in [34, 70].
#### 10.3.3 Unreported and uncertainty in the number of reported case data
The origin of unreported cases of COVID-19 is multiple. It may be due to
1. a poor organization of the reporting system by the medical profession or recording by the administrative staff (especially at weekends);
2. The presence of asymptomatic cases;
3. The non-consultation and/or the non-taking of medication in symptomatic cases, for reasons related to the patient or their entourage (presence of an intercurrent pathology or an existing chronic disease masking the symptoms, or financial, religious, philosophical, or social reasons, etc.).
We refer to [14, 44, 85, 120, 168, 224] for more results on the subject.
#### 10.3.4 Clusters
The detection and monitoring of clusters are difficult to achieve and the discovery of patient zero, in a given geographical area, is always a delicate challenge. Nevertheless, there are a number of studies regarding this problem. We refer to [140, 144, 9, 40, 66, 2, 44] for more results on the subject.
#### 10.3.5 More phenomenological models to fit the data
Since Daniel Bernoulli's classic primordial model [23, 24, 25, 57], a number of phenomenological models have emerged, such as that of Richards, cited by Ma [124] just before the beginning of the COVID-19 outbreak. The COVID-19 pandemic was an opportunity to recall this seminal work and to propose new approaches along the same lines, namely minimal modeling integrating the basic mechanisms of infectious transmission. We refer to [16, 124, 135, 169, 186, 191, 229] for more results and references on the subject.
#### 10.3.6 Wastewater data
The French national Obepine project has shown the value of monitoring the COVID-19 pandemic in wastewater, where the concentration of viral RNA fragments can serve as an early indicator of the onset of new waves of cases. An Italian study (Gragnani et al.) has even suggested that SARS-CoV-2 RNA was present in wastewater from Milan, Turin (December 18, 2019) and Bologna (January 29, 2020) long before the first Italian case was described (February 20, 2020). We refer to [180, 120, 121, 4, 26, 61, 212] for more results on the subject.
#### 10.3.7 Discrete and random modeling
Some modeling approaches are discrete and play with daily data. The equations of the contagion dynamics can be of two types:
1. They can be difference equations modeled on the differential equations of the continuous SIR model;
2. or they can be stochastic in nature, with generally additive Gaussian noise in the second member.
They generally lend themselves well to the statistical estimation of their parameters from the data. We refer to [19, 62, 213, 228] for more results and references on the subject.
#### 10.3.8 Time series and wavelet approaches
If we consider the data recorded on the size of the different sub-populations involved in the contagion process (susceptible, infected, cured, immune, etc.), a possible approach is that of signal theory, with its classical data-processing methods (time series, Fourier transform, wavelet transform, etc.).
This approach is generally an excellent introduction to the implementation of prediction methods. We refer to [53, 55, 147, 188, 191] for more results and references on the subject.
#### 10.3.9 Transmission estimation and spatial modeling
Estimating the transmission parameter and studying its spatio-temporal variations is fundamental because it conditions the epidemic waves' location, shape, and duration. The spatial heterogeneity of this parameter, often due to geo-climatic factors (such as temperature) and/or demographic factors (such as the density of the susceptible population), is a crucial factor in the existence of natural barriers to the spread of a pandemic. We refer to [63, 136, 222] for more results and references on the subject.
#### 10.3.10 Forecasting methods
The prediction of epidemics is one of the major objectives of modeling. It can be carried out by the continuation, in time and space, of the solutions of the spatio-temporal equations of the chosen model or the extrapolation of a statistical description of the evolution of the observed variables. We refer to [121, 138, 20, 173] for more results and references on the subject.
### SIR like models
Since 2020, many articles have appeared on using the SIR model in modeling the COVID-19 outbreak. These models became progressively more complex, evolving into SIAURDV models that explicitly incorporate as ODE variables the numbers of asymptomatic (A), unreported (U), vaccinated (V), and deceased (D) patients. We refer to [150, 178, 194, 5, 225] for more results and references on the subject.
#### 10.4.1 Multigroups or multiscale models
The notions of multi-group and multi-scale models came to the fore during the COVID-19 outbreak, with specific dynamics in several geographical regions of different scales and, within one area, in several distinct groups (demographic, ethnic, economic, religious, social, etc.). We refer to [132, 133, 158, 200, 218, 72] for more results and references on the subject.
#### 10.4.2 Model with unreported or asymptomatic compartment
Modeling the mechanisms of non-reporting of new cases or deaths due to an epidemic makes it possible to compensate for the bias coming from a partial observation of the infected, due to the existence of asymptomatic cases or a deficient administrative registration mechanism. We refer to [145, 220, 3, 10, 11, 21] for more results and references on the subject.
### Connecting reported case data with SIR like model
Very few studies have considered this problem in the literature, although, again, it is interesting to understand the bias induced by such a mechanism. For example, it would make sense to consider a model including a delay in reporting the data
\[\mathrm{CR}^{\prime}(t)=f\nu\int_{0}^{\tau}\gamma(s)I(t-s)ds\]
where \(s\mapsto\gamma(s)\) is a non negative map. The quantity \(\gamma(s)\) is the probability of reporting \(s\) units of time after the individual leaves the compartment \(I\). This corresponds to patients showing symptoms. We deduce that we must have
\[\int_{0}^{\tau}\gamma(s)ds=1.\]
Unfortunately, this issue has received little attention in the literature. The consequences of such a model for reported case data seem particularly important; a discretized version of this delayed reporting relation is sketched below. We refer to [157, 28, 130] for more results and references on the subject.
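For illustration only (this is not taken from the cited references), the delayed reporting relation above could be discretized as follows, with an arbitrary Gamma-shaped delay density as an assumption.

```python
# A minimal sketch: discretizing CR'(t) = f*nu*int_0^T gamma(s) I(t-s) ds with a
# daily time step and an illustrative Gamma-distributed reporting delay.
import numpy as np
from scipy.stats import gamma as gamma_dist

dt = 1.0                                  # one day
s = np.arange(0.0, 15.0, dt)              # assumed support of the reporting delay (days)
gamma_kernel = gamma_dist.pdf(s, a=3.0, scale=1.5)
gamma_kernel /= gamma_kernel.sum()*dt     # enforce int gamma(s) ds = 1

def daily_reported(I_history, f=0.5, nu=0.2):
    """Daily new reported cases CR'(t) from a history of I(t) (most recent value last)."""
    window = I_history[-len(s):][::-1]    # I(t-s) for s = 0, dt, 2*dt, ...
    return f*nu*np.sum(gamma_kernel*window)*dt

I_history = 100.0*np.exp(0.1*np.arange(60))   # placeholder exponentially growing history
print(daily_reported(I_history))
```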
### Re-infections, natural and hybrid immunity
The risk of reinfection with the SARS-Cov2 virus comes from two factors:
1. One is due to the infectious agent and its capacity to mutate, modifying its contagiousness and pathogenicity;
2. The other is due to the host, whose natural, innate, and acquired defenses by the adaptive immune system or artificially by vaccination prevent or stop the infection.
The modeling of these two facets of the reinfection process makes it possible to understand the mechanisms of eradication or, on the contrary, the continuation of a pandemic, thanks to or despite collective public health measures. We refer to [155, 190, 89, 102, 142, 105] for more results and references on the subject.
### Mortality
Mortality may appear as more robust data to be connected with epidemic models. However, the biases affecting reported case data also exist for the number of reported deaths. Again, the model connecting the data and the epidemic model might be more complex than a simple fraction of the recovered. Nevertheless, there is evidence of an increased risk of death in the event of co-infection: the mortality risk increases dramatically when a patient is infected with another severe disease. This question of co-infection of severe diseases with COVID-19 was studied in [177]. We refer to [92, 94, 95, 105, 107, 131] for more results and references on the subject.
### Vaccination and mitigation measures
Vaccination and exclusion by temporary confinement or physical barriers (masks, anti-viral protection, or anti-transmission intermediates) are the public health measures intended to mitigate or stop an epidemic. The modeling of their gradual introduction and their effects on the spread of the epidemic makes it possible to understand their effectiveness or, on the contrary, their uselessness and, therefore, to adapt the coercive measures best, whether collective or individual [1, 47, 52, 75, 76, 80, 93, 96, 101, 117, 128, 176, 192, 208].
### Chronological age
The problem of age structure is crucial in epidemic modeling for three reasons:
1. The immune system efficacy depends on age. Therefore, its adaptive component is less and less able to resist a new pathogenic agent or react to a vaccine;
2. Age groups communicate differently with each other, with the most mobile (the working-age group) having the greatest chance of transmitting, and the most dependent (the elderly), who rely on care by younger caregivers, having the greatest chance of being infected;
3. The prevalence of chronic diseases favoring infections is very unevenly distributed, the age groups at both ends of life being the most susceptible: the young due to the immaturity of the immune system and crowding in schools, and the elderly due to the existence of chronic comorbidities (diabetes, respiratory pathologies, cardiovascular diseases, and immune depression).
These disparities make it necessary to take age into account (through at least three major classes, young people under 20, adults from 20 to 65, and seniors over 65), preventive measures (education, vaccination, isolation) being taken according to this age stratification, crossed with the risk factors linked to the occurrence of chronic pathologies.
Few papers combine epidemic models with age structure and age-structured data [8, 196, 109, 110, 137, 156, 160, 198, 39, 64]. The problem of understanding the relationships between data and models is far from well understood. In Section 9, based on [71], we proposed an approach to connect the model and the data during the exponential phase. But such a problem needs further investigation.
### Basic reproduction number
The basic reproduction number \(R_{0}\) is an essential parameter for predicting the occurrence of an epidemic wave. It can vary over time and depends on two main factors:
1. In the infectious subject, the successive establishment of natural defense mechanisms (innate and adaptive) explains the variations in daily \(R_{0}\) during his period of contagiousness;
2. In subjects who are not yet infected, their susceptibility is also dependent on their immune status, but also on the collective public health measures taken at the population level.
Methods for estimating daily \(R_{0}\) are therefore fundamental to understanding the temporal and spatial evolution of a pandemic [27, 56, 223, 81, 205].
### Prediction of COVID-19 evolution
The difficulty of predicting the evolution of a pandemic is due to the adaptive capacities of the infectious agent and of the infected and transmitting host. On the one hand, the genetic mutations of the infectious agent, affecting its contagiousness and pathogenic dangerousness, may produce a highly infectious but weakly pathogenic variant, often signaling the natural end of a pandemic. On the other hand, the permanent adaptation of individual and collective host defense measures makes it possible to anticipate the effects of changes in the agent's infectious strategy. In both cases, modeling the dynamics of mutation and prevention is essential to predict and act in near real-time on the evolution of a pandemic [148, 165, 171, 134, 58, 79, 90].
**Appendix**
## Appendix A When the output is a single exponential function
Let \(X\in\mathbb{R}^{n}\). We recall that
* \(X\geq 0\) if \(X_{i}\geq 0\) for each \(i\in\{1,\ldots,n\}\);
* \(X>0\) if \(X\geq 0\) and there exists \(i\in\{1,\ldots,n\}\) such that \(X_{i}>0\);
* \(X\gg 0\) if \(X_{i}>0\) for each \(i\in\{1,\ldots,n\}\).
Let \(A=\left(a_{ij}\right)\in M_{n}\left(\mathbb{R}\right)\) be an \(n\times n\) matrix with non-negative off-diagonal elements, and assume that \(A+\delta I\) is non-negative and irreducible whenever \(\delta>0\) is large enough. The projector associated with the Perron-Frobenius dominant eigenvalue is defined by
\[\Pi\,x=\frac{\left\langle V_{L}(A),x\right\rangle V_{R}(A)}{\left\langle V_{L} (A),V_{R}(A)\right\rangle},\forall x\in\mathbb{R}^{n},\] (A.1)
where \(V_{R}(A)\gg 0\) (respectively \(V_{L}(A)\gg 0\)) is a right eigenvector (resp. left eigenvector ) of \(A\) associated with the dominant eigenvalue
\[s(A)=\max\left\{\operatorname{Re}\lambda:\lambda\in\sigma(A)\right\},\]
where \(\sigma(A)\) is the spectrum of \(A\) (i.e. the set of all eigenvalues of \(A\)). Then we have
\[A\,\Pi=\Pi\,A=s(A)\,\Pi.\]
Recall that the euclidean inner product is defined by
\[\left\langle X,Y\right\rangle=\sum_{i=1}^{n}X_{i}Y_{i}.\]
The network associated with a non-negative matrix \(A\) contains an oriented edge from the node \(i\) to the node \(j\) whenever \(a_{ij}>0\).
A non-negative matrix \(A\) is _irreducible_ if the network associated with \(A\) is strongly connected. That is, if we can join any two nodes \(i\) and \(j\) by using a succession of oriented paths.
To understand irreducible matrices in epidemics, one may consider the contact matrix in epidemic models. Then, the contact matrix is irreducible if any infected sub-group has a non-zero probability of infecting any other group (by transmitting the pathogen to intermediate sub-groups if needed).
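For concreteness, the following minimal numpy sketch (with an arbitrary illustrative matrix, not taken from any model discussed in this text) computes the dominant eigenvalue \(s(A)\), the right and left Perron eigenvectors, and the projector \(\Pi\) of (A.1), and checks the identity \(A\,\Pi=\Pi\,A=s(A)\,\Pi\).

```python
import numpy as np

# Illustrative matrix with non-negative off-diagonal entries (hypothetical values).
A = np.array([[-1.0, 0.5, 0.2],
              [ 0.3, -0.8, 0.4],
              [ 0.1, 0.6, -0.5]])

eigvals, right_vecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # index of the dominant eigenvalue s(A)
s_A = eigvals[k].real
V_R = np.abs(right_vecs[:, k].real)          # right Perron eigenvector, >> 0

eigvals_T, left_vecs = np.linalg.eig(A.T)    # left eigenvectors of A = right eigenvectors of A^T
V_L = np.abs(left_vecs[:, np.argmax(eigvals_T.real)].real)   # left Perron eigenvector, >> 0

# Projector of (A.1): Pi x = <V_L, x> V_R / <V_L, V_R>, written as a matrix.
Pi = np.outer(V_R, V_L) / np.dot(V_L, V_R)

print(np.allclose(A @ Pi, s_A * Pi), np.allclose(Pi @ A, s_A * Pi))   # both True
```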
**Theorem A.1**.: _Let \(A=\left(a_{ij}\right)\in M_{n}\left(\mathbb{R}\right)\), and assume that the off-diagonal elements of \(A\) are non-negative, and \(A+\delta I\) is non-negative irreducible whenever \(\delta>0\) is large enough. We assume that there exists a vector \(X_{0}>0\) such that_
\[X^{\prime}(t)=AX(t),\forall t\in\left[0,\tau\right],\text{ with }X(0)=X_{0},\] (A.2)
_and there exists a vector \(Y>0\) satisfying_
\[\sum_{i=1}^{n}Y_{i}X_{i}(t)=\chi_{1}e^{\chi_{2}t},\forall t\in\left[0,\tau\right],\] (A.3)
_with \(\chi_{1}>0\), \(\chi_{2}>0\), and \(\tau>0\)._
_Then we have_
\[\chi_{2}=s(A),\text{ and }\chi_{1}=\langle Y,\Pi X_{0}\rangle.\]
_That is,_
\[\sum_{i=1}^{n}Y_{i}X_{i}(t)=\langle Y,\Pi X_{0}\rangle e^{s(A)t},\forall t\geq 0.\]
_In other words, we can not distinguish the growth induced by \(\langle Y,X_{0}\rangle\) and \(\langle Y,\Pi X_{0}\rangle\). Therefore we can replace \(X_{0}\) with \(\Pi X_{0}\), and the output \(\langle Y,X(t)\rangle\) will be the same._
Proof.: The equation (A.3) is equivalent to
\[\langle Y,e^{At}X_{0}\rangle=\chi_{1}e^{\chi_{2}t},\forall t\in\left[0,\tau \right],\]
For each \(\delta>0\) large enough such that \(A+\delta I\) is non-negative and primitive, we have
\[\langle Y,e^{(A+\delta I)t}X_{0}\rangle=\chi_{1}e^{(\chi_{2}+\delta)t}, \forall t\in\left[0,\tau\right],\]
so by computing the derivatives on both sides of the above equation and taking \(t=0\), we obtain
\[\langle Y,\left(A+\delta I\right)^{m}X_{0}\rangle=\chi_{1}\left(\chi_{2}+ \delta\right)^{m},\forall m\in\mathbb{N}.\]
But we have \(r\left(A+\delta I\right)=s(A)+\delta\), and
\[\langle Y,\frac{\left(A+\delta I\right)^{m}}{r\left(A+\delta I\right)^{m}}X_ {0}\rangle=\chi_{1}\frac{\left(\chi_{2}+\delta\right)^{m}}{\left(s(A)+\delta \right)^{m}},\forall m\in\mathbb{N},\]
and since the left-hand side of the above equality converges to \(\langle Y,\Pi X_{0}\rangle>0\) (where \(\Pi\gg 0\) is the projector defined in (A.1)), we deduce that
\[\lim_{m\rightarrow\infty}\frac{\left(\chi_{2}+\delta\right)^{m}}{\left(s(A)+ \delta\right)^{m}}=\frac{\langle Y,\Pi X_{0}\rangle}{\chi_{1}}>0,\]
and the result follows.
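As a complementary numerical illustration of Theorem A.1 (again with hypothetical values for \(A\), \(X_{0}\) and \(Y\)): even when the output is not exactly exponential, \(\langle Y,X(t)\rangle\) is dominated by \(\langle Y,\Pi X_{0}\rangle e^{s(A)t}\) for large \(t\), which is the behavior that the theorem pins down in the exactly exponential case.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 0.5, 0.2],      # same illustrative Metzler matrix as above
              [ 0.3, -0.8, 0.4],
              [ 0.1, 0.6, -0.5]])
eigvals, R = np.linalg.eig(A)
k = np.argmax(eigvals.real)
s_A = eigvals[k].real
V_R = np.abs(R[:, k].real)
eigvals_T, LT = np.linalg.eig(A.T)
V_L = np.abs(LT[:, np.argmax(eigvals_T.real)].real)
Pi = np.outer(V_R, V_L) / np.dot(V_L, V_R)

X0 = np.array([1.0, 0.0, 0.0])       # arbitrary initial condition X(0) > 0
Y = np.array([0.0, 1.0, 1.0])        # arbitrary observation vector Y > 0

for t in (5.0, 10.0, 20.0):
    exact = Y @ expm(A * t) @ X0                   # <Y, X(t)> with X'(t) = A X(t)
    dominant = (Y @ Pi @ X0) * np.exp(s_A * t)     # <Y, Pi X0> e^{s(A) t}
    print(t, exact, dominant)        # relative difference shrinks as t grows
```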
|
2309.15848 | SHACIRA: Scalable HAsh-grid Compression for Implicit Neural
Representations | Implicit Neural Representations (INR) or neural fields have emerged as a
popular framework to encode multimedia signals such as images and radiance
fields while retaining high-quality. Recently, learnable feature grids proposed
by Instant-NGP have allowed significant speed-up in the training as well as the
sampling of INRs by replacing a large neural network with a multi-resolution
look-up table of feature vectors and a much smaller neural network. However,
these feature grids come at the expense of large memory consumption which can
be a bottleneck for storage and streaming applications. In this work, we
propose SHACIRA, a simple yet effective task-agnostic framework for compressing
such feature grids with no additional post-hoc pruning/quantization stages. We
reparameterize feature grids with quantized latent weights and apply entropy
regularization in the latent space to achieve high levels of compression across
various domains. Quantitative and qualitative results on diverse datasets
consisting of images, videos, and radiance fields, show that our approach
outperforms existing INR approaches without the need for any large datasets or
domain-specific heuristics. Our project page is available at
http://shacira.github.io . | Sharath Girish, Abhinav Shrivastava, Kamal Gupta | 2023-09-27T17:59:48Z | http://arxiv.org/abs/2309.15848v1 | # SHACIRA: Scalable HAsh-grid Compression for Implicit Neural Representations
###### Abstract
Implicit Neural Representations (INR) or neural fields have emerged as a popular framework to encode multimedia signals such as images and radiance fields while retaining high-quality. Recently, learnable feature grids proposed by Muller et al. [1] have allowed significant speed-up in the training as well as the sampling of INRs by replacing a large neural network with a multi-resolution look-up table of feature vectors and a much smaller neural network. However, these feature grids come at the expense of large memory consumption which can be a bottleneck for storage and streaming applications. In this work, we propose SHACIRA, a simple yet effective task-agnostic framework for compressing such feature grids with no additional post-hoc pruning/quantization stages. We reparameterize feature grids with quantized latent weights and apply entropy regularization in the latent space to achieve high levels of compression across various domains. Quantitative and qualitative results on diverse datasets consisting of images, videos, and radiance fields, show that our approach outperforms existing INR approaches without the need for any large datasets or domain-specific heuristics. Our project page is available at [https://shacira.github.io](https://shacira.github.io).
## 1 Introduction
In today's digital age, large quantities of data in different modalities (images, audio, video, 3D) are created and transmitted every day. Compressing this data with minimal loss of information is hence an important problem, and a number of techniques have been developed in the last few decades to address it. While conventional methods such as JPEG [5] for images and HEVC [6] for videos excel at encoding signals in their respective domains, coordinate-based implicit neural representations (INR) or Neural Fields [7] have emerged as a popular alternative for representing complex signals because of their ability to capture high-frequency details and their adaptability to diverse domains. INRs are typically multi-layer perceptrons (MLPs) optimized to learn a scalar or vector field. They
Figure 1: We demonstrate the effectiveness of SHACIRA for two tasks. The left column shows a gigapixel image at \(21450\times 56718\) resolution (cropped for visualization) encoded using Instant-NGP [1], JPEG2000 [2], and SHACIRA (ours). The right column reconstructs NeRF [3] from 2D images and their camera poses using Instant-NGP [1], VQAD [4], and SHACIRA. For each example, we zoom into two crops to compare different methods. We show overall PSNR and size required by each method. SHACIRA can capture high-resolution details with a smaller storage size in a task-agnostic way (only 2D/3D examples shown here).
take a coordinate (location and/or time) as input and predict a continuous signal value(s) as output (such as pixel color/occupancy). Recently, various works have adapted INRs to represent a variety of signals such as audio [8], images [9, 10, 11, 12], videos [13, 14], shapes [15, 16], and radiance fields [3, 17]. Unsurprisingly, several methods have been proposed to compress INRs using quantization [10, 14, 18], pruning, or a combination of both [13]. The focus of these works is to compress the weights of the MLP, which often leads to either a big drop in the reconstruction quality, or slow convergence for high resolution signals.
In this work, we consider a different class of INR approaches that employ learnable multi-resolution feature grids [1, 19]. These feature grids store feature vectors at different coordinate locations with varying Level-Of-Detail (LOD). The features from different levels (or resolutions) are concatenated and passed through a tiny MLP to reconstruct the required signal. This shifts the burden of representing the signal to the feature grid instead of the MLP. Such methods have shown to be effective in approximating complex signals such as 3D scenes and gigapixel images with high fidelity [1] and fast training time (since the cost of lookup is very small). However, the size of the feature grids can be very large which is not memory efficient and impractical for many real-world applications with network bandwidth or storage constraints.
We propose an end-to-end learning framework for compressing such feature grids without any loss in reconstruction performance. Our feature grid consists of quantized feature vectors and parameterized decoders which transform the feature vectors into continuous values before passing them to the MLP. We use an entropy regularization loss on the latent features to reduce the size of the discrete latents without significantly affecting the reconstruction performance. To address the discretization gap inherent to this discrete optimization problem, we employ an annealing approach to the discrete latents which improves the training stability of the latents, converging to better minima. Both the entropy regularization and the reconstruction objective can be optimized jointly in an end-to-end manner without requiring post-hoc quantization, pruning or finetuning stages. Further, the hierarchical nature of feature grids allows scaling to high-dimensional signals, unlike pure MLP-based implicit methods.
As seen in Figure 1, the proposed approach is able to compress feature-grid methods such as Instant-NGP [1] by almost an order of magnitude while retaining the performance in terms of PSNR for gigapixel images and 3D scenes from the RTMV dataset [20]. We also conduct extensive quantitative experiments and show results on standard image compression benchmarks such as the Kodak dataset, outperforming the classic JPEG codec as well as other implicit methods in the high compression regime. Our approach can even be trivially extended to videos, performing competitively with video-specific INR methods such as NeRV [13], without explicitly exploiting the inherent temporal redundancy present in videos. The key contribution of our work is to directly compress the learnable feature grid with the proposed entropy regularization loss and to highlight its adaptability to diverse signals. We summarize our contributions below:
* We introduce an end-to-end trainable compression framework for implicit feature grids by maintaining discrete latent representations and parameterized decoders.
* We provide extensive experiments on compression benchmarks for a variety of domains such as images, videos, and 3D scenes showing the generalizability of our approach outperforming a variety of INR works.
## 2 Related work
**Learned image/video compression:** A large number of neural compression works for images consist of an autoencoder framework [21, 22] which transform/encode a data point to a latent code and decode the quantized latent to obtain a reconstruction of the data signal. The autoencoder is typically trained in an end-to-end fashion on a large training dataset by minimizing a rate-distortion objective. Numerous extensions to these works introduce other improvements such as hyperpriors [23], autoregressive modeling [24], Gaussian mixture likelihoods and attention modules [25], improved inference [26]. Another set of works extends this framework for videos as well [27, 28, 29, 30] exploiting the inherent temporal redundancy. These approaches achieve impressive compression results outperforming classical codecs in their respective domains. In contrast, we focus on a different class of works involving implicit neural representations (INRs) which overfit a network to each datapoint and store only the network weights to achieve data compression.
**INRs and application to data compression:** INRs [31] are a rapidly growing field popularized for representing 3D geometry and appearance [3, 15, 32, 33] and have since been applied to a wide variety of fields such as GANs [34, 35], image/video compression [9, 10, 13, 14, 18], robotics [36] and so on. Meta-learning on auxiliary datasets has been shown to provide good initializations and improvements in reconstruction performance while also greatly increasing convergence speed for INRs [11, 16, 18]. Our approach can similarly benefit from such meta-learning stages but we do not focus on it and rather highlight the effectiveness of our approach to compress feature-grid based INRs and its advantages over MLP-based INRs. Perhaps the closest work to our approach is that of VQAD [4] which performs
a vector quantization of these feature grids learning a codebook/dictionary and its mapping to the feature grid in 3D domain. They however learn a fixed-size codebook without any regularization loss and do not perform well for high-fidelity reconstructions as we discuss in Section 4.
**Model compression:** As INRs represent data as neural networks, they transform the data compression problem to a model compression one. Many works exist for model compression involving pruning for achieving high levels of sparsity [37, 38, 39, 40] or quantization for reducing the number of bits necessary [41, 42, 43]. Another line of works [44, 45] perform compression similar to [23] using quantized latents with entropy regularization losses. These methods, however, are specific to convolutional networks and are not trivially extensible to compress INRs.
## 3 Approach
Our goal is to simultaneously train and compress feature-grid based implicit neural networks. Section 3.1 provides a brief overview of feature-grid INRs proposed in [1]. Section 3.2 describes various components of our approach while Section 3.3 discusses compressing feature grids. Our approach for end-to-end feature grid compression is outlined in Section 3.4 and also illustrated in Figure 2.
### Feature-grid INRs
INRs or neural fields [7] typically learn a mapping \(g_{\phi}(\mathbf{x}):\mathbb{R}^{d}\rightarrow\ \mathbb{R}^{c}\) where \(g\) is an MLP with parameters \(\phi\). The input \(\mathbf{x}\in\mathbb{R}^{d}\) to the MLP is a \(d\)-dimensional coordinate, where \(d=2\) in the case of images, \(3\) in the case of videos, or \(5\) in the case of radiance fields. The \(c\)-dimensional output can represent RGB colors or occupancy in space-time for the given input coordinates. Such methods are able to achieve high-quality reconstructions; however, they suffer from long training times and scale poorly to high-resolution signals. A few works have suggested utilizing fixed positional encodings of input coordinates [46] to be able to reconstruct complex high-resolution signals, but the training time of these networks still poses a big challenge. To alleviate this issue, [1, 19] proposed storing the input encodings in the form of a learnable feature grid. The feature grid allows for significant speed-up in the training time by replacing a large neural network with a multi-resolution look-up table of feature vectors and a much smaller neural network. We now provide a brief overview of Instant-NGP [1].
For ease of explanation, for the rest of this section, we will assume the input is a 2D signal \(\mathbf{x}\in\mathbb{R}^{2}\). However, all the techniques we discuss can be directly applied to the 3D case. In this framework, we represent the feature grid by a set of parametric embeddings \(\mathbf{Z}\). \(\mathbf{Z}\) is arranged into \(L\) levels representing varying resolutions or Levels-Of-Detail (LOD). More formally, each level has its own embedding matrix \(\mathbf{Z}_{l}\), and hence \(\mathbf{Z}=\{\mathbf{Z}_{1},\dots,\mathbf{Z}_{L}\}\). The number of feature vectors in the embedding matrix \(\mathbf{Z}_{l}\) depends on the resolution of the level \(R_{l}\). For coarser resolutions, we allow \(\mathbf{Z}_{l}\) to consist of \(R_{l}^{2}\) rows, but for finer resolutions, we cap the maximum number of rows in the matrix to \(T\). Hence for
Figure 2: Overview of our approach: We maintain latent representations which are quantized and decoded using parameterized decoders to obtain a hash table/feature-grid at different levels. We then index the input coordinate into the hash table to obtain feature vectors. The feature vectors are then concatenated and passed through an MLP to obtain the output signal.
\(\mathbf{x}\in\mathbb{R}^{2}\),
\[\mathbf{Z}_{l}\in\mathbb{R}^{T_{l}\times F},\text{ where }T_{l}=\text{min}(R_{l}^{2},T) \tag{1}\]
Here \(F\) is the dimension of the feature vectors and is kept fixed for all levels. For a given input \(\mathbf{x}\), we can obtain the 4 closest corner indices \(\{tl,tr,bl,br\}\) within the \(R_{l}\times R_{l}\) grid. Each of the corner indices maps to an entry in \(\mathbf{Z}_{l}\). This mapping is direct when \(R_{l}^{2}\leq T\) and uses a hashing function [47] otherwise. The feature vector \(\mathbf{f}_{l}\) at level \(l\) for the input \(\mathbf{x}\) is then obtained by a simple bilinear interpolation of the feature vectors of the corner indices, _i.e_.
\[\mathbf{f}_{l}(\mathbf{x})=\text{interp}(\mathbf{Z}_{l}[tl],\mathbf{Z}_{l}[tr ],\mathbf{Z}_{l}[bl],\mathbf{Z}_{l}[br]) \tag{2}\]
Note that in the case of 3D input, we consider 8 closest corner indices, and perform a trilinear interpolation. \(\mathbf{f}_{l}(\mathbf{x})\) is concatenated across different levels to obtain the overall feature vector \(\mathbf{f}(\mathbf{x})\in\mathbb{R}^{FL}\), which is passed as input to the neural network \(g_{\phi}\).
\[\widehat{\mathbf{y}}=g_{\phi}(\text{concat}[\mathbf{f}_{1}(\mathbf{x}),\dots, \mathbf{f}_{L}(\mathbf{x})]) \tag{3}\]
Here \(\widehat{\mathbf{y}}\) is the final prediction of INR for the input \(\mathbf{x}\). Since the concatenation, indexing, and interpolation operations are differentiable, parameters \(\{\phi,\mathbf{Z}\}\) can be optimized using any gradient based optimizer by minimizing a loss function \(\mathcal{L}(\widehat{\mathbf{y}},\mathbf{y})\) between the predicted \(\widehat{\mathbf{y}}\) and ground-truth signal \(\mathbf{y}\). \(\mathcal{L}\) can be any differentiable loss function such as the Mean Squared Error (MSE). The MLP is typically very small in comparison to MLP-based INRs. Thus, \(\phi\) consists of far fewer parameters than \(\mathbf{Z}\). Such an INR design allows for much faster training and inference as the cost of indexing into the feature grid is quite small.
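For concreteness, the following PyTorch sketch implements a minimal 2D multi-resolution lookup in the spirit of Eqs. (1)-(3); the number of levels, table size and feature dimension are illustrative placeholders, not the configuration used in the paper, and the spatial hash is a simplified stand-in for the one of [1, 47].

```python
import torch

# Illustrative sizes, not the configuration used in the paper.
L, F, T = 4, 2, 2**14                       # levels, feature dimension, max entries per level
R = [16 * 2**l for l in range(L)]           # per-level grid resolutions
tables = [torch.randn(min(r * r, T), F) for r in R]   # embeddings Z_l (learnable in practice)

def lookup(x):                              # x in [0,1]^2, shape (B, 2)
    feats = []
    for r, Z in zip(R, tables):
        pos = x * (r - 1)
        lo = pos.floor().long()
        w = pos - lo                        # bilinear interpolation weights
        f = 0
        for dx in (0, 1):
            for dy in (0, 1):
                cx = (lo[:, 0] + dx).clamp(max=r - 1)
                cy = (lo[:, 1] + dy).clamp(max=r - 1)
                if r * r <= T:              # direct, collision-free indexing
                    idx = cx * r + cy
                else:                       # simplified spatial hash of the corner coordinates
                    idx = (cx ^ (cy * 2654435761)) % Z.shape[0]
                wx = w[:, 0] if dx else 1 - w[:, 0]
                wy = w[:, 1] if dy else 1 - w[:, 1]
                f = f + (wx * wy).unsqueeze(1) * Z[idx]
        feats.append(f)
    return torch.cat(feats, dim=1)          # concatenation over levels, input to the small MLP

features = lookup(torch.rand(8, 2))         # shape (8, L*F)
```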
### Feature-grid reparameterization
While feature-grid INRs can converge faster than pure-MLP approaches, the space required to store all the parameters at different levels can rise very rapidly if we want high-fidelity reconstructions for high-resolution inputs. This makes them unsuitable for resource-constrained applications. We thus aim to reduce the storage size of these feature grids. To this effect, we propose to maintain discrete or quantized latent representations \(\mathbf{Q}_{l}\in\mathbb{R}^{T_{l}\times D}\) for each embedding \(\mathbf{Z}_{l}\in\mathbb{R}^{T_{l}\times F}\). The latents, consisting of only integer values, can be of any dimension \(D\) with a larger \(D\) allowing for greater representation power at the cost of storage size.
In order to map these discrete latent features \(\mathbf{Q}_{l}\) to the continuous features in the embedding table \(\mathbf{Z}_{l}\), we propose a parameterized decoder \(d_{\theta}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{F}\). While it is possible to use separate complex decoders for each level, in practice, we observed that a single shared decoder parameterized as a linear transform across all L levels works pretty well and has a minimal impact on the training/inference times and fidelity of reconstructions.
Note that the quantized latents \(\mathbf{Q}\) are no longer differentiable (here we dropped the subscript \(l\) without loss of generality for notational simplicity). In order to optimize these quantized latents, we maintain continuous proxy parameters \(\widehat{\mathbf{Q}}\) of the same size as \(\mathbf{Q}\). \(\mathbf{Q}\) is obtained by rounding \(\widehat{\mathbf{Q}}\) to the nearest integer. To make this operation differentiable, we utilize the Straight-Through Estimator [48] (STE). In STE, we use the quantized weights \(\mathbf{Q}\) during the forward pass, but use the continuous proxies to propagate the gradients from \(\mathbf{Q}\) to \(\widehat{\mathbf{Q}}\) during the backward pass.
STE serves as a simple differentiable approximation to the rounding operation but leads to a large rounding error \(\|\mathbf{Q}-\widehat{\mathbf{Q}}\|\). To overcome this issue, we utilize an annealing approach [26] to perform a soft rounding operation. We represent \(\mathbf{Q}\) by either rounding up (denoted as \(\lceil.\rceil\)) or down (denoted as \(\lfloor.\rfloor\)) \(\widehat{\mathbf{Q}}\) to the nearest integer. Using one-hot gates \(b\in\{0,1\}\), where \(b=0\) corresponds to rounding up and \(b=1\) to rounding down, we can represent \(\mathbf{Q}=b\lfloor\widehat{\mathbf{Q}}\rfloor+(1-b)\lceil\widehat{\mathbf{Q}}\rceil\). The gate \(b\) is sampled from a soft relaxed 2-class distribution
\[\text{Prob}(b=0)\propto\exp\left\{-\tanh^{-1}\!\left(\lceil\widehat{\mathbf{Q}}\rceil-\widehat{\mathbf{Q}}\right)/T\right\},\qquad\text{Prob}(b=1)\propto\exp\left\{-\tanh^{-1}\!\left(\widehat{\mathbf{Q}}-\lfloor\widehat{\mathbf{Q}}\rfloor\right)/T\right\}, \tag{4}\]

where \(T\) is a temperature parameter that is annealed towards zero during training, so that the soft gates progressively converge to hard rounding while keeping the rounding error small.

### Compressing feature grids

To reduce the storage cost of the quantized latents, we model their distribution with learned probability models \(\{P_{d}\}\) and penalize the estimated bitrate through the self-information loss

\[\mathcal{L}_{I}(\widehat{\mathbf{Q}})=\sum_{d}-\log_{2}P_{d}\big(\widehat{\mathbf{Q}}_{d}+n\big), \tag{5}\]
where \(n\sim\mathcal{U}[-1,1]\) represents the uniform random distribution to approximate the effects of quantization, and \(\mathcal{L}_{I}\) is the self-information loss.
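The following PyTorch sketch illustrates the quantization machinery described above (STE and the annealed gate of Eq. (4)); the exact gate parameterization and temperature schedule used in our experiments may differ from this simplified version.

```python
import torch
import torch.nn.functional as F

def ste_round(q_hat):
    # Straight-Through Estimator: hard rounding in the forward pass,
    # identity gradient in the backward pass.
    return q_hat + (torch.round(q_hat) - q_hat).detach()

def annealed_round(q_hat, temperature):
    # Relaxed two-class gate between ceil (b=0) and floor (b=1), cf. Eq. (4),
    # sampled with the Gumbel-softmax trick (hard sample, soft gradient).
    lo, hi = torch.floor(q_hat), torch.ceil(q_hat)
    logits = torch.stack([
        -torch.atanh((hi - q_hat).clamp(1e-6, 1 - 1e-6)),   # b = 0: round up
        -torch.atanh((q_hat - lo).clamp(1e-6, 1 - 1e-6)),   # b = 1: round down
    ], dim=-1) / temperature
    gates = F.gumbel_softmax(logits, tau=1.0, hard=True)
    return gates[..., 0] * hi + gates[..., 1] * lo

q_hat = torch.randn(5, requires_grad=True)
q = annealed_round(q_hat, temperature=0.5)   # the temperature is annealed towards 0 in training
```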
### End-to-end optimization
We provide an overview of our approach in Fig. 2. We maintain continuous learnable latents \(\widehat{\mathbf{Q}}\). The proposed annealing approach (Sec. 3.2) progressively converges the continuous \(\widehat{\mathbf{Q}}\) to the discrete \(\mathbf{Q}\). The approach is made differentiable using the Gumbel reparameterization trick and the straight-through estimator. \(\mathbf{Q}\) is passed through the decoder \(d_{\theta}\) with parameters \(\theta\) to obtain the feature grid table \(\mathbf{Z}\). We then index \(\mathbf{Z}\) at different levels/resolutions using the coordinates \(\mathbf{x}\) to obtain a concatenated feature vector \(\mathbf{f}\) which is passed through an MLP \(g_{\phi}\) to obtain the predicted signal \(\widehat{\mathbf{y}}\).
\[\mathbf{Q}=\text{discretize}(\widehat{\mathbf{Q}}) \tag{6}\] \[\mathbf{Z}=d_{\theta}(\mathbf{Q}) \tag{7}\] \[\widehat{\mathbf{y}}=g_{\phi}\left(\text{concat}\left[\text{interp}(\mathbf{Z},\mathbf{x})\right]\right) \tag{8}\]
Our framework is thus fully differentiable in the forward and backward pass. For a given signal \(\mathbf{y}\) and its corresponding input coordinate grid \(\mathbf{x}\), we optimize the parameters \(\phi\) of MLP \(g_{\phi}\), discrete feature grid \(\widehat{\mathbf{Q}}\), discrete to continuous decoder \(\theta\), and the probability models \(\{P_{d}\}\) in an end-to-end manner by minimizing the following rate distortion objective
\[\mathcal{L}_{\text{MSE}}(\widehat{\mathbf{y}},\mathbf{y})+\lambda_{I}\mathcal{ L}_{I}(\widehat{\mathbf{Q}}) \tag{9}\]
where \(\lambda_{I}\) controls the rate-distortion optimization trade-off. Post training, the quantized latents are stored losslessly using algorithms such as arithmetic coding [51] utilizing the probability tables from the density models \(\{P_{d}\}\).
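A minimal PyTorch sketch of this training objective is given below; `density_model` is a placeholder for the learned probability models \(\{P_{d}\}\) (their exact parameterization is not spelled out here), assumed to return elementwise probabilities for the noise-perturbed latents.

```python
import torch

def rate_distortion_loss(y_hat, y, q_hat, density_model, lambda_i=1e-4):
    # Eq. (9): reconstruction MSE plus the self-information (estimated bitrate)
    # of the noise-perturbed continuous latent proxies.
    mse = torch.mean((y_hat - y) ** 2)
    noise = torch.empty_like(q_hat).uniform_(-1.0, 1.0)   # n ~ U[-1, 1] as in the text
    probs = density_model(q_hat + noise).clamp_min(1e-9)  # P_d evaluated elementwise
    self_information = -torch.log2(probs).sum()           # estimated bits to store Q
    return mse + lambda_i * self_information
```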
## 4 Experiments
We apply our INR framework to images, videos, and radiance fields. We outline our experimental setup in Sec. 4.1. Sec. 4.2, Sec. 4.3, and Sec. 4.4 provide results on compression for images, radiance fields, and videos respectively. Sec. 4.5 illustrates the application of our approach for progressive streaming. Sec. 4.6 discusses the convergence speeds of our approach. Sec. 4.7 ablates the effect of entropy regularization and annealing. Additional experiments and ablations are provided in the supplementary material.
### Experimental details and setup
We show image compression performance on the Kodak dataset [52] consisting of 24 images of resolution \(512\times 768\). To show our scalability to larger images, we provide results on higher resolution images varying from 2M pixels to 1G pixels. We use the Kaolin-Wisp library [53] for our experiments on radiance fields. In addition to the Lego scene in Fig. 1, we provide results on 10 brick scenes from the RTMV dataset [20] which contains a wide variety of complex 3D scenes. For videos, we benchmark on the UVG dataset [54] consisting of seven 1080p resolution videos with 600 frames each at 120 fps. Additionally, we extract the first frame from these 7 videos to create a 7-image dataset, UVG-F, for benchmarking image compression at \(1080\times 1920\) resolution against other popular INR compression approaches. We primarily measure distortion in terms of the PSNR (dB) and rate in terms of Bits-Per-Pixel (BPP) (or model size for neural fields).
We fix the minimum grid resolution to 16 and vary the number of per-level entries T, the number of levels L, and the maximum grid resolution for various modalities to control the rate/distortion trade-off. We fix the number of hidden MLP layers to 2 with the ReLU activation. We fix the batch size (number of coordinates per iteration) to be \(2^{18}\) for all our experiments except Kodak where we pass the full coordinate grid. The entropy regularization parameter is set to \(1.0e^{-4}\) for all our experiments unless specified otherwise. The parameters are optimized using the Adam optimizer [55]. We provide additional experimental details in the supplementary material.
### Scalable image compression
We visualize results on the Kodak benchmark in Fig. 3. We outperform the MLP-based INR approaches of COIN [9] and COIN++ [10] by a significant margin at all bitrates. We outperform [18], which utilizes positional encodings and weight quantization, at higher bitrates while also having much shorter encoding times (as we show in
Figure 3: Comparison of our approach on the Kodak image dataset with classical (dashed), RDAE (dotted), INR (solid) methods. We outperform state-of-the-art INR approaches bridging the gap to classical and RDAE methods.
Sec. 4.6). We show qualitative results on one of the images from the Kodak dataset in Fig. 4. We obtain higher quality reconstructions (\(28.66\to 34.68\) PSNR) capturing fine details while also requiring fewer bits for storage (\(1.02\to 0.88\) BPP) as compared to [18]. We, however, observe slightly lower performance in the low BPP regime, where the uncompressed MLP weights represent a significant fraction (\(\sim 25\%\)) of the total size. We hypothesize that at lower bit rates, compression of MLPs can achieve further reduction in model size. Our work focuses only on the compression of feature grids and can potentially benefit from the complementary works on MLP compression.
We additionally outperform JPEG for the full range of BPP with a much larger gap at lower bitrates. Notice the blocking artifacts in Fig. 4 for JPEG while we achieve consistent reconstructions. However, the classical methods of JPEG2000, BPG and the autoencoder based work of [23] continue to outperform our method, especially in the higher BPP regime. Nevertheless, we reduce the gap between INRs and autoencoder in the low-dimensional image space.
In order to understand the scalability of various image compression methods with image resolution, we compress images in the UVG-F dataset (\(1920\times 1080\)) and 4 images with increasing resolution. Results are summarized in Table 1. We outperform [18] by a large margin on the UVG-F dataset with an improvement of over 4.5dB while also requiring \(2\times\) fewer bits. This is also observed in Fig. 4 (right column) where we capture much finer high frequency details compared to the positional encoding approach of [18]. We marginally outperform JPEG as well in the high BPP regime which again exhibits minor blocking artifacts. Perhaps, the most surprising result is the performance of the Rate-Distortion AutoEncoder (RDAE) based approach [23] which does not scale very well to large dimensions. We obtain a 3.5dB improvement in PSNR while maintaining a similar BPP albeit with slightly lower SSIM score.
For higher image resolutions, we continue to observe negligible drops in performance compared to INGP [1] while achieving \(4-9\times\) smaller bitrates. We qualitatively visualize results on the Cosmic-cliffs image (with a resolution of \(8441\times 14575\)) in Fig. 5. We achieve a similar PSNR as Instant-NGP (38.78 dB) with \(\sim\)\(6\times\) reduction in bitrates.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Image & Method & PSNR \(\uparrow\) & SSIM \(\uparrow\) & BPP \(\downarrow\) \\ \hline \multirow{4}{*}{UVG-F (\(1920\times 1080\))} & Positional [18] & 33.17 & 0.86 & 1.52 \\ & JPEG [5] & 36.98 & 0.91 & 0.76 \\ & RDAE [23] & 34.23 & **0.93** & 0.76 \\ & Ours & **37.74** & 0.92 & **0.76** \\ \hline \multirow{4}{*}{SMACS (\(4630\times 4537\))} & INGP [1] & 34.61 & 0.86 & 0.18 \\ & JPEG [5] & 34.77 & 0.86 & 0.18 \\ & RDAE [23] & 34.06 & **0.89** & 0.40 \\ & Ours & **34.90** & 0.86 & **0.04** \\ \hline \multirow{4}{*}{Cosmic-Cliffs (\(8441\times 14575\))} & INGP [1] & 38.78 & 0.96 & 0.63 \\ & SIREN [8] & 27.32 & 0.90 & 0.14 \\ & JPEG [5] & 38.38 & 0.95 & 0.29 \\ & RDAE [23] & 37.90 & **0.97** & 0.38 \\ & Ours & **38.79** & 0.96 & **0.11** \\ \hline \multirow{4}{*}{Pearl (\(23466\times 20000\))} & INGP [1] & **29.62** & 0.84 & 1.00 \\ & JPEG [5] & 29.10 & 0.84 & 0.29 \\ \cline{1-1} & Ours & 29.44 & **0.84** & **0.12** \\ \hline \multirow{4}{*}{Tokyo (\(21450\times 56718\))} & INGP [1] & **31.87** & **0.82** & 0.39 \\ \cline{1-1} & JPEG [5] & 31.16 & 0.82 & 0.18 \\ \cline{1-1} & Ours & 31.62 & 0.82 & **0.06** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Image compression at varying resolutions. We compare against implicit network methods of INGP [1], Positional [18], SIREN [8], and the auto-encoder based work, RDAE [23]. We achieve high values of PSNR, maintaining similar quality reconstructions as INGP while requiring far fewer bits (\(4-9\times\)).
Figure 4: Qualitative results on Kodak and UVG-F: We obtain much higher quality reconstructions capturing finer detail compared to [18] or JPEG. Notice the blocking and discoloration artifacts present in JPEG which are significant at lower BPP values.
MLP-based INRs such as SIREN [8] fail to capture the high frequency detail, resulting in blurry images and a low PSNR (27.32 dB). JPEG also leads to a large drop in performance in the same low BPP regime (\(\sim 0.15\)).
### Radiance fields compression
We now turn to the application of SHACIRA to neural radiance fields or NeRFs. We compare our approach against the baseline Instant-NGP [1], mip-NERF [56] and VQAD [4], a codebook based feature-grid compression approach. We evaluate on 10 brick scenes from the RTMV dataset by training each approach for 600 epochs and summarize the results in Table 3. We marginally outperform the baseline INGP in terms of all the reconstruction metrics of PSNR, SSIM and LPIPS while also achieving \(\sim\)\(48\times\) compression, requiring an average of only 1MB per scene. We also outperform mip-NERF, performing better on PSNR and SSIM while reducing the storage size. For a better comparison with VQAD (based on NGLOD-NERF [19]), we scale down T, the maximum number of hashes. We see that we obtain \(>20\%\) lower model size at 0.43 MB compared to 0.55 MB of VQAD while being slightly better in the reconstruction metrics. We also see a clear improvement over VQAD for the LEGO NeRF scene visualized in Figure 1. VQAD has around 1.5dB PSNR drop while still being \(\sim\)\(7\times\) larger in terms of storage size. Additionally, VQAD fails to scale to higher bitwidths due to memory constraints even on an NVIDIA A100 GPU with 40GB of GPU RAM.
To better illustrate the 3 approaches of INGP, VQAD and SHACIRA, we train on the full resolution V8 scene (\(1600\times 1600\)) for 600 epochs, visualizing the results in Fig. 6. We outperform VQAD (+1dB) at similar model size and obtain \(\sim\)\(60\times\) compression compared to INGP, but with 1dB drop in PSNR. Nevertheless, we reconstruct the scene with fewer artifacts and consistency in the intricate
Figure 5: Compression result visualization on the Cosmic-Cliffs image for 4 methods. We obtain similar PSNR and reconstruction quality as Instant-NGP while \(\sim\)\(6\times\) smaller. SIREN fails to fit high frequency information leading to blurry patches as seen. JPEG on the other hand suffers from blocking artifacts and discoloration leading to drop in reconstruction quality.
Figure 6: Evaluation on V8 from the RTMV dataset (\(1600\times 1600\) resolution). We obtain \(\sim\)\(60\times\) compression compared to Instant-NGP but with a PSNR drop of \(1dB\). We outperform VQAD obtaining higher PSNR at similar model size. We also obtain much finer reconstructions when compared with VQAD as shown in the zoomed patches.
shapes, compared to VQAD as highlighted in the patches.
### Video compression
Next, we apply our approach to video compression as well. As a baseline, we compare against SIREN, a coordinate-based INR. We also compare against NeRV [13], a video INR based approach which takes a positional encoding as input and predicts frame-wise outputs for a video. We also compare against another video INR, NIRVANA [14], which is an autoregressive patch-wise prediction framework for compressing videos. We run the baselines on the UVG dataset, with the results shown in Table 2.
We outperform SIREN by a significant margin with an almost \(+7\)dB gain in PSNR, \(25\%\) lower BPP, and shorter encoding times. This is to be expected as SIREN fails to scale to higher dimensional signals such as videos, which usually have more than \(10^{9}\) pixels. We also outperform NIRVANA, achieving higher PSNR and lower BPP, albeit at longer encoding times. Compared to NeRV, we obtain a 0.5dB PSNR drop but achieve \(3\times\) compression in model size. We would like to add that our current implementation utilizes PyTorch and the encoding time can be reduced significantly using a fully fused CUDA implementation ([1] demonstrated that efficient lookup of the hashtables can be an order of magnitude faster than a vanilla PyTorch version). Additionally, our approach is orthogonal to both baselines and provides room for potential improvements. For instance, compressed multi-resolution feature grids can be used to replace the positional embedding for NeRV as well as for coordinate-based SIREN, and provide faster convergence and better reconstruction for high-dimensional signals.
### Streaming LOD
[4] show the advantage of feature-grid based INRs for progressive streaming at inference time due to their multi-resolution representation capabilities. Since our compression framework consists of latents at different LODs as well, they can be progressively compressed with varying LOD yielding reconstructions with different resolution. Formally, for \(\mathbf{Q}=\{\mathbf{Q}_{1},\mathbf{Q}_{2},...\mathbf{Q}_{L}\}\) we can reconstruct the signal at LOD \(l\) by only passing the first \(l\) latents while masking out the finer resolutions. This can be applied directly during inference without any additional retraining. We visualize the effect of such progressive streaming in Fig. 7. We obtain better reconstruction with increasing bitrates or the latent size. This is especially beneficial for streaming applications as well as for easily scaling the model size based on the storage or bandwidth constraints.
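A sketch of this LOD-progressive decoding is given below; `decode_level`, `lookup_level` and `mlp` are placeholders for the decoder \(d_{\theta}\), the per-level grid lookup, and the MLP \(g_{\phi}\), respectively.

```python
import torch

def reconstruct_at_lod(latents, coords, lod, decode_level, lookup_level, mlp):
    # Keep the features of the first `lod` levels and zero out the finer ones
    # before passing the concatenated feature vector to the shared MLP.
    feats = []
    for level, Q in enumerate(latents):       # latents = [Q_1, ..., Q_L], coarse to fine
        f = lookup_level(decode_level(Q), coords, level)
        if level >= lod:                      # mask out finer resolutions
            f = torch.zeros_like(f)
        feats.append(f)
    return mlp(torch.cat(feats, dim=-1))
```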
### Convergence speeds
We now compare the convergence speeds of our feature-grid based approach with that of [18] which is an MLP-based INR with positional encoding. We summarize the results for an image in the Kodak dataset in Fig.8. We measure encoding times on an NVIDIA RTX A6000 GPU for the full length of the training for both approaches. We obtain higher PSNR at a much faster rate than [18] at a similar BPP range (hue value in color bar). While [18] reduces BPP (0.84 at 600s) with higher encoding times, their PSNR remains stagnant at 32.5dB. In contrast, our approach
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Encoding Time \(\downarrow\) & PSNR \(\uparrow\) & BPP \(\downarrow\) \\ \hline SIREN & 15 hours & 27.20 & 0.28 \\ NeRV & 3.5 hours & **35.54** & 0.66 \\ NIRVANA & 4 hours & 34.71 & 0.32 \\ Ours & 6.5 hours & 35.01 & **0.21** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison against various video INR approaches on the UVG dataset. We outperform NIRVANA with higher PSNR and lower BPP while obtaining slightly lower PSNR than NeRV at \(3\times\) reduction in model size.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & Storage \(\downarrow\) \\ \hline NeRF & 28.28 & 0.9398 & 0.0410 & 2.5MB \\ mip-NERF & **31.61** & **0.9582** & **0.0214** & **1.2MB** \\ \hline NGLOD-NERF & **32.72** & **0.9700** & **0.0379** & \(\approx\)20MB \\ VQAD & 31.45 & 0.9638 & 0.0468 & 0.55MB \\ Ours & 31.46 & 0.9657 & 0.0428 & **0.43MB** \\ \hline Instant-NGP & 31.88 & 0.9690 & 0.0381 & 48.9MB \\ Ours & **32.14** & **0.9704** & **0.0348** & **1.03MB** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of various methods on scenes in the RTMV dataset. We marginally outperform the baseline Instant-NGP in terms of all metrics while also achieving a \(48\times\) compression.
Figure 7: Our multiresolution compressed representations can be transmitted at varying LODs at inference time (without any retraining) making it suitable for applications with progressive streaming.
achieves this PSNR and BPP (0.85) within just 53s achieving more than a \(10\times\) speedup in convergence. Additionally, we achieve higher PSNR with longer encoding times reaching 34dB in 430s while maintaining a similar BPP (0.87).
### Effect of entropy regularization and annealing
In this section, we analyze the effect of entropy regularization and annealing. We pick the Jockey image from UVG-F (\(1080\times 1920\)) for our analysis. We set the default values of latent and feature dimensions to \(1\). For the analysis, we compare trade-off curves by increasing the number of entries from \(2^{13}\) to \(2^{17}\) in multiples of 2, which naturally increases the number of parameters and also the PSNR and BPP. Note that better trade-off curves indicate shifting upwards (higher PSNR) and to the left (lower BPP).
Figure 9 shows the effect of entropy regularization. The absence of it, corresponding to a value of \(0.0\) shows a drop in performance compared to values of \(1.0e^{-4}\) and higher. We see that the network performance is fairly stable in this range with much higher values of \(4.0e^{-4}\) showing small drops. This shows that entropy regularization using the defined probability models helps in improving the PSNR-BPP trade-off curve by decreasing entropy (or model size/BPP) with no drop in network performance (PSNR).
Figure 10 shows the effect of annealing. We vary the duration of annealing as a fraction of the total training duration. No annealing corresponds to the value \(0.0\) and has a large drop in the trade-off curve compared to higher values. This shows that annealing performs better than standard STE (Straight Through Estimator) alone and is an important component of our approach for quantizing the feature grid parameters. Increasing the period of annealing shows consistent improvements in the trade-off curve. |
2309.04789 | Local Certification of Some Geometric Intersection Graph Classes | In the context of distributed certification, the recognition of graph classes
has started to be intensively studied. For instance, different results related
to the recognition of planar, bounded tree-width and $H$-minor free graphs have
been recently obtained. The goal of the present work is to design compact
certificates for the local recognition of relevant geometric intersection graph
classes, namely interval, chordal, circular arc, trapezoid and permutation.
More precisely, we give proof labeling schemes recognizing each of these
classes with logarithmic-sized certificates. We also provide tight logarithmic
lower bounds on the size of the certificates on the proof labeling schemes for
the recognition of any of the aforementioned geometric intersection graph
classes. | Benjamín Jauregui, Pedro Montealegre, Diego Ramírez-Romero, Ivan Rapaport | 2023-09-09T13:29:23Z | http://arxiv.org/abs/2309.04789v1 | # Local Certification of Some Geometric Intersection Graph Classes+
###### Abstract
In the context of distributed certification, the recognition of graph classes has started to be intensively studied. For instance, different results related to the recognition of planar, bounded tree-width and \(\boldsymbol{H}\)-minor free graphs have been recently obtained. The goal of the present work is to design compact certificates for the local recognition of relevant geometric intersection graph classes, namely interval, chordal, circular arc, trapezoid and permutation. More precisely, we give proof labeling schemes recognizing each of these classes with logarithmic-sized certificates. We also provide tight logarithmic lower bounds on the size of the certificates on the proof labeling schemes for the recognition of any of the aforementioned geometric intersection graph classes.
**Keywords:** Distributed computing; Local certification; Proof labeling schemes; Graph classes recognition; Geometric intersection graph classes
**MSC Classification:** 68Q25, 68R10, 68U05
## 1 Introduction
This paper examines the standard scenario of distributed network computing, where nodes in a network, represented as a graph \(G=(V,E)\), exchange information through the links \(E\) of the graph (see, for example, Peleg [47]). The objective is to gain a deeper understanding of the locality of graph properties. For instance, let's consider the property "every node has an even number of neighbors." This property can be checked locally, meaning that if each node verifies that it has an even number of neighbors, then the graph satisfies the property.
Similar to centralized computing, distributed algorithms often make assumptions about the properties of \(G\), and many algorithms are designed for specific types of graphs, such as regular graphs, planar graphs, bipartite graphs, or graphs with bounded tree-width. However, most graph properties of interest are not locally checkable. For instance, determining whether the graph has an even number of vertices requires nodes to examine beyond their immediate vicinity. Other natural properties like acyclicity or planarity require the nodes to look arbitrarily far in the graph to verify them.
To cope with properties that are not locally checkable, several model extensions have been proposed. One possible solution is through local certification, which enables the local verification of any graph property. A local certification consists of a certificate assignment and a verification algorithm for a specific property. Together with the input information, each node receives a certificate and executes the verification algorithm communicating with their neighborhood. This algorithm determines whether the node accepts or rejects the certification. The protocol has to satisfy soundness and completeness conditions. Namely, if the graph satisfies the property, there exists a certificate assignment where all nodes accept it. Conversely, if a property is not satisfied, there is at least one node that rejects the certificate in every assignment.
In recent years, the field of local certification has gained considerable attention. We refer to Feuilloley [17] for an introduction on the area.
Proof-labeling schemes (PLSs) are, arguably, the best-known type of local certification protocol. They were introduced by Korman, Kutten, and Peleg in 2010 [38]. PLSs represent one of the weakest forms of local certification, where the verification algorithm is restricted to sharing the certificates in just one round of communication. In simpler terms, each node runs a verification algorithm with knowledge limited to its own certificate and the certificates of its neighbors in the graph. Despite these limitations, PLSs exhibit remarkable capabilities when compared to other local certification algorithms.
It is known that any property can be certified by a PLS using certificates of size \(\mathcal{O}(n^{2})\) bits, where \(n\) is the total number of vertices. This can be achieved by providing each node with a complete description of the graph, allowing them to verify the property and the correctness of the local graph description. However, the \(\mathcal{O}(n^{2})\) certificate size is excessively large. Therefore, the primary objective in the study of local certification is to minimize the certificate size, expressed in terms of bits per vertex as a function of \(n\). Determining the minimum certificate size holds theoretical significance, as the optimal certificate size of a property reflects its locality: smaller certificates imply less dependence on global information, indicating a more localized property.
Motivated by the results of Goos and Suomela [29], in [18] the authors remarked that \(\Theta(\log n)\) is a benchmark for the number of bits that one can hope for a PLS to achieve. Indeed, certificates of size \(o(\log n)\) are too short even for very simple properties. For instance, any PLS that verifies acyclicity requires certificates of \(\Omega(\log n)\) bits [38]. On the other hand, a logarithmic number of bits allows us to encode identifiers, distances, spanning trees, etc. For these reasons, a certification with \(\Theta(\log n)\) bits is called a _compact local certification_.
Unfortunately, not every property has a compact certification. For example, not being \(3\)-colorable cannot be certified with less than \(\Omega(n^{2}/\log n)\) bits [29]. This is in sharp contrast with the problem of verifying \(3\)-colorability: there is a trivial PLS to verify whether the graph is \(3\)-colorable with two bits, which simply assigns each vertex a number in \(0,1,2\) representing its color in a proper \(3\)-coloring.
In [18], the authors raise the question of which graph properties admit compact certifications. In recent years, several results have emerged demonstrating that many relevant graph classes can be recognized using compact certificates (see the Related Work section below for more details). In this article, we study a specific set of graph properties defined by the intersection of geometric objects.
### Geometric Intersection Graph Classes
A graph \(G=(V,E)\) is a geometric intersection graph if every node \(v\in V\) is identified with a geometric object of some particular type, and two vertices are adjacent if and only if the corresponding objects intersect. Intersection graphs are the natural model of wireless sensor networks (where simple devices are deployed in large areas), but they also appear in disciplines that do not necessarily come from distributed computing such as biology, ecology, matrix analysis, circuit design, statistics, archaeology, scheduling, etc. For a nice survey, we refer to [45].
The two simplest non-trivial, and arguably two of the most studied geometric intersection graphs are _interval graphs_ and _permutation graphs_. In fact, most of the best-known geometric intersection graph classes are either generalizations of interval graphs or generalizations of permutation graphs. It comes as no surprise that many papers address different algorithmic and structural aspects, simultaneously, in both interval and permutation graphs [2, 32, 40, 51].
In both interval and permutation graphs, the intersecting objects are (line) segments, with different restrictions imposed on their positions. In interval graphs, the segments must all lie on the real line. In permutation graphs, the endpoints of the segments must lie on two separate, parallel real lines. In Figure 1 we give an example of an interval graph, while in Figure 2 we give an example of a permutation graph.
Although the class of interval graphs is quite restrictive, there are a number of practical applications and specialized algorithms for interval graphs [28, 30, 37]. Moreover, for several applications, the subclass of unit interval graphs (the situation where all the intervals have the same length) turns out to be extremely useful as well [4, 34].
A natural generalization of interval graphs is the class of _circular arc graphs_, where the segments, instead of lying on a line, lie on a circle. More precisely, a circular arc graph is the intersection graph of arcs of a circle (see Figure 3). Although circular arc graphs look similar to interval graphs, several combinatorial problems behave very differently
on these two classes of graphs. For example, the coloring problem is NP-complete for circular-arc graphs while it can be solved in linear time on interval graphs [27]. Recognizing circular-arc graphs can also be done in linear time [33, 44].
Another natural, well-known generalization of interval graphs is _chordal graphs_. These graphs are intersections of subtrees of a tree. More precisely, \(G\) is chordal if and only if there exists a tree \(T\) such that every node of \(G\) can be associated with a subtree of \(T\) in such a way that two nodes of \(G\) are adjacent if their corresponding subtrees intersect. Chordal graphs are among the most-studied graph classes [6, 28] and, in fact, they have appeared in the literature with different names such as rigid-circuit graphs, triangulated graphs, perfect elimination graphs, decomposable graphs, acyclic graphs, etc. Chordal graphs can be recognized in linear time [49] and they have many applications, for instance in phylogeny tree reconstruction, a fundamental problem in computational biology [8, 35, 41]. The name chordal comes from the fact
Figure 1: An example of an interval graph together with a representation as the intersection of intervals.
Figure 3: An example of a circular arc graph: the left side shows a representation with overlapping arcs in the circle, while the right side shows its associated graph realization.
Figure 2: An example of a permutation graph with its corresponding intersection model.
that a graph is chordal if and only if every cycle of length at least \(4\) has a chord. It is interesting to point out that, in the framework of distributed computing, the authors in [9] exhibit distributed algorithms for recoloring interval and chordal graphs.
In addition, the class of _trapezoid graphs_ is a generalization of both interval graphs and permutation graphs. A trapezoid graph is defined as the intersection graph of trapezoids between two horizontal parallel lines with two vertices in each line (see Figure 5). Ma and Spinrad [42] showed that trapezoid graphs can be recognized in \(\mathcal{O}(n^{2})\) time. Trapezoid graphs were applied in various contexts such as VLSI design [14] and bioinformatics [1].
### PLSs for Geometric Graph Classes
A naive approach to defining a PLS for a geometric graph class is to assign each vertex the corresponding geometric object it represents. During the verification phase, the vertices could check with their neighbors to ensure that the objects they represent intersect. However, this naive approach is not generally effective in defining compact certificates due to two difficulties.
First, we would need to encode the geometric objects using a logarithmic number of bits. While this may be possible for certain geometric graph classes, such as interval graphs, it is not clear if it holds true in general. For example, for chordal graphs, we
Figure 4: An example of a chordal graph with its corresponding intersection model.
Figure 5: An example of a trapezoid graph with its corresponding intersection model.
do not know how to encode subtrees of a given tree using a logarithmic number of bits with respect to its size.
The second difficulty is that even if we could efficiently encode the objects, the soundness condition of a PLS requires that every graph not belonging to the given geometric graph class has to be rejected by the certification process. Therefore, to satisfy the soundness requirement, the vertices would also need to check that all non-adjacent vertices are assigned non-intersecting objects. This would require a vertex to check conditions with other non-adjacent vertices that could be far away in the graph. Notice that geometric graph classes, including the ones discussed in this article, can have arbitrarily large diameters.
Therefore, it is necessary to develop more sophisticated ideas in order to overcome the difficulties inherent in the naive approach.
### Our Results and Techniques
In the present work we show compact PLSs (i.e. with logarithmic-sized certificates) for the recognition of all the aforementioned geometric intersection graph classes, namely interval and chordal graphs (Section 3), circular arc graphs (Section 4) and, finally, trapezoid and permutation graphs (Section 5). For all these classes we also provide, in Section 6, tight logarithmic lower bounds on the size of the certificates.
In our results, we employ different sets of techniques that leverage the structural properties of the considered graph classes. We will now briefly explain our constructions.
_Chordal and Interval graphs._ As we explained above, chordal graphs are intersections of subtrees of a tree. They are also defined as the graphs in which every induced cycle has length at most 3 (i.e. every cycle of length at least 4 has a _chord_). Interestingly, chordal graphs can be characterized by the existence of a specific tree-decomposition, called _clique-tree_. This tree-decomposition shares the same properties that define the treewidth, with the exception that each bag (node of the decomposition) forms a maximal clique of the graph (see Section 3 for more details on these definitions). The certification of the clique-tree shares some ideas with the ones used in [24] to certify graphs of bounded tree-width. Observe, however, that the maximal cliques of a chordal graph are unbounded, as a chordal graph may have unbounded tree-width. Therefore, new ideas had to be developed. We take advantage of the properties of the clique-trees, in particular high connectivity within the bags, to obtain a PLS with certificates of size \(\mathcal{O}(\log n)\) for the verification of chordal graphs.
The PLS for certification of interval graphs follows as a direct application of the PLS for chordal graphs. Indeed, an interval graph is a particular type of chordal graph, where the clique-tree is restricted to be a path. Then, the certification of interval graphs uses the certification of chordal graphs, while at the same time verifying that the given decomposition is indeed a path.
_Circular Arc graphs._ As described in Figure 3, a circular-arc graph is represented by a set of arcs in a circle such that two nodes are neighbors if and only if their arcs have nonempty intersection. In order to recognize this class, we first tackle the problem of recognizing the subclass of _proper circular-arc_ graphs, which are graphs that admit a circular-arc representation in which no arc is contained in another. Using a property of the adjacency matrix of graphs in this subclass given by [50], we develop a scheme to verify this property. Then, we extend the property of [50] to the whole class of circular arc graphs and we proceed to verify this property distributively. In this part, the main idea is to develop an algorithm that allows the nodes to verify a global property of their adjacency matrix, which is an extension of a known characterization for proper circular-arc graphs.
_Trapezoid and Permutation graphs._ Recall that a graph is a trapezoid graph if each node can be assigned a trapezoid inscribed in two parallel lines, with two vertices on each line, such that two nodes are neighbors if and only if their corresponding trapezoids have nonempty intersection, as shown in Figure 5. As both parallel lines contain \(2n\) vertices, we can enumerate the endpoints on each line from \(1\) to \(2n\), so a trapezoid can be characterized as a tuple \((t_{1}(v),t_{2}(v),b_{1}(v),b_{2}(v))\in[2n]^{4}\). If the collection \(\{(t_{1}(v),t_{2}(v),b_{1}(v),b_{2}(v))\}_{v\in V}\) satisfies that two nodes \(u,v\) are neighbors if and only if their corresponding trapezoids intersect, we say it is a _proper trapezoid model_; if it only satisfies that adjacent nodes have intersecting trapezoids (one direction of the equivalence), we call it a _semi-proper trapezoid model_. Verifying that a model \(\{(t_{1}(v),t_{2}(v),b_{1}(v),b_{2}(v))\}_{v\in V}\) given by the prover is semi-proper is straightforward to do distributively: each node shares its trapezoid with its neighbors and checks that they intersect. To prove that a semi-proper model is in fact a proper trapezoid model, we also need to verify that all non-adjacent nodes have disjoint trapezoids. As we cannot do this directly, since non-adjacent nodes do not communicate, we prove that a semi-proper model is a proper trapezoid model if it satisfies two conditions (Lemma 4) that are easier to verify distributively, because they only depend on the positions of the vertices on each line and on a local computation that each node can perform.
The result then implies a PLS for recognizing permutation graphs, because we prove that a permutation model, i.e., a collection of segments with endpoints on two parallel lines, as in Figure 2, such that each node is associated with a segment and two nodes are neighbors if and only if their corresponding segments intersect, can be represented as a specific proper trapezoid model with an extra condition that can be verified locally by the nodes.
_Lower bounds._ To obtain tight lower bounds we use two different approaches. First, to get a lower bound for the classes of interval, circular arc and chordal graphs, we adapt a construction for lower bounds in the locally checkable proofs model from [29] to the PLS model; the main idea is to construct a collection of graphs in each class that would be indistinguishable from a particular no-instance if we allowed messages of just \(o(\log n)\) bits. In order to obtain a lower bound of \(\Omega(\log n)\) on the proof-size for the recognition of permutation and trapezoid graphs, we use a technique from [25] called _crossing edge_, in which we need to construct a specific graph that belongs to the class but such that, if we interchange specific edges between some nodes, the resulting graph is no longer in the class. Then, by a result of [25], we obtain the desired tight lower bound.
### Related Work
Since the introduction of PLSs [39], different variants have been introduced. Some stronger forms of PLS include locally checkable proofs [29], where each node can send not only its certificate but also its state, and \(t\)-PLS [20], where nodes perform communication at distance \(t\geq 1\) before deciding. Authors have studied many other variants of PLSs, such as randomized PLSs [25], quantum PLSs [23], interactive protocols [13, 36, 46], zero-knowledge distributed certification [5], PLSs that use global certificates in addition to the local ones [22], etc. On the other hand, some trade-offs between the size of the certificates and the number of rounds of the verification protocol have been exhibited [20]. Also, several hierarchies of certification mechanisms have been introduced, including games between a prover and a disprover [3, 19].
PLSs have been shown to be effective for recognizing many graph classes. For example, there are compact PLSs (i.e. with logarithmic size certificates) for the recognition of acyclic graphs [38], planar graphs [21], graphs with bounded genus [16], \(H\)-minor-free graphs (as long as \(H\) has at most four vertices) [10], etc.
In a recent breakthrough, Bousquet et al. [11] proved a "meta-theorem" stating that there exists a PLS for deciding any monadic second-order logic property with \(O(\log n)\)-bit certificates on graphs of bounded _tree-depth_. This result has been extended by Fraigniaud et al. [24] to the larger class of graphs with bounded _tree-width_, using certificates on \(O(\log^{2}n)\) bits. This result implies in particular the existence of a (nearly) compact PLS for certifying the class of graphs with tree-width at most \(k\) (for any fixed \(k\)). Moreover, these results have other direct implications for the design and analysis of (nearly) compact PLSs for graphs with certain structural properties. For instance, for every planar graph \(H\), there is a PLS verifying \(H\)-minor free graphs with certificates of size \(\mathcal{O}(\log^{2}n)\).
## 2 Preliminaries
All graphs in this work are considered simple and undirected. An \(n\)-node graph \(G=(V,E)\) is a graph with \(|V|=n\). Given a graph \(G=(V,E)\), the set of neighbors of a node \(v\in V\) (nodes connected to \(v\) via an edge in \(G\)) is denoted as \(N_{G}(v)\)1.
Footnote 1: When the graph is clear from context, we omit the subscript.
Given \(n\in\mathbb{N}\), \([n]\) corresponds to the set \(\{1,...,n\}\) and \(S_{n}\) to the set of all permutations in \([n]\). For \(n,m\in\mathbb{N}\), \(n<m\), we define \([n,m]_{\mathbb{N}}=\{n,n+1,...,m-1,m\}\).
### Distributed Languages
Let \(G=(V,E)\) be a simple connected \(n\)-node graph, and let \(I\colon V\to\{0,1\}^{*}\) be an input function assigning labels to the nodes of \(G\), where the size of all inputs is polynomially bounded in \(n\). Let \(\mathsf{id}\colon V\to\{1,...,n^{c}\}\) for some constant \(c>0\) be a one-to-one function assigning identifiers to the nodes. A _distributed language_ \(\mathcal{L}\) is a (Turing decidable) collection of triples \((G,\mathsf{id},I)\), called _network configurations_.
Sometimes the label function \(I\) represents some construction over the graph; for example, it can be a single bit in \(\{0,1\}\) indicating a subset of nodes, which can represent a vertex cover, a maximal independent set, etc. In our case, we are interested in a property of \(G\) itself, and not in verifying some property of the labels given by \(I\), so although the formal definition includes a label function, we omit it for simplicity. The distributed languages under study in this work are the following:
\[\begin{aligned}
\text{interval}&=\{(G,\mathsf{id})\colon G\text{ is an interval graph}\}\\
\text{chordal}&=\{(G,\mathsf{id})\colon G\text{ is a chordal graph}\}\\
\text{Circular-Arc}&=\{(G,\mathsf{id})\colon G\text{ is a circular-arc graph}\}\\
\text{Permutation}&=\{(G,\mathsf{id})\colon G\text{ is a permutation graph}\}\\
\text{Trapezoid}&=\{(G,\mathsf{id})\colon G\text{ is a trapezoid graph}\}
\end{aligned}\]
### Proof Labeling Schemes
Formally, we define a _proof-labeling scheme_ (PLS) for a distributed language \(\mathcal{L}\) as a pair consisting of a prover and a verifier. The _prover_ is an untrusted oracle that, given a network configuration \((G,\mathsf{id})\), assigns a _certificate_ \(c(v)\) to each node \(v\) of the graph. The _verifier_ is a distributed algorithm that runs locally at each node \(v\) in \(G\). This verification algorithm first requires each node \(v\) to communicate with its neighbors \(w\in N_{G}(v)\), sending \(c(v)\) (and possibly its \(\mathsf{id}\)) and receiving the certificates (and possibly the \(\mathsf{id}\)'s) of all its neighbors. Given \(\mathsf{id}(v)\), \(c(v)\), and the certificates and \(\mathsf{id}\)'s of its neighbors, each node \(v\) runs the verification algorithm on this information and outputs either accept or reject.
A PLS is considered correct if it satisfies the following two conditions:
* Completeness: If \((G,\mathsf{id})\in\mathcal{L}\) then the prover can assign certificates to the nodes such that the verifier accepts at all nodes,
* Soundness: If \((G,\mathsf{id})\notin\mathcal{L}\) then, for every certificate assignment to the nodes by the prover, the verifier rejects in at least one node.
The complexity measure of a PLS is the _proof-size_ \(f(n)\), measured as a function of the number of nodes \(n\), and defined as the maximum length of any certificate sent by the prover to the nodes, or of any message exchanged between neighbors, over all network configurations \((G,\mathsf{id})\) with \(n\) nodes.
### Toolbox
In this subsection, we present already established protocols that we use as subroutines throughout the paper. Note that some of these subroutines solve problems that are not decision problems.
### Spanning Tree and Related Problems
The construction of a spanning tree is a fundamental component for various protocols in the PLS model. Given a network configuration \(\langle G,\mathsf{id}\rangle\), the Spanning-Tree problem
involves creating a spanning tree \(T\) of \(G\), with each node possessing knowledge about which of its incident edges are part of \(T\).
**Proposition 1**.: _[_39_]_ _There is a PLS for Spanning-Tree with proof-size of \(\mathcal{O}(\log n)\) bits._
To illustrate the PLS model, it is helpful to show a PLS for verifying a spanning tree.
**Protocol 2**.: _First, the prover gives to each node \(v\in V\) the following information._
* _The identifier of the root_ \(r\in V\) _of the spanning tree._
* _The identifier_ \(p_{v}\) _of its father in the tree._
* _Its distance_ \(d(v)\) _and the distance of its father_ \(d(p_{v})\) _to the root_ \(r\)_._
_Then, in the verification round, each node \(v\in V\) verifies whether_
* _All the nodes received the same root_ \(r\in V\)_._
* _The_ \(\mathsf{id}\) _of its father_ \(p_{v}\) _is the_ \(\mathsf{id}\) _of some neighbour._
* _If_ \(d(p_{v})=k\)_, then_ \(d(v)=k+1\)_._
_Each node accepts only if all three conditions are satisfied; otherwise, it rejects._
Now let us analyse the correctness and soundness of the protocol.
**Correctness.** An honest prover provides a unique root and the correct distances in the tree, so all nodes accept.
**Soundness.** If the prover gives two or more different roots then, since the graph is connected, there are two neighbors \(u,v\) that received different roots, and they reject. If the tree given by the prover forms a cycle, then there exist two nodes \(u\) and \(v\) such that \(u\) is the parent of \(v\) but \(d(v)<d(u)\), and \(v\) rejects. Therefore, the tree constructed by the prover has to be correct, and thus the distances too.
**Proof-size analysis.** As node identifiers can be encoded on \(\mathcal{O}(\log n)\) bits and the maximum distance between two nodes in an \(n\)-node graph is \(n-1\), the distances \(d(v)\) can also be encoded on \(\mathcal{O}(\log n)\) bits.
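For concreteness, the following is a minimal Python sketch of the local test each node runs in Protocol 2; the data layout (certificate tuples, adjacency dictionaries) and the convention that the root carries distance \(0\) are illustrative assumptions, not part of the formal model.

```python
def verify_spanning_tree(v, cert, neighbor_certs):
    """cert = (root_id, parent_id, dist); neighbor_certs maps neighbour id -> certificate."""
    root_id, parent_id, dist = cert
    # (1) All neighbours received the same root.
    if any(c[0] != root_id for c in neighbor_certs.values()):
        return False
    if v == root_id:
        # Convention assumed for this sketch: the root has no parent and distance 0.
        return dist == 0
    # (2) The claimed father is an actual neighbour.
    if parent_id not in neighbor_certs:
        return False
    # (3) Distances increase by exactly one along tree edges.
    return dist == neighbor_certs[parent_id][2] + 1


# Toy run on the path 1 - 2 - 3 rooted at 1, with certificates as an honest prover would assign them.
certs = {1: (1, None, 0), 2: (1, 1, 1), 3: (1, 2, 2)}
adj = {1: [2], 2: [1, 3], 3: [2]}
print(all(verify_spanning_tree(v, certs[v], {u: certs[u] for u in adj[v]}) for v in adj))  # True
```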
From the protocol of Proposition 1, we can construct another protocol for the Size problem. In this problem, the nodes are given an input graph \(G=(V,E)\) and must verify the exact value of \(|V|\), assuming that the nodes only know a polynomial upper bound on \(n=|V|\). Proposition 3 states that there exists a PLS for Size with certificates of size \(\mathcal{O}(\log n)\).
**Proposition 3**.: _[_39_]_ _There is a PLS for Size with certificates of size \(\mathcal{O}(\log n)\)._
**Protocol 4**.: _In the first round, the prover gives to each node \(v\in V\) a certificate with the following information_
* _The information needed according to Protocol_ 2 _to construct a valid spanning tree_ \(T\)_._
* _The number of nodes_ \(c_{v}\) _in_ \(T_{v}\)_, the T-subtree rooted in_ \(v\)_._
_In the verification round each node \(v\in V\) validates that the spanning tree constructed is correct according to the verification round of Protocol 2 and that_
\[c_{v}=1+\sum_{\omega\ \text{child of}\ v}c_{\omega}.\]
Soundness and completeness follow directly. Notice that nodes can check with their neighbours that they all received the same \(n\), and the root checks whether this value is correct.
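A small sketch of the additional check in Protocol 4, under the same illustrative conventions as above: each node verifies that its announced subtree count equals one plus the counts of its children, and the root compares the total against the announced value of \(n\).

```python
def verify_size(v, children, counts, claimed_n, is_root):
    """counts maps node id -> subtree count c_v announced by the prover."""
    ok = counts[v] == 1 + sum(counts[c] for c in children)  # c_v = 1 + sum over v's children
    if is_root:
        ok = ok and counts[v] == claimed_n  # only the root compares against the announced n
    return ok


# Toy run on the path 1 - 2 - 3 rooted at 1, with honest counts.
counts = {1: 3, 2: 2, 3: 1}
children = {1: [2], 2: [3], 3: []}
print(all(verify_size(v, children[v], counts, 3, v == 1) for v in (1, 2, 3)))  # True
```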
For two fixed nodes \(s,t\in V\), problem \(s,t-\textsc{Path}\) is defined in the usual way: given a network configuration \(\langle G,\mathsf{id}\rangle\), the output is a path \(P\) that goes from \(s\) to \(t\). In other words, each node must end up knowing whether it belongs to \(P\) or not; and, if it belongs to the path, it has to know which of its neighbors are its predecessor and successor in \(P\).
**Proposition 5**.: _[_39_]_ _There is a PLS for \(s,t-\textsc{Path}\) with certificates of size \(\mathcal{O}(\log n)\)._
**Protocol 6**.: _The prover sends to each node \(v\) a bit \(b_{v}\in\{0,1\}\) which reports if the node is part of the path (\(b_{v}=1\)) or not (\(b_{v}=0\))._
_If \(b_{v}=1\), the prover also sends the identifiers of its predecessor and successor in the path._
_In the verification round, each node \(v\) such that \(b_{v}=1\) and \(v\neq s,t\) verifies that exactly two of its neighbours are part of the path, that exactly one neighbour has \(v\) as predecessor, and that exactly one neighbour has \(v\) as successor. In the same way, \(s\) and \(t\) verify that they have one successor and one predecessor, respectively._
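The local consistency checks of Protocol 6 can be sketched as follows (again with hypothetical data structures; this captures only the checks described above, not the full scheme of [39]).

```python
def verify_path_node(v, cert, neighbor_certs, s, t):
    """cert = (on_path, pred, succ); nodes off the path have nothing to check."""
    on_path, pred, succ = cert
    if not on_path:
        return True
    if v != s:  # every path node except s needs a predecessor that declares v as its successor
        if pred not in neighbor_certs or not neighbor_certs[pred][0] or neighbor_certs[pred][2] != v:
            return False
    if v != t:  # every path node except t needs a successor that declares v as its predecessor
        if succ not in neighbor_certs or not neighbor_certs[succ][0] or neighbor_certs[succ][1] != v:
            return False
    return True


# Toy run on the path 1 - 2 - 3 with s = 1 and t = 3.
certs = {1: (True, None, 2), 2: (True, 1, 3), 3: (True, 2, None)}
adj = {1: [2], 2: [1, 3], 3: [2]}
print(all(verify_path_node(v, certs[v], {u: certs[u] for u in adj[v]}, 1, 3) for v in adj))  # True
```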
Based on the aforementioned results, we assume the existence of PLSs with logarithmic proof-size for computing Spanning-Tree, Size and s,t-Path, throughout the paper. We treat these algorithms as black boxes and employ them as subroutines in our protocols.
## 3 Interval and Chordal Graphs
We begin with the study of a PLS to recognize the class of interval graphs. An interval graph is a graph \(G=(V,E)\) where each node \(v\in V\) can be identified with a unique interval \(I_{v}\) on the real line such that \(uv\in E\iff I_{u}\cap I_{v}\neq\varnothing\). An example can be seen in Figure 1.
### Proper Interval Graphs
As a warm-up, we start with the problem of recognizing the subclass of _proper interval graphs_, which are interval graphs that can be represented by intervals in such a way that no interval properly contains another. This class is equivalent to the class of _unit interval graphs_, where all intervals have length one [48]. The following proposition gives another, useful characterization of proper interval graphs.
**Proposition 7** ([26]).: _A graph \(G\) admits a representation by proper intervals if and only if there exists an ordering \(\{v_{i}\}_{i=1}^{n}\) such that:_
\[\forall i,j,k:i<k<j,\quad v_{i}v_{j}\in E\Longrightarrow v_{i}v_{k}\in E \text{ and }v_{k}v_{j}\in E.\]
The core idea for this class and those that follow is to represent the class property as a combination of simpler instructions to be shared among all nodes. As making the adjacency of a node explicit would require proofs to be too large, we intend to exploit the geometric properties of these classes in order to represent their adjacency with a constant amount of _log_-sized labels.
Constructing a PLS for recognizing proper interval graphs from the previous characterization is rather direct.
**Theorem 8**.: _There is a PLS for proper interval using certificates of size \(\mathcal{O}(\log n)\) bits._
Proof.: The certificate that the prover sends to each node \(v\in V\) has two parts.
* A number \(i_{v}\in[n]\), which will be interpreted as the position of node \(v\) in the ordering.
* Two different ids: \(\mathsf{id}_{first}\) and \(\mathsf{id}_{last}\), for having a global consensus on the first and last node of the ordering.
The algorithm performed by the nodes is as follows. Each node \(v\) interprets \(i_{v}\) as its position in the ordering. Then, the nodes check locally that they all received the same \(\mathsf{id}_{first}\) and \(\mathsf{id}_{last}\). The nodes with these ids correspond to the first and last nodes of the ordering, and they check that they received numbers \(1\) and \(n\), respectively. Finally, every node \(v\) checks locally that its \(d_{v}\) neighbours received pairwise different numbers in \([i_{v}-a,i_{v}-1]\cup[i_{v}+1,i_{v}+b]\) with \(a+b=d_{v}\), using the convention \([k,k-1]=\emptyset\). The first node checks that \(a=0\) and the last node checks that \(b=0\). All the other \(a\)'s and \(b\)'s must be different from zero.
**Completeness.** Suppose first that the graph \(G\) is a proper interval graph. Since an honest prover provides the correct ordering, from Proposition 7 it follows that every node accepts.
**Soundness.** Now we are going to prove that, if every node accepts, then the graph \(G\) is a proper interval graph. First, the nodes check that the assignment of numbers corresponds to an ordering. Note that every node \(v\) must have a neighbour \(v^{\prime}\) with \(i_{v^{\prime}}=i_{v}+1\), with the exception of the last one. Since there is one node that receives a \(1\) and another that receives an \(n\), every node must receive a different number in \([n]\). Therefore, the assignment given by the prover is indeed an ordering from \(1\) to \(n\). We can denote the nodes as \(v_{1},\ldots v_{n}\). Now, we need to prove that the ordering satisfies the condition of Proposition 7. In fact, let \(i<k<j\) be such that \(v_{i}v_{j}\in E\). From the point of view of \(v_{j}\), since \(v_{i}\) is a neighbour, \(v_{k}\) must also be a neighbour (because \([k,j]\subseteq[i,j]\)). On the other hand, from the point of view of \(v_{i}\), since \(v_{j}\) is a neighbour, \(v_{k}\) must also be a neighbour (because \([i,k]\subseteq[i,j]\)).
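The per-node test in the proof of Theorem 8 amounts to checking that the positions of \(v\)'s neighbours, together with \(v\)'s own position, form a contiguous block of integers. The sketch below illustrates only this contiguity check (it omits the consensus on \(\mathsf{id}_{first}\), \(\mathsf{id}_{last}\) and the check that the positions form a permutation of \([n]\)); the data structures are assumptions made for the example.

```python
def check_contiguous_positions(adj, pos):
    """adj: node -> set of neighbours; pos: node -> claimed position in the ordering.
    Each node checks that its neighbours' positions, together with its own,
    form a block of consecutive integers (the condition checked in Theorem 8)."""
    for v, nbrs in adj.items():
        block = sorted([pos[u] for u in nbrs] + [pos[v]])
        if block != list(range(block[0], block[-1] + 1)):
            return False
    return True


# The path 1 - 2 - 3 with the natural ordering is a proper interval graph.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
print(check_contiguous_positions(adj, {1: 1, 2: 2, 3: 3}))  # True
```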
### Chordal Graphs and the Particular Case of Interval Graphs
We now extend this strategy of representing the adjacency of nodes in a _compact_ manner to a more general setting by studying the classes of Chordal and Interval graphs. Both of these classes are closely related and share similar challenges when it comes to distributing the proof of its structure among all the nodes in the graph.
Chordal graphs are intersections of subtrees of a tree. More precisely, \(G\) is chordal if and only if there exists a tree \(T\) such that every node of \(G\) can be associated with a subtree of \(T\) in such a way that two nodes of \(G\) are adjacent if and only if their corresponding subtrees intersect. The name chordal comes from the fact that a graph is chordal if and only if every cycle of length at least \(4\) has a chord. That is, for any integer \(k\geq 4\), \(G\) does not have a \(C_{k}\) as an induced subgraph. These graphs are especially relevant from an algorithmic perspective as several graph properties (finding the largest independent
set or clique, computing the chromatic number, etc) can be efficiently computed when the input graph is restricted to this class [31, 49].
As for interval graphs, the structure of a chordal graph is determined by its maximal cliques: while an interval graph can be represented as a path formed by its maximal cliques, in the case of chordal graphs these can be seen as trees.
A tree decomposition of some graph \(G\) is a tree \(T_{G}\) where each node \(b\in T_{G}\) (referred to as _bags_) represents a set of nodes \(b\subseteq V\) in the original graph with the following properties: (1) each node \(v\in G\) is present in at least one bag, (2) for every edge \(e=uv\in E(G)\) there exists a bag \(b\) that contains both \(u\) and \(v\) and (3) if we define \(T_{v}\) as the set of bags in \(T_{G}\) to which \(v\) belongs, they form a connected subgraph of \(T_{G}\). From this, we can define a _clique-tree_ of a graph \(G\) as the special case of a tree decomposition for \(G\) where each bag represents a maximal clique of \(G\).
Similarly, we define a path decomposition of a graph \(G\) as a tree decomposition when the tree \(T_{G}\) in question is a path [7]. Following the previous notation, we define a _clique-path_ of a graph \(G\) as the special case of a path decomposition for \(G\) where each bag represents a maximal clique of \(G\) (see Figure 4).
**Proposition 9** ([12]).: _A graph \(G\) is said to be chordal if and only if it admits a clique-tree._
**Proposition 10** ([12]).: _A graph \(G\) is said to be an interval graph if and only if it admits a clique-path._
Our goal is to show a PLS for recognizing chordal graphs. For this, we would like to find a way to simulate the nodes in the clique-tree by choosing a set of leaders for each maximal clique. Then, we would like to label each node with the range of cliques it belongs to. The problem is that, while interval graphs require only two endpoints to represent such a range, in the case of chordal graphs we would need to encode an entire subtree which would require labels of size \(\mathcal{O}(n\log n)\). Therefore, we need to find a more succinct way to encode a tree.
For this, we show that we can "trim" the graph by sequentially removing nodes using the tree structure. If we consider a clique-tree rooted at some node \(\rho_{T}\) and trim the graph in a series of \(d\) steps (which are performed simultaneously by the nodes), with \(d\) the depth of the clique-tree, then we can partition the set of nodes as follows. At each step \(i\), we assume that the clique-tree \(T_{G}\) has depth \(i\) and look at the leaves at the deepest level in the clique-tree, and, for each such leaf \(b\), delete all the nodes which belong _only_ to this bag and call this set \(F_{b}\). Then, we continue to step \(i-1\) (where our new tree has depth \(i-1\)) and repeat this process. We know this set is non-empty by the maximality of the clique represented by the bag \(b\). As this goes on for \(d\) steps, we have that a node \(v\) is eliminated from the clique-tree at the step corresponding to the lowest depth of a bag containing \(v\), as can be seen in Figure 6.
**Lemma 1**.: _For any chordal graph \(G\) such that \(T_{G}\) is a clique-tree rooted at some bag \(\rho_{T}\), consider \(\{M_{b}\}_{b\in T_{G}}\) to be its set of maximal cliques. Then, it is possible to partition the nodes in \(V\) into a family \(\{F_{b}\}_{b\in T_{G}}\) such that for any pair of bags \(b\neq b^{\prime}\) with \(\mathsf{depth}(b)\geq\mathsf{depth}(b^{\prime})\) it holds that \(F_{b}\subseteq M_{b}\setminus M_{b^{\prime}}\)._
Proof.: We show this by induction on the number of bags in the clique-tree of a graph \(G\), given by \(|T_{G}|\). Indeed, if \(T_{G}\) is composed of only two bags \(b\) and \(b^{\prime}\) with \(T_{G}\) rooted
at \(b\) then, as both \(M_{b}\) and \(M_{b^{\prime}}\) are maximal cliques, we simply consider \(F_{b^{\prime}}\) to be \(M_{b^{\prime}}\setminus M_{b}\) and \(F_{b}=M_{b}\). Clearly, these sets are disjoint.
Consider now \(|T_{G}|=k\) with \(k\geq 3\). Then, as \(T_{G}\) is a tree, there must exist a bag \(b\in T_{G}\) with degree 1, with \(b^{\prime}\) its parent in \(T_{G}\). We then have that \(F_{b}=M_{b}\setminus M_{b^{\prime}}\) is disjoint from all other bags in the tree \(T_{G}\) as, otherwise, if there exists a node in \(F_{b}\) that is also in a bag at a lower depth then, by the definition of a clique-tree, it must belong to \(M_{b^{\prime}}\). From there, we have that the tree \(T^{\prime}=T_{G}-b\) is a clique-tree for the graph \(G^{\prime}=G-F_{b}\). It follows, by induction, that there exists a disjoint collection \(\{F_{\bar{b}}\}_{\bar{b}\in T_{G}-b}\) with the above properties. Hence, the family \(\{F_{\bar{b}}\}_{\bar{b}\in T_{G}-b}\cup\{F_{b}\}\) is as desired.
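As an illustration of Lemma 1, the partition can be computed (centrally, for intuition) by sending every vertex to the bag of smallest depth containing it; by the lemma this is exactly the family \(\{F_{b}\}_{b\in T_{G}}\). The following sketch assumes the clique-tree is given explicitly as sets of vertices with their depths.

```python
def partition_vertices(bags, depth):
    """bags: bag id -> set of vertices (maximal cliques of the clique-tree);
    depth: bag id -> depth of the bag in the rooted clique-tree.
    Returns F: bag id -> set of vertices assigned to that bag, as in Lemma 1."""
    F = {b: set() for b in bags}
    for v in set().union(*bags.values()):
        # Each vertex goes to the bag of lowest depth among the bags containing it.
        home = min((b for b in bags if v in bags[b]), key=lambda b: depth[b])
        F[home].add(v)
    return F


# Clique-tree of a small chordal graph: root bag {1, 2, 3} with one child bag {2, 3, 4}.
bags = {"r": {1, 2, 3}, "c": {2, 3, 4}}
depth = {"r": 0, "c": 1}
print(partition_vertices(bags, depth))  # {'r': {1, 2, 3}, 'c': {4}}
```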
Now, we would like to select a collection of leaders from each bag in \(T_{G}\) and provide certificates to these leaders in order to verify the overlaying clique-tree. Two difficulties arise:
1. The leaders of two adjacent bags may not be connected in \(G\), as by construction the leader of some bag \(b\) does not necessarily belong to \(b\)'s parent. Therefore, we would like to consider a collection of leaders (in the intersection of adjacent bags) in order to simulate \(T_{G}\)'s edges.
2. Even if we could solve the first problem, we have no guarantee that we will be able to choose a leader for each edge of \(T_{G}\) in an injective manner: it could be the case that a leader belongs to \(\Omega(n)\) maximal cliques adjacent to the same bag, and would therefore have to handle too many messages.
To solve the first issue, we show how to choose a root for \(T_{G}\) and a collection of nodes that belong to the intersection of adjacent bags in such a way that we are able to verify the correctness of the tree structure.
**Lemma 2**.: _Given a chordal graph \(G\), there exists a rooted clique-tree \(T\) such that it is possible to choose a collection of leaders for each bag \(\{v_{b}\}_{b\in T}\) and auxiliary nodes \(\{w_{\ell}\}_{\ell\in T}\) for each leaf in \(T\) such that if \(\mathsf{depth}(b)\) is the depth of a bag \(b\) in the tree and \(t(b)\) is the parent of the bag \(b\) in \(T\), then:_
* _For each_ \(b\in T,\quad v_{b}\in b\)_._
* _For each_ \(b\in T\)_,_ \(v_{b}v_{t(b)}\in E(G)\)_._
* _If_ \(\mathsf{depth}(b)\neq\mathsf{depth}(b^{\prime})\)_, then_ \(v_{b}\neq v_{b^{\prime}}\)_._
* _If_ \(b\in T\) _is a leaf, then_ \(w_{b}v_{b}\in E(G)\)_._
* \(\{v_{b}\}_{b\in T}\cap\{w_{\ell}\}_{\ell\in T}=\emptyset\)
Figure 6: Graph partition according to Lemma 1. Given a tree decomposition, each node \(v\) is positioned at a different set depending on the bag containing \(v\) such that its depth is the lowest in the clique-tree.
Proof.: Let \(T\) be a rooted clique-tree in \(G\) such that its set of leaves is the largest and the sum of their depths is as small as possible. Consider now \(b_{\rho}\in T\) to be the root of \(T\), and let some arbitrary node \(v_{r}\in b_{\rho}=b^{0}\) be its leader. If we define \(\{b_{i}^{1}\}_{i=1}^{\ell}\) to be the children of \(b_{\rho}\) in \(T\), for each \(i\) we choose an arbitrary leader \(v_{i}^{1}\) in \(b_{r}\cap b_{i}^{1}\), which is non empty as \(G\) is connected.
Consider \(b^{j}\) to be any node at level \(j\geq 1\) of the tree with \(b^{j-1}\) its parent, with \(v^{j}\) its leader and \(v^{j}\in b^{j}\cap b^{j-1}\).
* If \(b^{j}\) is a leaf, we choose an auxiliary node \(w_{j}\in b^{j}\) with \(w_{j}\notin b^{j-1}\). We can pick such a node because \(b^{j}\) represents a maximal clique in \(G\) and, otherwise, it would be contained in \(b^{j-1}\).
* If \(b^{j}\) is not a leaf, let \(\{b_{i}^{j}\}_{i=1}^{k}\) be the set of \(b^{j}\)'s children. For each \(i\) we choose a leader for \(b_{i}^{j}\) in \(b^{j}\cap b_{i}^{j}\setminus b^{j-1}\) as, otherwise, we would have that \(b^{j}\cap b_{i}^{j}\subseteq b^{j-1}\) for some \(i\). This implies that \(b_{i}^{j}\cap b^{j}\subseteq b^{j-1}\cap b_{i}^{j}\) and it would be possible to define a new tree \(T^{\prime}\) where \(b_{i}^{j}\) is a child of \(b^{j-1}\) instead of \(b^{j}\). Now, if \(b_{i}^{j}\) were not a leaf, we would have a tree with more leaves than \(T\). Otherwise, if \(b_{i}^{j}\) were a leaf, the sum of the depths of the leaves decreases by one. This contradicts our choice of \(T\). Hence, we can choose a leader \(v_{i}^{j}\) in \(b_{i}^{j}\cap b^{j}\) for each value of \(i\) and then proceed to the next level.
From the construction, we have that for any pair of adjacent bags in \(T\), either their leaders coincide or they are adjacent, as both belong to the intersection of their respective bags. Also, by the way the leaders were chosen in the latter point, namely at different depths, it follows directly that bags at different depths have different leaders.
Now we have a set of leaders who belong to the intersection of bags at different levels of a rooted clique-tree. Yet, these leaders may have several bags at the same level assigned to them, as multiple bags may share a unique element at the intersection with the previous level.
Figure 7: Transition from a tree decomposition to another one by reassigning a leaf at an upper level, in case that a bag intersection (\(b^{1}\cap b^{2}=\{u,v\}\)) is contained in an intersection at an upper level (\(b^{0}\cap b^{1}=\{u,v,w\}\)). We can do this while keeping a feasible decomposition.
To solve this issue we simply need to make use of the tree structure by setting, for each leader of a bag \(\rho_{b}\), the certificate provided by one of its children (e.g. the one with the smallest identifier). If a node is the leader of several bags, it still receives all corresponding proofs after these are exchanged at the verification round, with auxiliary nodes being chosen in order to cover the case when a node is the leader of multiple leaves in the tree. Hence, we have that each bag leader will receive a unique message, no matter the number of bags it represents. Now we are ready to prove the theorem.
**Theorem 11**.: _There is a PLS for chordal using certificates of size \(\mathcal{O}(\log n)\)._
Proof: Assuming that \(G\) is chordal, we first select a collection of nodes that represents a bag \(b\) in \(T_{G}\). We do this by selecting a single element \(\rho_{b}\) from each set \(F_{b}\) according to Lemma 1, as well as a spanning tree triple for simulating the overlaying clique-tree, which we denote by \(\langle\mathsf{id}(\rho_{T}),d_{T}(v),t_{T}(v)\rangle\) indicating the unique leader for the root of the clique tree, as well as the distance to it and \(\rho_{b}\)'s parent in this structure.
The prover provides each node \(v\in F_{b}\) with
* The size of the clique tree \(|T_{G}|\), along with the \(\mathsf{id}\) of the leader for its root \(\rho_{T}\).
* A label \(F(v)\) corresponding to the identifier \(\mathsf{id}(\rho_{b})\) of the leader in the set \(F_{b}\) to which a node \(v\) belongs to, as well as the size of \(F_{b}\).
* The distance from the bag \(b\) to the root \(\rho_{T}\) in the clique tree with \(v\in F_{b}\), given by \(\mathsf{depth}(v)\).
Also, in order to verify the tree structure, we choose a collection of leaders \(\{e_{bb^{\prime}}\}\) for each edge \(bb^{\prime}\in E(T_{G})\) according to Lemma 2, as well as the corresponding auxiliary nodes to pass these messages. Then, the nodes exchange their messages and they check the following:
1. The collections \(\{\rho_{b}\}_{b\in T_{G}}\) and \(\{e_{bb^{\prime}}\}_{bb^{\prime}\in E(T_{G})}\) verify in conjunction the correctness of the tree structure.
2. There is a unique root \(\rho_{T}\).
3. If \(v\in F_{b}\), then \(v\) checks that the nodes with \(F(u)=\mathsf{id}(\rho_{b})\) form a clique.
4. If \(v\) and \(u\) are adjacent with \(\mathsf{depth}(v)\leq\mathsf{depth}(u)\), then \(v\) is adjacent to all the leaders \(\rho_{b}\) (and their sets \(F_{b}\)) in the unique path between \(F(v)\) and \(F(u)\) which also coincides with the unique path between \(F(u)\) and \(\rho_{T}\). In particular, if \(\mathsf{depth}(v)=\mathsf{depth}(u)\), then they must have the same leader.
If all the previous conditions hold, then all nodes accept. Now, we check the correctness of this protocol.
**Completeness.** Suppose first that the graph \(G\) is chordal. An honest prover will provide each node \(v\) with its correct set \(F_{b}\) according to the underlying clique tree \(T_{G}\) which all leaders check correctly. By the definition of the clique tree, it follows that no node has a neighbour at the same depth from a different bag. As the set of bags to which \(v\) belongs to corresponds to a connected subgraph, it follows that if \(v\) is in a bag \(b\) and it is connected to a node \(u\) (which belongs to a bag \(b^{\prime}\)) at a larger depth, then it is adjacent to all nodes in the bags (and therefore the sets \(F_{b^{\prime}}\)) in the path between \(b\) and \(b^{\prime}\). With this, each node \(v\) recognizes that its sets \(F_{b}\) are a clique and that the depth for the set of each of its neighbours is consistent with its set. Therefore, all nodes accept.
**Soundness.** Suppose now that the graph \(G\) is not chordal. We have that, by the constructions in Lemmas 1 and 2, the leaders chosen for both the bags and the edges between them can correctly verify the structure of the clique tree. Now, suppose that \(G\) has an induced cycle \(\{v_{1},\ldots v_{k}\}\) with its nodes arranged such that \(v_{1}\) is the node of largest depth. It must be that at least one of them has a different depth from the rest as otherwise they would reject either because their leaders are different, or because the corresponding set \(F_{b}\) should be a clique and there exist at least two non-adjacent nodes. Suppose that \(\mathsf{depth}(v_{1})=i\), \(\mathsf{depth}(v_{k})=j\) and \(\mathsf{depth}(v_{2})=\ell\) with \(i>j\geq\ell\). Then, it must be that \(v_{k}\)'s leader lies in the unique path between \(\rho_{T}\) and \(v_{1}\)'s leader as otherwise it would notice an inconsistency with \(v_{1}\)'s proof and it would reject. This must also be true for \(v_{2}\), for being at a smaller depth. Then, it must lie in the path between \(v_{1}\)'s leader and \(\rho_{T}\) and, therefore, be adjacent to \(v_{k}\)'s leader and subsequently to \(v_{k}\) itself, which contradicts the fact that they are not adjacent as they belong to a large induced cycle.
As a corollary, we obtain a PLS for the problem interval by considering the fact that, as described above, interval graphs are a particular subclass of chordal graphs where the clique-tree corresponds to a path [7]. From here it suffices to repeat the same protocol while each leader (with the exception of the root leader) additionally verifies that any bag assigned to it has unique children in the clique tree \(T_{G}\).
**Corollary 1**.: _There is a PLS for interval using certificates on \(\mathcal{O}(\log n)\) bits._
## 4 Circular Arc Graphs
Circular arc graphs are a natural extension of interval graphs. Indeed, they are the graphs that admit a representation by arcs on a circle, and appear, for instance, in the study of resource allocation problems for periodic tasks [43]. We study this class of graphs as we wish to check whether the previous results can be extended to this new setting without a large increase in the proof-size. We start by formally defining this new class and, again, studying two variants: first the subclass where no arc is properly contained in another, and then the general case. For the sake of simplifying the notation, we identify the set of \(\mathsf{id}\)'s with \([n]=\{0,\ldots n-1\}\). We say that a graph \(G=(V,E)\) admits a circular arc representation if there exists a family of arcs in the unit circle \(\{A_{v}\}_{v\in V}\) such that the adjacency of \(G\) is determined by the intersection of arcs. That is,
\[\forall u,w\in V:\quad uw\in E\Longleftrightarrow A_{u}\cap A_{w}\neq\varnothing.\]
We say that some graph \(G\) is a _proper_ circular arc graph if it admits a representation where no arc is contained in another.
### Proper Circular Arc Graphs
As in previous proofs, the main question is how to represent the adjacency of a node in a succinct manner, considering the geometric properties of this class. We proceed as follows. We assume that the \(\mathsf{id}\)'s are ordered counter-clockwise. We say that, given \(i<j\), the adjacency of a node is given either by \((i,j)\), which we define as \(\{i,i+1,\ldots j\}\)
or \((j,i)\), which we set to be \(\{j,j+1,\ldots,n-1,0,\ldots,i\}\). If a graph \(G\) satisfies this property, we say that its augmented adjacency matrix (the adjacency matrix of \(G\) with the addition of 1's in the diagonal), denoted by \(M^{*}(G)\), has the _circular_ 1's property [50].
Now, a graph may satisfy this property without admitting a representation by proper circular arcs, so there must be another property that we need in order to pin down this graph class. Fortunately, given a characterization by Tucker [50], we can show how to recognize this class with a single round of interaction. For this, we start by giving some definitions for symmetric matrices.
First, consider \(\pi\) to be a permutation of \([n]\) and some matrix \(M\). The matrix \(M_{\pi}\) is obtained when both the rows and columns of \(M\) are reordered according to \(\pi\). Second, consider a symmetric \(\{0,1\}\)-matrix \(M\) with 1's in the diagonal and the circular 1's property. Then, we define \(\mathsf{last}[M,j]\) to be the largest value \(i\) such that \(M_{i,j}=1\) and \(M_{i+1,j}=0\). If such an \(i\) does not exist (meaning the column \(M_{.,j}\) has only 1 entries) we set \(\mathsf{last}[M,j]=\bot\).
Last, for the sake of notation, consider \(\sigma_{\mathrm{inv}}:[n]\to[n]\) to be the permutation given by \(i\to n-i\) if \(i\neq n\) and \(n\) otherwise and \(\sigma_{\mathrm{sh}}:[n]\to[n]\) to be the permutation given by \(i\to i+1\) if \(i\neq n\) and 1 otherwise.
**Definition 1**.: _Given a symmetric \(\{0,1\}\) matrix \(M\) with 1's in the diagonal, we say that it has circularly compatible 1's if \(M\) has the circular 1's property and, for any reordering \(\pi\) of the rows (and respective columns) of the matrix constructed by a finite composition of \(\sigma_{\mathrm{inv}}\) and \(\sigma_{\mathrm{sh}}\), it follows that \(\mathsf{last}[M_{\pi},0]\leq\mathsf{last}[M_{\pi},1]\), unless one of these values is \(\bot\)._
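To make Definition 1 concrete, the sketch below computes \(\mathsf{last}[M,j]\) and checks the condition over all compositions of \(\sigma_{\mathrm{inv}}\) and \(\sigma_{\mathrm{sh}}\); since these two maps generate a dihedral group, the \(2n\) permutations "rotation" and "rotation composed with the inversion" cover all finite compositions. Indexing is \(0\)-based, the successor index in \(\mathsf{last}\) is interpreted circularly, and the circular 1's property itself is assumed to be checked separately; these are modelling assumptions for the example.

```python
def last(M, j):
    """last[M, j]: largest i with M[i][j] == 1 and M[(i+1) % n][j] == 0; None if column j is all ones."""
    n = len(M)
    hits = [i for i in range(n) if M[i][j] == 1 and M[(i + 1) % n][j] == 0]
    return max(hits) if hits else None


def reorder(M, pi):
    """M_pi: rows and columns reordered so that entry (i, j) of the result is M[pi[i]][pi[j]]."""
    n = len(M)
    return [[M[pi[i]][pi[j]] for j in range(n)] for i in range(n)]


def circularly_compatible(M):
    n = len(M)
    invert = [(n - i) % n for i in range(n)]          # 0-based analogue of sigma_inv
    for r in range(n):
        rot = [(i + r) % n for i in range(n)]         # sigma_sh applied r times
        for pi in (rot, [rot[invert[i]] for i in range(n)]):  # rotation, and rotation after inversion
            a, b = last(reorder(M, pi), 0), last(reorder(M, pi), 1)
            if a is not None and b is not None and a > b:
                return False
    return True


# The augmented adjacency matrix of a triangle is all ones: every last value is undefined, so it passes.
K3 = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(circularly_compatible(K3))  # True
```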
With these definitions, we can finally describe the characterization given by Tucker for this class of graphs.
**Proposition 12** ([50]).: _A graph \(G\) is a proper circular arc graph if and only if its nodes admit an ordering \(\{\pi_{v}\}_{v\in V}\) such that its augmented adjacency matrix \(M^{*}(G)\) has the circularly compatible 1's property._
This characterization suits us greatly as its condition is highly local. If we were able to find such an ordering, then every node would only need to verify it by checking the previous and next nodes in the ordering. For this, we note two important remarks.
**Observation 1** ([50]).: _If we sort the nodes according to their right endpoint in counter-clockwise order, we have that the augmented adjacency matrix has the circularly compatible 1's property with respect to this ordering._
**Observation 2**.: _We can rotate the arcs in a graph in such a way that, if each node \(v\) is sorted according to the previous order \(\pi\), it follows that \(v\) is adjacent to \(\pi_{v}-1\) and \(\pi_{v}+1\) modulo \(n\), with the exception of the last node (in position \(n\)) which may not be connected to the first._
Now we can start to describe the protocol.
**Theorem 13**.: _There is a PLS for proper circ-arc using certificates of size \(\mathcal{O}(\log n)\)._
Proof.: Assume that the prover assigns to each node a pair \(A_{v}=(r_{v},\ell_{v})\) which corresponds to \(v\)'s arc coordinates when the arc is visited in a counter-clockwise direction. Here we assume that such coordinates are given as values in \((0,2\pi)\) with any pair of values being at distance at least \(1/\mathrm{poly}(n)\) from each other (and, as such, we require \(\mathcal{O}(\log n)\) bits to represent such a value). Now, let \(v_{1}\) be the node whose \(r_{v}\) is the smallest and which, in case the graph is a proper circular-arc graph, is the first node if we sort the nodes according to their right endpoint as in Proposition 12 and Observation 1.
Now, first, we ask the prover to provide the identifier \(\mathsf{id}(v_{1})\) of such a node, as well as proof that it is the only node with the smallest right endpoint, which can be provided by sending a spanning tree triple \(\langle\mathsf{id}(\rho),d_{v},t_{v}\rangle\) as well as a verification through the spanning tree that \(v_{1}\) is the unique node with the smallest value for \(r_{v}\). We also ask the prover to provide the size of the graph \(n(G)\) which can also be verified through the spanning tree.
Next, we ask the prover to send to each node a position \(\pi_{v}\) such that \(\pi_{v}=i\) means that \(r_{v}\) is the i-th largest value for a right endpoint in counter-clockwise order. Also, we ask the prover to provide each node with a range \((v_{\min},v_{\max})\) which correspond to the positions in \(\pi\) such that \(v\)'s adjacency equals the set of nodes whose positions are \(\{v_{\min},v_{\min}+1,\ldots v_{\max}\}\) given as a circular sequence as described previously.
Then, at the verification round, all nodes exchange these messages, verifying the existence of a node \(v_{1}\) by using the spanning tree and, starting from \(v_{1}\), each node \(v\) with \(\pi_{v}=i\) checks that there is a unique node labeled by \(i+1\) whose arc intersects with its own and whose left endpoint is immediately after its own. They also check that their adjacency is circular, meaning that each node has a neighbor labeled with each position in the range \((v_{\min},v_{\max})\), with arcs consistent with such an order.
Finally, in order to check that the matrix has the circularly compatible 1's property, they do the following. In order to check the first two columns in each permutation obtained by a composition of \(\sigma_{\mathrm{inv}}\) or \(\sigma_{\mathrm{sh}}\) we simply ask each node to adjust its range according to these permutations, such that each node \(v\) positioned at \(\pi_{v}\) with neighbors \(w\) and \(u\) such that \(\pi_{w}=\pi_{v}-1\) and \(\pi_{u}=\pi_{v}+1\) must simply consider two cases: (1) When \(v\) is first and \(u\) is second, which occurs when we shift \(\pi\) (by applying \(\sigma_{\mathrm{sh}}\)) until \(v\) is first in the order or (2) when \(v\) is first and \(w\) is second, which occurs when we invert the order by applying \(\sigma_{\mathrm{inv}}\) and then shift \(\pi\) until \(v\) is first. In this way, the verification of all \(2n\) possible permutations is distributed among the nodes, with each \(v\) in charge of two cases.
We explain how to handle both cases by adjusting the range of \(v\) and that of its neighbours as follows:
* If \(v\) is first and \(u\) is second, we can obtain the corresponding range by setting \(k=n-\pi_{v}+1\) and translating both ranges by \(k\mod n\) as \((\bar{v}_{\min},\bar{v}_{\max})=(v_{\min}+k\mod n,v_{\max}+k\mod n)\) and a similar construction for \(u\).
* If \(v\) is first and \(w\) is second, first we obtain the range after applying \(\sigma_{\mathrm{inv}}\) as \((\bar{v}_{\min},\bar{v}_{\max})=(n-v_{\max}+1\mod n,n-v_{\min}+1\mod n)\) and then shifting \(v\) to the first position by adding \(\pi_{v}\) on both sides modulo \(n\), and a similar construction for \(w\).
Given these two different ranges, each node checks that (unless either itself or \(w\) or \(u\) are universal nodes) the range \(\bar{v}_{\max}\leq\bar{u}_{\max}\) (respectively \(\bar{v}_{\max}\leq\bar{w}_{\max}\)), accepting if this holds and rejecting otherwise.
We have that all these messages are of length \(\mathcal{O}(\log n)\) in one round of interaction. Therefore, it only remains to check the correctness of this protocol.
**Completeness.** Suppose that \(G\) is a proper circular arc graph; an honest prover will provide each node with an ordering \(\pi\) according to each arc's left endpoint, as well as the correct range, which all nodes can verify, and they accept.
**Soundness.** Now, suppose that \(G\) is a No-instance. From what was described above, all nodes correctly compute a starting node \(v_{1}\) as well as the size of the graph. Also, each node \(v\) with \(\pi_{v}=i\) checks that it has a unique neighbour positioned at \(i+1\), which is consistent with its arc. Combining both statements we have that all nodes must have different values in \(\pi\) that match the order of their left endpoints.

Now, even if all nodes check that their adjacency is indeed circular, there must exist a node \(v\) for which, once the ordering is permuted so that \(v\) becomes first, either \(\bar{v}_{\max}>\bar{u}_{\max}\) or \(\bar{v}_{\max}>\bar{w}_{\max}\) holds, and then \(v\) immediately rejects.
### The General Case
To cope with the general case, it suffices to adapt the characterization found in [50] for this class of graphs, as it gives us a simple representation of the adjacency of each node by means of the shape of the adjacency matrix relative to a node ordering.
Given a symmetric \(\{0,1\}\)-matrix \(M\), with ones in the diagonal, consider a column \(i\) and define \(U_{i}\) as the set of \(1^{\prime}s\) starting from the diagonal and going downwards in a circular manner until a zero appears. Now, define \(V_{i}\) as the set of \(1^{\prime}s\) on row \(i\) starting from the diagonal and going rightwards in the same manner. \(M\) is said to have the _quasi-circular \(1^{\prime}s\) property_ if all \(1^{\prime}s\) in the matrix are covered by some \(U_{i}\) or \(V_{i}\). It is important to mention that, since \(M\) is symmetric, we have that \(U_{i}\) and \(V_{i}\) have the same size.
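A brute-force sketch of the quasi-circular \(1\)'s test: for each index \(i\) we collect the circular run of \(1\)'s going down column \(i\) from the diagonal (the set \(U_{i}\)) and the circular run going right along row \(i\) (the set \(V_{i}\)), and check that these runs cover every \(1\) of the matrix. Indexing is \(0\)-based; the example matrix is an assumption.

```python
def quasi_circular_ones(M):
    """Check the quasi-circular 1's property: every 1 of M is covered by some U_i or V_i."""
    n = len(M)
    covered = set()
    for i in range(n):
        k = i
        while M[k][i] == 1:            # U_i: down column i from the diagonal, circularly
            covered.add((k, i))
            k = (k + 1) % n
            if k == i:
                break
        k = i
        while M[i][k] == 1:            # V_i: right along row i from the diagonal, circularly
            covered.add((i, k))
            k = (k + 1) % n
            if k == i:
                break
    return all((i, j) in covered for i in range(n) for j in range(n) if M[i][j] == 1)


# Augmented adjacency matrix of a 4-cycle, ordered around the cycle (a circular arc graph).
C4 = [[1, 1, 0, 1],
      [1, 1, 1, 0],
      [0, 1, 1, 1],
      [1, 0, 1, 1]]
print(quasi_circular_ones(C4))  # True
```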
**Proposition 14** ([50]).: _Let \(M^{*}(G)\) be the augmented adjacency matrix of \(G\). We have that \(G\) is a circular arc graph if and only if there exists an ordering for the nodes such that \(M^{*}(G)\) has quasi-circular \(1^{\prime}s\)._
From here, we can describe a PLS with cost \(\mathcal{O}(\log n)\).
**Theorem 15**.: _There is a PLS for circular-arc using certificates on \(\mathcal{O}(\log n)\) bits._
**Protocol 16**.: _First, the prover sends to each node \(v\):_
1. _Its position in the ordering_ \(\pi_{v}\) _as well as the total number of nodes_ \(n(G)\)_._
2. _A spanning tree given by the triple_ \(\langle\mathsf{id}(\rho),d_{v},t_{v}\rangle\)_._
3. _The size of its set_ \(U_{\pi_{v}}\)_, denoted by_ \(L_{v}\)_._
Figure 8: A circular arc representation for a graph, along with its associated drawing.
_After the nodes exchange their certificates, they check the consistency of the spanning tree and use it in order to verify that the total number of nodes is correct. Then, in order to verify the consistency of \(\pi(\cdot)\) as a correct ordering, the nodes proceed as follows._
_If we set \(N^{\pi}(v)\) to be the set of nodes in \(N(v)\) such that they are positioned between \(\pi_{v}\) and \(\pi_{v}+L_{v}-1\), for \(i\in\{0,\ldots n-2\}\) each node \(v\) in position \(\pi_{v}=i\) must check that it has a unique neighbor \(u\) positioned at \(\pi_{u}=j\) for all positions \(j\) in \(\{\pi_{v}+h_{v}\}\), where \(w\in N^{\pi}(v)\) with \(\pi_{w}=h_{v}\) is the first node such that \(\pi_{w}+L_{w}-1>\pi_{v}+L_{v}-1\). By this process, each node starting from \(i=0\) makes sure that there are nodes labeled with a position in \(U_{\pi_{v}}\) and that there is a unique node that can continue this process after it. As \(G\) is connected, we can assume that this process continues on until all nodes with positions in \(\{0,\ldots,n-1\}\) are verified. As there are \(n\) nodes in the graph, all positions are distinct._
_Finally, each node \(v\) with a neighbour \(u\) checks that, either \(u\in N^{\pi}(v)\) or \(v\in N^{\pi}(u)\) and that \(v\) is adjacent to all nodes with positions in \(N^{\pi}(v)\). Rejecting if any of these conditions are not satisfied._
**Completeness.** We have that, if \(G\) is a circular arc graph, then it admits an ordering with the previous property. Then, each node \(v\) has neighbours whose positions are between \(\pi_{v}\) and \(\pi_{v}+L_{v}\) circularly, and any other neighbour is such that \(v\) verifies that property for them. Therefore, all nodes always accept.
**Soundness.** If \(G\) is not a circular arc graph, then we'll have that, for any order, there exists a pair of adjacent nodes \(u,v\) such that, as a pair, do not belong to any \(U_{i}\) or \(V_{i}\). Thus, we have that either \(u\) rejects as \(\pi_{u}\leq\pi_{v}+L_{v}-1\mod n\) or \(v\) rejects as one of them notices that fact.
## 5 Trapezoid and Permutation Graphs
Now we turn to study the class of trapezoid graphs. A graph is said to be a trapezoid graph if there exists a collection of trapezoids \(\{T_{v}\}_{v\in V}\) with vertices in two parallel lines \(\mathcal{L}_{t}\) and \(\mathcal{L}_{b}\) (as in Figure 9) such that \(\{u,v\}\in E\) iff \(T_{u}\cap T_{v}\neq\emptyset\). We call these lines _top and bottom lines_. The trapezoids have sides contained in each line, and, therefore, are defined by four vertices, two in the top line, and two in the bottom line. Formally,
Figure 8: Augmented adjacency matrix for the previous graph with the quasi-circular \(1^{\prime}s\) property.
each trapezoid \(T\) is defined by the set \(T=\{t_{1},t_{2},b_{1},b_{2}\}\), where \(t_{1}<t_{2}\) and \(b_{1}<b_{2}\), with \(t_{1},t_{2}\in\mathcal{L}_{t}\) and \(b_{1},b_{2}\in\mathcal{L}_{b}\).
Consider a trapezoid model \(\{T_{v}\}_{v\in V}\), as previously described. The vertices of each trapezoid can be labelled from left to right with integers from \(1\) to \(2n\) on both the lower and upper lines. Therefore, we can assume, without loss of generality, that the vertices defining the sets \(\{T_{v}\}_{v\in V}\) are all distinct and have a value in the range \([2n]\). As a result, each element in the range \([2n]\) corresponds to a vertex of some trapezoid in both the top and bottom lines.
For \(v\in V\), we call \(\{t_{1}(v),t_{2}(v),b_{1}(v),b_{2}(v)\}\) the vertices of \(T_{v}\). Moreover, we say that the collection \(\{t_{1}(v),t_{2}(v),b_{1}(v),b_{2}(v)\}\) are the _vertices_ of node \(v\). In the following, a trapezoid model satisfying the conditions stated above is called a _proper trapezoid model_ for \(G\). Given a graph \(G=(V,E)\) (that is not necessarily a trapezoid graph), a _semi-proper trapezoid model_ for \(G\) is a set of trapezoids \(\{T_{v}\}_{v\in V}\) satisfying the previous conditions, such that, for every \(\{u,v\}\in E\), the trapezoids \(T_{v}\) and \(T_{u}\) have nonempty intersection. The difference between a proper and a semi-proper model is that in the first we also ask every pair of non-adjacent nodes to have non-intersecting trapezoids.
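Checking that a model is semi-proper reduces locally to an intersection test between two trapezoids: two trapezoids inscribed in the two lines are disjoint exactly when one lies strictly to the left of the other on both lines. A small sketch of this test (the tuple encoding is an assumption):

```python
def trapezoids_intersect(T, S):
    """T = (t1, t2, b1, b2) and S likewise. Two trapezoids are disjoint iff one lies
    strictly to the left of the other on both the top and the bottom line."""
    t1, t2, b1, b2 = T
    s1, s2, c1, c2 = S
    T_left_of_S = t2 < s1 and b2 < c1
    S_left_of_T = s2 < t1 and c2 < b1
    return not (T_left_of_S or S_left_of_T)


print(trapezoids_intersect((1, 2, 5, 6), (3, 4, 1, 2)))  # True: the trapezoids cross between the lines
print(trapezoids_intersect((1, 2, 1, 2), (3, 4, 3, 4)))  # False: the first lies entirely to the left
```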
Given a trapezoid graph \(G=(V,E)\) and a proper trapezoid model \(\{T_{v}\}_{v\in V}\), we define the following sets for each \(v\in V\):
\[F_{t}(v) =\{i\in[2n]\mid i<t_{1}(v)\text{ and }i\in\{t_{1}(w),t_{2}(w)\} \text{ for some }w\notin N(v)\}\] \[F_{b}(v) =\{i\in[2n]\mid i<b_{1}(v)\text{ and }i\in\{b_{1}(w),b_{2}(w)\} \text{ for some }w\notin N(v)\}\]
Intuitively, the set \(F_{t}(v)\) contains the positions in the upper line to the left of \(T_{v}\) which are vertices of a trapezoid \(T_{\omega}\) with \(\omega\notin N(v)\); analogously for \(F_{b}(v)\). We also write \(f_{t}(v)=|F_{t}(v)|\) and \(f_{b}(v)=|F_{b}(v)|\).
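A direct (centralized) sketch of the definitions of \(f_{t}(v)\) and \(f_{b}(v)\), with an illustrative model and adjacency; in the protocol each node obtains the same quantities from the certificates exchanged with its neighbours, using the fact that every position in \([2n]\) is a vertex of exactly one node.

```python
def f_values(v, model, adj):
    """model: node -> (t1, t2, b1, b2); adj: node -> set of neighbours.
    Returns (f_t(v), f_b(v)) following the definitions above."""
    t1, _, b1, _ = model[v]
    others = [w for w in model if w != v and w not in adj[v]]   # the non-neighbours of v
    F_t = {i for w in others for i in model[w][:2] if i < t1}   # top vertices of non-neighbours left of t1(v)
    F_b = {i for w in others for i in model[w][2:] if i < b1}   # bottom vertices of non-neighbours left of b1(v)
    return len(F_t), len(F_b)


# Three pairwise non-adjacent nodes placed side by side: a proper model of the empty graph on 3 nodes.
model = {"a": (1, 2, 1, 2), "b": (3, 4, 3, 4), "c": (5, 6, 5, 6)}
adj = {"a": set(), "b": set(), "c": set()}
print(f_values("c", model, adj))  # (4, 4), consistent with Lemma 3
```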
The Lemmas presented below provide a characterization of trapezoid graphs through equalities that can be computed locally by each node based on the information available from its neighbours.
**Lemma 3**.: _Let \(G=(V,E)\) be a connected \(n\)-node trapezoid graph. Then every proper trapezoid model \(\{T_{v}\}_{v\in V}\) of \(G\) satisfies, for every \(v\in V\):_
\[f_{b}(v)=f_{t}(v)\]
Proof.: Let \(\{T_{v}\}_{v\in V}\) be a proper trapezoid model of \(G\). Then, given a node \(v\in V\), all the positions in \(F_{t}(v)\) are vertices of some \(w\notin N(v)\). Such trapezoids \(T_{w}\) have their two upper vertices in the set \(\{1,\ldots,t_{1}(v)\}\) and their two lower vertices in \(\{1,\ldots,b_{1}(v)\}\), as otherwise \(T_{w}\) and \(T_{v}\) would intersect. Then, the cardinality of the set \(F_{t}(v)\) is equal to the cardinality of the set \(F_{b}(v)\), as every position in \(\{1,\ldots,2n\}\) corresponds to a vertex of some node, so if a position \(j<b_{1}(v)\) is not in \(F_{b}(v)\), then it has to be a vertex of some neighbour of \(v\). The same holds for the positions \(j<t_{1}(v)\) in the upper line.
**Lemma 4**.: _Let \(G=(V,E)\) be an \(n\)-node graph that is not a trapezoid graph. Then, for every semi-proper trapezoid model \(\{T_{v}\}_{v\in V}\) of \(G\), at least one of the following conditions is true:_
1. \(\exists v\in V\) _such that some value in_ \(\{b_{1}(v),\ldots,b_{2}(v)\}\) _or_ \(\{t_{1}(v),\ldots,t_{2}(v)\}\) _is a vertex of_ \(\omega\notin N(v)\)_._
2. \(\exists v\in V\) _such that_ \(f_{b}(v)\neq f_{t}(v)\)_._
Proof.: Let \(G\) be a graph that is not a trapezoid graph and \(\{T_{v}\}_{v\in V}\) a semi-proper trapezoid model. As \(G\) is not a trapezoid graph, by definition there necessarily exists a pair \(\{v,\omega\}\not\in E\) such that \(T_{v}\cap T_{\omega}\neq\emptyset\). We distinguish two possible cases (see Figure 10):
1. \([b_{1}(v),b_{2}(v)]_{\mathbb{N}}\cap[b_{1}(\omega),b_{2}(\omega)]_{\mathbb{N}} \neq\emptyset\) or \([t_{1}(v),t_{2}(v)]_{\mathbb{N}}\cap[t_{1}(\omega),t_{2}(\omega)]_{\mathbb{N}} \neq\emptyset\).
2. \([b_{1}(v),b_{2}(v)]_{\mathbb{N}}\cap[b_{1}(\omega),b_{2}(\omega)]_{\mathbb{N}} =\emptyset\) and \([t_{1}(v),t_{2}(v)]_{\mathbb{N}}\cap[t_{1}(\omega),t_{2}(\omega)]_{\mathbb{N}} =\emptyset\).
Clearly, if the first case holds, then condition 1 is satisfied. Suppose then that there is no pair \(\{v,\omega\}\not\in E\) with \(T_{v}\cap T_{\omega}\neq\emptyset\) satisfying the first case. Then necessarily the second case holds. Let \(u\) be a node for which there exists \(\omega\in V\setminus N(u)\) such that \(T_{u}\cap T_{\omega}\neq\emptyset\). Among all possible choices of \(u\), let us pick the one such that \(b_{1}(u)\) is minimum. Then \(u\) satisfies the following conditions:
1. There exists a node \(\omega\in V\) such that \(\omega\notin N(u)\) and \(T_{u}\cap T_{\omega}\neq\emptyset\)
2. All nodes \(\omega\in V\) such that \(\omega\notin N(u)\) and \(T_{u}\cap T_{\omega}\neq\emptyset\) satisfy that \(t_{2}(\omega)<t_{1}(u)\) and \(b_{2}(u)<b_{1}(\omega)\)
3. None of the positions in \(\{1,\ldots,b_{1}(u)\}\) is occupied by a vertex of a node \(\omega\) such that \(\{u,\omega\}\notin E\) and \(T_{u}\cap T_{\omega}\neq\emptyset\).
Observe that, by conditions (a) and (b), every non-neighbour \(\omega\) of \(u\) with \(T_{u}\cap T_{\omega}\neq\emptyset\) contributes its two upper vertices to \(F_{t}(u)\) but none of its lower vertices to \(F_{b}(u)\), while every non-neighbour whose trapezoid is disjoint from \(T_{u}\) contributes either two positions to both sets (if it lies entirely to the left of \(T_{u}\)) or none to either (if it lies entirely to the right). Hence \(f_{t}(u)>f_{b}(u)\), and condition 2 holds for \(u\).
Figure 10: A representation of the two possible cases. In the first case, depicted on the left, at least one vertex of a trapezoid is contained in the other. In the second case, on the right, the trapezoids intersect, but not at their vertices.
We are now ready to define our protocol and main result regarding Trapezoid.
**Theorem 17**.: _There is a PLS for Trapezoid using certificates on \(\mathcal{O}(\log n)\) bits._

**Protocol 18**.: _The following is a one-round proof labeling scheme for Trapezoid._
_Given an instance \(\langle G=(V,E),\mathsf{id}\rangle\), the certificate provided by the prover to node \(v\in V\) is interpreted as follows._
1. _The number of nodes_ \(n(G)\)_._
2. _The vertices_ \(b_{1}(v),b_{2}(v),t_{1}(v),t_{2}(v)\in[2n]\) _of the trapezoid_ \(T_{v}\)_, such that_ \(b_{1}(v)<b_{2}(v)\) _and_ \(t_{1}(v)<t_{2}(v)\)_._
3. _Minimum position_ \(p_{v}\in[2n]\) _in the upper line greater than_ \(t_{1}(v)\) _that is not a vertex of a neighbour of_ \(v\)_._
4. _Minimum position_ \(q_{v}\in[2n]\) _in the lower line greater than_ \(b_{1}(v)\) _that is not a vertex of a neighbour of_ \(v\)_._
5. _Paths_ \(P_{t}\) _and_ \(P_{b}\) _between the nodes with vertices_ \(1\) _and_ \(2n\) _in the upper and lower line, respectively._
_Then, in the verification round, each node shares with its neighbors their certificates. Using that information each node \(v\) can compute \(f_{t}(v)\) and \(f_{b}(v)\), and check the following conditions:_
1. _The correctness of the value of_ \(n\)_, according to protocol for_ Size_._
2. _The correctness of the paths_ \(P_{b}\) _and_ \(P_{t}\)_, according to protocol for_ \(s,t-\textsc{Path}\)_._
3. _The vertices of the trapezoid of_ \(v\) _are in_ \([2n]\)_._
4. \(T_{v}\cap T_{\omega}\neq\emptyset\) _for all_ \(\omega\in N(v)\)_._
5. _All values in_ \([t_{1}(v)+1,t_{2}(v)-1]_{\mathbb{N}}\) _and_ \([b_{1}(v)+1,b_{2}(v)-1]_{\mathbb{N}}\) _are vertices of some neighbour of_ \(v\)_._
6. \(t_{2}(v)<p_{v}\) _and_ \(b_{2}(v)<q_{v}\)_._
7. _If_ \(\omega\in N(v)\) _and_ \(p_{\omega}<t_{2}(v)\)_, then_ \(v\) _verifies that_ \(p_{\omega}\) _is a vertex of some other neighbour._
8. _If_ \(\omega\in N(v)\) _and_ \(q_{\omega}<b_{2}(v)\)_, then_ \(v\) _verifies that_ \(q_{\omega}\) _is a vertex of some other neighbour._
9. \(f_{b}(v)=f_{t}(v)\)_._
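To make the verification round concrete, the following is a minimal Python-style sketch of the checks (c)-(i) performed by a single node \(v\). The certificate layout (`cert`, `neighbour_certs`) and the computation of \(f_{t}\), \(f_{b}\) as the number of positions before \(t_{1}(v)\) (resp. \(b_{1}(v)\)) not occupied by vertices of neighbours are our own reading of the protocol; the checks (a) and (b), which reuse the protocols for Size and \(s,t\)-Path, are omitted.

```python
def trapezoids_intersect(c1, c2):
    # two trapezoids are disjoint iff one lies completely to the left of the other on both lines
    left = c1["t2"] < c2["t1"] and c1["b2"] < c2["b1"]
    right = c2["t2"] < c1["t1"] and c2["b2"] < c1["b1"]
    return not (left or right)

def verify_trapezoid(cert, neighbour_certs, n):
    """Local checks (c)-(i) for one node; cert has keys b1, b2, t1, t2, p, q."""
    b1, b2, t1, t2, p, q = (cert[k] for k in ("b1", "b2", "t1", "t2", "p", "q"))
    top = {x for c in neighbour_certs for x in (c["t1"], c["t2"])}   # neighbours' upper-line vertices
    bot = {x for c in neighbour_certs for x in (c["b1"], c["b2"])}   # neighbours' lower-line vertices
    if not all(1 <= x <= 2 * n for x in (b1, b2, t1, t2)):           # (c) vertices lie in [2n]
        return False
    if not all(trapezoids_intersect(cert, c) for c in neighbour_certs):  # (d) intersect every neighbour
        return False
    if any(x not in top for x in range(t1 + 1, t2)):                 # (e) upper line
        return False
    if any(x not in bot for x in range(b1 + 1, b2)):                 # (e) lower line
        return False
    if not (t2 < p and b2 < q):                                      # (f)
        return False
    for c in neighbour_certs:                                        # (g) and (h)
        if c["p"] < t2 and c["p"] not in top:
            return False
        if c["q"] < b2 and c["q"] not in bot:
            return False
    f_t = sum(1 for x in range(1, t1) if x not in top)               # (i): free positions before t1
    f_b = sum(1 for x in range(1, b1) if x not in bot)               #      and before b1 must agree
    return f_t == f_b
```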
We now analyze the soundness and completeness of our protocol.
**Completeness.** Suppose that \(G\) is a trapezoid graph. An honest prover just has to send the real number of nodes \(n\), a trapezoid model \(\{T_{v}\}_{v\in V}\) of \(G\), and valid paths \(P_{b}\) and \(P_{t}\) according to the trapezoid model. Then, the nodes will verify \((a)\) and \((b)\) by the completeness of the protocols for Size and \(s,t-\textsc{Path}\). Conditions \((c)\), \((d)\), \((e)\), \((f)\), \((g)\) and \((h)\) are verified by the correctness of the model \(\{T_{v}\}_{v\in V}\). Condition \((i)\) is also verified, by Lemma 3.
**Soundness.** Suppose \(G\) is not a trapezoid graph. If a dishonest prover provides a wrong value of \(n\), or wrong paths \(P_{t}\) or \(P_{b}\), then at least one node rejects when verifying \((a)\) or \((b)\). Hence, we may assume that the prover cannot cheat on these values.
Suppose that the prover gives values \(\{T_{v}\}_{v\in V}\) such that \(\bigcup_{v\in V}\{t_{1}(v),t_{2}(v)\}\neq[2n]\). If some vertex of a node is not in the set \([2n]\), then that node fails to verify condition \((c)\) and rejects. Otherwise, there exists \(j\in[2n]\) such that \(t_{1}(v),t_{2}(v)\neq j\), for every
\(v\in V\). If a node \(\omega\) satisfies \(t_{1}(\omega)<j<t_{2}(\omega)\), then node \(\omega\) fails to verify condition \((e)\) and rejects. Hence \(j\) is not contained in any trapezoid. As \(P_{t}\) is correct, \(j\) must be different from \(1\) and \(2n\). Also, by the correctness of \(P_{t}\), there exists a pair of adjacent nodes \(u,v\in V\) such that \(t_{2}(u)<j<t_{1}(v)\). Among all possible choices for \(u\) and \(v\), we pick the one such that \(t_{2}(u)\) is maximum. We claim that \(v\) fails to check condition \((g)\). Since \(j\) is not a vertex of any node, we have \(p_{u}\leq j\). If \(v\) verifies condition \((g)\), then necessarily \(p_{u}<j\), so there must exist a node \(\omega\in N(v)\) such that \(p_{u}=t_{1}(\omega)\). But since we are assuming that \(j\) is not contained in any trapezoid, we have that \(t_{2}(\omega)<j\), contradicting the choice of \(u\). Therefore the prover needs to send values \(\{T_{v}\}_{v\in V}\) such that \(\bigcup_{v\in V}\{t_{1}(v),t_{2}(v)\}=[2n]\). The same argument also proves that \(\bigcup_{v\in V}\{b_{1}(v),b_{2}(v)\}=[2n]\).
Therefore, if conditions \((a)\) - \((h)\) are verified, we can assume that the nodes are given a semi-proper trapezoid model of \(G\). Since we are assuming that \(G\) is not a trapezoid graph, by Lemma 4 we deduce that condition \((i)\) cannot be satisfied and some node rejects.
We now analyse the proof-size of the protocol: the certificate size for the number of nodes \(n(G)\) and for the constructed paths is \(\mathcal{O}(\log n)\), given by Proposition 3 and Proposition 5. On the other hand, for each \(v\in V\), the values \(b_{1}(v)\), \(b_{2}(v)\), \(t_{1}(v)\), \(t_{2}(v)\), \(p_{v}\), \(q_{v}\) can be encoded on \(\mathcal{O}(\log n)\) bits, as all of them lie in \([2n]\). Overall, the total certificate size is \(\mathcal{O}(\log n)\).
Now, we can use the above protocol to recognize Permutation, the class of permutation graphs. A graph is said to be a permutation graph if there exists a collection of points \(\{\ell_{1}(v)\}_{v\in V}\) and \(\{\ell_{2}(v)\}_{v\in V}\) inscribed in two parallel lines such that \(\{u,v\}\in E\) iff \((\ell_{1}(v)-\ell_{1}(u))\)\((\ell_{2}(v)-\ell_{2}(u))\ <\ 0\). This means that the line with extremes \(\ell_{1}(v)\) and \(\ell_{2}(v)\) must cross the line with extremes \(\ell_{1}(u)\) and \(\ell_{2}(u)\), as in Figure 2.
By the same argument as for the trapezoid model, we can enumerate the points on both lines, so without loss of generality we can assume the collections \(\{\ell_{1}(v)\}_{v\in V}\) and \(\{\ell_{2}(v)\}_{v\in V}\) are permutations from \(V\) to \([n]\). We say that such a collection \(\{\ell_{1}(v),\ell_{2}(v)\}_{v\in V}\) is a proper permutation model of \(G\).
Let \(G=(V,E)\) be a graph. We define a trapezoid model \(\{T_{v}\}_{v\in V}\) of \(G\) to be _consecutive_ if for every \(v\in V\), we have \(b_{2}(v)=b_{1}(v)+1\) and \(t_{2}(v)=t_{1}(v)+1\). We denote the class of graphs that have a consecutive trapezoid model by ConTrapezoid.
**Proposition 19**.: ConTrapezoid \(=\) Permutation_._
Proof.: Let \(G\in\) ConTrapezoid and \(\{T_{v}\}_{v\in V}\) its consecutive trapezoid model. By definition, for all \(v\in V\), there exist \(i_{v},j_{v}\in\mathbb{Z}_{n}\) such that \(t_{1}(v)=2i_{v}+1\) and \(b_{1}(v)=2j_{v}+1\), and no two nodes have the same value \(i_{v}\) or \(j_{v}\) defining their vertices \(t_{1}\) and \(b_{1}\). Then, we can define the functions \(\ell_{1}\) and \(\ell_{2}\) such that \(\ell_{1}(v)=i_{v}\) and \(\ell_{2}(v)=j_{v}\), respectively. This clearly represents a valid permutation model of \(G\), given that \(\{T_{v}\}_{v\in V}\) is a valid consecutive trapezoid model.
In the same way, if \(G\) is a permutation graph and \(\{\ell_{1}(v),\ell_{2}(v)\}_{v\in V}\) is a permutation model of it, then defining \(\{T_{v}\}_{v\in V}\) such that \(t_{1}(v)=2\ell_{2}(v)-1\), \(t_{2}(v)=2\ell_{2}(v)\), \(b_{1}(v)=2\ell_{1}(v)-1\) and \(b_{2}(v)=2\ell_{1}(v)\) gives a valid consecutive trapezoid model of \(G\).
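The two directions of this equivalence are plain index arithmetic. A small sketch (the dictionary-based representation of the models is an assumption of the illustration, not part of the proof):

```python
def permutation_to_consecutive_trapezoid(l1, l2):
    """l1, l2: dicts mapping each node v to its point on the lower / upper line (values in [n])."""
    return {v: {"t1": 2 * l2[v] - 1, "t2": 2 * l2[v],
                "b1": 2 * l1[v] - 1, "b2": 2 * l1[v]} for v in l1}

def consecutive_trapezoid_to_permutation(T):
    """Inverse direction: read a permutation model off a consecutive trapezoid model."""
    l1 = {v: (t["b1"] + 1) // 2 for v, t in T.items()}   # lower-line point of v
    l2 = {v: (t["t1"] + 1) // 2 for v, t in T.items()}   # upper-line point of v
    return l1, l2
```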
Using Proposition 19, it is straightforward to modify the protocol described in Protocol 18 by having the prover send consecutive vertices and allowing the nodes to check them accordingly to recognize the class of permutation graphs.
Corollary 2: _There is a PLS for Permutation with proof-size of \(\mathcal{O}(\log n)\) bits._
## 6 Lower Bounds
In this section, logarithmic lower bounds are given on the certificate sizes of any PLS that recognizes the class of interval, circular-arc, chordal, permutation, and trapezoid graphs. In order to do so, two techniques are used, explained in Section 6.1 and Section 6.2, respectively.
### Interval, Chordal and Circular Arc Graphs
To prove the lower bound for interval, circular-arc and chordal graphs, we adapt a construction by Göös and Suomela for locally checkable proofs [29]. The main idea is to construct a collection of yes-instances of each class and a no-instance that is indistinguishable from the yes-instances if we assume there exists a PLS with proof-size of \(o(\log n)\) bits recognizing the class, giving a contradiction.
Before giving our lower bounds, we need a combinatorial result. A hypergraph is a generalization of a graph, where each hyper-edge is a subset of nodes. An \(r\)-uniform hyper-graph is a hyper-graph where each hyper-edge has the same cardinality \(r\). A graph \(G\) is simply a 2-uniform hyper-graph. Let \(K^{(r)}(\ell)\) be the \(r\)-uniform hypergraph whose node set can be split into \(r\) parts of size \(\ell\) each, with a hyper-edge for every selection of one element from each of the \(r\) parts. In 1964, Erdős showed the following result.
Proposition 20: _[_15_]_ _Let \(G\) be an \(r\)-uniform hyper-graph on \(n\) nodes. If \(G\) does not contain a \(K^{(r)}(\ell)\) as a subgraph, then \(|E(G)|\leq n^{r-1/\ell^{r-1}}\)._
We are now ready to show our result.
Theorem 21: _Any PLS that recognizes interval, chordal and circular-arc needs a proof-size of \(\Omega(\log n)\) bits._
Proof: Consider \(n\) to be even and \(A\) to be a partition of \([1,n^{2}]\) into \(n\) sets of size \(n\), and \(B\) a partition of \([n^{2}+1,2n^{2}]\) in a similar manner. Let \(\mathcal{G}\) be a family of \(n\)-node graphs satisfying the property P of belonging to the classes mentioned above.
Set \(\mathcal{G}_{A}\) and \(\mathcal{G}_{B}\) to be the sets of labeled graphs in \(\mathcal{G}\), with label sets picked from \(A\) and \(B\), respectively. Let \(F_{a}\) be a graph in \(\mathcal{G}_{A}\) and \(F_{b}\) a graph from \(\mathcal{G}_{B}\). Finally, consider two disjoint sets \(C\) and \(D\) in \([2n^{2}+1,3n^{2}]\) of size \(n\).
For \((F_{a},F_{b},c,d)\in\mathcal{G}_{A}\times\mathcal{G}_{B}\times C\times D\) let \(G(F_{a},F_{b},c,d)\) be the graph defined by the disjoint union of graphs \(F_{a}\) and \(F_{b}\) plus four additional nodes \(y_{A},y_{B},c,d\). The nodes \(y_{A},y_{B}\) are labelled with different numbers in a set from \([2n^{2}+1,3n^{2}]\) disjoint from \(C\) and \(D\). The nodes \(y_{A}\), \(y_{B}\), \(c\) and \(d\) form a clique \(K_{4}\), while node \(y_{A}\) is connected to some node \(v_{a}\) in \(F_{a}\). Node \(y_{B}\) is similarly adjacent to some node \(v_{b}\) in \(F_{b}\). Observe that all nodes in \(F_{a}\) communicate with \(F_{b}\) only through the nodes \(y_{A},y_{B},c\) and \(d\).
Let \(\mathcal{P}\) be a PLS verifying the property P with bandwidth \(K=\delta\log n\), for some \(\delta>0\) that we will take sufficiently small.
Let \(y_{A},y_{B},c,d\) be the _bridge_ of \(G(F_{a},F_{b},c,d)\). Without loss of generality, we assume that protocol \(\mathcal{P}\) satisfies the condition that the nodes in the bridge \(y_{A},y_{B},c,d\) receive the same proof. If the protocol does not satisfy this condition, we can construct a new protocol with proof length \(4K\), where \(K\) denotes the original proof size, by having each node select its respective portion of the proof and then follow the original protocol.
We define \(M=\{m_{v}\}_{v\in V}\) as the set of certificates indexed by the vertices \(v\in V(G(F_{a},F_{b},c,d))\), where \(m_{v}\) denotes the certificate that the prover sends to node \(v\) in protocol \(\mathcal{P}\). Let \(\mathcal{M}\subseteq\{0,1\}^{K}\) be the set of certificates such that, if assigned to the nodes of the bridge, they can be extended to a proof assignment for the nodes in both \(F_{a}\) and \(F_{b}\), causing them to accept whenever the bridge accepts.
Now consider the complete 4-partite, 4-uniform hyper-graph \(\tilde{G}\) on vertex set \(A\cup B\cup C\cup D\). For each \(a\in A,b\in B,c\in C\) and \(d\in D\), color the hyper-edge \(\{a,b,c,d\}\) with the certificate \(m_{abcd}\in\{0,1\}^{K}\) of a possible assignment to the nodes of the bridge. There are at most \(2^{K}\) possible certificates \(m_{abcd}\) and \(n^{4}\) hyper-edges in \(\tilde{G}\). Therefore, by the pigeonhole principle, there exists a monochromatic set of hyper-edges \(H\) of size at least \(\frac{n^{4}}{2^{K}}\).
Observe that for sufficiently small \(\delta\) (say \(\delta<1/8\)) and large \(n\), \(2^{K}=n^{\delta}=o(n^{1/8})\), so \(|H|\geq n^{4}/2^{K}\) exceeds the bound of Proposition 20 for \(\ell=2\), \(r=4\). Hence, following the result of Erdős described in Proposition 20, there exists a \(K^{(4)}(2)\) subgraph in \(\tilde{G}\) induced by \(H\). That is, the
Figure 11: A yes-instance for interval and its super-classes, as \(F_{a}\) and \(F_{b}\) are two interval graphs which are connected through a 4-clique in the middle
Figure 12: Auxiliary (complete) 4-uniform, 4-partite hyper-graph \(\tilde{G}=A\cup B\cup C\cup D\), with each node in \(A\) and \(B\) representing a subgraph \(F_{a},F_{b}\) and nodes in \(C\) and \(D\) representing single nodes in the original graph construction. The green and orange edge blocks are a pair of blocks from a monochromatic \(K^{(4)}(2)\) structure present in the graph.
complete \(r\)-uniform, \(r\)-partite hyper-graph, where each part has size exactly \(\ell\). Let \(\{a_{i},b_{i},c_{i},d_{i}\}_{i=1}^{2}\) be the nodes involved in such a subgraph.
Consider now the graph \(G(a_{1},b_{1},a_{2},b_{2})\) defined as follows. First, take a disjoint union of \(F_{a_{1}},F_{b_{1}},F_{a_{2}}\) and \(F_{b_{2}}\). Then, for each \(i\in\{1,2\}\), add nodes \(y_{A}^{i},y_{B}^{i},c_{i},d_{i}\), labelled with different labels in \([2n^{2}+1,3n^{2}]\), corresponding to the yes-instances formed by the graphs \(F_{a_{i}}\), \(F_{b_{j}}\) and the nodes \(c_{k}\) and \(d_{h}\). For each \(i\in\{1,2\}\), the node \(y_{A}^{i}\) is adjacent to \(y_{B}^{i}\), \(c_{i}\), \(d_{i+1}\) and the node \(v_{a_{i}}\) of \(F_{a_{i}}\); the node \(y_{B}^{i}\) is adjacent to \(c_{i}\), \(d_{i}\) and the node \(v_{b_{i}}\) of \(F_{b_{i}}\); and \(c_{i}\) is adjacent to \(d_{i+1}\), where \(i+1\) is taken \(\bmod 2\).
We must demonstrate that the graph \(G(a_{1},b_{1},a_{2},b_{2})\) is a no-instance for the property P of belonging to the classes proper interval, interval, proper circular-arc, circular-arc and chordal.
* As for the classes interval and proper interval, we simply define \(F_{a}\) and \(F_{b}\) to be a pair of proper interval graphs of size \(\mathcal{O}(n)\); then \(G(F_{a},F_{b},c,d)\) also admits a representation through proper intervals, as we simply connect both graphs through their extremes by a small clique. Finally, the newly constructed graph has an induced 6-cycle and therefore cannot have a representation by intervals.
* As for proper circular-arc and circular-arc, using the same construction (and ensuring that each part has a large diameter) we also have a valid yes-instance, as (proper) interval graphs are in particular (proper) circular-arc graphs: we simply consider their representation through intervals on the real line as a big arc covering a portion of the circle. In the newly obtained instance we have a large induced cycle, which by itself is consistent with a circular-arc representation, but we also have large paths on each side that are in conflict with the cycle: a (proper) circular-arc graph behaves locally like an interval graph and therefore cannot have an asteroidal triple (here, three nodes at the extremes of the interval subgraphs).
* Finally, for chordal we can use the same graphs described for the class interval, as interval graphs are also chordal. The newly constructed graph has a 6-cycle without any chords and is therefore not chordal.
This means that if we run protocol \(\mathcal{P}\) on the instance \(G(a_{1},b_{1},a_{2},b_{2})\), at least one node should reject. But the local information given to each node of \(G(a_{1},b_{1},a_{2},b_{2})\) is the same as in one of the yes-instances \(G(F_{a_{i}},F_{b_{j}},c_{k},d_{h})\), as it has the same neighbors with the same _id_'s and labels, and the nodes of the bridge receive the same certificate \(m_{a_{i}b_{i}c_{i}d_{i}}\) that makes the yes-instances accept, so protocol \(\mathcal{P}\) makes
Figure 13: A no-instance for interval and its super-classes: since the graph admits a large cycle without any chords (and has large-diameter subgraphs \(F_{a_{i}},F_{b_{j}}\)), it cannot admit a representation through intervals or circular arcs, nor can it be a chordal graph.
all the nodes accept in the instance \(G(a_{1},b_{1},a_{2},b_{2})\), which is not in the class, a contradiction.
### Trapezoid and Permutation Graphs
For the classes of trapezoid and permutation graphs, we use a technique given by Fraigniaud et al. [25], called _crossing edge_, which we detail as follows. Let \(G=(V,E)\) be a graph and let \(H_{1}=(V_{1},E_{1})\) and \(H_{2}=(V_{2},E_{2})\) be two subgraphs of \(G\). We say that \(H_{1}\) and \(H_{2}\) are independent if and only if \(V_{1}\cap V_{2}=\emptyset\) and \(E\cap(V_{1}\times V_{2})=\emptyset\).
To prove a lower bound on the proof-size of any PLS that recognizes the remaining classes, we use the following results of Fraigniaud et al. [25].
**Definition 2** ([25]).: _Let \(G=(V,E)\) be a graph and let \(H_{1}=(V_{1},E_{1})\) and \(H_{2}=(V_{2},E_{2})\) be two independent isomorphic subgraphs of \(G\) with isomorphism \(\sigma\colon V_{1}\to V_{2}\). The crossing of \(G\) induced by \(\sigma\), denoted by \(\sigma_{\bowtie}(G)\), is the graph obtained from \(G\) by replacing every pair of edges \(\{u,v\}\in E_{1}\) and \(\{\sigma(u),\sigma(v)\}\in E_{2}\), by the pair \(\{u,\sigma(v)\}\) and \(\{\sigma(u),v\}\)._
Then, a lower bound to any PLS is stated as follows.
**Theorem 22** ([25]).: _Let \(\mathcal{F}\) be a family of network configurations, and let \(\mathcal{P}\) be a boolean predicate over \(\mathcal{F}\). Suppose that there is a configuration \(G_{s}\in\mathcal{F}\) satisfying that (1) \(G\) contains as subgraphs \(r\) pairwise independent isomorphic copies \(H_{1},...,H_{r}\) with \(s\) edges each, and (2) there exists \(r\) port-preserving isomorphisms \(\sigma_{i}\colon V(H_{1})\to V(H_{i})\) such that for every \(i\neq j\), the isomorphism \(\sigma^{ij}=\sigma_{i}\circ\sigma_{j}^{-1}\) satisfies \(\mathcal{P}(G_{s})\neq\mathcal{P}(\sigma_{\bowtie}^{ij}(G)_{s})\). Then, the verification complexity of any proof-labeling scheme for \(\mathcal{P}\) and \(\mathcal{F}\) is \(\Omega\left(\frac{log(r)}{s}\right)\)._
We prove the remaining lower bounds by constructing a specific graph family with isomorphisms satisfying Definition 2 and the hypotheses of Theorem 22.
**Theorem 23**.: _Any PLS for Permutation or Trapezoid needs a proof-size of \(\Omega\left(\log n\right)\) bits._
Proof.: First, let \(\mathcal{F}=\{(Q_{n},\mathsf{id})\}\) be a collection of network configurations, where each graph \(Q_{n}\) consists of \(5n\) nodes forming a path \(\{v_{1},\ldots,v_{5n}\}\), to which we add the edge \(\{v_{5i-3},v_{5i-1}\}\), for each \(i\in[n]\). It is easy to see that for each \(n>0\), \(Q_{n}\) is a permutation graph (and hence also a trapezoid graph), and therefore \(\mathcal{F}\subseteq\textsc{Permutation}\) and \(\mathcal{F}\subseteq\textsc{Trapezoid}\). Figure 14 depicts the graph \(Q_{3}\) and its corresponding permutation model.
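For concreteness, the family \(Q_{n}\) can be generated directly from this definition. A small sketch (the use of networkx is our choice and only serves the illustration):

```python
import networkx as nx

def build_Q(n):
    """Q_n: a path v_1 .. v_{5n} plus the edge {v_{5i-3}, v_{5i-1}} for each i in [n]."""
    G = nx.path_graph(range(1, 5 * n + 1))          # nodes 1..5n, consecutive nodes adjacent
    G.add_edges_from((5 * i - 3, 5 * i - 1) for i in range(1, n + 1))
    return G
```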
Given \(Q_{n}\), consider the subgraphs \(H_{i}=\{v_{5i-2},v_{5i-1}\}\), for each \(i\in[n]\), and the isomorphism \(\sigma_{i}\colon V(H_{1})\to V(H_{i})\) such that \(\sigma_{i}(v_{3})=v_{5i-2}\) and \(\sigma_{i}(v_{4})=v_{5i-1}\).
**Lemma 5**.: _For each \(i\neq j\), the graph \(\sigma_{\bowtie}^{ij}(Q_{n})\) is neither a permutation graph nor a trapezoid graph, where \(\sigma_{i}\colon V(H_{1})\to V(H_{i})\) is such that \(\sigma_{i}(v_{3})=v_{5i-2}\) and \(\sigma_{i}(v_{4})=v_{5i-1}\)._
Proof.: Given \(i<j\), by definition of \(\sigma^{ij}\colon V(H_{j})\to V(H_{i})\), in \(\sigma_{\bowtie}^{ij}(Q_{n})\) the nodes \(v_{5j-3}\), \(v_{5j-2}\), \(v_{5i-1}\), \(v_{5i-3}\), \(v_{5i-2}\), \(v_{5j-1}\) form an induced cycle of length \(6\) (see Figure 15 for an example).
As trapezoid graphs have induced cycles of length at most 4, we deduce that \(\sigma_{\bowtie}^{ij}(Q_{n})\) is not a trapezoid graph.
Finally, as the class of permutation graphs is contained in the class of trapezoid graphs, \(\sigma_{\bowtie}^{ij}(Q_{n})\) is not a permutation graph either.
Then, we have that for all \(n>0\), the graph \(Q_{n}\) is a permutation and a trapezoid graph, but \(\sigma_{\bowtie}^{ij}(Q_{n})\) is neither a permutation nor a trapezoid graph. Since there are \(r=n\) such isomorphisms \(\sigma_{i}\) and subgraphs \(H_{i}\), each with one edge, it follows from Theorem 22 that any PLS that recognizes Trapezoid or Permutation needs a proof size of \(\Omega(\log n)\) bits.
|
2309.05497 | Personality Detection and Analysis using Twitter Data | Personality types are important in various fields as they hold relevant
information about the characteristics of a human being in an explainable
format. They are often good predictors of a person's behaviors in a particular
environment and have applications ranging from candidate selection to marketing
and mental health. Recently automatic detection of personality traits from
texts has gained significant attention in computational linguistics. Most
personality detection and analysis methods have focused on small datasets
making their experimental observations often limited. To bridge this gap, we
focus on collecting and releasing the largest automatically curated dataset for
the research community which has 152 million tweets and 56 thousand data points
for the Myers-Briggs personality type (MBTI) prediction task. We perform a
series of extensive qualitative and quantitative studies on our dataset to
analyze the data patterns in a better way and infer conclusions. We show how
our intriguing analysis results often follow natural intuition. We also perform
a series of ablation studies to show how the baselines perform for our dataset. | Abhilash Datta, Souvic Chakraborty, Animesh Mukherjee | 2023-09-11T14:39:04Z | http://arxiv.org/abs/2309.05497v1 | # Personality Detection and Analysis using Twitter Data
###### Abstract
Personality types are important in various fields as they hold relevant information about the characteristics of a human being in an explainable format. They are often good predictors of a person's behaviors in a particular environment and have applications ranging from candidate selection to marketing and mental health. Recently automatic detection of personality traits from texts has gained significant attention in computational linguistics. Most personality detection and analysis methods have focused on small datasets making their experimental observations often limited. To bridge this gap, we focus on collecting and releasing the largest automatically curated dataset for the research community which has 152 million tweets and 56 thousand data points for the Myers-Briggs personality type (MBTI) prediction task. We perform a series of extensive qualitative and quantitative studies on our dataset to analyze the data patterns in a better way and infer conclusions. We show how our intriguing analysis results often follow natural intuition. We also perform a series of ablation studies to show how the baselines perform for our dataset.
Neural Networks, Artificial Intelligence,
## I Introduction
The personality of an individual refers to the specific collection of psychological constructs which dictates the visible differences in different human beings in terms of behavior and reaction in particular environments and also dictates the thought process which leads to these different behavioral outcomes (as defined in Roberts and Mroczek [1]). Many researchers have recently tried automatic personality detection with little success primarily because the task is inherently difficult, requiring a thorough understanding of sentence constructs, sentiment toward targets, and its connection to behavioral outcomes. Sentiment analysis alone can be a very challenging task due to abundance of aspects and sparsity of labelled data[2, 3]. Moreover, most research has been carried out on small datasets. Since the expression of a specific personality can have a wide range, small datasets often are unable to capture this variety and thus fail to provide the model with a sufficient inductive bias to learn.
Furthermore, the models used till now lack task-specific design which is essential to solving a complex problem. Attempts for personality modeling ranged from traditional methods like questionnaires to NLP-based approaches. The two widely-used personality models are the Big five personality traits (OCEAN model), coming from Sir Francis Galton's line of work (as described in Goldberg [4], Rothe [5], Rushton [6]) based on linguistically predictive personality types having 5 personality dimensions and the Myers-Briggs Type Indicator (MBTI) personality modeling, based on Carl Jung's theory, containing four personality dimensions as proposed in Jung [7]. While there has been considerable work with the first kind of personality types being invented to be used by linguists, works on MBTI personality types are lacking. We hope to bridge this gap by introducing the largest automatically collected dataset for MBTI personality types. Our contributions in this paper are as follows.
1. We introduce the largest dataset for personality detection with MBTI personality types. We perform all our analyses and automatic classification using the functional personality groups. However, we opensource this dataset in the original form with nuanced attributes containing all the individual 16 personalities for the community to fuel further research and exploration.
2. We perform several quantitative and qualitative studies to analyze the dataset. We introduce novel features like **hashtags, URLs, and Mentions embeddings**, and show how they correlate with an individual's personality. We analyze personality types in several derivative dimensions like professions, readability, and empathy features.
3. We test several machine learning models on the task of predicting MBTI personality types from Twitter profile data. We fine-tune different models taking individual inputs to make better embeddings and use these embeddings to train another model finally enhancing the prediction accuracy. The best accuracy is achieved by a simple random forest classifier over fastText embeddings.
4. We perform a series of ablation studies to understand which features are important for the task. We show that the **hashtags** used by the users, their **empath** features, and their **tweets** are the most important features. We also show the impact of data quality and the number of tweets on the model's prediction performance.
## II Related Works
Personality information can be valuable for a number of applications. Numerous research papers related to predicting
personality traits on social media have recently attracted interest in the research community (8, 9, 10). Previous research on personality prediction using Twitter, Instagram, and Facebook data includes feature-based techniques such as LIWC (11), SPLICE (structured programme for linguistic cue extraction) (12), SNA (social network analysis) (13), as well as time-based features (14). Mitchell et al. (15) studied self-identified schizophrenia patients on Twitter and found that linguistic signals may aid in identifying and getting help to people suffering from it. Luyckx and Daelemans (16) presented a corpus for computational stylometry, including authorship attribution and MBTI types for Dutch. The corpus consists of essays by 145 students (BA level). They controlled for the topic by asking participants to write about a documentary on artificial life. In a follow-up study (17), they extended the corpus to include reviews and both Big Five and MBTI information. Instead, we focus on English and social media, a more spontaneous sample of language use. Even when using social media, most prior work on personality detection can be considered small-scale. The 2014 Workshop on Computational Personality Recognition hosted a shared task of personality detection on 442 YouTube video logs (18). Celli et al. (19) also examined Facebook messages of 250 users for personality.
In contrast, **our study uses 152M tweets from 56K different users**. The only two prior large-scale open-vocabulary works on social media study Facebook messages (20, 21). To date, these studies represent the largest ones connecting language and personality. They collected personality types and messages from 75,000 Facebook users through a Facebook app. They found striking variations in language use with personality, gender, and age. On the other hand, we collect our data using the Twitter API in an automated way, retrieving every possible detail. We also generate our own set of features from the tweets for better classification. Our approach is simpler, requires no tailored app, and can be used to collect large amounts of auto-annotated data quickly.
## III Dataset Creation
The most popular dataset on MBTI personality detection from text has only 1,500 data points with 1.2M tweets(22). We attempt to create a new dataset containing tweets, user descriptions (bio), profile metadata (follower count, media count, listed count, etc.), and finally the MBTI personality type for each user through automatic means.
### _Data collection procedure_
We collect data from people who have publicly shared their personality test results (from www.16personalities.com) on Twitter. Our data collection strategy is as follows.
1. **User mapping from profile link**: Complete profile links for the website follow a specific pattern of https://www.16personalities.com/profiles/id where _id_ refers to the id of the profile of that person. We use the **Twitter API** to search for this pattern and obtain all the users who have shared their profile links. Then we use **selenium** to collect the personality type results calculated from the link.
2. **User mapping from MBTI links**: Some people, instead of sharing their test results directly, share the links to their respective personality types. To capture this, we search for links with the pattern https://www.16personalities.com/ptype-personality where _ptype_ is the four-dimensional personality type. We collect this data using the Twitter API and filter out the cases where the same person has ever shared different personality type links.
3. **Collection of tweets**: Twitter makes the last 3200 tweets for each user available in its web API. We use **snscrape** to collect the same for each user obtained in steps (1) and (2) and save them in text files in order to use them as input data.
4. **Collection of descriptions and metadata**: From the profiles retrieved from step (1) and step (2), we use Twitter's Python API **Tweepy** to map the profile usernames to their Twitter ids. Then we retrieve their user objects containing description (bio) and other profile metadata like follower count, friend count, media count, etc. using **snscrape**.
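A minimal sketch of steps (1) and (3) above, assuming snscrape's `TwitterSearchScraper`/`TwitterUserScraper` interface; the query strings and tweet attribute names (e.g., `rawContent`) are illustrative and may differ between snscrape versions:

```python
import re
from itertools import islice

import snscrape.modules.twitter as sntwitter

PROFILE = re.compile(r"https://www\.16personalities\.com/profiles/(\w+)")

def users_sharing_profile_links(limit=10000):
    """Step (1): map Twitter usernames to the 16personalities profile id they shared."""
    users = {}
    scraper = sntwitter.TwitterSearchScraper('"16personalities.com/profiles/"')
    for tweet in islice(scraper.get_items(), limit):
        m = PROFILE.search(tweet.rawContent)
        if m:
            users[tweet.user.username] = m.group(1)
    return users

def last_tweets(username, max_tweets=3200):
    """Step (3): collect up to the latest 3200 tweets of a user."""
    scraper = sntwitter.TwitterUserScraper(username)
    return [t.rawContent for t in islice(scraper.get_items(), max_tweets)]
```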
### _Preprocessing_
We take the following preprocessing steps.
1. We detect the language of the tweets using **fasttext** and filter out all non-English tweets.
2. We keep only those users for whom at least 100 of the filtered tweets are present. This is necessary to obtain
statistically meaningful results from the analysis we perform later.
3. We retain only unambiguous users, i.e., users having only one personality type. If a user tweets multiple '16personalities' links having different types, we remove them from our dataset to maintain consistency.
4. We separate the hashtags, emojis, mentions, and URLs from the tweet text and analyze them individually. URLs may come from different media houses having different biases and attracting people of specific personalities; the same may happen with hashtags, mentions and emojis (23, 24). Finally, we use TweetBERT's **tweet normalizer** to normalize the tweet text (a sketch of this step follows this list).
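A sketch of the language filter and of the hashtag/mention/URL separation; the regular expressions are our own simplification, and fastText's off-the-shelf `lid.176.bin` language-identification model is assumed to be available locally:

```python
import re

import fasttext

lid = fasttext.load_model("lid.176.bin")     # off-the-shelf language-identification model

HASHTAG = re.compile(r"#\w+")
MENTION = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+")

def is_english(text):
    labels, _ = lid.predict(text.replace("\n", " "))
    return labels[0] == "__label__en"

def split_tweet(text):
    """Separate hashtags, mentions and URLs; return them together with the cleaned text."""
    hashtags = HASHTAG.findall(text)
    mentions = MENTION.findall(text)
    urls = URL.findall(text)
    clean = URL.sub(" ", MENTION.sub(" ", HASHTAG.sub(" ", text)))
    return hashtags, mentions, urls, " ".join(clean.split())
```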
## IV Dataset Analysis
MBTI personality types are built from four dimensions (25): Extraversion (E) vs. Introversion (I), i.e., where you get your energy from; Sensing (S) vs. Intuition (N), i.e., what kind of information you prefer to gather; Thinking (T) vs. Feeling (F), i.e., how you make decisions; and Judging (J) vs. Perceiving (P), i.e., how you deal with the world around you. The resulting 16 types can be broadly mapped to four classes:
* **Analysts**: Intuitive (N) and Thinking (T).
* **Diplomats**: Intuitive (N) and Feeling (F).
* **Sentinels**: Observant (S) and Judging (J).
* **Explorers**: Observant (S) and Prospecting (P).
The number of users and their tweets collected for each personality type is enumerated in Table I. Further, we have done several quantitative and qualitative analyses of our large dataset using this mapping. These are summarized below.
### _Readability metrics_
Readability is the ease with which a reader can understand a written text. In natural language, the readability of text depends on its content (the complexity of its vocabulary and syntax) and its presentation (such as typographic aspects that affect legibility, like font size, line height, character spacing, and line length) (26). We have calculated eight types of readability metrics for each user. These are: _Flesch Readability_ ((27)), _Flesch-Kincaid Grade Level_ ((28)), _Dale Chall Readability_ ((29)), _Automated Readability Index_ (ARI (30)), _Coleman Liau Index_ ((31)), _Gunning Fog_ ((32)), _Linsear-write_ ((33)), _SPACHE_ ((34)).
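As one possible implementation (not necessarily the one used to produce Table II), the `textstat` package exposes all eight metrics; a sketch that averages each metric over a user's tweets:

```python
import textstat

METRICS = {
    "flesch": textstat.flesch_reading_ease,
    "flesch_kincaid": textstat.flesch_kincaid_grade,
    "dale_chall": textstat.dale_chall_readability_score,
    "ari": textstat.automated_readability_index,
    "coleman_liau": textstat.coleman_liau_index,
    "gunning_fog": textstat.gunning_fog,
    "linsear_write": textstat.linsear_write_formula,
    "spache": textstat.spache_readability,
}

def readability_features(tweets):
    """Average each of the eight readability metrics over all tweets of a user."""
    return {name: sum(fn(t) for t in tweets) / len(tweets) for name, fn in METRICS.items()}
```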
The average values of the readability metrics for each personality class are shown in Table II. From the majority of the readability metrics, we can infer that the tweets of the **analyst** personality class are the hardest to read compared to the other three. Similarly, the tweets of the **explorer** personality class are the easiest to read among the four classes.
### _Empath features_
Empath features draw connotations between words and phrases by learning a neural embedding from more than 1.8 billion words of modern fiction, as proposed by Fast et al. (35). Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 194 built-in, pre-validated categories that the authors generated from common topics in their web dataset, like neglect, government, and social media. We compute empath feature vectors for all users using the pre-trained model. The top distinct empath features for each personality class are presented in Table III. We can see that most of them align with common intuition. For instance, while analysts correspond to words like 'programming', 'tool', and 'optimism', explorers correspond to words like 'dance','music', and 'appearance'.
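As a reference point, the `empath` Python package ships the pre-validated categories and can produce the per-user vectors directly; a minimal sketch (the fixed category ordering is our own choice):

```python
from empath import Empath

lexicon = Empath()

def empath_features(tweets):
    """Normalized Empath scores over the built-in categories for a user's concatenated tweets."""
    scores = lexicon.analyze(" ".join(tweets), normalize=True)
    return [scores[c] for c in sorted(scores)]       # fixed ordering for a feature vector
```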
### _Most distinct professions_
We find the most distinct professions for each personality class using the profile's description or biography. We first parse the descriptions into tokens and then calculate the probability of each personality class given that token, i.e., \(Probability(Class|Token)\). We then take the words having high probability scores as the representative professions for a personality class. The results in Table IV are quite intriguing and align with natural intuition. Analysts have distinctive professions like fullstack engineer and scientist, Diplomats have campaigner and theorist, and Sentinels have surgeon and dentist. We do not find any such alignment for Explorers: they may have a diverse range of professions, do not significantly identify themselves with one profession, and do explore some newer professions driven by social media and web 3.0.
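The profession analysis reduces to a conditional-probability count over bio tokens. A minimal sketch, assuming plain whitespace tokenization and an illustrative rare-token cut-off:

```python
from collections import Counter, defaultdict

def class_given_token(bios, labels, min_count=20):
    """Estimate P(class | token) from (bio, personality class) pairs."""
    per_token = defaultdict(Counter)
    totals = Counter()
    for bio, label in zip(bios, labels):
        for tok in set(bio.lower().split()):          # count each token once per bio
            per_token[tok][label] += 1
            totals[tok] += 1
    return {tok: {cls: cnt / totals[tok] for cls, cnt in per_token[tok].items()}
            for tok, total in totals.items() if total >= min_count}
```

Tokens with the highest conditional probability for a class, restricted to profession words, yield the entries of Table IV.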
### _Metadata statistics_
From the metadata statistics of the user profiles, as shown in Figure 4, we observe that the explorers update their statuses the most and analysts update the least. The favorites count is the highest for the explorers, followed by the diplomats. The listed count shows how many people have added the user to a list. We see that **analysts** are the most listed personality type, followed by diplomats, sentinels, and explorers. This is probably because analysts are good at rational thinking and can explain complex information in a way that is easy to
understand. These traits are beneficial for using Twitter, as the platform requires concise and effective communication.
## V Methodology
Our task is to classify Twitter users into the four personality classes discussed in the previous sections. The input features of the users available to us are: 1) the latest 3200 tweets, 2) the bio (description), and 3) profile statistics (follower count, media count, listed count, etc.). We compute the eight readability metrics defined in the previous section from the tweets. While preprocessing, we clean the tweets and store the hashtags, URLs, and mentions separately. We then compute the empath features from the clean tweets using a pre-trained model. We also embed hashtags, URLs, and mentions, as they may contain valuable information about a user's personality.
### _URL, hashtag, and mention embeddings_
It has been seen that hashtags contain very indicative and valuable information about the user's personality. To capture this information, we calculate embeddings for the hashtags present in a user's tweets. We first concatenate the hashtags and vectorize them using tf-idf. We ignore all tokens which appear in less than 2% of the tweets. We then pass the vectors to a neural network containing three dense layers and try to predict the personality class. After proper training, we use the output of the second-to-last layer of the neural network as the user embedding for the hashtags. We follow a similar procedure for computing the URL and mention embeddings for each user.
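A sketch of this step with scikit-learn and PyTorch; the hidden-layer width and the elided training loop are illustrative, while the 2% document-frequency cut-off, the three dense layers, the 64-dimensional penultimate layer and the four-class output follow the description above:

```python
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

def build_hashtag_embedder(hashtag_docs):
    """hashtag_docs: one concatenated hashtag string per user."""
    vectorizer = TfidfVectorizer(min_df=0.02)             # drop tokens seen in <2% of the documents
    X = vectorizer.fit_transform(hashtag_docs).toarray()
    net = nn.Sequential(
        nn.Linear(X.shape[1], 256), nn.ReLU(),
        nn.Linear(256, 64), nn.ReLU(),                    # 64-dim penultimate layer -> user embedding
        nn.Linear(64, 4),                                 # four personality classes
    )
    # ... train `net` with a cross-entropy loss on (X, class labels) ...

    def embed(doc):
        x = torch.tensor(vectorizer.transform([doc]).toarray(), dtype=torch.float32)
        with torch.no_grad():
            return net[:4](x).squeeze(0)                  # output of the second-to-last layer
    return embed
```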
### _Approach_
To classify each user, we use a similar methodology for all the baselines. After preprocessing, we encode the tweets and descriptions using an encoder (fasttext, bert, tweetbert, or roberta). Then we vectorize the empath features, readability scores, and Twitter profile statistics (counts), and concatenate all these vectors. Finally, we concatenate the URL, hashtag, and mention embeddings with them. We use nine different configurations for classification depending upon the features chosen as input, which are as follows. Our objective here is to understand the impact of each feature.
1. All the features.
2. Only the tweets.
3. Without the URL embeddings.
4. Without the hashtag embeddings.
5. Without the mention embeddings.
6. Without the URL, hashtag, and mention embeddings.
7. Without the readability scores.
8. Without the empath features.
9. Without the profile statistics.
For the classification task, we use various machine learning models, which are described in the next section. We create a class-balanced subset of the whole data, containing approximately 4000 data points per class, sampled randomly for model training purposes. For testing the models, we fix a random sample of approximately 1000 data points from each class from the remaining data. This ensures unbiased training and proper evaluation of the models. Our model architecture is shown in Figure 5.
### _Baselines_
We use four different encoders to encode the tweets and descriptions into embeddings: fasttext, bert, tweetbert, and roberta. For classification, we use two classical models: Random Forest and XGBoost. We also employ other machine learning as well as deep learning
Fig. 1: Average statuses count.
Fig. 4: Metadata statistics of the dataset for each personality class.
Fig. 3: Average listed count.
Fig. 2: Average favorites count.
variants; however, as their results were poorer, we refrain from reporting them in the paper.
We concatenate the output of the encoder in each case with the empath (194 dim), readability (8 dim), metadata counts (6 dim), and the mention, hashtag, URL embeddings (64 dim each) to get the final feature vector which is used for classification using one of the algorithms mentioned above.
We use the default hyperparameters of the _sklearn_ and _fasttext_ libraries in every case, as they gave the best results.
#### V-C1 Embeddings
1. Fasttext: FastText (36) is an open-source, free library from Facebook AI Research for learning word embeddings and text classification. We use pre-trained fastText embeddings to convert the tweets of each user and their descriptions into 700-dimensional vectors.
2. Bert: Bidirectional Encoder Representations from Transformers (BERT) is a transformer-based machine learning technique for natural language processing pre-training developed by Google [37]. To encode the tweets, we first tokenize each tweet with the BERT tokenizer, keeping at most the last 64 tokens, and then pass it into the bert-base model to get a single tweet embedding. We do the same for all 3200 tweets and take the average to get the tweet embedding for a single user (a sketch of this step follows this list).
3. Tweetbert: TweetBERT is a BERT model that has been trained on Twitter datasets, and shows significantly better performance on text mining tasks on Twitter datasets (proposed in Qudar and Mago [38]). We follow a similar strategy as in the case of BERT to obtain the tweet embeddings.
4. Roberta: A Robustly Optimized BERT Pretraining Approach (RoBERTa) was proposed by Liu et al. [39]. It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates. We follow a similar strategy as in the case of BERT to obtain the tweet embeddings.
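A sketch of the per-user tweet embedding described in item 2 above, using the Hugging Face `transformers` implementation of bert-base; truncation to 64 tokens and averaging over tweets follow the description, while mean-pooling over token states within a tweet is an assumption of the sketch:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def user_embedding(tweets):
    """Encode each tweet (truncated to 64 tokens), then average over all tweets of the user."""
    vecs = []
    with torch.no_grad():
        for text in tweets:
            enc = tok(text, truncation=True, max_length=64, return_tensors="pt")
            hidden = bert(**enc).last_hidden_state        # (1, seq_len, 768)
            vecs.append(hidden.mean(dim=1).squeeze(0))    # mean-pool the token states of one tweet
    return torch.stack(vecs).mean(dim=0)                  # average over the user's tweets
```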
#### V-C2 Classifiers
1. **Random forest classifier (RFC)**: The random forest classifier is commonly used to reduce variance within a noisy dataset. It significantly raises the stability of
Fig. 5: The overall architecture of our model. The AUG module separates the hashtags, URLs and mentions, and cleans the rest of the tweets.
models by improving accuracy and reducing variance, which mitigates the risk of overfitting. An improved version of the classifier was proposed by Xu et al. (40), which we use for our experiments.
2. **Extreme gradient boosting (XGB)**: The gradient boosting classifier helps in reducing variance and bias in a machine learning ensemble. An efficient and scalable implementation of the gradient boosting framework, called XGBoost, was developed by Chen et al. (41). We use this model for all our experiments (a short fitting sketch follows this list).
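With the concatenated feature vectors in place, the final fitting step is standard; a minimal sketch with default hyperparameters (the feature matrices `X_train`, `X_test` and integer-encoded labels `y_train`, `y_test` are assumed to be precomputed as in Figure 5):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

def fit_and_score(X_train, y_train, X_test, y_test):
    for name, clf in [("RFC", RandomForestClassifier()), ("XGB", XGBClassifier())]:
        clf.fit(X_train, y_train)                         # default hyperparameters throughout
        print(name, f1_score(y_test, clf.predict(X_test), average="macro"))
```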
## VI Results
The main results from the classification are presented in Table V. From the table, we see that RFC performs better than XGB in most cases. The most important features are the **tweets** and **hashtag embeddings**. The effect of the other features is minimal. As for the embedding learning algorithms, we observe that all of them perform similarly, with a small edge going to the fasttext encoder. The best F1 score is reached by using either (i) fasttext embeddings with RFC and all features or (ii) fasttext embeddings with XGB and all but the profile statistics features.
Our results further show that employing the URL, hashtag, and mention embeddings along with all other features (readability, counts, etc.) gives an overall boost of \(\sim 1-2\%\) in terms of classification F1 score, while the use of profile statistics (followers, listed count, etc.) also gives an overall boost of \(\sim 0.5\%\). The use of the empath features gives an overall boost of \(\sim 0.9-1\%\) in the F1 score and accuracy. The readability scores showed some improvement in accuracy and F1 score for some pairs of encodings and models, although they had the smallest contribution compared to the other features.
## VII Error analysis
Detecting MBTI personality types can introduce several possibilities of errors. Some are as follows.
* **Lack of control over the sample population**: People who use Twitter are not necessarily representative of the general population. There may be biases in terms of age, gender, ethnicity, and socio-economic status, which can impact the accuracy of the MBTI personality type identification.
* **Variability in expressing personality traits**: Individuals can express their personalities in different ways depending on the situation or context. For example, individuals who are typically introverted may appear to be extroverted in certain social settings.
* **Difficulty in measuring some personality traits**: The MBTI measures personality traits that are not necessarily easily observable, such as intuition or sensing. Moreover, these traits are not always consistently displayed in tweets.
Using tweets to detect MBTI personality types is an interesting and innovative approach, but the above limitations can introduce inaccuracies and errors in the predictions. To illustrate this, we present some examples of predictions made by our best model in Table VI. Our model does well when the tweets clearly reflect a single trait. However, it commits errors when the available information is confusing. For instance, since User 5 is a YouTuber (in addition to being an athlete), our model predicts the person to be an explorer (which is, in fact, partially correct). The case of User 6 is more common: many actors in the later stages of their careers enter active politics (e.g., Hema Malini1, J. Jayalalithaa2, Clay Aiken3, Alessandra Mussolini4, Maria Kozhevnikova5, Jimmy Edwards6, etc.). Our model finds it hard to classify such cases, probably because, while they self-report as diplomats, they still tweet a lot about the acting world. The error in the case of User 7 arises because the person writes very complex tweets, which is an unusual trait for explorers and a common trait for analysts. The last case (User 8) is also confusing since the person tweets about, tags, and mentions political entities. Thus, in summary, while a person's personality class is usually thought to be fixed, there might be cases where it can branch out due to multiple interests being pursued, followership of an ideology or school of belief, or a change of profession over time. Therefore the model predictions should always be used with appropriate caution.
Footnote 1: [https://en.wikipedia.org/wiki/Hema_Malini](https://en.wikipedia.org/wiki/Hema_Malini)
Footnote 2: [https://en.wikipedia.org/wiki/J_Jayalithaa](https://en.wikipedia.org/wiki/J_Jayalithaa)
Footnote 3: [https://en.wikipedia.org/wiki/Clay_Aiken](https://en.wikipedia.org/wiki/Clay_Aiken)
Footnote 4: [https://en.wikipedia.org/wiki/Alessandra_Mussolini](https://en.wikipedia.org/wiki/Alessandra_Mussolini)
Footnote 5: [https://en.wikipedia.org/wiki/Maria_Kozhevnikova](https://en.wikipedia.org/wiki/Maria_Kozhevnikova)
Footnote 6: [https://en.wikipedia.org/wiki/Jimmy_Edwards](https://en.wikipedia.org/wiki/Jimmy_Edwards)
## VIII Conclusion
In this work, we released the largest automatically curated Twitter dataset for personality detection for MBTI personality types. Then we classified Twitter users into personality types - analysts, diplomats, sentinels, and explorers using the latest 3200 tweets and profile information. We derived new features from the tweets to capture user personality, as well as computed embeddings from the URLs, hashtags, and mentions. We used various encoders (FastText, BERT, Tweet-BERT, and RoBERTa) to convert the tweets into embedding vectors followed by traditional machine learning models for classification.
## IX Limitations
Human language is highly dynamic. Much of the metadata present in the tweets, such as hashtags and mentions, touches upon topics whose nuances may not be well represented by existing machine learning models. In addition, even though we incorporate readability metrics, they may still not be enough to capture an individual's attitude and behavior accurately. Also, due to the length constraint of tweets, deeper context cannot be extracted easily. Finally, our model may not be able to account for changes in users' personalities over time.
## X Future Works
The task of classifying Twitter users according to their personality type is an interesting research area, with many potential applications. We believe that there is room for improvement in our existing method in terms of accuracy and runtime. While text-based data such as tweets can provide valuable insights into a user's personality, incorporating audio and video data from social media platforms such as YouTube and TikTok could provide additional information. However, analyzing such data can be challenging due to its unstructured nature, making this a potentially challenging future work. Further, social media users' personalities can evolve and change over time, making it difficult to classify them accurately based on a single snapshot of their behavior. Developing a model that can capture temporal dynamics and classify users based on their personality over time could be another future direction. Further, since the challenge with large models is interpretability, we would also like to investigate this avenue by digging deeper into the relationships among the input features. In addition, we would also like to explore the potential of multi-dimensional classification to provide more granular information about the personality type of a Twitter user. The current models could also be extended to other social media platforms such as YouTube and Instagram, as the personalities of users on these platforms could influence the type of content they generate and hence could indicate their MBTI type.
|
2309.11384 | Long-Form End-to-End Speech Translation via Latent Alignment
Segmentation | Current simultaneous speech translation models can process audio only up to a
few seconds long. Contemporary datasets provide an oracle segmentation into
sentences based on human-annotated transcripts and translations. However, the
segmentation into sentences is not available in the real world. Current speech
segmentation approaches either offer poor segmentation quality or have to trade
latency for quality. In this paper, we propose a novel segmentation approach
for a low-latency end-to-end speech translation. We leverage the existing
speech translation encoder-decoder architecture with ST CTC and show that it
can perform the segmentation task without supervision or additional parameters.
To the best of our knowledge, our method is the first that allows an actual
end-to-end simultaneous speech translation, as the same model is used for
translation and segmentation at the same time. On a diverse set of language
pairs and in- and out-of-domain data, we show that the proposed approach
achieves state-of-the-art quality at no additional computational cost. | Peter Polák, Ondřej Bojar | 2023-09-20T15:10:12Z | http://arxiv.org/abs/2309.11384v1 | # Long-form End-to-End Speech Translation via Latent Alignment Segmentation
###### Abstract
Current simultaneous speech translation models can process audio only up to a few seconds long. Contemporary datasets provide an oracle segmentation into sentences based on human-annotated transcripts and translations. However, the segmentation into sentences is not available in the real world. Current speech segmentation approaches either offer poor segmentation quality or have to trade latency for quality. In this paper, we propose a novel segmentation approach for a low-latency end-to-end speech translation. We leverage the existing speech translation encoder-decoder architecture with ST CTC and show that it can perform the segmentation task without supervision or additional parameters. To the best of our knowledge, our method is the first that allows an actual end-to-end simultaneous speech translation, as the same model is used for translation and segmentation at the same time. On a diverse set of language pairs and in- and out-of-domain data, we show that the proposed approach achieves state-of-the-art quality at no additional computational cost.
Peter Polák, Ondřej Bojar, Charles University, Czechia
segmentation, long-form, simultaneous, speech translation, latent alignment
## 1 Introduction
Simultaneous speech translation (SST) is the task of translating speech in one language into target-language text before the speaker finishes the utterance. Traditionally, SST has relied predominantly on cascaded systems that decompose the task into multiple subtasks, including automatic speech recognition (ASR), punctuation restoration (PR), and machine translation (MT) [1, 2, 3]. However, recent advancements in deep learning and the availability of abundant training data [4, 5] have led to a significant paradigm shift towards end-to-end (E2E) models. Despite the recent popularity of end-to-end SST within the research community, most research focuses on the "short-form" setting, which assumes that the speech input is already pre-segmented into sentences. Critically, this assumption poses an obstacle to deployment in the "wild", where speeches consist of several sentences -- a "long-form" regime.
In the traditional cascaded approach, most segmentation methods relied on punctuation predicted by the inverse text normalization [6, 7, 8]. However, such an approach is impossible in the end-to-end models, as the intermediate transcript is unavailable. The E2E approach must, therefore, rely on speech-based segmentation methods. Typical choices are fixed-length segmentation, i.e., segmentation into chunks of equal length, or pause-based methods based on voice activity detection (VAD) [9, 10]. However, these segmentation approaches harm the resulting translation quality, as the translation task is sensitive to poor segmentation and generally prefers a segmentation obeying sentence boundaries [10]. Recent work [11, 12] tries to predict sentence boundaries directly. However, the use of these models in the simultaneous regime imposes further translation delay and requires additional computational resources.
This paper proposes a novel segmentation approach that leverages a popular attention-based encoder-decoder architecture with ST CTC loss [13, 14]. We perform the sentence segmentation on the fly using the punctuation from the translation and the speech-to-translation alignment from ST CTC. Without any external segmentation model, we show that models trained only for translation can also be used for segmentation. In extensive experiments on TED talks and parliamentary speeches, we show that:
* Translation models can segment speech based on the punctuation included in the translation without any special or additional training.
* Provided segmentation quality is equivalent to or better than the current state-of-the-art segmentation methods based on large pre-trained models.
* The proposed approach does not introduce any additional latency and does not need any additional computational resources.
## 2 Background
This section introduces the most essential concepts of long-form simultaneous speech translation.
**Incremental vs. Re-Translation.** SST models can be either re-translation or incremental. Re-translation models [15, 16] typically run their decoding every time they get a new
portion of the speech. Critically, a _re-translation model can revise its translation_ output as more speech input is read. This design arguably makes it more difficult for the user to process the translation. On the other hand, because the model can revisit its translations, the final translation quality matches the offline translation quality.
Incremental models [17, 18] differ from re-translation models in that they can only append new words to the end of the partial translation but never change the previous words. For the user, the _translation changes only by incrementally getting longer_; none of the previously displayed outputs are ever modified. The incremental approach is required for certain applications (e.g., speech-to-speech translation) and can be considered easier to follow from the user's perspective. From the long-form perspective, re-translation allows for a substantially lower latency: Imagine that punctuation prediction needs a 5-second look-ahead buffer for reliable work. In a re-translation approach, we can emit the expected translation of the 5 seconds, later fixing any punctuation mistakes. The incremental approach has to be much more conservative and delay any output until the punctuation is certain because it has no option to correct itself. In this work, **we follow the incremental approach**.1
Footnote 1: IWSLT shared tasks [19, 20, 21, 22] also follow the incremental SST approach.
**Audio Segmentation Methods.** The simplest audio segmentation method, **fixed-length segmentation**, splits audio based on length while disregarding any information contained in the audio. More advanced strategies rely on acoustic information, typically voice activity detection (VAD). VAD concentrates solely on the presence of speech and disregards sentence boundaries. This usually results in sub-optimal segmentation [11, 12, 23], as humans place pauses inside sentences, not necessarily between them (e.g., hesitations before words with high information content [24]). To address this, the **SHAS segmentation classifier** [11] is directly trained to segment audio into sentences. The model consists of a robust pre-trained multi-lingual model XLS-R [25], an extra Transformer [26] layer and a classification layer. For each speech frame, SHAS outputs the probability of whether it should be included in the segment.
To improve the quality of the VAD-based methods, **offline divide-and-conquer (DAC)**[27] and **simultaneous (SIM)**[23] consider the presence of speech and also the length of the resulting segments. DAC method recursively splits the audio on the longest pause until all segments are shorter than some pre-defined maximum length. The SIM method allows simultaneous segmentation (i.e., without seeing the entire recording) by segmenting on the longest pause between minimum and maximum segment length. If no pause is detected, the segmentation occurs on the maximum length.
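For reference, the DAC strategy can be written in a few lines; the sketch below assumes that pause positions and durations inside the recording are already provided by a VAD:

```python
def dac_segment(start, end, pauses, max_len):
    """Recursively split [start, end) on the longest pause until every segment is short enough.

    pauses: list of (position, duration) pairs inside the recording (assumed to come from a VAD).
    """
    if end - start <= max_len:
        return [(start, end)]
    inside = [(pos, dur) for pos, dur in pauses if start < pos < end]
    if not inside:                                        # no pause left: keep the long segment
        return [(start, end)]
    split, _ = max(inside, key=lambda p: p[1])            # longest pause inside the current segment
    return dac_segment(start, split, pauses, max_len) + dac_segment(split, end, pauses, max_len)
```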
**Simultaneous Speech Translation Models with Latent Alignments.** A popular architecture for modeling speech translation is the attention-based encoder-decoder (AED) architecture. AED's advantage is the powerful cross-attention mechanism [28, 29] that allows the decoder to "attend" to any portion of the source. While having overall good performance, AED models tend to hallucinate, especially in the low-latency regime [21, 30, 31]. To remedy this, an **auxiliary CTC** [32] directly predicting the translation (ST CTC) was explored [13, 14]. ST CTC provides extra regularization during training, resulting in faster and better convergence. The ST CTC output can also be used during decoding to re-score the hypotheses produced by the AED decoder [33].2 We note that, unlike AED, CTC does not use cross-attention to attend to the entire source speech and instead directly classifies each source-speech frame with a translation token or blank (i.e., no translation). Since each speech frame is classified with a translation token or blank, this can be seen as an **explicit latent alignment between the source speech and target translation**. Any word reordering needed between the source and target languages in ST CTC happens in the encoding phase at the level of speech frames, leading to a worse quality of ST CTC alone.
Footnote 2: Other authors use CTC with source language transcriptions, i.e., ASR CTC. However, ASR CTC cannot be used to improve the translation quality during the inference.
## 3 Method
Our method aims to provide segmentation of the source sound by relying on the punctuation that was automatically created on the target side by the speech-to-text model. We start from ST CTC, which classifies each source speech frame with target translation, including punctuation symbols. The ST CTC output thus directly links target-side punctuation to time positions in the source. However, we must consider that the ST CTC translations are typically worse than the AED translations (e.g., [14] report an average translation quality difference of 4 BLEU points). Also, the latent alignments of [14] are a mere modeling tool rather than a goal product. We therefore ask two questions: **Q1: Are the latent alignments reliable? Q2: Are the ST CTC punctuation predictions good enough?** To answer these questions, we propose the following two simple methods:
**Greedy Approach.** The first approach, the "greedy" approach, relies solely on the ST CTC predictions. For each speech frame, the greedy approach takes the translation label with the highest probability and checks whether the label is a sentence punctuation symbol (i.e., ".!?"). If so, the frame is labeled as a segment boundary: the translation of the current segment is finalized using the standard incremental beam search, and a new sentence is started. The approach is summarized in Algorithm 1.
**Align Approach.** As pointed out, the ST CTC translations are typically worse than the AED translations. Hence, the second approach, dubbed "align", uses the AED predictions, and the ST CTC is used only for the alignment. Specifically, the SST model provides a simultaneous translation using the standard incremental beam search. Once a sentence punctuation symbol (i.e., ".!?") is detected in the translation, we use ST CTC to find the alignment of the punctuation in the source speech. Because we assume that the incremental beam search uses CTC re-scoring that computes CTC prefix probabilities [34], we extract the alignment as the frame with the highest CTC prefix probability, where the prefix is the generated sentence, including the sentence punctuation. This way, we obtain the alignment with one pass over the source frames. For technical details on efficient implementation of the CTC prefix probability, follow [33]. The align approach is summarized in Algorithm 2.
```
Input  : Streaming speech (split into small blocks), ST model (encoder, ctc, decoder)
Output : Partial hypotheses

foreach streaming speech block B do
    H <- encoder(B)
    L <- ctc(H)                         # CTC lattice; time x (vocab + 1); +1 for blank
    Y <- incremental-beam-search(H, L)
    t_sep <- max{ t | (argmax_v L[t, v]) in {".", "!", "?"} }
    if t_sep exists and t_sep >= min_len then
        H <- H[1 : t_sep]               # close the current segment at the boundary
        # prepend B[t_sep : |B|] to the next segment
        return incremental-beam-search(H, L)
```
**Algorithm 1** Proposed greedy segmentation approach.
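For concreteness, the two decision rules can be sketched as follows. This is an illustrative Python sketch only: the array shapes, the `vocab` mapping and the `prefix_logprob` helper are assumptions of the example, and in practice the CTC prefix probability would be computed as in [33, 34].

```
import numpy as np

SENTENCE_PUNCT = {".", "!", "?"}

def greedy_boundary(ctc_logits, vocab, min_len):
    """Greedy rule (Algorithm 1): the last frame whose 1-best ST-CTC label is a
    sentence punctuation mark, provided it lies beyond the minimum length.
    `ctc_logits` has shape (time, vocab_size + 1) and `vocab[v]` maps an index
    to its token string (both are assumptions of this sketch)."""
    best = ctc_logits.argmax(axis=-1)
    frames = [t for t, v in enumerate(best) if vocab[v] in SENTENCE_PUNCT]
    if not frames or max(frames) < min_len:
        return None
    return max(frames)

def align_boundary(prefix_logprob, n_frames):
    """Align rule: place the boundary at the frame maximising the CTC prefix
    probability of the sentence produced by the AED decoder (including its
    final punctuation).  `prefix_logprob(t)` is a hypothetical helper that
    returns this log-probability when only the first t frames are used."""
    scores = [prefix_logprob(t) for t in range(1, n_frames + 1)]
    return int(np.argmax(scores)) + 1
```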
## 4 Experimental Setup
**Data.** In our experiments, we use the English \(\rightarrow\) German, English \(\rightarrow\) French, English \(\rightarrow\) Chinese, and English \(\rightarrow\) Russian language pairs of the MuST-C [35] data set. We use the training and validation sets during the training and tuning of the hyper-parameters for the segmentation algorithms. Finally, we use the tst-COMMON split to report the final results. Additionally, we use the test split of Europarl-ST [36] to report out-of-domain results.
**Models.** All models are attention-based encoder-decoder models. To accommodate the simultaneous regime, we adopt a blockwise encoder [37], but any unidirectional encoder would work. We pre-process the audio with 80-dimensional filter banks. We build a unigram [38] vocabulary with a size of 4000 for all language pairs. All models use a block size of 40 (1.6 s). The encoder has 12 layers, and the decoder has six layers. The model dimension is 256, and the feed-forward dimension is 2048 with four attention heads. To improve the training speed, we initialize the encoder with weights pre-trained on the ASR task of the MuST-C dataset. Further, we employ ST CTC [13, 14] after the encoder with weight 0.3 during training and decoding. As a regularization, we use speed perturbation (at 0.9, 1.0, and 1.1 speeds), and to improve the long-form performance, we also include concatenation of two consecutive segments from the training data. Finally, we use checkpoint averaging for the last ten epochs. We use the ESPNet-ST toolkit [39].
**Evaluation.** All models are evaluated using the Simuleval [40] toolkit. We adopt incremental blockwise decoding [31, 37] with CTC incremental policy [30]. In all our experiments, we use beam search with size 6. For the long-form evaluation, we adopt the evaluation protocol suggested by [41]: instead of reporting quality and latency on the document level, we align the hypothesis to the reference using a re-implementation of mwerSegmenter3[42], followed by re-segmentation into sentences based on the reference punctuation. The quality and latency metrics are then computed on the re-segmented utterances. For the translation quality, we report detokenized case-sensitive BLEU [43], and for the latency, we report length-aware average lagging (LAAL) [44, 45].
Footnote 3: For Chinese, we align on the character level instead of word level. We also tokenize the inputs before the alignment process.
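The latency metric can be approximated in a few lines. The sketch below follows our reading of the LAAL definition in [44, 45]; the exact normalisation used by the evaluation toolkit may differ in details, so it should be treated as illustrative only.

```
def laal(delays_ms, src_dur_ms, ref_len):
    """Rough length-aware average lagging.  delays_ms[i] is the amount of
    source audio [ms] already consumed when the i-th hypothesis token was
    emitted, src_dur_ms the total source duration, ref_len the reference
    length in tokens (the "length-aware" part of the metric)."""
    if not delays_ms:
        return 0.0
    hyp_len = len(delays_ms)
    rate = src_dur_ms / max(hyp_len, ref_len)
    # tau: first token emitted only after the entire source has been read
    tau = next((i for i, d in enumerate(delays_ms) if d >= src_dur_ms),
               hyp_len - 1) + 1
    return sum(delays_ms[i] - i * rate for i in range(tau)) / tau
```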
**Baselines.** We use the development set to tune all hyper-parameters of the baselines. We tune all parameters for each language pair separately. Fixed-length segmentation is tuned on the interval (4, 34) seconds (s). For SHAS+DAC, we tune the maximum length between 4 and 72 s. Both proposed methods and SHAS+SIM have the minimum length between 2 and 32 s. The maximum length for SHAS+SIM was tuned relative to the minimum length on the interval (1, 7) s. Because this interval influences the quality-latency tradeoff, we tune one system for latency (denoted SHAS+SIM-L) and another for quality (SHAS+SIM-Q). We found the value of approx. 2.5 s as best for SHAS+SIM-L and 7 s for SHAS+SIM-Q.
## 5 Results
We present the result in Table 1. On in- and out-of-domain data, both proposed methods (greedy and align) outperform
all low-latency baselines (fixed-length and SHAS+SIM-L) except for out-of-domain English-to-French, where the proposed align ties with SHAS+SIM-L. On average, the proposed **align approach outperforms** fixed-length by 1.6 BLEU and SHAS+SIM-L by 0.4 BLEU, and the proposed **greedy approach outperforms** fixed-length by 1.7 BLEU and SHAS+SIM-L by 0.5 BLEU across all language pairs. This answers our question Q1 -- the latent alignments are reliable for the segmentation task. We attribute the worse quality of SHAS+SIM-L compared to the proposed methods to the SIM algorithm that forces the segmentation between minimum and maximum length. I.e., when the SHAS model does not detect any sentence boundary in this interval, SIM segments on the maximum length. In the low-latency SHAS+SIM-L, this interval is approx. 2.5 s. Considering that the average sentence length in the MuST-C test set is 5.8 s, this inevitably leads to incorrect segmentation of some sentences. On the English-to-German MuST-C test set, this occurred 203 times out of 941 segments predicted by SHAS+SIM-L in 4.7 hours, i.e., **0.6 forced sentence segmentations per minute**.
Unsurprisingly, the offline SHAS+DAC performs better than the low-latency systems. However, on average, the **proposed low-latency greedy is only 0.2 BLEU worse than the offline SHAS+DAC**. Interestingly, the high-latency SHAS+SIM-Q is better than the offline SHAS+DAC. This is probably due to the considerable delay introduced by the 7-second interval in the SHAS+SIM-Q. Since the translation model has to wait for 7 s, a large portion of each sentence is translated in an offline regime.
Counterintuitively, the **greedy approach outperforms the align approach** slightly (only 0.1-0.2 BLEU). Because the CTC translation quality is worse than that of the AED [14], we would expect the align approach to reach a better quality. A possible answer might be a mismatch between the ST CTC and AED predictions that leads to a slightly poorer alignment. This answers our question Q2 -- the ST CTC punctuation predictions are suitable for the segmentation.
In Table 2, we compare the computational complexity of the low latency systems. The proposed segmentation methods, like the fixed-length method, **do not introduce new segmentation parameters**. The proposed methods have about **30 % lower real-time factor** (RTF) than the SHAS+SIM-L, as they do not have to evaluate the additional segmentation model. Interestingly, the fixed-length method has a slightly higher RTF. The probable cause is the quadratic complexity of the AED decoder and the length of an average segment proposed by the segmentation methods: the fixed-length method uses 20 s (was found to maximize the translation quality on the development set) and the proposed align method produces segments of an average length of 8.5 s.
## 6 Conclusion
In this paper, we presented two simple speech segmentation methods introducing new state-of-the-art performance to simultaneous speech segmentation. A thorough evaluation on in- and out-of-domain data shows that the proposed methods offer the best quality with the same latency and have the smallest computational footprint. To the best of our knowledge, our methods are the first that allow an actual end-to-end simultaneous speech translation, as they use the translation model for the joint translation and segmentation without explicitly modeling the segmentation. In future research, we will explore the properties of latent alignments, including latent alignments from other architectures.
\begin{table}
\begin{tabular}{l l|c c|c c|c c|c c|c c|c c} \hline \hline & & \multicolumn{8}{c|}{MuST-C (in-domain)} & \multicolumn{4}{c}{Europarl-ST (out-of-domain)} \\ Type & Segm. method & \multicolumn{2}{c|}{EN\(\rightarrow\)DE} & \multicolumn{2}{c|}{EN\(\rightarrow\)FR} & \multicolumn{2}{c|}{EN\(\rightarrow\)RU} & \multicolumn{2}{c|}{EN\(\rightarrow\)ZH} & \multicolumn{2}{c|}{EN\(\rightarrow\)DE} & \multicolumn{2}{c}{EN\(\rightarrow\)FR} \\ & & BLEU\(\uparrow\) & LAAL\(\downarrow\) & BLEU\(\uparrow\) & LAAL\(\downarrow\) & BLEU\(\uparrow\) & LAAL\(\downarrow\) & BLEU\(\uparrow\) & LAAL\(\downarrow\) & BLEU\(\uparrow\) & LAAL\(\downarrow\) & BLEU\(\uparrow\) & LAAL\(\downarrow\) \\ \hline Offline & Oracle & 25.4 & _1750_ & 33.6 & _2091_ & 16.2 & _1819_ & 21.0 & _1858_ & 17.5 & _2043_ & 15.8 & _2691_ \\ & SHAS+DAC & 24.8 & _1421_ & 32.4 & _2273_ & 16.0 & _1466_ & 20.8 & _1248_ & 16.8 & _1450_ & 15.1 & _2177_ \\ \hline High latency & SHAS+SIM-Q & 25.0 & 5378 & 33.8 & 5733 & 16.0 & 2701 & 20.9 & 3295 & 16.9 & 4833 & 16.4 & 5134 \\ \hline \multirow{4}{*}{Low latency} & Fixed-length & 22.8 & 1339 & 31.3 & 3207 & 14.7 & 1418 & 19.6 & 1092 & 14.0 & 392 & 12.2 & 1952 \\ & SHAS+SIM-L & 23.6 & 1582 & 31.3 & 2411 & 15.5 & 1687 & 20.4 & 1581 & 16.1 & 1622 & 14.9 & 2661 \\ & Greedy (ours) & **24.2** & 1533 & **31.9** & 2421 & **16.0** & 1648 & 20.8 & 1553 & 16.7 & 1612 & **15.1** & 2506 \\ & Align (ours) & 24.0 & 1547 & 31.7 & 2423 & **15.9** & 1638 & **20.9** & 1568 & **16.8** & 1614 & 14.9 & 2529 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Systems better than the other low-latency baselines in **bold**. Underlined and dotted-underlined scores are significantly different from other low-latency baselines with \(p\)-value \(<0.01\) and \(p\)-value \(<0.05\), respectively. Offline segmentation methods have only _theoretical latency_, as the segmentation is done offline before the translation. The latency LAAL is in milliseconds.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Segm. method & Segm. param.\(\downarrow\) & Total param.\(\downarrow\) & RTF\(\downarrow\) & LAAL\(\downarrow\) & BLEU\(\uparrow\) \\ \hline Fixed-length & **0** & **45 M** & 0.46 & **1339** & 22.8 \\ SHAS+SIM-L & 208 M & 253 M & 0.61 & 1582 & 23.6 \\ Greedy (ours) & **0** & **45 M** & 0.42 & 1533 & **24.2** \\ Align (ours) & **0** & **45 M** & **0.41** & 1547 & 24.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison of low latency segmentation methods on English-to-German MuST-C test set. Real-time factor (RTF) measured on Intel i7-10700 using a single thread. Better values in **bold**. Total param. is the total number of parameters, including the translation model. |
2303.17976 | The influence of the boundary conditions on characteristics of nuclear
fission | In this paper, using a quasi-classical statistical approach based on the
Langevin equation, we simulate the fission dynamics of selected even-even $\rm
U$, $\rm Pu$, $\rm Cm$, $\rm Cf$ and $\rm Fm$ actinide nuclei. As a preparatory
part of the work, before solving the Langevin equations, the determination of
transport parameters such as inertia and friction tensors within the
hydrodynamic model is performed. Potential energy surfaces are calculated
within a macroscopic-microscopic approach in a three-dimensional space of
deformation parameters defined within the Fourier decomposition of the surface
radius function in cylindrical coordinates. Using the Lublin-Strasbourg drop
model, Strutinsky shell correction and BCS-like pairing energy model with the
projection onto good particle number, we calculate the nuclear total potential
energy surfaces (PES). The restoration of the particle number in the superfluid
approach is realized within the Generator Coordinate Method (GCM) with the so
called Gaussian Overlap Approximation (GOA). The final study is concerned with
the effect of the starting point of the stochastic Langevin trajectory on its
time evolution and, more importantly, the conditions for judging whether such a
trajectory for a given time moment describes an already passed fission nucleus
or not. Collecting a large number of such stochastic trajectories allows us to
assess the resulting fragment mass distributions, which appear to be in good
agreement with their experimental counterparts for light and intermediate
actinides. More serious discrepancies are observed for single isotopes of
californium and fermium. | Pavel V. Kostryukov, Artur Dobrowolski | 2023-03-31T11:23:12Z | http://arxiv.org/abs/2303.17976v2 | # The influence of the boundary conditions on characteristics of nuclear fission
###### Abstract
In this paper, using a quasi-classical statistical approach based on the Langevin equation, we simulate the fission dynamics of selected even-even U, Pu, Cm, Cf and Fm actinide nuclei. As a preparatory part of the work, before solving the Langevin equations, the determination of transport parameters such as inertia and friction tensors within the hydrodynamic approach is performed. Potential energy surfaces are calculated within a macroscopic-microscopic approach in a three-dimensional space of deformation parameters defined within the Fourier decomposition of the surface radius function in cylindrical coordinates. Using the Lublin-Strasbourg drop model, Strutinsky shell correction and BCS-like pairing energy model with the projection onto good particle number, we calculate the nuclear total potential energy surfaces (PES). The restoration of the particle number in the superfluid approach is realized within the Generator Coordinate Method (GCM) with the so called Gaussian Overlap Approximation (GOA). The final study is concerned with the effect of the starting point of the stochastic Langevin trajectory on its time evolution and, more importantly, the conditions for judging whether such a trajectory for a given time moment describes an already passed fission nucleus or not. Collecting a large number of such stochastic trajectories allows us to assess the resulting fragment mass distributions, which appear to be in good agreement with their experimental counterparts for light and intermediate actinides. More serious discrepancies are observed for single isotopes of californium and fermium.
## I Introduction
This year marks the 85\({}^{th}\) anniversary of the discovery that heavy atomic nuclei are not only radioactive but also can decay into fragments of variable mass numbers called later as fission process. Although fission has been extensively investigated over this long period, we still need to gain complete knowledge about this process. Of course, there have been several successful attempts at its theoretical description, leading to some combinations of various well-known macroscopic liquid drop-like models and microscopic shell and pairing corrections, realized usually by the Strutinsky and the BCS-like models, respectively (see, e.g., Refs [1; 2; 3; 4; 5; 6; 7]) providing a correct description of fission characteristics, such as distributions of masses, charges, kinetic energies, a multiplicity of emitted particles, etc. Nevertheless, "white spots" still exist in the description of the dynamics of the studied phenomenon, especially at its last stage, when the fissile system is close to splitting into fragments.
This study aims to shed some light on the still persistent problems of the dynamical description of low-energy fission of atomic nuclei, knowing that the nature of the fission phenomenon is, to some extent, stochastic. The starting point of the discussion is constructing a model based on the well-known macroscopic-microscopic approach [2; 3], where the potential energy function is expressed via the collective degrees of freedom, known as deformation parameters of the nuclear surface.
The nuclear surface geometry is defined by the so-called shape parametrization, which is given here as a Fourier expansion of the square of the distance of a given point on the surface to the symmetry axis, \(\rho^{2}(z,\varphi)\). The amplitudes of such a linear combination standing in front of the sine and cosine functions are related to the deformation parameters of the PES [8]. The fission dynamics, where the temporal evolution of the surface shape is governed by the system of Langevin equations [9], is described by a set of classical Hamilton-like trajectories, taking into account the excitation energy, friction between moving nucleons, and diffusion effects. Particular attention is paid to investigating the initial and the trajectory-termination conditions, which are crucial in obtaining a reasonable agreement of the generated fragment mass distributions (FMD) of primary fission fragments with the empirical data. The model has been "calibrated" in order to characterize in the best possible way the fission of the \({}^{235}\)U nucleus induced by thermal and 15 MeV neutrons. Afterward, with further minor generalizations, it has been applied to simulate the spontaneous and induced fission of composite even-even actinides with proton number \(Z\) in the region of 92-100.
The work has the following structure: after the introduction, the second chapter is devoted to the main points of the model. In the third chapter, we investigate the dependence of evaluated distributions of the primary fission fragments on the initial and termination conditions of the Langevin trajectories. In the fourth chapter, we apply the here fixed model to other than \({}^{236}\)U even-even nuclei and discuss the quality of our results by comparing them to the existing empirical data. We conclude our results in the last chapter.
## II Quasi-classical stochastic Langevin approach
The exact determination of the relevant fission process deformation parameters and the collective inertia
and friction tensors are essential steps to successfully apply the Langevin approach to the evolution of a nucleus towards fission. Therefore, the critical issue of this kind of quasi-stochastic model is to obtain the change of nuclear surface shape with time, thus determining the set of a large number of trajectories \(\mathbf{q(t)}=\{q_{1}(t),...,q_{n}(t)\}\) in the admitted \(n\)-dimensional deformation space.
At present, there exist various nuclear shape parametrizations, among which the most popular are the spherical-harmonic decomposition [10], Cassini ovaloids [11; 12], Funny-Hills and its later variations [4; 13] or the two-center parameterization [14]. Nevertheless, in this paper we use a relatively new, efficient, and rapidly convergent parametrization [8], which represents the axially symmetric nuclear surface in cylindrical coordinates, \(\rho_{s}^{2}(z,\mathbf{q})\), as a Fourier expansion of the form:
\[\begin{split}\rho_{s}^{2}(z,\mathbf{q})=R_{0}^{2}\sum_{n=1}& \bigg{[}a_{2n}(\mathbf{q})\cos\bigg{(}\frac{2n-1}{2}\pi\frac{z-z_ {sh}}{z_{0}}\bigg{)}\\ &+~{}a_{2n+1}(\mathbf{q})\sin\bigg{(}\frac{2n}{2}\pi\frac{z-z_{ sh}}{z_{0}}\bigg{)}\bigg{]},\end{split} \tag{1}\]
where \(R_{0}=1.2\cdot A^{1/3}\) is the radius of the corresponding spherical nucleus, \(z_{sh}\) is the displacement of the center of mass of the nucleus when \(q_{2n+1}\neq 0\) are considered. The dimensionless parameter \(c\) is responsible for elongating the nuclear body along the \(z\)-axis. If \(c>1\), nuclear shapes are _prolate_ whereas \(c<1\) produces oblate shapes. Therefore, the length of the nucleus measured along the \(z\)-axis is \(2z_{0}=2cR_{0}\) where \(\pm z_{0}\) determines the position of the right and the left end of the nucleus case \(z_{sh}=0\) respectively. In the expansion (1), the coefficients \(a_{n}\) are not yet the physical deformation parameters, denoted in the following by \(q_{n}\). It has been proved e.g. in [8] that the transformation between original \(a_{n}\) amplitudes in the Fourier series (1) and the physical deformation parameters \(q_{n}\) can be of the following form:
\[\begin{split} q_{2}&=a_{2}^{0}/a_{2}-a_{2}/a_{2}^{0 },\\ q_{3}&=a_{3},\\ q_{4}&=a_{4}+\sqrt{(q_{2}/9)^{2}+(a_{4}^{0})^{2}}, \\ q_{5}&=a_{5}-(q_{2}-2)\cdot a_{3}/10,\\ q_{6}&=a_{6}-\sqrt{(q_{2}/100)^{2}+(a_{6}^{0})^{2}},\end{split} \tag{2}\]
where parameters \(a_{2}^{0}\), \(a_{4}^{0}\), \(a_{6}^{0}\) describe the spherical nuclear shape with radius \(R_{0}\). In order to discuss the influence of non-axial shapes, one can easily modify our shape parametrization by multiplying the right-hand side of Eq. (1) by a function \(f_{\eta}(\varphi)\)
\[f_{\eta}(\varphi)=\frac{1-\eta^{2}}{1+\eta^{2}+2\eta\cos\varphi}, \tag{3}\]
chosen in such a way that any cross-section of the nuclear drop (1) perpendicular to the \(z\)-axis is an ellipse of half axes \(a\) and \(b\) while \(\eta\equiv\frac{b-a}{b+a}\). The geometry of a non-axial, prolate nuclear shape is presented schematically in Fig. 1. The variety of the shape configurations is presented in Fig. 2.
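To make the parametrization concrete, the axial profile of Eq. (1) can be evaluated directly from the Fourier amplitudes \(a_{n}\) (the physical deformations \(q_{n}\) would first be converted back to the \(a_{n}\) by inverting Eq. (2)). The truncation, the default mass number and the single-amplitude example below are assumptions of this illustrative sketch.

```
import numpy as np

def rho2_profile(z, a, c, A=236, z_sh=0.0):
    """Axial Fourier profile rho_s^2(z, q) of Eq. (1).

    z : point(s) along the symmetry axis [fm]
    a : Fourier amplitudes [a_2, a_3, a_4, a_5, ...]
    c : elongation parameter, so that the half-length is z0 = c * R0
    A : mass number, giving R0 = 1.2 * A**(1/3) fm
    """
    z = np.asarray(z, dtype=float)
    R0 = 1.2 * A ** (1.0 / 3.0)
    u = (z - z_sh) / (c * R0)
    rho2 = np.zeros_like(z)
    for n in range(1, len(a) // 2 + 2):
        if 2 * n - 2 < len(a):                      # even amplitude a_{2n}
            rho2 += a[2 * n - 2] * np.cos((2 * n - 1) * np.pi * u / 2.0)
        if 2 * n - 1 < len(a):                      # odd amplitude a_{2n+1}
            rho2 += a[2 * n - 1] * np.sin(n * np.pi * u)
    return R0 ** 2 * rho2

# Example: near-spherical shape, keeping only a_2 close to its spherical
# value a_2^0 = 32 / pi^3 ~ 1.032.
z_mesh = np.linspace(-1.0, 1.0, 11) * 1.2 * 236 ** (1 / 3)
print(rho2_profile(z_mesh, a=[1.032], c=1.0))
```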
The deformation parameters \(\{q_{2},q_{3},q_{4}\}\), most relevant for the fission process, describe the nuclear elongation along the \(z\)-axis, the mass (volume) asymmetry of the left and right fragment, and the neck shape, respectively. It should be noted that the results presented in Refs. [8; 15] reveal that the set of these three collective deformations \(\mathbf{q}=\{q_{2},q_{3},q_{4}\}\) is sufficient to describe the behavior of the fissioning system close to its scission point within a reasonable energetical uncertainty of less than 1 MeV. Therefore, the higher order deformations, \(q_{5}\) and \(q_{6}\), which mainly modify the shapes of fission fragments in an insignificant way, are neglected at the current stage of our investigations.
A similar argumentation applies to the non-axiality degree of freedom, which is known to impact the PES of, in particular, actinide nuclei in the vicinity of the fission barrier, e.g. by reducing its height by 0.5-1 MeV. Thus the above property of the PES in actinides allows us, in a first approximation, to neglect the influence of the non-axial deformation \(\eta\).
### Potential energy surface
Setting the geometry of the nuclear surface, we come to the problem of defining the PES, which is a crucial factor determining the evolution of the fissile system.
From among a wide range of known approaches able to produce the potential energy function depending on the surface shape, we have decided to use a well-known macroscopic-microscopic model. Then, the total energy of a nucleus, \(V(\mathbf{q})\), can be composed of the leading macroscopic term \(E_{macr}\), evaluated in terms of a liquid-drop type approach, here the Lublin-Strasbourg Drop (LSD) [16], while the microscopic interaction energy \(E_{micr}\), playing the role of the energy correction on top of the dominating smooth liquid-drop term, is strictly related to the specific single-particle structure of a given
Figure 1: An example of the elongated nuclear surface obtained in the Fourier parameterization (1)
nucleus
\[V=E_{macr}+E_{micr}. \tag{4}\]
The deformation-dependent LSD smooth energy contribution in (4) is written as
\[\begin{split} E_{LSD}&=b_{vol}(1-k_{vol}I^{2})A\\ &-b_{surf}(1-k_{surf}I^{2})A^{2/3}B_{surf}(\mathbf{q})\\ &-b_{cur}(1-k_{cur}I^{2})A^{1/3}B_{cur}(\mathbf{q})\\ &-\frac{3}{5}e^{2}\frac{Z^{2}}{r_{0}^{ch}A^{1/3}}B_{Coul}(\mathbf{q})+C_{4}\frac{Z^{2}}{A}\\ &-10\,\exp(-4.2|I|),\end{split} \tag{5}\]
where \(I=\frac{N-Z}{A}\) is the so-called _reduced isospin_ whereas \(B_{surf}\), \(B_{cur}\), \(B_{Coul}\) introduce the deformation dependence to the surface, curvature, and Coulomb terms, respectively. The last deformation-independent term is what we usually call the congruence energy and is taken from the estimates of Myers and Swiatecki [1]. All parameters of the LSD formula originally found in Ref. [16] are also rewritten below:
\[\begin{split}& b_{vol}=15.4920\ \mathrm{MeV},\quad k_{vol}=1.8601,\\ & b_{surf}=16.9707\ \mathrm{MeV},\ k_{surf}=2.2038,\\ & b_{cur}=3.8602\ \mathrm{MeV},\quad\ k_{cur}=-2.3764,\\ & C_{4}=0.9181\ \mathrm{MeV},\qquad r_{0}=1.21725\ \mathrm{fm}.\end{split}\]
Please notice that this simple formula has been proven to reproduce the masses of over 3000 isotopes and over 80 fission barriers in actinides and super-heavy nuclei with reasonable accuracy.
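As a quick numerical cross-check, Eq. (5) can be evaluated for a spherical shape, for which all deformation-dependent functions equal unity. The sketch below uses the parameter set quoted above and follows the sign convention of Eq. (5) as printed; it is illustrative only.

```
import numpy as np

def lsd_energy_sphere(Z, A):
    """Macroscopic LSD energy of Eq. (5) for a spherical nucleus,
    i.e. with B_surf = B_cur = B_Coul = 1 (assumption of this sketch)."""
    N = A - Z
    I = (N - Z) / A
    e2 = 1.43996                       # e^2 [MeV fm]
    b_vol, k_vol   = 15.4920, 1.8601
    b_surf, k_surf = 16.9707, 2.2038
    b_cur, k_cur   = 3.8602, -2.3764
    C4, r0_ch      = 0.9181, 1.21725   # r0_ch plays the role of r_0^ch in Eq. (5)

    E  = b_vol  * (1 - k_vol  * I**2) * A
    E -= b_surf * (1 - k_surf * I**2) * A**(2 / 3)
    E -= b_cur  * (1 - k_cur  * I**2) * A**(1 / 3)
    E -= 0.6 * e2 * Z**2 / (r0_ch * A**(1 / 3))
    E += C4 * Z**2 / A
    E -= 10.0 * np.exp(-4.2 * abs(I))  # congruence term of Myers and Swiatecki [1]
    return E
```

For \({}^{236}\)U this returns roughly 1.78 GeV, i.e. a value of the order of the experimental binding energy of this nucleus, as expected for the smooth macroscopic part alone.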
In turn, the microscopic part in Eq. (4) is customarily decomposed into two energy components responsible for the shell, \(E_{shell}\), and pairing (superfluidity), \(E_{pair}\), effects, the latter calculated within the Bardeen-Cooper-Schrieffer model proposed in [17]. The shell correction \(E_{shell}\) is, by definition, obtained by subtracting from the sum of all occupied single-particle energies \(e_{k}\) the mean energy \(\tilde{E}\) obtained by smoothing out the nucleon mean-field spectrum, including levels from the energy continuum (see, e.g. [18])
\[E_{shell}=\sum_{k}e_{k}-\tilde{E}. \tag{6}\]
In (6), the averaged energy \(\tilde{E}\) is estimated through the Strutinsky method [2; 3] by smearing out the discrete spectrum with a correction polynomial of the \(6^{th}\) order. Finally, the pairing energy correction is determined in a similar way as in (6), but the resulting BCS energy is, in addition, reduced by the so-called average pairing-energy term, \(\tilde{E}_{pair}\), which is not accounted for in the smooth liquid-drop contribution (5), as done in Ref. [19]
\[E_{pair}=E_{\mathrm{BCS}}-\sum_{k}e_{k}-\tilde{E}_{\mathrm{pair}}. \tag{7}\]
Single-particle spectra for protons and neutrons of here discussed actinide nuclei are eigenvalues of the folded-Yukawa mean-field Hamiltonian diagonalized numerically as described in Ref. [18].
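A schematic numerical realization of the shell correction of Eq. (6) is sketched below. It is not the production routine used with the Yukawa-folded spectrum: the smearing width, the level degeneracy and the simplified treatment of the plateau condition are assumptions of this example.

```
import numpy as np

def strutinsky_shell_correction(levels, n_part, gamma, degeneracy=2):
    """Schematic Strutinsky shell correction of Eq. (6): E_shell = sum_k e_k - E~.

    levels     : single-particle energies e_k [MeV] of the mean-field spectrum
    n_part     : number of particles (protons or neutrons)
    gamma      : smearing width [MeV]; a value of the order of the shell
                 spacing hbar*omega_0 ~ 41/A**(1/3) MeV is a common choice
    degeneracy : occupancy of each level (2 for time-reversal doublets)
    """
    levels = np.sort(np.asarray(levels, dtype=float))

    # sum of the occupied single-particle energies (the "sharp" term)
    e_sharp = np.repeat(levels, degeneracy)[:n_part].sum()

    # Gaussian folding function times the 6th-order curvature-correction
    # polynomial P6(x) = 35/16 - 35/8 x^2 + 7/4 x^4 - x^6/6
    def fold(x):
        p6 = 35 / 16 - 35 / 8 * x**2 + 7 / 4 * x**4 - x**6 / 6
        return np.exp(-x**2) / np.sqrt(np.pi) * p6

    # smoothed level density g~(e) on a fine energy mesh
    e_mesh = np.linspace(levels[0] - 10 * gamma, levels[-1] + 10 * gamma, 20000)
    g_smooth = (degeneracy / gamma) * fold(
        (e_mesh[:, None] - levels[None, :]) / gamma).sum(axis=1)

    # smooth Fermi level lambda~ fixed by particle-number conservation
    de = e_mesh[1] - e_mesh[0]
    cum_n = np.cumsum(g_smooth) * de
    lam = e_mesh[np.argmin(np.abs(cum_n - n_part))]

    # smoothed energy E~ = integral of e * g~(e) up to lambda~
    mask = e_mesh <= lam
    e_tilde = np.sum(e_mesh[mask] * g_smooth[mask]) * de

    return e_sharp - e_tilde
```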
### Nuclear shape evolution
As mentioned, to describe the fission dynamics of selected actinide nuclei, we use a quasi-classical stochastic model, widely presented in Ref. [9]. In this approach,
Figure 2: Fourier-like nuclear shapes for \(q_{2}=0.3\) (blue dot-dashed line), \(q_{2}=1.0\) (green dashed line), \(q_{2}=1.9\) (orange solid line), \(q_{2}=2.35\) (black solid line), \(q_{2}=2.9\) (black dotted line).
a compound, excited, and, in general, rotating nucleus is represented in the form of a superfluid incompressible drop [4] with a well-defined deformed surface whose time evolution is governed by the set of coupled Langevin equations for the collective deformation variables \(\{q_{i}(t)\}\) and the corresponding canonically conjugate momenta \(\{p_{i}(t)\}\), written as
\[\left\{\begin{array}{l}\frac{dq_{i}}{dt}=\sum_{j}\left[\mathcal{ M}^{-1}\right]_{ij}p_{j},\\ \\ \frac{dp_{i}}{dt}=-\left[\frac{1}{2}\sum_{jk}\frac{\partial\left[ \mathcal{M}^{-1}\right]_{jk}}{\partial q_{i}}p_{j}p_{k}+\frac{\partial F}{ \partial q_{i}}\right.\\ \\ \left.+\sum_{jk}\gamma_{ij}\left[\mathcal{M}^{-1}\right]_{jk}p_{k} \right]+\mathcal{R}_{i},\end{array}\right. \tag{8}\]
where \(\mathcal{M}_{ij}\) and \(\gamma_{ij}\) are tensors corresponding to mass (inertia) and friction, respectively, while \(F\) is the Helmholtz free energy potential of the compound fissile system
\[F(\mathbf{q},T)=V(\mathbf{q})-a(\mathbf{q})T^{2}. \tag{9}\]
In the above, \(a(\mathbf{q})\) is deformation-dependent energy level density, defined according to the prescription [20], and \(T\) is the temperature of the system, which is related to the excitation energy \(E^{*}\) through the relation:
\[E^{*}=a(\mathbf{q})\,T^{2}. \tag{10}\]
It is assumed that the excitation energy in our work, \(E_{0}^{*}\), at an initial time \(t=0\) is the difference of the excitation energy \(E_{init}\) relative to the ground state and the height of the fission barrier \(V_{B}\).
The last term, \(\mathcal{R}_{i}\), of the second equation of the equation system (8) corresponds to the \(i^{th}\) component of the Langevin random force, which by definition writes
\[\mathcal{R}_{i}=\sum_{j}g_{ij}\Xi_{j}(t), \tag{11}\]
where \(\Xi(t)\) is a time-dependent stochastic function given as \(\Xi_{j}(t)=\nicefrac{{\xi_{j}}}{{\sqrt{\bar{\epsilon}}}}\) with the following properties: \(\langle\xi_{k}\rangle=0\), \(\langle\xi_{k}^{2}\rangle=2\). The amplitudes \(g_{ij}\) can be deduced from the fluctuation-dissipation theorem, known [9; 21] as the Einstein relation, enabling the calculation of the diffusion tensor
\[\mathcal{D}_{ij}\equiv\sum_{k}g_{ik}g_{jk}=\gamma_{ij}\cdot T \tag{12}\]
with \(\gamma_{ij}\) being the friction tensor. The collective inertia used in Eq. (8) is calculated within the incompressible irrotational flow approach using the Werner-Wheeler approximation [22]
\[\mathcal{M}_{ij}(\mathbf{q})=\pi\rho_{m}\int_{z_{min}}^{z_{max}}\rho_{s}^{2}(z,\mathbf{q})\left[A_{i}A_{j}+\frac{1}{8}\rho_{s}^{2}(z,\mathbf{q})A_{i}^{\prime}A_{j}^{\prime}\right]dz, \tag{13}\]
with
\[A_{i}(z,\mathbf{q})=\frac{1}{\rho_{s}^{2}(z,\mathbf{q})}\,\frac{\partial}{\partial q_{i}}\int_{z}^{z_{max}}\rho_{s}^{2}(z^{\prime},\mathbf{q})\,dz^{\prime},\]
where \(\rho_{m}\) denotes the mass density of nuclear matter and the primes stand for derivatives with respect to \(z\).
The friction tensor \(\gamma_{ij}\) is evaluated within the same hydrodynamic approach and, through the Einstein relation, fixes the strength of the diffusion tensor \(D_{ij}\) of Eq. (12). The temperature entering this relation significantly changes the stochastic part of the dynamics when \(T\) tends to zero, e.g. in spontaneous fission. As known, classical Brownian motions vanish when the system's temperature tends to zero. Thus the diffusion tensor \(D_{ij}\), which fixes the magnitude of the random Langevin force, should vanish too, and therefore the statistical nature of the fission process would be violated. On the other hand, quantum-mechanical considerations bring us to the conclusion that even at temperatures close to zero, i.e. for very low excitation energy, the zero-point motion of nucleons can cause fission.
To simulate these quantum effects in the semi-classical Langevin description, one can replace the temperature \(T\) with an effective temperature \(T^{*}\) in (12), as proposed in Ref. [25]
\[T^{*}=E_{0}\coth\frac{E_{0}}{T}, \tag{19}\]
where \(E_{0}=\frac{\hbar\omega_{0}}{2}\) corresponds to the zero-point collective oscillation energy of the nucleus in the vicinity of its ground state, which typically varies between \(0.5-2\) MeV. Under this assumption one obtains from Eq. (12) a more realistic description of the friction in low energy fission, responsible for the energy exchange between single-particle and collective degrees of freedom.
The set of Langevin equations (8) is solved by the discretization method in which the corresponding differential quotients are applied instead of the time derivatives on the left-hand sides of both equations. The finite time step for the numerical solution of their discretized forms is taken as \(0.01\,\tau\), where \(\tau\equiv\frac{2\mathcal{M}}{\gamma}\frac{\hbar}{\text{MeV}}\) is the characteristic relaxation time.
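A single Euler step of Eqs. (8), (11), (12) and (19) may look as follows. The `grid` object with its interpolation methods is a hypothetical helper standing for the tabulated PES and transport tensors, and the zero-point energy value is only an assumed number from the 0.5-2 MeV range quoted above.

```
import numpy as np

def langevin_step(q, p, dt, T, grid, rng, E0=1.0):
    """One Euler step of the Langevin equations (8) in the collective space.

    `grid` is a hypothetical helper providing interpolated values at q:
    Minv(q)  - inverse inertia tensor [M^-1]_ij,
    dMinv(q) - derivatives d[M^-1]_jk / dq_i  (shape n x n x n),
    gamma(q) - friction tensor,
    dF(q, T) - gradient of the Helmholtz free energy F = V - a T^2, Eq. (9).
    """
    Minv  = grid.Minv(q)
    dMinv = grid.dMinv(q)
    gamma = grid.gamma(q)
    dF    = grid.dF(q, T)

    # effective temperature T* = E0 coth(E0/T), Eq. (19); T* -> E0 as T -> 0
    T_eff = E0 / np.tanh(E0 / T) if T > 0 else E0

    # random force R_i = sum_j g_ij xi_j / sqrt(dt) with <xi^2> = 2 and
    # g g^T = D = gamma * T*, Eqs. (11)-(12); gamma must be positive definite
    g = np.linalg.cholesky(gamma * T_eff)
    R = g @ rng.normal(size=len(q)) * np.sqrt(2.0 / dt)

    dq = Minv @ p
    dp = -(0.5 * np.einsum('ijk,j,k->i', dMinv, p, p)
           + dF + gamma @ (Minv @ p)) + R

    return q + dq * dt, p + dp * dt
```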
### Initial and trajectory-terminating conditions
Having described the essential components of the model, we can proceed to a crucial point of this work, namely, defining the set of boundary conditions for the differential Langevin equations. For this purpose, first, one should define a region in the domain of collective variables \(\mathbf{q}\) in which the shape evolution of a nucleus is performed. Some detailed studies have shown that for actinide nuclei, it is necessary to consider the following collective three-dimensional deformation space
\[\begin{split} q_{2}&=[\;\;0.00\;\;(0.05)\;\;2.35\;],\\ q_{3}&=[-0.21\;\;(0.03)\;\;0.21],\\ q_{4}&=[-0.21\;\;(0.03)\;\;0.21],\end{split} \tag{20}\]
which comprises a vicinity of the ground state, all saddle points and isomeric minima relevant for the fission process, and ends within the configurations where the nucleus is already split into two fragments or, alternatively, where the width of the neck of a compound nucleus is sufficiently small (around \(0.2\,R_{0}\)) to observe the fission. In the nodes of such a lattice, we have calculated the previously introduced values of the collective potential, inertia, and friction tensors. To determine the values between the lattice nodes, we use the so-called Gauss-Hermite approximation method proposed in Ref. [26], which determines the demanded values on an, in general, \(N\)-dimensional mesh with very satisfactory accuracy.
Finally, let us mention the behavior of a trajectory when one of its collective variables \(q_{i}\) reaches its extreme (border) value given in (20). This may happen, for example, when the entry point in a given isotope is located relatively close to the grid boundaries and, therefore, the trajectory can easily reach the border after a couple of time steps. Technically, such a trajectory does not lead to fission and, strictly speaking, should be removed from our consideration. In such a case, some conditions for resuming such a trajectory may be helpful. A reasonable possibility to solve this problem is to change the sign of the momentum component conjugated to this coordinate, allowing the trajectory to turn around and continue its evolution.
With the coordinate \(q_{2}\), the situation is slightly exceptional. After reaching its maximum value \(q_{2}^{max}=2.35\), the system is elongated more than two times than in its ground state. Suppose that for such a deformation, the decisive criterion for qualifying a given trajectory as the one which leads to fission is still not fulfilled. In that case, such a trajectory has no physical meaning, and its further evolution is meaningless. Hence, this kind of basic condition must be imposed in most further calculations, mainly if a symmetric and highly elongated fission channel is intensely populated.
## III Effect of boundary conditions on model results
Nevertheless, the previously mentioned boundary conditions usually need to be refined to determine stochastic trajectories effectively. We initiate the evolution of a Langevin trajectory by choosing its initial deformation point on the PES. In general, when using the mentioned formalism, such a point is assumed to be the ground state of the compound system, as is done in the works of Abe [9; 27]. According to the philosophy of our stochastic framework, the compound system has initially to stay close to the ground state in order to speak about its evolution towards different decay channels. In fission, the available energy excess has, at least, to be sufficient for the system to overcome the barriers standing in the way of the fissioning nucleus. Thus, the set of initial configurations drawn at the beginning of each trajectory is likely located in a specific area in the vicinity of the outer saddle point, through which the system must pass on its way to fission.
### Modifications of initial conditions
In order to prove this assumption, we calculate the fragment mass distributions for the thermal neutron-induced fission of \({}^{235}\)U nucleus with two different starting points. The first option is to initiate the trajectory from the ground state. Using the initial conditions mentioned in the previous paragraph, the second one starts from the outer saddle point. The initial conjugated momenta are put in both cases to be equal to zero.
We assume that a nucleus undergoes fission and the determination of the corresponding trajectory is terminated if the neck radius in the thinnest point reaches \(r_{neck}\approx 0.3\,R_{0}=2.0\) fm (see, e.g., [28; 29; 30]). Of course, such a criterion is chosen, to some extent, arbitrarily and can be modified by introducing a dependence of \(r_{neck}\) on the scission deformation (elongation) or temperature. Nevertheless, in this study, no such dependence is assumed. As seen in Figure 3, the fragment mass distributions for both these cases are nearly identical. The total number of trajectories used here to generate sufficient statistics is large and equal to \(10^{5}\). However, only 1 per 100 trajectories initiated in the ground state overcomes the barrier and efficiently evolves to fission, while the others are stuck in the potential energy well for a long time. If the calculation starts in the direct surrounding of the saddle point, the number of "imprisoned" trajectories is lower by practically an order of magnitude. However, introducing a few additional constraints can still improve the ratio of "passed" to "imprisoned" trajectories. We therefore follow the ideas proposed in Refs. [32; 33], where a manner of generating the initial coordinates \(\mathbf{q}^{0}\) to be used in the first time step was proposed. This method consists of the following procedure: using the normal distribution \(\xi_{norm}\) with \(\mu=0\) and \(\sigma=\frac{1}{2}\sqrt{\frac{E_{0}/2V}{\omega_{q}^{2}}}\) we fix the set of coordinates \(\mathbf{q}^{0}\), which then has to satisfy the following condition
\[\begin{cases}q_{2}^{0}\geq q_{2}^{start},\\ \frac{1}{2}\sum\limits_{ij}\left[\mathcal{M}^{-1}\right]_{ij}p_{i}^{0}p_{j}^{0}\equiv V(\mathbf{q}^{start})-V(\mathbf{q}^{0})-E_{0}\geq 0,\end{cases} \tag{21}\]
where \(\mathbf{q}^{start}\) is an actual starting point of a trajectory, and \(E_{0}\) describes a contribution of the zero-point vibration energy at that point.
The question arises whether the space of \(\mathbf{q}^{0}\) points should be restricted to a certain volume around the point \(\{\mathbf{q}^{start}\}\). In terms of the condition (21), such a problem may occur when the PES is sufficiently flat around this point, allowing the initial configuration to exceed the borders of the fixed grid, see Fig. 4(a,c). To avoid this, we can somewhat arbitrarily restrict the deformation space \(\mathbf{q}^{0}\) to the following limits:
\[\begin{split}& q_{2}=\left[q_{2}^{start};\ q_{2}^{start}+0.2 \right]\\ & q_{3}=\left[q_{3}^{start}\ -\ 0.09;\ q_{3}^{start}+0.09\right]\\ & q_{4}=\left[q_{4}^{start}\ -\ 0.09;\ q_{4}^{start}+0.09\right] \end{split} \tag{22}\]
In Fig. 4, we show four PES for \({}^{236}\)U, where the coordinates \(\mathbf{q}^{0}\) are distributed without (a,b) and with (c,d) including the zero-point vibration energy in (21) and with and without limitations of the initial coordinate region (b,d). The last two panels reveal the lack of sensitivity to these limitations. In this case, the ratio of traversed to not traversed trajectories in case (d) lies within the interval \(1-1.5\), noticeably reducing the computation time.
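In practice, the sampling of Eqs. (21)-(22) can be organized as a simple rejection loop. In the sketch below, `V`, `Minv` and the Gaussian widths `sigma` are placeholders for the interpolated PES, the inverse inertia tensor and the widths discussed above; the isotropic choice of the initial momentum direction is an extra assumption of this example, since only its magnitude is fixed by Eq. (21).

```
import numpy as np

def draw_starting_point(q_start, V, Minv, E0, sigma, rng, max_tries=10000):
    """Sample an initial point (q0, p0) around the outer saddle q_start
    according to Eqs. (21)-(22) for q = (q2, q3, q4)."""
    lo = np.array([q_start[0],       q_start[1] - 0.09, q_start[2] - 0.09])
    hi = np.array([q_start[0] + 0.2, q_start[1] + 0.09, q_start[2] + 0.09])
    for _ in range(max_tries):
        q0 = rng.normal(q_start, sigma)
        if np.any(q0 < lo) or np.any(q0 > hi):     # limits of Eq. (22)
            continue
        e_kin = V(q_start) - V(q0) - E0            # condition of Eq. (21)
        if e_kin < 0.0:
            continue
        u = rng.normal(size=3)                     # assumed momentum direction
        minv = Minv(q0)
        p0 = u * np.sqrt(2.0 * e_kin / (u @ minv @ u))
        return q0, p0
    raise RuntimeError("no admissible starting point found")
```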
### Fissioning trajectories
Having determined the criteria for fixing the starting point for a given Langevin trajectory which is assumed to lead to fission, let us turn to the problem of assessing whether, at a given time, the trajectory describes a fission configuration or whether the nucleus is still compound. This task, as commonly known, is not trivial as, in reality, the division of a nucleus into fragments may significantly depend not on the neck width alone but also on a series of other quantities characterizing bulk and surface properties of both fragments, their shell structures, deformations, excitation energies, the relative collective velocity of fragments towards fission, neck curvature, etc. As we have explored, knowing the decisive criteria for suspending the evolution of a given trajectory due to the achievement of a neck braking configuration is even more crucial than choosing its starting point. Unfortunately,
Figure 3: Primary FMDs for thermal neutron induced fission of \({}^{235}\)U initiated from the ground state (red) and the second saddle (blue) whereas black triangles correspond to values adapted from experimental data [31]. Here \(Y(A_{f})\) denotes the yield for the corresponding fragment mass.
this problem is not unambiguously solvable at the moment. It requires the introduction of several additional phenomenological assumptions, which will only be tested by comparing the simulation results with empirical data. This, admittedly, may reduce the transparency and universality of this approach.
Since the phenomenological criteria for the neck rupture are, to some extent, arbitrary and model dependent, we decide to test within this work the one which effectively leads to a division of a nucleus into two fragments and depends only on the neck radius (width), \(r_{neck}\), in case axial shapes are considered. Such a solution is widely used in several recent works, e.g., Refs. [27; 28; 29; 30; 34; 35]. In our approach, the fission into two fragments occurs when the neck radius, the value of which may vary between \(1-2.5\) fm, is close to the effective radius of a single nucleon, denoted in the following by \(r_{n}\) and approximately equal to 1 fm. Suppose the neck radius \(r_{neck}\) is still too large when the presumed elongation grid-border \(q_{2}^{max}\) is reached. In that case, the trajectory should, in principle, be excluded from consideration as a non-fissioning one.
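The purely geometrical neck criterion can be implemented directly on the tabulated axial profile of Eq. (1). The sketch below simply looks for the interior local minimum of \(\rho_{s}(z)\) between the two nascent fragments; it assumes the \(z\)-mesh covers only the nuclear interior, where \(\rho_{s}^{2}>0\).

```
import numpy as np

def neck_radius(z, rho2):
    """Neck radius [fm] of an axial shape given by the tabulated profile
    rho_s^2(z) (e.g. from the rho2_profile sketch above).  For a convex,
    neck-less shape the central radius is returned instead."""
    rho = np.sqrt(np.clip(rho2, 0.0, None))
    i = np.arange(1, len(rho) - 1)
    is_min = (rho[i] <= rho[i - 1]) & (rho[i] <= rho[i + 1])
    interior = i[is_min & (rho[i] > 0.0)]
    if interior.size == 0:
        return float(rho[len(rho) // 2])   # no neck has developed yet
    return float(rho[interior].min())
```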
In practical calculations on a finite deformation grid, such grid-border values are usually fixed slightly before geometrical scission, i.e., where the neck radius is strictly
Figure 4: Samples of starting-point distributions on PES of \({}^{236}\)U. Panel (a) - without limit control and subtraction of \(E_{0}\), (b) - without limit control and inclusion of \(E_{0}\), (c) - without control and inclusion of \(E_{0}\) and (d) - with limit control and \(E_{0}\) included, the gray cross gives the location of the second saddle.
Figure 5: Primary FMDs for starting-point distributions, where symbol \((i)\) corresponds to analogous cases from Fig. 4.
equal to zero. This is so because the accuracy of the numerical determination of the PES and of the necessary transport quantities for two already separated, strongly elongated fragments is considerably lowered due to limitations of the numerical routines used to develop the eigensolutions of the Yukawa-folded Hamiltonian and the liquid-drop deformation functions in highly elongated, necked nuclear shapes.
However, in specific test cases shown below, where the scission configurations for symmetric fission can be strongly elongated, we allow for the possibility that the trajectory is continued even though the presumed elongation limit, \(q_{2}^{max}\), is slightly exceeded while the condition for the neck radius is still not satisfied. To show the contribution of such strongly elongated states to the final FMD, in addition to the previously obtained initial conditions (22), we introduce the trajectory-termination conditions in two ways. In the first of them, for elongations \(q_{2}<q_{2}^{max}\), a trajectory that satisfies the neck-radius criterion \(r_{neck}<r_{neck}^{stop}\) is counted as a fissioning one. In case \(q_{2}>q_{2}^{max}\) and the neck radius is still greater than a fixed \(r_{neck}^{stop}\) value, such a trajectory is rejected. This scenario is depicted in Fig. 6(a) with the red line. In contrast, in the second way the neck radius condition is completely ignored, and a trajectory reaching the elongation limit \(q_{2}^{max}\) is assumed to describe the act of fission. Clearly, in the last scenario, the neck radii of the fissile configurations distribute over different possible values ranging from \(r_{n}\) to even more than \(4r_{n}\), with a clear peak around \(2r_{n}\), as presented in Fig. 6(b) with the navy blue line.
As shown in Fig. 6(a), neglecting the "neck-radius condition" results in a significant contribution of both near-symmetric and extremely asymmetric channels, which are not observed in the experimental distribution. To explain this, let us return to Fig. 2, where one can see that at elongation \(q_{2}=q_{2}^{max}=2.35\), in light of the neck-radius condition, some configurations cannot be qualified as being very close to splitting. If one considers the achievement of this elongation limit as the only decisive condition for fission, there appears a danger of obtaining unrealistic FMD. As also seen in Fig. 2, extending this value even to \(q_{2}^{max}\approx 2.9\), at which the accuracy of determining necessary input quantities is getting increasingly questionable, we are still facing nuclear shapes with significant neck widths of approximately \(0.5R_{0}\). The above is an effect of the undesired property of our Fourier shape parametrization, which, particularly for large nuclear elongations, cannot produce well-separated, symmetric fragments.
One then deduces that the conditions for nuclear scission applied to our Langevin framework, which mainly determines the quality of reproduction of FMD, have to be searched according to the following rules: first, by considering pure geometrical criteria for the neck
Figure 6: Primary FMD’s (a) for thermal neutron induced fission of \({}^{235}\)U with obligatory usage of neck radius condition (red) and without (navy). The histogram (b) shows \(r_{neck}\) distribution for both cases.
Figure 7: Primary FMDs for thermal neutron induced fission of \({}^{235}\)U with non-obligatory neck radius condition usage at limit value \(q_{2}^{max}=2.35\) (dotted line), \(q_{2}^{max}=2.5\) (dash-dotted line) and \(q_{2}^{max}=2.9\) (solid line).
width, dependent only on the surface-parametrization properties, and second, by verifying whether, for such a pre-selected deformation point, the accuracy of determining the macroscopic-microscopic quantities fits the acceptable limits. This also indicates that the choice of the optimal \(r_{neck}^{stop}\) value may not, in general, be universal across the complete set of studied nuclei and needs, at least, to be validated when changing the Z or N numbers by a couple of units.
### Searching for the optimal neck radius
Now, after proving that the condition of the neck size is crucial, let us investigate its effect on the distributions of the fission mass fragments. For this purpose, we assume that the value of the limit radius \(r_{neck}^{stop}\) at which a trajectory is stopped may vary from \(3r_{n}\) to 0 with a step of \(r_{n}\). We set the initial points according to Fig. 4(d) and prescription of Eq. (22) while the upper elongation limit \(q_{2}^{max}=2.35\). As can be seen from Fig. 8, the resulting mass distributions change their form for different \(r_{neck}^{stop}\) radii. With decreasing neck radius, the fragment mass distribution is getting slightly narrower, and the asymmetric peak shifts towards more and more symmetric yields. At the same time, its symmetric part is gradually vanishing, approaching the experimental value.
To understand the dominance of the asymmetric fission channel in this nucleus, let us notice in the PES presented in Fig. 4 that the most likely path from the starting configuration, set around the second saddle point at \((q_{2},q_{3})\approx(1.0,0.09)\), to the scission leads directly towards the asymmetric valley, which is separated from the symmetric one by an edge almost 3 MeV high, visible around \(q_{3}\approx 0.06\). Moreover, since the excitation energy at the initial configuration is relatively low, the random force defined through Eqs. (11) and (12) has very little chance to push the system over this edge. It can also be seen that, except for the extreme cases (a) and (d) with \(r_{neck}^{stop}=3\,r_{n}\) and \(r_{neck}^{stop}=0\), respectively, the overall features of the other presented distributions are generally weakly affected, which may indicate that the main contributions to the final FMD come from \(r_{neck}^{stop}=\{2r_{n},\ r_{n},\ 0\}\).
### Stochastic character of neck-breaking
Taking into account the results shown in Fig. 8, one can ask whether the use of the strictly fixed value of the \(r_{neck}^{stop}\) which governs the moment of splitting of a nucleus into fragments of different masses (charges) is not a severe simplification of the stochasticity of fission phenomenon. A simplistic realization of the idea of, to some extent, random value of the \(r_{neck}^{stop}\) radius just before the neck-breaking is to draw at the beginning of each trajectory its value from a specific interval, say \([0,\alpha_{r}\,r_{n}]\), with a probability given through the uniform distribution. The fragment mass distributions shown in Fig. 9 are calculated for the following three values of \(\alpha_{r}=\{1,2,3\}\). The results are compared with the FMD obtained within analogous intervals shown in Fig. 8(a)-(c), respectively.
The above concept may also be realized if, instead of the uniform distribution of \(r_{neck}^{stop}\), one uses the continuous normal distribution peaked at \(r_{n}\) with the dispersion \(\sigma\) equal to \(r_{n}\), denoted by \(P_{norm}(r_{n},r_{n})\). These parameter values allow us to cover all the scission neck configurations previously considered in Fig. 9. If the drawn value of \(r_{neck}^{stop}\) happens to be negative, its absolute value is taken. The resulting distributions seen in Fig. 10, in comparison with the previous ones of Fig. 9(b), are hardly distinguishable.
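The rupture-radius sampling described above reduces to a one-liner; in the short sketch below, \(r_{n}=1\) fm is the effective nucleon radius used in the text.

```
import numpy as np

def draw_r_stop(rng, r_n=1.0):
    """Neck-rupture radius drawn from P_norm(r_n, r_n); a negative draw is
    folded back by taking its absolute value, as described in the text."""
    return abs(rng.normal(loc=r_n, scale=r_n))

rng = np.random.default_rng(0)
samples = [draw_r_stop(rng) for _ in range(5)]   # one value per trajectory
```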
### Final conditions and excitation energy
As can be seen, the introduction of a more involved Gaussian distribution on \(r_{neck}^{stop}\) thresholds does not qualitatively change the final fragment mass distributions for thermal neutron-induced fission of \({}^{235}\)U. One can then apply a similar procedure to analyze the shapes of distributions for the systems of higher excitation energy. It is clear that at higher temperatures, the system, especially in the neck region, is less stable. Some local surface vibration provoked by thermal nucleon motion can lead to a more rapid neck rupture, even when its radius is much greater than \(r_{n}\). We then study the fast-neutron fission reaction where \(E_{n}=14.8\) MeV. This means the excitation energy \(E^{*}\) exceeds the fission barrier \(V_{B}\) by almost 15 MeV. The calculation is performed for the two variants of \(r_{neck}^{stop}\) conditions, where first, \(r_{neck}^{stop}=2r_{n}\) (see further)
Figure 8: Primary FMD’s for neutron-induced fission \({}^{235}\)U with variation of the neck radius \(r_{neck}\) from \(3r_{n}\) (a), \(2r_{n}\) (b), \(r_{n}\) (c), 0 (d).
and second, \(r_{neck}^{stop}\) is randomly drawn with the probability given by the Gaussian distribution \(P_{norm}(r_{n},r_{n})\). The results are shown in Fig. 11. It is seen that both theoretical estimates of FMDs have a serious discrepancy with the experimental data in the region of symmetric channels.
This example illustrates, somewhat in contrast to the thermal-neutron induced fission depicted in Fig. 10, an increased sensitivity of the FMD to the conditions which define the end of a Langevin trajectory. A good illustration of that statement is shown in Fig. 12, where the neck radius \(r_{neck}^{stop}\) varies from \(2r_{n}\) to \(4r_{n}\) for the previously considered system with an excitation energy of about 15 MeV above the barrier (left panel). For comparison, we consider in the right panel the uranium system with an excitation energy of already 55 MeV. With the condition for the neck radius \(r_{neck}^{stop}=r_{n}\), the form of the FMD is practically the same as for \(2r_{n}\), while at higher values, e.g. \(r_{neck}^{stop}>4r_{n}\), it is difficult to talk about the occurrence of a true neck. One can see that the symmetric yields of the final FMD's become closer to the measured values when the neck radius is higher than in the case of the thermal-neutron induced fission shown in Fig. 10 and varies between \(2r_{n}\) and \(3r_{n}\). A similar tendency to enhance the symmetric fission channel is observed in the highly excited system of \({}^{236}\)U.
Figure 11: Comparison of primary FMD with experimental data [36] for 15 MeV neutron induced fission of \({}^{235}\)U (black triangles) with FMDs calculated within Gaussian \(P_{norm}(r_{n},r_{n})\) distribution (gray) and constant value \(2r_{n}\) (red) of \(r_{neck}^{stop}\).
Figure 10: Comparison of primary FMD’s for thermal-neutron induced fission \({}^{235}\)U calculated within Gaussian \(P_{norm}(r_{n},r_{n})\) (gray) and random picking (blue) distributions imposed on \(r_{neck}^{stop}\).
Figure 9: Primary FMD’s for thermal neutron induced fission of \({}^{235}\)U calculated within random pick (blue) of \(r_{neck}^{stop}\) defined on the following intervals [0,3\(r_{n}\)] (a), [0, 2\(r_{n}\)] (b) and [0,\(r_{n}\)] (c), compared with analogous FMD’s of Fig. 8(a)-(c).
### Symmetric fission effect of very elongated systems
Now, let us return to the influence of the upper-limit value of elongation, \(q_{2}^{max}\), on the resulting FMD's - a problem already introduced in subsection (III.2). In Fig. 13, the change of the FMD for the medium excited fissioning \({}^{235}\)U system as a function of \(q_{2}^{max}\) is presented. Recall only that if \(q_{2}^{max}\) is continuously prolonged beyond the safe limit of 2.35, some growing numerical uncertainties in determining the PES and the transport coefficients may appear. Therefore, the temporal evolution of this nucleus is performed until this limit is achieved. We realize that we could obtain distributions that better fit the experimental data by ignoring the fact that such inaccuracies exist. In Ref. [30], the \(q_{2}^{max}\) value was chosen to be equal to 2.9, which, combined with \(r_{neck}^{stop}=r_{n}\), allowed the authors to almost perfectly describe the mass and total kinetic energy (TKE) distributions for the fission of \({}^{235}\)U and \({}^{239}\)Pu induced by thermal and 14.8 MeV neutrons.
Figure 13: Comparison of primary FMD’s for 15 MeV neutron induced fission \({}^{235}\)U calculated within Gaussian \(P_{norm}(r_{n},r_{n})\) distribution of \(r_{neck}^{stop}\) with the upper limit of \(q_{2}^{max}\) is 2.35 (dotted), 2.5 (dot-dashed), 2.7 (dashed) and 2.9 (solid).
Figure 12: Comparison of the primary FMDs for 15 MeV neutron-induced fission \({}^{235}\)U (left) and \({}^{236}\)U with 55 MeV excitation above the barrier top (right), calculated for the value of \(r_{neck}^{stop}\) varying from \(2r_{n}\) (dotted line), \(3r_{n}\) (dashed line), \(4r_{n}\) (solid line). Experimental data [36] are also presented to make the comparison clearer.
with 14.8 MeV neutrons, the resulting distributions are noticeably closer to the experimental curves and fit well the error-bar areas.
Studying the results of Fig. 13, we notice that by systematic prolonging of the \(q_{2}^{max}\) up to 2.9, we obtain a more substantial population of the highly elongated near-symmetric yields that contribute on average to the reduction of the TKE's, particularly in part corresponding to symmetric fragmentation.
Summarizing the above-presented test results obtained for studied uranium isotopes, we may conclude that the selection of the starting point and trajectory-termination conditions on the neck width are essential to reasonably reproduce the empirical FMD and TKE distributions, especially at higher excitation energies. The other types of constraints only allow the elimination of trajectories that are not physical, thus reducing the time to generate a necessary number of statistical samples. Moreover, a nontrivial relation between the final conditions and the excitation energy has been observed, the form of which has yet to be mathematically formulated.
## IV Results and Discussion
After establishing the initial and termination (final) criteria for Langevin trajectories by studying the fission of the \({}^{235}\)U isotope as the benchmark case, let us broaden the applicability of the model in question to the other actinide elements. In particular, we will focus on the isotopes of \({}^{233}\)U, \({}^{239}\)Pu, \({}^{245}\)Cm, \({}^{249}\)Cf and \({}^{255}\)Fm, for which experimental data on FMD are available. The obtained distributions for chosen nuclei from among these isotopic chains are illustrated in Fig. 15. The evaluated results agree with experimental data in medium-heavy actinides, such as uranium, plutonium, and curium. Nevertheless, some larger discrepancies between estimated and empirical distributions are present in selected heavy actinides of californium and fermium isotopes. Although the available experimental data refer to the distributions of secondary fragments (after the emission of light particles from the compound nucleus as well as from fission fragments), the mutual shift of both those distributions by a couple of mass units for \({}^{250}\)Cf or the appearance of the symmetric-fission peak in our FMD's of \({}^{255}\)Fm cannot be fully explained by the effects of light particle evaporation alone. Also, as commonly known, in Cf and Fm nuclei, a rapid transition from the dominant asymmetric to symmetric fission mode, caused by adding two neutrons, is noticed. Reproducing this behavior is a particular challenge for our model. Recall that spontaneous or induced fission processes are probabilistic phenomena associated with overcoming the fission barrier between the ground state or some excited state and an exit point of the same energy, by definition, located outside the barrier. In a quantum approach, the probability of passing the barrier is crudely dependent on the barrier shape and the number of hits on the barrier per time unit. In our Langevin-like semi-classical approach, by contrast, the barrier is not "tunnelled"; instead, it must be over-jumped in the initial evolution stage by a system with kinetic energy greater than the barrier height.
Using the method of determining the starting points, given by formula (21), which are located slightly beyond the outer saddle point, we perform the calculations of Langevin trajectories corresponding to the spontaneous fission for the following even-even nuclei series: \({}^{238}\)U, \({}^{238-244}\)Pu, \({}^{244-248}\)Cm, \({}^{252-256}\)Cf and \({}^{254-260}\)Fm. The trajectory evolution is initiated using similar rules as in the case of induced fission, described in the previous sections. A particular value of \(r_{neck}^{stop}\) for which a given trajectory is terminated at the pre-scission point is drawn at the beginning of each trajectory with the Gaussian probability distribution, \(P_{norm}(r_{n},r_{n})\), as already used in the study of the \({}^{235}\)U isotope. Figure 16 illustrates the final FMDs for the spontaneous fission of the mentioned nuclei. Against the background of generally satisfactory agreement between the theoretical and experimental (mainly primary) fragment mass distributions in these nuclei, we note in \({}^{252}\)Cf that although the evaluated distributions reproduce the dominance of asymmetric yields, they considerably overestimate the number of symmetric fragments and, in addition, are much too narrow. Notice that in \({}^{254,256}\)Fm and \({}^{256}\)Cf, the measured distributions include the effect of light-particle emission. There, however, the two compared distributions differ radically.
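To make the trajectory-termination procedure concrete, the following minimal Python sketch shows how a stopping neck radius can be drawn from \(P_{norm}(r_{n},r_{n})\) at the start of each trajectory, used as the termination criterion, and how the terminated configurations can be accumulated into a fragment-mass histogram. The `toy_step` dynamics, the value of `R_N`, the mass-asymmetry mapping, and the chosen compound nucleus are illustrative placeholders and are not taken from the actual Langevin code.

```python
import numpy as np

rng = np.random.default_rng(0)
R_N = 1.2  # nucleon radius in fm (illustrative value, not from the paper)

def draw_r_stop(mean=R_N, sigma=R_N):
    """Draw a termination neck radius from P_norm(r_n, r_n), rejecting non-positive values."""
    while True:
        r = rng.normal(mean, sigma)
        if r > 0.0:
            return r

def run_trajectory(evolve_step, state, max_steps=200_000):
    """Evolve one trajectory until the neck radius drops to the drawn r_stop."""
    r_stop = draw_r_stop()
    for _ in range(max_steps):
        state = evolve_step(state)
        if state["r_neck"] <= r_stop:
            return state          # pre-scission (terminated) configuration
    return None                   # no scission within the step budget

def toy_step(state):
    """Placeholder dynamics: the neck shrinks with random fluctuations (NOT the physical model)."""
    state["r_neck"] += rng.normal(-0.001, 0.01)
    state["mass_asym"] += rng.normal(0.0, 0.002)
    return state

A_CN = 252                        # compound-nucleus mass number, e.g. 252Cf (illustrative)
masses = []
for _ in range(2000):
    final = run_trajectory(toy_step, {"r_neck": 2.5, "mass_asym": 0.0})
    if final is not None:
        a_heavy = 0.5 * A_CN * (1.0 + final["mass_asym"])
        masses.extend([a_heavy, A_CN - a_heavy])   # both fragments enter the primary FMD
fmd, edges = np.histogram(masses, bins=np.arange(60, 200))
```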
Searching the discrete grids of the potential energy of californium and fermium nuclei with neutron numbers corresponding to the transition area from asymmetric to symmetric FMD, we can find more than one point describing possible configurations of the exit from under the barrier. We, therefore, postulate that each such state should be treated as the starting point to perform the Langevin fission simulation. The final FMD thus obtained, say "partial fragment mass distributions," should
Figure 15: Comparison of primary FMDs calculated in our Langevin approach within \(P_{norm}(r_{n},r_{n})\) (solid red line) with primary (solid triangles) and secondary (hollow triangles) FMD’s [31, 37, 38] for thermal neutron induced fission of \({}^{233}\)U, \({}^{235}\)U, \({}^{239}\)Pu, \({}^{245}\)Cm, \({}^{249}\)Cf and \({}^{255}\)Fm nuclei.
be superimposed with appropriate weights to obtain the final FMD. A classical measure of these weights may be the values of the action integrals evaluated between a given starting and exit point. The latter may be found either in the symmetric or asymmetric fission valley. This approach can be used mainly for Cf and Fm nuclei, in which the system decides where to go within a small bifurcation area after crossing the barrier. Nevertheless, this issue is beyond the scope of this work and will be addressed in future investigations.
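A minimal sketch of the suggested superposition is given below; note that the text above only proposes the action integrals as a possible measure of the weights, so the \(\exp(-2S_{i})\) WKB-like weighting and the synthetic partial distributions used here are assumptions made purely for illustration.

```python
import numpy as np

def superpose_fmds(partial_fmds, actions):
    """Combine partial FMDs with weights derived from the action integrals S_i.

    partial_fmds: array of shape (n_exit_points, n_mass_bins), one FMD per starting point
    actions:      action integrals S_i (in units of hbar), one per starting point
    The exp(-2 S_i) weighting is an assumed WKB-like choice, not prescribed by the text.
    """
    partial_fmds = np.asarray(partial_fmds, dtype=float)
    weights = np.exp(-2.0 * np.asarray(actions, dtype=float))
    weights /= weights.sum()
    total = weights @ partial_fmds
    return 200.0 * total / total.sum()   # normalize yields to 200% (two fragments per fission)

# Illustrative example: one exit point feeding the asymmetric valley, one the symmetric valley.
bins = np.arange(60, 181)
asym = np.exp(-0.5 * ((bins - 140) / 6.0) ** 2) + np.exp(-0.5 * ((bins - 112) / 6.0) ** 2)
sym = np.exp(-0.5 * ((bins - 126) / 8.0) ** 2)
combined = superpose_fmds([asym, sym], actions=[10.0, 12.5])
```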
For a complete comparison, we also include the experimental distributions of primary and secondary fission fragments, as well as the evaluated distributions obtained within the \(P_{norm}(r_{n},r_{n})\) neck-radius normal distribution, and those calculated within the framework of the static Born-Oppenheimer (BOA) model. This latter approach is based on an approximate solution of the eigenvalue problem of the three-dimensional collective Hamiltonian. A more detailed description of that and the corresponding results for a wide range of even-even actinides can be found in [15; 42]. Despite the different theoretical underpinnings of these two models, they both exploit identical PES's and inertia parameters associated with our three-dimensional Fourier deformation space.
## V Summary
This work presents a quasi-classical dynamical approach to simulate the stochastic nature of the fission of a compound nucleus using a system of Langevin equations, which requires as input the free (Helmholtz) energy based on the macroscopic-microscopic PES. The calculations are done in the space of three Fourier surface deformations relevant for the fission process, describing the nucleus elongation, mass asymmetry, and neck thickness. As widely known, the impact of the non-axiality degree of freedom on the PES is irrelevant beyond the outer saddle and in the neighborhood of the scission configurations, and it is therefore neglected in this study. Such a simplification allows generating hundreds of thousands of Langevin trajectories for a single nucleus within a reasonable time of tens of minutes.
We emphasize the importance of initial and trajectory-termination conditions, which are independent of the particular realization of the Langevin framework. This last condition appears particularly important as it defines the "critical" width of the neck that allows the composite nucleus to be divided into fragments. Since our calculations are carried out on a finite PES grid, we pay
Figure 16: Comparison of primary FMD’s (solid red line) calculated in the Langevin approach within \(P_{norm}(r_{n},r_{n})\) with experimental [37; 38; 39; 40; 41] primary (solid circles) and secondary (hollow circles) FMDs for spontaneous fission of Pu, Cm, Cf and Fm nuclei, together with FMDs calculated within the BOA method [15] (green dashed).
particular attention to the exact setting out of the grid boundaries to capture all essential fission modes, predominantly strongly asymmetric, and additionally not to consider non-physical energy configurations showing up for considerably large elongations.
After analyzing the trajectory termination condition, we concluded that noticeably better results could be obtained if a normal probability distribution \(P_{norm}(r_{n},r_{n})\) of the neck radius is used instead of a fixed value. The maximum of \(P_{norm}\) is located at the nucleon radius \(r_{n}\), which also serves as its standard deviation. With increasing temperature of the system, the value of \(r_{neck}^{stop}\) is shifted towards larger values.
The results on the induced and spontaneous fission of various actinide nuclei, such as U, Pu, Cm, Cf, and Fm, show generally good agreement with experimental data for medium and some selected heavy actinides. However, there is a discrepancy in the overall behavior of the FMD in the Cf series. We strongly hope that applying the above-mentioned concept of superposing partial FMD's initiated from the different available exit points from under the barrier, and using the modified Fourier-over-spheroid shape parametrization [43], would allow us to better reproduce the mentioned effect of the abrupt transition from asymmetric to symmetric fragmentation. Further work is needed to improve the model's ability to describe other fission characteristics, such as secondary FMD's and TKE's corrected for the effect of light particle evaporation.
###### Acknowledgements.
The authors are grateful to K. Pomorski and C. Schmitt for their support and fruitful discussions. This work is also partly supported by the Polish National Science Center through the SHENG-1 project (Grant No 2018/30/Q/ST2/00185) and the NAWASTER project "UMCS Doctoral Schools - Your Success in Globalized World of Science" (Grant No BPI/STE/2021/1/00006/U/00001).
|
2309.16400 | Physics-Preserving AI-Accelerated Simulations of Plasma Turbulence | Turbulence in fluids, gases, and plasmas remains an open problem of both
practical and fundamental importance. Its irreducible complexity usually cannot
be tackled computationally in a brute-force style. Here, we combine Large Eddy
Simulation (LES) techniques with Machine Learning (ML) to retain only the
largest dynamics explicitly, while small-scale dynamics are described by an
ML-based sub-grid-scale model. Applying this novel approach to self-driven
plasma turbulence allows us to remove large parts of the inertial range,
reducing the computational effort by about three orders of magnitude, while
retaining the statistical physical properties of the turbulent system. | Robin Greif, Frank Jenko, Nils Thuerey | 2023-09-28T12:46:54Z | http://arxiv.org/abs/2309.16400v1 | # Physics-Preserving AI-Accelerated Simulations of Plasma Turbulence
###### Abstract
Turbulence in fluids, gases, and plasmas remains an open problem of both practical and fundamental importance. Its irreducible complexity usually cannot be tackled computationally in a brute-force style. Here, we combine Large Eddy Simulation (LES) techniques with Machine Learning (ML) to retain only the largest dynamics explicitly, while small-scale dynamics are described by an ML-based sub-grid-scale model. Applying this novel approach to self-driven plasma turbulence allows us to remove large parts of the inertial range, reducing the computational effort by about three orders of magnitude, while retaining the statistical physical properties of the turbulent system.
_Introduction._ The advent of computing, and now machine learning (ML), has enabled scientists to address challenging scientific questions that have previously been intractable. One of the most prominent of these is the study of turbulence, with applications ranging from quantum physics [1] to astrophysics [2] - involving liquids, gases, and plasmas [3; 4]. However, even exascale computing will not allow us to tackle some of the most pressing open issues in a brute-force style.
As it turns out, the combination of computing and machine learning offers some unique opportunities along these lines. This is the topic of the present Letter.
One popular approach for efficiently computing the dynamics of turbulent systems for a wide range of applications is the Large Eddy Simulation (LES) technique. Here, the system is simulated with only the largest scales resolved explicitly, while the unresolved scales are accounted for by a Sub-Grid-Scale (SGS) model [5; 6; 7]. In the following, we will give this old idea a new twist. Specifically, we will develop an SGS model based on a Neural Network (NN) with Learned Corrections (LC) on the resolved scales to create a hybrid numerical and ML approach. As will be demonstrated below, by using a non-propagated field, this approach can be remarkably effective allowing us to cut off virtually the entire inertial range, just retaining the drive range [8]. This is fundamentally different from previous studies, which focused on the much simpler problem of removing diffusion-dominated scales in the dissipation range [9; 6]. However, removing (large) parts of the inertial range while retaining the integrity of the cascade dynamics has been the major challenge facing LES approaches [10]. Simply extending approaches that work within the dissipation range to the inertial range is typically not a viable option. In this Letter, we introduce a model that is able to overcome these difficulties and do so very efficiently. In fact, it is able to produce physically indistinguishable results even when removing (large) parts of the inertial range, while allowing for a relative speedup of about three orders of magnitude.
_Turbulent system._ To illustrate the power of our approach, we will address an open issue of utmost practical importance in the area of contemporary turbulence research - namely the need to understand, predict, and control turbulent flows that are observed in magnetic confinement fusion plasmas [11; 12]. To create burning (i.e., self-heated and electricity-producing) plasmas, the energy confinement time of a fusion device needs to exceed a threshold set by the so-called Lawson criterion [13]. This very quantity is determined by plasma turbulence [13; 14]. The underlying nonlinear plasma dynamics can be described, e.g., by the two-fluid Hasegawa-Wakatani (HW) model [15; 16], which produces quasi-stationary turbulent states, whose statistical properties have been studied and documented thoroughly [15].
The HW model describes the turbulent transport perpendicular to the confining toroidal magnetic field, providing the time evolution of the plasma density \(n(x,y,t)\) and the vorticity \(\Omega(x,y,t)\):
\[\partial_{t}n = c_{1}\left(n-\phi\right)-\left[\phi,n\right]-\kappa_{n}\partial_{y}\phi-\nu\nabla^{2N}n\,, \tag{1}\] \[\partial_{t}\Omega = c_{1}\left(n-\phi\right)-\left[\phi,\Omega\right]-\nu\nabla^{2N}\Omega\,. \tag{2}\]
Here, the Poisson bracket is defined as \([a,b]=\partial_{x}a\,\partial_{y}b-\partial_{y}a\,\partial_{x}b\), and the vorticity \(\Omega\) relates to the electrostatic potential \(\phi\) via \(\Omega=\nabla^{2}\phi\). The model contains three parameters: the drive strength \(\kappa_{n}\), which describes the degree of inhomogeneity of the background plasma density (in the \(x\) direction), the so-called adiabaticity parameter \(c_{1}\), which is inversely proportional to the resistivity of the plasma, and the hyperviscosity parameter \(\nu\), which determines the onset of dissipation on small scales. In the hydrodynamic limit of large resistivity (\(c_{1}\to 0\)), the model reduces to the Navier-Stokes (NS) equation. This property, in a sense, makes it a more generalizable testing ground that allows building on top of recent developments in simulating NS turbulence using traditional solvers with learned components [9; 6; 17].
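For readers who want to experiment with the model, the sketch below (plain NumPy, not the production solver) evaluates the Poisson bracket with centered differences on a doubly periodic grid and assembles the right-hand sides of Eqs. (1)-(2); the hyperviscosity is shown for order \(N=2\), and all discretization choices are illustrative.

```python
import numpy as np

def ddx(f, dx):
    """Centered x-derivative on a doubly periodic grid (x along axis 0)."""
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * dx)

def ddy(f, dy):
    """Centered y-derivative on a doubly periodic grid (y along axis 1)."""
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * dy)

def poisson_bracket(a, b, dx, dy):
    """[a, b] = da/dx db/dy - da/dy db/dx."""
    return ddx(a, dx) * ddy(b, dy) - ddy(a, dy) * ddx(b, dx)

def laplacian(f, dx, dy):
    return (np.roll(f, -1, 0) - 2.0 * f + np.roll(f, 1, 0)) / dx**2 \
         + (np.roll(f, -1, 1) - 2.0 * f + np.roll(f, 1, 1)) / dy**2

def hw_rhs(n, omega, phi, c1, kappa_n, nu, dx, dy):
    """Right-hand sides of Eqs. (1)-(2); hyperviscosity shown for order N = 2 (nabla^4)."""
    def hyper(f):
        return laplacian(laplacian(f, dx, dy), dx, dy)
    dn = c1 * (n - phi) - poisson_bracket(phi, n, dx, dy) \
         - kappa_n * ddy(phi, dy) - nu * hyper(n)
    domega = c1 * (n - phi) - poisson_bracket(phi, omega, dx, dy) - nu * hyper(omega)
    return dn, domega
```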
A typical snapshot - taken in the fully developed turbulent phase of a 2D HW simulation (HW2D)
is shown in Fig. 1. The observed highly irregular flow patterns in space and time are characteristic of turbulence. However, the resulting quasi-stationary states far from thermodynamic equilibrium are remarkably robust in a statistical sense [18]. For a high-dimensional complex system like this, validation in a strictly deterministic and microscopic sense is meaningless -- any small numerical variation will lead to drastically differently looking states on a very short time horizon. For studies of plasma turbulence via the HW model, the single most important quantity of interest is the turbulent particle flux \(\Gamma_{n}\), which can be written in real-space and spectral coordinates as
\[\Gamma_{n}(t)=-\!\!\iint\!\mathrm{d}^{2}\!x\,\,\,n\,\partial_{y}\phi=-\!\!\int\!\mathrm{d}k_{y}\,\,ik_{y}\,n(k_{y})\,\phi^{*}(k_{y}) \tag{3}\]
where the spectral representation employs averages in \(x\) and \(\phi^{*}(k_{y})\) marks the complex conjugate of \(\phi(k_{y})\). During the quasi-stationary turbulent phase, the value of \(\Gamma_{n}(t)\) will fluctuate temporally around a stable long-term mean in a characteristic manner, making it a statistical property of the system. Therefore, we will employ a statistical paradigm to validate that machine learning imitates the physical effects rather than learning specific states explicitly. In this approach, an idealized simulator retains not only the visual dynamics and average values, but in fact the underlying distribution from which the physical values are sampled. This marks the strongest possible verification of physicality that can be envisioned for complex systems without analytic solutions, and therefore a significantly stricter evaluation than previously employed.
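A minimal sketch of how Eq. (3) can be evaluated numerically is given below, both directly in real space and via the \(k_{y}\)-resolved cross spectrum; the synthetic test fields, the box size, and NumPy's FFT sign/normalization conventions are assumptions of the sketch rather than details of the Letter's implementation.

```python
import numpy as np

def gamma_n_spectrum(n, phi, ly=2.0 * np.pi):
    """Box-averaged particle flux Gamma_n and its k_y-resolved spectrum.

    n, phi: real 2D arrays, x along axis 0 and y along axis 1, periodic box of length ly in y.
    Sign/normalization follows NumPy's FFT convention; Eq. (3) uses the Letter's own convention.
    """
    nx, ny = n.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
    n_k, phi_k = np.fft.fft2(n), np.fft.fft2(phi)
    # <n d_y phi> = Re sum_k n_k conj(i k_y phi_k) / (nx*ny)^2  (discrete Parseval identity)
    cross = n_k * np.conj(1j * ky[None, :] * phi_k)
    gamma_ky = -np.real(cross.sum(axis=0)) / (nx * ny) ** 2   # contribution of each k_y
    return gamma_ky.sum(), gamma_ky

# Synthetic cross-check against the real-space definition -<n d_y phi>.
nx = ny = 64
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
y = np.linspace(0.0, 2.0 * np.pi, ny, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
n = np.cos(3 * Y) + 0.2 * np.sin(2 * X + Y)
phi = np.sin(3 * Y) + 0.1 * np.cos(X - 2 * Y)

ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=2.0 * np.pi / ny)
dphi_dy = np.real(np.fft.ifft2(1j * ky[None, :] * np.fft.fft2(phi)))
print(-np.mean(n * dphi_dy))            # real-space evaluation
print(gamma_n_spectrum(n, phi)[0])      # spectral evaluation, matches to round-off
```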
_Novel ML-based LES technique._ Direct Numerical Simulations (DNS) of plasma turbulence tend to require significant computational resources. In Cartesian space, for a square box resolving wavenumbers of multiples of \(k_{0}=0.15\), the minimum spatial resolution to retain key physical properties in the HW system is 512x512, as careful convergence tests have revealed. The associated time step is \(\Delta t=0.025\), using a 4th-order Runge-Kutta (RK4) scheme.
The hyperviscosity coefficient \(\nu\) of order \(N\) and the resolution determine the onset of diffusion in the system. The dissipative scales (which are dominated by diffusion) provide a trivial application space for convolutional neural networks, since diffusion can be expressed via convolutional kernels [6; 9].
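The connection is easy to see in code: a single explicit diffusion substep is exactly a small convolution, so a convolutional layer can represent it without approximation. A minimal sketch (the 5-point stencil and `scipy.ndimage.convolve` usage are standard; the test field and parameter values are illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

# 5-point Laplacian stencil: one explicit diffusion substep is a single 3x3 convolution,
# which a convolutional layer can represent exactly (and a CNN can therefore learn easily).
lap_kernel = np.array([[0.0, 1.0, 0.0],
                       [1.0, -4.0, 1.0],
                       [0.0, 1.0, 0.0]])

def diffusion_step(f, nu, dt, dx):
    """Explicit Euler step of df/dt = nu * laplacian(f) on a periodic grid."""
    return f + nu * dt / dx**2 * convolve(f, lap_kernel, mode="wrap")

f = np.random.default_rng(1).standard_normal((64, 64))
f_next = diffusion_step(f, nu=1e-2, dt=0.1, dx=1.0)
```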
Previous LC-SGS approaches have stayed strictly in the diffusion-dominated range [6], since the approach breaks down when cutting into the inertial range. Even there, small differences accumulate through the positive-feedback loops from repeated applications of NNs and had to be mitigated through temporal unrolling on the order of 100 simulation steps [9]. The approach presented in this Letter, however, is able to remove significant portions of the inertial range, while maintaining the physicality of the system with just three steps unrolled.
We resolve this fundamental limitation faced by LC-SGS models by introducing the potential-based surrogate correction (PSC). By restricting the input to exclude model-propagated fields and projecting the information into the potential instead, we prevent the information flow from forming the common positive-feedback loop [6; 9]. This means that the prediction of the low-resolution DNS is transformed into the potential \(\tilde{\phi}\), which is fed as a surrogate into the SGS-NN to derive the LC on the model fields \(n\) and \(\Omega\). With this approach, we are able not only to keep the simulations of the model stable, but also to demonstrate that the network performs almost perfectly as an idealized simulator without a fully resolved inertial range. A sample evaluation illustrating this stability after running for three million steps is shown in the first row of Fig. 2. By allowing us to cut into the inertial range, this approach reduces the size of the simulation by a factor of 256, from 512x512 \(\rightarrow\) 32x32.[19] We further significantly reduce the computational effort by going from an RK4 scheme to an Euler prediction -- while retaining the timestep size.
This timestep is five times larger than what is required for predictor-corrector schemes, and can be increased by another factor of five without affecting the results.
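Schematically, one PSC hybrid update can be sketched as follows: the coarse solver advances \(n\) and \(\Omega\), the surrogate potential \(\tilde{\phi}\) is obtained by inverting \(\nabla^{2}\tilde{\phi}=\Omega\) spectrally, and only \(\tilde{\phi}\) (never the propagated fields themselves) is passed to the correction network. The `coarse_step` and `correction_net` callables are placeholders for the low-resolution solver and the trained SGS network, which are not specified at code level in this Letter.

```python
import numpy as np

def solve_potential(omega, kx, ky):
    """Spectral Poisson solve of nabla^2 phi = Omega on a doubly periodic grid."""
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    k2[0, 0] = 1.0                       # avoid division by zero; fix the k=0 mode below
    phi_k = -np.fft.fft2(omega) / k2
    phi_k[0, 0] = 0.0                    # zero-mean gauge for the potential
    return np.real(np.fft.ifft2(phi_k))

def psc_step(n, omega, coarse_step, correction_net, kx, ky):
    """One hybrid update: low-resolution prediction plus potential-based learned correction."""
    n_pred, omega_pred = coarse_step(n, omega)            # e.g. an Euler step of Eqs. (1)-(2)
    phi_surrogate = solve_potential(omega_pred, kx, ky)   # derived, non-propagated field
    dn, domega = correction_net(phi_surrogate)            # the NN only ever sees phi, not n/Omega
    return n_pred + dn, omega_pred + domega
```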
Figure 1: Snapshot of a DNS showing the characteristic plasma turbulence of the HW model in the fully saturated turbulent regime (t=300). Note, only \(n\) and \(\Omega\) are fields propagated in the model and the potential \(\phi\) is derived from these.
Figure 2: Top: PSC Model run continuously for about 3,000,000 steps after being trained on a window of 3 frames. Bottom: Downsampled ground truth DNS at t=1,000.
However, this property is a phenomenon previously encountered [6; 9] and, being an engineering optimization, is left for the supplementary information so that we can focus here on the more critical, novel physicality of the solution with an under-resolved inertial range. As will be shown next, the proposed approach retains the statistical properties of the turbulent system - which is a remarkable achievement.
_Preserving the statistical properties_. We compare different methods in the drastically reduced physical space of 32x32, primarily based on their preservation of the critically important turbulent particle flux over time \(\Gamma_{n}(t)\) (see Eq. 3). In the following, **Downsampled** refers to reduced representations of the high-resolution DNS that are provided as ground-truths for training. This value is compared to previous approaches that include and learn on the model-propagated fields (\(n\), \(\Omega\)) in the network, named **previous()**, where the brackets denote the fields used and the number of timesteps unrolled for training. In our experiments of more than 5,000 trained networks, only about 1% of these evaluated setups produced stable models. Therefore, we steelman our argument by cherry-picking the best of these previous approaches as the baseline **previous(\(n\),\(\Omega\),15)**, with two others shown to illustrate the characteristic divergence when extrapolating long-term (**previous(\(n\),\(\Omega\),\(\phi\),5)**, **previous(\(n\),\(\Omega\),5)**). In contrast, the PSC approach produced stable simulations in every single one of the 500+ initializations we attempted, even with as few as 3 timesteps unrolled. We evaluated these on timeframes up to \(10^{6}\) times larger than those they were trained upon (see Fig. 2). Additionally, the PSC approach demonstrated no signs of over-fitting even when continuing training for a factor of 100 beyond what produced stable simulations, indicating further robustness of the approach. Finally, fine-tuned low-resolution DNS are given as references, denoted by the fourth-order time integration scheme used, **DNS(rk4)**.
The timetraces of \(\Gamma_{n}(t)\) are shown in Fig. 3. The PSC approach, even with 3 steps of unrolling, **PSC(3)**, shows a mean deviation of less than 0.1% w.r.t. the reference data **Downsampled**. Even the cherry-picked best models from previous approaches accumulate errors before reaching an equilibrium, resulting in significantly larger values for \(\Gamma_{n}(t)\). The example visualized here found an equilibrium at twice the target value. In relative terms, therefore, the mean value of PSC simulations is 2,000 times more accurate than the best mean value of **previous** models and 700 times more accurate than artificially fine-tuned **DNS(rk4)** simulations. Our approach accomplishes this feat with only a single gradient computation, compared to four previously, for a theoretical reduction in computational effort by a factor of \(10^{4}\). Empirically, we observe roughly half of this ideal value without introducing specific optimizations.
This marks a significant improvement to previous methods in two fields: generalizability and efficiency.
First, previous non-LC approaches often relied on explicitly enforcing retention of conservation laws [20; 21; 22; 23] for systems to remain stable and physical when extrapolating far beyond the time-scales they were trained upon or into non-dissipative regions. Our approach contains no model-specific knowledge.
Secondly, generalized LC approaches previously required the network to be exposed to \(10^{2}\) steps that it generated during training [6; 9]. This vastly increases resource usage for training, thereby limiting the complexity of models it can be applied to. This means that, previously, more than 99% of the data had to be affected by the network, whereas the PSC approach required just over half, or 2 out of 3 time steps in training, to be affected by it. Going to fewer steps, e.g., 2, would mean the network is no longer dominantly trained on the data it affected. Therefore, the PSC approach reached the lower bound for temporal unrolling, where _just_ over half the data generated in a training step is affected by the network. At the same time, the PSC data presented here extrapolates orders of magnitude further into the future (far be
Figure 3: Comparison of \(\Gamma_{n}(t)\) for 32x32 slices of: DNS(rk4) **Downsampled** from 512x512, a fine-tuned **DNS(rk4)**, best versions of **previous** LC approaches using different fields and temporal unrolling, and the **PSC** approach parametrized in its temporal unrolling. The only previous model that remained stable produced results that are unphysical by a factor of 2, while all other trained versions diverged.
| Variant | \(\Gamma_{n}\pm\delta\Gamma_{n}\) | \(\Gamma_{c}\pm\delta\Gamma_{c}\) | \(E\pm\delta E\) | \(U\pm\delta U\) |
| --- | --- | --- | --- | --- |
| Downsampled | 0.57\(\pm\)0.05 | 0.45\(\pm\)0.03 | 3.69\(\pm\)0.29 | 8.16\(\pm\)0.41 |
| **Our Model** | **0.57\(\pm\)0.05** | **0.46\(\pm\)0.03** | **3.33\(\pm\)0.24** | **8.08\(\pm\)0.40** |
| Previous | 0.84\(\pm\)0.10 | 1.11\(\pm\)0.14 | 8.38\(\pm\)0.81 | 17.23\(\pm\)1.61 |
| DNS(rk4) | 0.92\(\pm\)0.13 | 0.51\(\pm\)0.08 | 10.89\(\pm\)1.65 | 15.87\(\pm\)2.20 |

Table 1: Physical values at 32x32. Comparison of the physical values for the **Downsampled** DNS from 512x512 to 32x32, **Our Model** of the PSC, **Previous** approaches using the model fields themselves, and fine-tuned **DNS(rk4)**, all run at 32x32: \(\Gamma_{n}\) (particle flux), \(\Gamma_{c}\) (primary sink), \(E\) (energy), and \(U\) (enstrophy).
yond the training time window) than previous studies to demonstrate stability. Furthermore, the large training windows that were previously used introduce implicit time-smoothing that eliminates effects present on temporal scales smaller than the training window. Just as a reminder, the PSC approach achieved this while removing significant parts of the inertial range and not simply modeling diffusion-dominated regions, marking a huge step forward towards efficient, physically consistent data-driven methods, the lack of which currently stunts their widespread adoption in the computational sciences.
Moving to a first- and second-moment statistical evaluation, Table 1 shows physical properties evaluated for 24 simulations over time well within the turbulent phase (\(t\in(300,1000)\)), with the mean and standard deviation across the 24 simulations separated by \(\pm\). All PSC values fall within a single standard deviation of the downsampled DNS, except for the energy, which lies slightly outside the one-sigma bounds.
This marks the first indication that the PSC-LC are mimicking the contributions of unresolved scales.
In the following, we will solidify this conclusion via additional careful analysis of the hybrid simulations and investigate how far we get towards an idealized generator that preserves all effects of the unresolved physics.
Moving beyond the analysis of scalar quantities, we will now consider the properties of various spectral distributions. Obviously, this step represents a significant refinement of our systematic comparison of the direct and hybrid simulations.
Considering once again the turbulent particle flux \(\Gamma_{n}\), it is of great interest to inspect the \(k_{y}\) spectrum to determine the contributions stemming from different spatial scales: \(\Gamma_{n}(k_{y})=ik_{y}\,n(k_{y})\;\phi^{*}(k_{y})\) (see Eq. 3). Visualizing this wavelength spectrum representation in Fig. 4, we expect a distinct bell shape around \(2\times 10^{-2}\) from the high-resolution DNS, here called **Numerical Integrator (512x512)**. Due to the coarser resolution, the high-\(k_{y}\) (short wavelength) tail is not resolved, leading intrinsically to a slight underestimation of \(\Gamma_{n}\) at very low resolution, as in the downsampled case here. In these representations, **DNS(rk4)** on 32x32 had to be left out, as the scales would make the images unreadable. Previous approaches overestimated the turbulent source across the spectrum to retain stability. The PSC, meanwhile, results in a slight shift towards lower frequencies to account for the turbulent flux contributions of unresolved scales. Given the coarse spectral representation, these are still _very_ close to the ground truth, especially with respect to the spectral coupling strength at the respective wavelengths.
While these changes remain within reasonable limits for \(32\times 32\) hybrid simulations, it is clear that a further reduction to, say, \(16\times 16\) grid points is not possible - unless other simulation parameters (like the box size) are adapted. Such attempts shall not be pursued here.
Another spectral quantity that is important in studies based on the HW model is the phase shift between the two complex-valued variables \(n(k_{y})\) and \(\phi(k_{y})\), originating from the linear drift waves, as defined by
\[\delta(k_{y})=\Im[\log n(k_{y})^{*}\phi(k_{y})] \tag{4}\]
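Numerically, Eq. (4) amounts to taking the complex argument of the \(n\)-\(\phi\) cross spectrum; a short sketch is given below (averaging the cross spectrum over \(x\) before extracting the phase is one common choice and is assumed here, as are the axis conventions):

```python
import numpy as np

def phase_shift(n, phi):
    """delta(k_y) = Im[log(n*(k_y) phi(k_y))], i.e. the n-phi cross phase for each k_y."""
    n_k = np.fft.rfft(n, axis=1)          # transform along y (axis 1)
    phi_k = np.fft.rfft(phi, axis=1)
    cross = np.conj(n_k) * phi_k          # n*(k_y) phi(k_y), still resolved in x
    return np.angle(cross.mean(axis=0))   # average over x, then take the phase
```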
In HW turbulence, like in many other turbulent systems characterized by nonlinearly coupled waves, one tends to find that these distributions are centered around the respective linear values and have a small to moderate width. The \(\delta(k_{y})\) spectra from our hybrid model and the corresponding direct simulation are plotted in Fig. 4 (top right). Once again, the model traces the ground truth closely within about one standard deviation throughout the retained scale range of the hybrid simulation. Previous approaches had significant difficulties retaining this shape, resulting in inconsistent phase angles, thereby changing the physical dynamics of the system.
Given the phase angles' dependence on the model fields in Fourier space, their wave spectra are additionally given in Fig. 4. In this case, they provide additional support that the PSC approach retains the memory of the initial linear phases of the system in a manner consistent with the physical model. It should be noted here that no information about the spectral properties was introduced into the training or design of the system: the network produced the dynamics that preserve spectral properties simply from seeing 3 slices in Euclidean space at a time. As
Figure 4: The spectral comparison shown is between high-resolution DNS, the downsampled version, our model, and a previous reference model respectively. Shaded areas mark \(\pm 1\sigma\) across 25 simulations of their mean over time. Additional spectra are given on the right. Significant improvements in preserving accuracy, precision, and shape can be observed in comparison to previous models across all four spectra. Upper: Spectral distribution of \(\Gamma_{n}(k_{y})\), describing the strength of the coupling of that wavenumber/wavelength to the background gradient. Lower: drift-wave angle \(\delta_{k}\).
such, these spectral results provide further evidence towards the physicality of the corrections that the neural network provided, while also visualizing some sources for the discrepancies arising in previous approaches.
Finally, we now show that our hybrid model actually preserves the underlying statistical distributions themselves. For turbulent systems, as with all complex systems, the statistical distribution of physical values describes the fundamental physical process underpinning it [24, 25]. Given the chaotic nature of turbulent systems, no exact value or state can be recreated in finite precision computations. Therefore, showing that these very distributions are reproduced provides the strongest possible verification of the hybrid model, suggesting that it actually accounts for the relevant physical effects instead of simply creating similar looking data.
To achieve this, we will consider the discrete Cumulative Distribution Function (CDF) of the values of \(\Gamma_{n}\) over time and simulations. The discrete CDF is preferred to remove any suggestion of hiding results behind smoothing effects from transforming discrete simulations into continuous probability distribution functions. For the values of \(\Gamma_{n}\), this is defined as \(F_{\Gamma_{n}}(x)=P(\Gamma_{n}\leq x)\) where x is the value of \(\Gamma_{n}\), visualized on the x-axis. Therefore, it shows on the y-axis the fraction of values smaller than or equal to the corresponding value on the x-axis, with y ranging from 0 to 1 (100% of the data is smaller than this value).
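Computing this discrete CDF from the collected \(\Gamma_{n}\) time series is straightforward; a minimal sketch is given below (the synthetic sample values stand in for the actual simulation output):

```python
import numpy as np

def discrete_cdf(samples):
    """Empirical CDF: for each sorted value x_i, the fraction of samples <= x_i."""
    x = np.sort(np.asarray(samples).ravel())
    f = np.arange(1, x.size + 1) / x.size
    return x, f

# e.g. gamma_n_runs holds Gamma_n(t) for 24 simulations over the turbulent phase
gamma_n_runs = np.random.default_rng(2).normal(0.57, 0.05, size=(24, 700))
x, f = discrete_cdf(gamma_n_runs)        # plot f against x to obtain a curve like Fig. 5
```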
Considering one last time \(\Gamma_{n}\), Fig. 5 shows on the top that the cumulative distribution of values recovers the shape within the shaded \(\pm\) standard deviation of the ground truth, with almost identical bounds across the entire range of values. Even when comparing higher-order moments, like the skewness (0.24 vs 0.22) or the kurtosis (0.13 vs 0.12), the results from our model and the downsampled high-resolution DNS data are once again very close. In fact, once adjusted for a small constant offset, statistical tests like the Kolmogorov-Smirnov test cannot reject the null hypothesis that the data were sampled from the same distribution. Meanwhile, the significant deviation in shape and position of the previous models for the turbulent particle flux is self-evident.
Lastly, we include three other metrics in all figures to further emphasize that the \(\Gamma_{n}\) evaluation metric was not cherry-picked, but that the PSC approach is in fact able to retain the distribution of physical properties that are determined by the dynamics of the turbulent system over time. Adopting a source-sink perspective, \(\Gamma_{n}\) takes the role of the turbulent source term in the system. The most important of the counteracting sinks, \(\Gamma_{c}\), shown on the bottom left in Fig. 5, exhibits remarkably similar results, including its higher-order statistical moments. To summarize the dynamics, the time variations of the energy, \(\partial_{t}E\), and enstrophy, \(\partial_{t}U\), of the open system are given to show their similarity.
In fact, these results for the CDF are representative of the other source and sink terms that define the motion of the system. These results mark the strongest possible indication that the PSC-based LC model presented here does in fact reproduce the physical effects of unresolved scales even when removing significant parts of the inertial range for the HW model.
_Summary._ In this Letter, we have presented, applied, tested, and discussed a potential-based NN-SGS method for LES that preserves the physical effects of unresolved scales even when cutting into the inertial range of the turbulent motion.
Our method allows the neural network to efficiently learn how to correct corresponding simulations with a grid 256 times coarser and timesteps up to 5 times larger than what was required to produce a converged reference solution. Meanwhile, it provides results for key physical quantities of the turbulent system that are almost indistinguishable from their DNS counterparts - not simply visually, in averages, in standard deviations, or spectrally, but even in terms of their statistical distributions. These findings provide a significant step forward in the area of turbulence simulations and highlight the value of further developing AI-based techniques for the acceleration of important computational problems.
The approach presented here used as little model-specific information as possible, to strengthen the case for potential generalization to other physical systems. This
Figure 5: Fraction of values smaller than the value on the x-axis over time (discrete cumulative distribution function) for the downsampled data, our model, and previous approaches, with the integrated \(\Gamma_{n}\) on the top, and \(\Gamma_{c}\), \(\partial_{t}U\), and \(\partial_{t}E\) below from left to right. Once again, means are shown as solid lines with standard deviations across simulations shaded, indicating clearly that not only the mean and standard deviation are preserved, but, as evident from the shape, also the higher-order statistical moments of the distributions.
implies, in turn, that many further optimizations for this specific model were potentially left on the table, such as, e.g., the inclusion of information from the frequency domain and known conservation laws. In addition, our approach has not yet been optimized for using the least amount of data or targeting the shortest training time. Despite that, we find an empirical increase of the computational performance by about three orders of magnitude. Further enhancements are left for future work.
|
2308.16729 | JavaScript Dead Code Identification, Elimination, and Empirical
Assessment | Web apps are built by using a combination of HTML, CSS, and JavaScript. While
building modern web apps, it is common practice to make use of third-party
libraries and frameworks, as to improve developers' productivity and code
quality. Alongside these benefits, the adoption of such libraries results in
the introduction of JavaScript dead code, i.e., code implementing unused
functionalities. The costs for downloading and parsing dead code can negatively
contribute to the loading time and resource usage of web apps. The goal of our
study is two-fold. First, we present Lacuna, an approach for automatically
detecting and eliminating JavaScript dead code from web apps. The proposed
approach supports both static and dynamic analyses, it is extensible and can be
applied to any JavaScript code base, without imposing constraints on the coding
style or on the use of specific JavaScript constructs. Secondly, by leveraging
Lacuna we conduct an experiment to empirically evaluate the run-time overhead
of JavaScript dead code in terms of energy consumption, performance, network
usage, and resource usage in the context of mobile web apps. We applied Lacuna
four times on 30 mobile web apps independently developed by third-party
developers, each time eliminating dead code according to a different
optimization level provided by Lacuna. Afterward, each different version of the
web app is executed on an Android device, while collecting measures to assess
the potential run-time overhead caused by dead code. Experimental results,
among others, highlight that the removal of JavaScript dead code has a positive
impact on the loading time of mobile web apps, while significantly reducing the
number of bytes transferred over the network. | Ivano Malavolta, Kishan Nirghin, Gian Luca Scoccia, Simone Romano, Salvatore Lombardi, Giuseppe Scanniello, Patricia Lago | 2023-08-31T13:48:39Z | http://arxiv.org/abs/2308.16729v1 | # JavaScript Dead Code Identification, Elimination, and Empirical Assessment
###### Abstract
Web apps are built by using a combination of HTML, CSS, and JavaScript. While building modern web apps, it is common practice to make use of third-party libraries and frameworks, as to improve developers' productivity and code quality. Alongside these benefits, the adoption of such libraries results in the introduction of JavaScript dead code, i.e., code implementing unused functionalities. The costs for downloading and parsing dead code can negatively contribute to the loading time and resource usage of web apps. The goal of our study is two-fold. First, we present _Lacuna_, an approach for automatically detecting and eliminating JavaScript dead code from web apps. The proposed approach supports both static and dynamic analyses, it is extensible and can be applied to any JavaScript code base, without imposing constraints on the coding style or on the use of specific JavaScript constructs. Secondly, by leveraging Lacuna we conduct an experiment to empirically evaluate the run-time overhead of JavaScript dead code in terms of energy consumption, performance, network usage, and resource usage in the context of mobile web apps. We applied Lacuna four times on 30 mobile web apps independently developed by third-party developers, each time eliminating dead code according to a different optimization level provided by Lacuna. Afterward, each different version of the web app is executed on an Android device, while collecting measures to assess the potential run-time overhead caused by dead code. Experimental results, among others, highlight that the removal of JavaScript dead code has a positive impact on the loading time of mobile web apps, while significantly reducing the number of bytes transferred over the network.
Dead code, JavaScript.
## 1 Introduction
Web apps are built by using a combination of HTML, CSS, and JavaScript. To increase developers' productivity via code reuse, we have been witnessing a proliferation of third-party libraries and frameworks, ranging from Model-View-Controller (MVC) frameworks, efficient DOM manipulators, User-Interface (UI) kits, etc. [1]. This phenomenon is happening not only for browser-based web apps, but even for mobile [2] and desktop software [3]. In addition to the speed-up of the development, the use of these libraries and frameworks--which are usually well tested and maintained--positively affects the quality of the implemented web-based solutions (or also _web apps_ from here onwards). Unfortunately, this comes at the price of an increase in their execution time and higher usage of resources. For example, given a web app, the used JavaScript framework could include unused functionalities that are never executed. In such a context, the code implementing unused functionalities is known as _dead code_[4]. Besides the obvious cost of increased file size and network transfer time, there is an additional hidden cost to dead code: despite JavaScript dead code never being executed at run-time, it is still downloaded and parsed by the JavaScript engine. This overhead can take a significant portion of the complete execution time of JavaScript code [5]. The costs for downloading and parsing dead code can negatively contribute to the loading time and energy consumption of web apps.
While some approaches have been developed to minimize this overhead (e.g., lazy parsing1 and script streaming2), dead code identification and elimination is still an open problem in web apps [6]. As far as the identification of JavaScript dead code in web apps, the currently available solutions either: _(i)_ impose a certain coding style to developers, banning certain code structures (e.g., object reflection), or _(ii)_ require specific constructs of the JavaScript specification. An example of the latter is the use of modules, which allow developers to specify self-contained namespaces in JavaScript and to conditionally load them when needed. While modules are certainly useful in terms of maintainability and code reuse, most web apps today have not been built with modules in mind [1].
Footnote 1: [https://v8.dev/blog/preparser](https://v8.dev/blog/preparser)
Footnote 2: [https://v8.dev/blog/v8-release-75](https://v8.dev/blog/v8-release-75)
Researchers have investigated the presence of dead code in web apps. For example, Boomsma _et al_. [7] reported that, in a subsystem of an industrial web app written in PHP, the developers removed 30% of the subsystem's files because these files were actually dead code. Eder _et al_. [8] observed that, in an industrial web app written in .NET, 25% of methods were dead. Surprisingly, no empirical studies have been conducted to assess the effect of JavaScript dead code on Web apps at run-time. For example, so far, no
empirical studies have been conducted to assess the impact of downloading and parsing the JavaScript dead code of a web app. In other words, the common belief is that there is a cost to pay when JavaScript dead code is present, but there is no evidence of its extent.
The goal of this paper is two-fold. First, we present _Lacuna_, an approach for automatically eliminating JavaScript dead code from web apps (Section 2.2). Secondly, we empirically evaluate the run-time overhead of JavaScript dead code in terms of energy consumption, performance, network usage, and resource usage in the context of mobile web apps (Section 3).
**Lacuna.** At the core of Lacuna lies the construction of a call graph \(G_{w}\) of the web app \(w\) being analysed; \(G_{w}\) is directed and represents JavaScript functions as nodes and the caller-callee relationship between functions as edges. In this context, dead code elimination consists of the removal of all the (connected) components in \(G_{w}\) that are isolated from the root node representing the global scope of the web app. The unique characteristic of Lacuna is its ability to _build and iteratively refine_ \(G_{w}\) by executing in sequence different program analysis techniques, each with its own potential support for specific aspects of the JavaScript language. Lacuna supports any kind of program analyses (both static and dynamic), provided that they are aimed at building a call graph of the JavaScript code being analysed. Lacuna is _extensible_ and independent from the used program analysis techniques, allowing developers and researchers to build the combination of analyses that best fits their own needs. Finally, Lacuna can be applied to any JavaScript-based web app, without imposing on the developer any constraints on coding style (e.g., banning the use of reflection or objects self-inspection) or on the use of specific JavaScript features (e.g., modules). We exploit this feature of Lacuna in our experiment, where we assess the run-time overhead of JavaScript dead code on 30 independently-developed mobile web apps.
**Experiment.** The goal of our experiment is to empirically assess the overhead that JavaScript dead code has when executing mobile web apps. We scope this experiment in the context of _mobile web apps_ since _(i)_ web browsers are more used on mobile devices [9], _(ii)_ the web browser is one of the most used apps on mobile devices [10], and _(iii)_ mobile devices tend to have limited processing power, poorer network capacities, and lesser memory with respect to desktop machines [5]. In this experiment, we target 30 mobile web apps independently developed by third-party web developers. The 30 web apps are divided into two different families: 15 _in-the-lab_ web apps and 15 _in-the-wild_ web apps. In-the-lab subjects are randomly sampled from the TodoMVC project [11]. This project contains different implementations of the "same" Todo web app, each using a different JavaScript MV* (Model View Anything) framework (e.g., AngularJS, React, Vue.js, etc.). Since all in-the-lab subjects share the same functionalities, they might negatively influence the experiment's external validity, making our results less generalizable. In order to mitigate this potential bias, we decided to complement the 15 in-the-lab subjects with 15 additional in-the-wild subjects; those subjects are sampled from the Tranco list [12] and include well-known web apps such as amazon.com, wikipedia.com, and youtube.com. We applied Lacuna four times on each of the 30 mobile web apps, each time eliminating dead code according to a different optimization level of Lacuna (see Section 2.2.4). Later, we executed each different version of the mobile web apps, while collecting measures where the presence of dead code might result in run-time overhead for the user experience or for the (technical, ecological) sustainability of mobile web apps. The most notable results of this experiment are:
* eliminating JavaScript dead code makes the considered mobile web apps _slightly more energy-efficient_ across all Lacuna optimization levels, but this phenomenon is not statistically significant;
* considered mobile web apps load faster when dead code is eliminated, especially for the most aggressive optimization level of Lacuna (this result is statistically significant, with a small effect size), however, the measures of _first contentful paint_ and _first paint_ do not show any noticeable improvement across the various Lacuna optimization levels;
* the elimination of JavaScript dead code leads to noticeable (and statistically significant) differences in terms of the number of performed HTTP requests only for in-the-lab subjects;
* the number of transferred bytes (significantly) diminishes when dead code is eliminated, especially for the most aggressive optimization level of Lacuna, with small effect size for in-the-lab subjects and medium effect size for in-the-wild subjects;
* CPU and memory usage tend to be (significantly) lower when dead code is eliminated from in-the-wild subjects, but not for in-the-lab subjects; GPU usage is (significantly) lower for in-the-lab subjects without JavaScript dead code, but not for in-the-wild ones.
An initial version of Lacuna was presented at the 2018 IEEE International Conference on Software Analysis, Evolution and Reengineering [6]. The first new contribution of this journal version consists of an in-depth description of the new features of Lacuna, described in Section 2.2.6. The current implementation of Lacuna has been completely redone and it is publicly available on GitHub [13]. Another new contribution of this paper is the empirical evaluation of Lacuna, for which we designed, conducted, and reported an experiment about the run-time overhead of JavaScript dead code in terms of energy consumption, performance (e.g., page load time), network usage, and resource usage in the context of mobile web apps. In summary, the **main contributions** of this paper are:
* the presentation of Lacuna, an extensible approach for JavaScript dead code elimination;
* the integration of five new third-party analysis techniques in Lacuna;
* a completely new and publicly available implementation of Lacuna in Node.js;
* an experiment on the run-time overhead of JavaScript dead code on 30 third-party web apps;
* a publicly available replication package [14].
The **target audience** of the research presented in this paper consists of _(i)_ web developers and _(ii)_ researchers. Web
developers can use the current implementation of Lacuna for removing dead code from their web apps, thus making their products more lightweight in terms of, e.g., network usage, load time, or energy consumption. Researchers can use Lacuna as a means for benchmarking their analysis techniques for JavaScript dead code elimination.
**Paper Structure.** The remainder of this paper is organized as follows. Section 2.1 provides background information on dead code, while Section 2.2 presents our extended version of Lacuna. In Section 3, we introduce the empirical study on the run-time overhead of JavaScript dead code. The obtained results are reported in Section 4. A discussion of the obtained results and the threats to validity of the experiment are presented in Section 5. Finally, Section 6 presents related work and Section 7 closes the paper with final remarks.
## 2 Background
In this section, we provide context and discuss preliminary concepts required in the remainder of our paper. We define the concept of dead code, discuss related research, provide a description of the inner workings of Lacuna, and describe the results of our internal and external evaluations of Lacuna.
### _Dead Code_
Dead code is part of the so-called _code smells_, a series of indicators and characteristics in the source code of a program that can possibly indicate a deeper problem. Although dead code was not considered by Fowler in his original catalog of code smells [15], it was later introduced in their respective code-smell catalogs by Brown [16], Wake [17] and Martin [18].
The perspective of these authors towards dead code is that of _refactoring_--i.e., dead code removal makes source code easier to comprehend and maintain [19, 20]. Developers, besides being interested in dead code for refactoring reasons, can be interested in _optimization_ and _energy-efficiency_ reasons. In other words, developers do not remove dead code because they are interested in improving source code comprehensibility and maintainability, but because they want to make their apps faster and/or lighter (optimization reason) or less energy-consuming (energy-efficiency reason). This perspective, that is the one taken in this paper, has practical implications: if the perspective is of refactoring, the removal of dead code does not regard external dependencies (e.g., libraries or framework); if the perspective is of optimization or energy-efficiency, developers need to remove dead code from external dependencies as well. Specifically, we adopt in our paper the optimization and energy-consuming perspectives when detecting and removing JavaScript dead code. Accordingly, we are not interested in the benefits, deriving from dead code removal, in terms of source code comprehensibility and maintainability; moreover, we remove JavaScript dead code from external dependencies as well.
In a survey among almost 9,300 JavaScript developers, code splitting and dead code elimination were the highest-rated requested features [21]. However, due to the highly-dynamic and event-based nature of JavaScript, it is hard to completely and correctly analyze JavaScript source code [22]. The features of this language pose challenges for analysis tools, making call-graph construction3 and dead-code removal especially difficult. To circumvent these challenges, currently available tools for the detection and removal of JavaScript dead code tend to prevent the use of language features (such as reflection) or require the application to meet certain characteristics. Bundlers like rollup.js and Webpack perform dead code elimination using a process known as tree-shaking [23]. This is an effective way of (partial) dead code elimination. Differently from Lacuna, however, tree-shaking requires the use of ECMAScript6 modules, which are not widely supported at the time of writing [24]. Moreover, it requires developers to meticulously write import and export statements, as otherwise unused functions might still be imported. The Google Closure Compiler is a tool that rewrites JavaScript code to improve download and execution speed. It analyzes the source code, removes dead code, and rewrites it to a more optimal form [25]. While the Closure Compiler is effective for dead code elimination, it requires, differently from Lacuna, a specific coding style. Recently, Kupoluyi _et al_. [26] proposed Muzeel, a black-box approach (to identify and remove dead code functions in JavaScript libraries) that requires neither knowledge of the code nor execution traces. To identify dead code functions, Muzeel performs dynamic analysis through the emulation of a user, implemented in a bot (i.e., an automated browser tool). One of the most remarkable differences with Muzeel is that Lacuna combines source code analysis and dynamic approaches to identify dead code functions, and this allows saving computation time in their identification.
Footnote 3: A call graph contains nodes that represent functions of the program and edges between nodes if there exists at least one function call between the corresponding functions.
The call graph representation of JavaScript programs is the base of many static analysis tools; not only for the detection of dead code but also to detect security issues [27]. For example, Antal _et al_. [27] compare five widely adopted static tools. In addition to (Google) Closure Compiler, the authors analyze npm cg, WALA, Approximate Call Graph (ACG), and Type Analyzer for JavaScript tools (TAJS). The authors observe a variance in the results of these tools (in terms of number, precision, and type of call edges) and suggest combining their output to get a better trade-off in the construction of call graphs. Chakraborty _et al_. [28] identify different root causes of missed edges in JavaScript static call graphs and an approach to build call graph representations of JavaScript programs. The approach works by identifying the dynamic function data flows relevant to each call edge missed by the static analysis. In the implementation of Lacuna, we take advantage of the findings reported in [27, 28] by combining the results of different static and dynamic analyzers to obtain a single call graph representation of a JavaScript program (see Section 2.2.5)
As compared with past research, Lacuna is based on both static and dynamic analyses, it can be easily extended, and it can be applied to any JavaScript code base (e.g., without imposing constraints on the developers' coding style). In this paper, we also present the results of an empirical
assessment of the possible run-time overhead of JavaScript dead code in terms of energy consumption, performance, network usage, and resource usage in the context of mobile web apps. The experimental subjects of our assessment were 30 third-party web apps (e.g., amazon.com, wikipedia.com, and youtube.com) that we have run on a real Android device.
**Running example**. In the remainder of this section, we will adopt the sample program of Listing 1 as a running example. The example program is composed of three functions: function \(a\) (lines 1-5 in Listing 1) is directly invoked from the global scope (line 16) and thus is reachable; function \(b\) (lines 7-9) is reachable as it is called from function \(a\) after a timeout has expired (lines 2-4); function \(c\) (lines 11-14) is not called by any other function and thus is unreachable and represents dead code.
```
 1  function a() {
 2    setTimeout(function() {
 3      b();
 4    }, 6000);
 5  }
 6
 7  function b() {
 8    console.log("6 seconds have passed");
 9  }
10
11  function c() {
12    console.log("function c has been called");
13    /* Other potentially heavy statements */
14  }
15
16  a.call();
```
Listing 1: Running example
### _Lacuna_
In this section, we describe the inner workings of Lacuna. The high-level workflow of Lacuna is outlined in Figure 1.
Lacuna takes as input \(w\), the source code of the web app being analyzed, and \(l\), the desired optimization level. It is important to note that these are the only inputs needed by Lacuna, making it applicable in the context of a wide spectrum of projects, independently of the used development process or company-specific practices. In the first phase, namely the **Parsing** phase, JavaScript code inside \(w\) is detected and parsed, and an initial Call Graph (CG) is built. The results of this phase are provided as input to the second phase, **Analysis**, in which multiple analysis techniques integrated into Lacuna are executed in parallel and the results of each of them are merged. Finally, the last phase, **Elimination**, is executed. In this phase, dead code is identified and the corresponding JavaScript source code is optimized according to the optimization level \(l\). The final output is \(w\)', an optimized version of \(w\) where the detected dead code is removed.
In the remainder of this section, we first introduce preliminary concepts and then describe the three phases (i.e., Parsing, Analysis, and Elimination) behind Lacuna. Finally, we provide implementation details, including the used technologies.
#### 2.2.1 Preliminary concepts
All the algorithms adopted in the parsing, analysis, and elimination phases operate on call graphs. A _call graph_ \(G=(V,E)\) is a directed graph where the set of nodes \(V\) represents JavaScript functions and the set of edges \(E\) represents the caller-callee relationship between functions. Specifically, an edge \(e_{ij}\) between node \(i\) and node \(j\) in \(G\) represents the fact that function \(i\) is able to call function \(j\). In the context of JavaScript web apps, a call graph always contains one root node; such a root node corresponds to the JavaScript global scope, which is always present and executed when the web app is run in the browser.4
Footnote 4: [http://www.w3schools.com/js/js_function_invocation.asp](http://www.w3schools.com/js/js_function_invocation.asp)
In this context, dead code elimination consists of the removal of all the (connected) components in \(G\) that are isolated from the root node representing the global scope of the web app. Due to the highly-dynamic and event-based nature of JavaScript, the identification of the edges of \(G\) is difficult [22, 29, 30]. As explained in Section 2.1, currently there is no technique for building correct and complete call graphs for JavaScript without imposing any constraints on developers or making strong assumptions on the usage of the language, e.g., having a complete test suite or prohibiting the use of reflection. To overcome this challenge, Lacuna leverages a set of external analysis techniques, \(A\). Lacuna considers the included analysis techniques as black-box components, with the only assumption being that each analysis technique \(a \in A\) adheres to the interfaces defined by Lacuna, meaning that it has to take as input an initial call graph \(G_0\) and the source code of \(w\), and builds its own call graph \(G_a\), leveraging principles and analysis techniques of choice for the identification of edges. This allows for the inclusion of analysis techniques that are either dynamic or static. In Section 2.2.5, we will show that this restriction is not limiting and several existing tools have been integrated into Lacuna with relatively low effort. Each edge in \(G_a\) will be labeled with the analysis technique that identified it. Thus, in our final call graph, built from the combination of all graphs \(G_a\), each edge can have multiple labels to take into account the fact that multiple analysis techniques can identify the same function call as reachable.
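To make this black-box contract concrete, the following is a minimal sketch of how a set of analyzers can each be run on the initial call graph \(G_0\) and the source code of \(w\) to produce their own labeled call graphs; the names used here (e.g., `runAnalyzers`, `analyze`) are illustrative and are not part of Lacuna's actual API.

```js
// Minimal sketch of the analyzer contract assumed by Lacuna: each analyzer is
// a black box that takes the initial call graph G0 and the app source code and
// returns its own call graph Ga, whose edges are labeled with the analyzer name.
// All identifiers below are illustrative, not Lacuna's real API.
const exampleAnalyzer = {
  name: "static",
  // A real analyzer would parse `sourceCode` and add caller->callee edges.
  analyze: async (g0, sourceCode) => ({
    nodes: [...g0.nodes],
    edges: [{ from: "global", to: "a", labels: [] }],
  }),
};

async function runAnalyzers(analyzers, g0, sourceCode) {
  // Run every analyzer independently (conceptually in parallel) on G0.
  return Promise.all(
    analyzers.map(async (a) => {
      const ga = await a.analyze(g0, sourceCode);     // black-box call
      ga.edges.forEach((e) => (e.labels = [a.name])); // label each edge with its analyzer
      return ga;                                      // one element of the set H
    })
  );
}

// Usage sketch:
// runAnalyzers([exampleAnalyzer], { nodes: ["global", "a"], edges: [] }, "a();")
//   .then((H) => console.log(H));
```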
Fig. 1: Workflow of Lacuna
#### 2.2.2 Parsing
In the parsing phase, Lacuna performs two main procedures, **Parse** and **InitializeCG**, both described in Algorithm 1. In the first, given as input the source code of the web app being analyzed \(w\), Lacuna identifies all the JavaScript code within it by considering _(i)_ all the JavaScript code defined in-line in all HTML files, _(ii)_ all JavaScript files referenced by the HTML code by means of the `<script>` tag, and _(iii)_ all the JavaScript files in \(w\) that are not referenced by any `<script>` tag (lines 1-6 in Algorithm 1). Once all the JavaScript code related to \(w\) has been identified, Lacuna parses it into an internal representation of all its statements to ease subsequent steps (lines 7-16). During this step, to enable full analysis and optimization of \(w\), all the externally hosted JavaScript code will be downloaded locally (lines 9-11). With the assumption that the entirety of the program is contained in a single example.js file, this first phase is trivial for our running example of Listing 1.
```
input : w, source code of the web app to analyze
output: G0, initial call-graph representation of w

 1  Function Parse(w) -> S:
 2  begin
 3      J_inline = JavaScript code defined in-line in w
 4      J_script = JavaScript code in files referenced by <script> tags in w
 5      J_file   = JavaScript code in files not referenced by any <script> tag in w
 6      J = J_inline ∪ J_script ∪ J_file
 7      S = ∅
 8      foreach j ∈ J do
 9          if j is externally hosted then
10              download j locally
11          end if
12          s = statements in j
13          S = S ∪ {s}
14      end foreach
15      return S
16  end

17  Function InitializeCG(S) -> G0:
18  begin
19      G0 = (V0 = ∅, E0 = ∅)
20      foreach function declaration f in S do
21          V0 = V0 ∪ {f}
22      end foreach
23      V0 = V0 ∪ {global}
24      return G0
25  end
```
Algorithm 1: Parsing Algorithm of Lacuna
Afterward, as part of the InitializeCG procedure, an initial call graph \(G_0 = (V_0, E_0)\) is instantiated. To this end, first, all function definitions within \(w\) are retrieved, including anonymous and inline functions, and a node for each identified function declaration is created in \(G_0\) (lines 17-22). Additionally, a starting node representing the JavaScript global scope is included in \(G_0\), so as to also consider all those functions directly called from the global scope of the web app (line 23). The \(G_0\) call graph of our running example contains five nodes, namely: the _global_ node, one node for each of the \(a\), \(b\), and \(c\) functions, and one node for the inline function defined in the setTimeout call. \(G_0\) does not contain any edges in this phase; they will be added in the next phase.
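As a rough illustration of this step, the sketch below uses the Esprima parser (the same parser adopted by Lacuna, see Section 2.2.5) to collect the function definitions of a script and build the node set of \(G_0\); it is a simplification written for the running example, not Lacuna's actual implementation, and the helper names are ours.

```js
// Sketch of InitializeCG: collect all function definitions of a script and
// create one call-graph node per function, plus the special "global" node.
// This is a simplification of Lacuna's parsing phase, not its actual code.
const esprima = require("esprima");

function collectFunctionNodes(src) {
  const ast = esprima.parseScript(src, { loc: true });
  const nodes = [{ id: "global" }]; // root node: the JavaScript global scope

  (function walk(node) {
    if (node === null || typeof node !== "object") return;
    if (
      node.type === "FunctionDeclaration" ||
      node.type === "FunctionExpression" ||
      node.type === "ArrowFunctionExpression"
    ) {
      nodes.push({
        id: node.id ? node.id.name : "<anonymous>",
        startLine: node.loc.start.line, // supplemental info, as in Section 2.2.6
        endLine: node.loc.end.line,
      });
    }
    for (const key of Object.keys(node)) walk(node[key]);
  })(ast);

  return { nodes, edges: [] }; // G0 has no edges yet
}

// Usage: collectFunctionNodes("function a(){} function b(){} a();")
// -> nodes for "global", "a", and "b"; edges are added during the analysis phase.
```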
#### 2.2.3 Analysis
Lacuna's analysis algorithm is presented in Algorithm 2, once more divided into the **Analyze** and **Merge** procedures. The former takes as input the \(G_0\) call graph and produces as output a set of call graphs \(H\). To do so, it executes each \(a \in A\) in parallel on \(G_0\) and collects each resulting \(G_a\) in \(H\) (lines 1-10 in Algorithm 2). During its execution, each \(a \in A\) performs the identification of the edges in \(G_0\), leveraging its own analysis principles. For instance, TAJS relies on abstract interpretation, which approximates the execution of a program (and thus the identification of edges) by means of monotonic functions [31], while ACG employs a field-based flow analysis technique, which statically approximates the flow of data [32]. Let us refer to the running example of Listing 1, and let us assume, for the sake of a simpler explanation, that among the analysis techniques available in Lacuna (see Section 2.2.5), only the following three are executed: _static_, _dynamic_, and _native calls_. Each of the three techniques is executed independently on \(G_0\), producing the set of call graphs \(H=\{G_{static}, G_{dynamic}, G_{nativecalls}\}\). These techniques, and the other available ones, are explained in detail in Section 2.2.5.
After all the analysis techniques have been executed, during the _Merge_ procedure Lacuna joins the call graph \(G_a\) produced by each analysis technique \(a\) into a final call graph \(G_w\). The strategy applied in this step is the following: _(i)_ each node in \(G_0\) is replicated into \(G_w\) (line 13), _(ii)_ for each \(G_a \in H\) we add all its edges into \(G_w\) (lines 14-17), _(iii)_ when adding an edge \(e_{ij}\) produced by a technique \(a\), if \(e_{ij}\) is already in \(G_w\), then we just add the label \(a\) to \(e_{ij}\) (lines 18-21). The resulting graph \(G_w\) is provided as output.
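The merge strategy described above can be sketched as follows; this is an illustrative reimplementation under our own data layout (nodes copied from \(G_0\), edges taken from every \(G_a\), labels accumulated when several analyzers identify the same edge), not Lacuna's source code.

```js
// Sketch of the Merge procedure: combine the call graphs produced by the
// analyzers into a single graph Gw, accumulating labels on duplicated edges.
// Illustrative only; names and data layout are ours.
function mergeCallGraphs(g0, analyzerGraphs) {
  const gw = { nodes: [...g0.nodes], edges: [] };

  for (const ga of analyzerGraphs) { // each Ga in H
    for (const edge of ga.edges) {
      const existing = gw.edges.find(
        (e) => e.from === edge.from && e.to === edge.to
      );
      if (existing) {
        // Edge already found by another analyzer: just add the new labels.
        existing.labels = [...new Set([...existing.labels, ...edge.labels])];
      } else {
        gw.edges.push({ from: edge.from, to: edge.to, labels: [...edge.labels] });
      }
    }
  }
  return gw;
}

// Usage sketch: merging the graphs of several analyzers for the running example
// would yield a single Gw in which each edge lists every analyzer that found it.
```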
Figure 2 shows the merged call graph \(G_{w}\) for our running example after running the three analysis techniques mentioned in the previous paragraph. Here, the _dynamic_ analysis identified the call from the _global_ scope to the function \(a\), the _native calls_ analysis identified the call from \(a\) to the inline function defined in lines 2-4 of Listing 1 (by considering the call to setTimeout as a direct function call), and the _static_ analysis identified the call from the body of the previous function definition to \(b\). No analysis technique identified any call to function \(c\), so it is unreachable from _global_ because it has no incoming edges at all.
#### 2.2.4 Elimination
Once all analysis techniques have been executed and the complete \(G_w\) is available, the elimination phase identifies all the nodes in \(G_w\) representing dead code. The algorithm employed by Lacuna in this phase is presented in Algorithm 3, constituted by the **IdentifyAlive** and **RemoveDead** procedures. The IdentifyAlive procedure identifies alive nodes in \(G_w\). To do so, it performs a traversal of \(G_w\), starting from the root node _global_, while keeping track of \(G_v\), the graph of visited nodes (lines 1-5 in Algorithm 3). Nodes visited during this traversal are knowingly alive, as there exists a path of edges in \(G_w\), representing caller-callee relationships, that connects the global scope of the web app to them.

```
input : w, source code of the web app to analyze
        Gw, complete call graph of w
        l, desired optimization level
output: w', optimized version of w

 1  Function IdentifyAlive(Gw) -> Gv:
 2  begin
 3      Gv = result of Gw traversal starting from global
 4      return Gv
 5  end

 6  Function RemoveDead(w, Gw, Gv, l) -> w':
 7  begin
 8      w' = w
 9      foreach node n ∈ (Vw - Vv) do
10          retrieve f = function declaration of n
11          if l = 0 then
12              w'(f) = f, no change to f
13          else if l = 1 then
14              w'(f) = lazy-loading version of f
15          else if l = 2 then
16              w'(f) = empty-body version of f
17          else if l = 3 then
18              remove f entirely from w'
19          end if
20      end foreach
21      return w'
22  end
```
Algorithm 3: Elimination algorithm of Lacuna

The RemoveDead procedure takes as input the source code \(w\), the complete call graph \(G_w\), the graph of visited nodes \(G_v\), and the desired optimization level \(l\), and produces \(w\)', the optimized version of \(w\). Every node in \(V_w - V_v\) (i.e., every function that is not reachable from the global scope) is considered dead; for each of them, the corresponding function declaration \(f\) is retrieved and transformed according to the optimization level \(l\) selected by the user. Lacuna currently offers the following four optimization levels:

* **Optimization level 0:** performs no optimization; each presumed dead function is left unchanged, so the returned web app is identical to \(w\). This level is used as the baseline in our experiment.
* **Optimization level 1:** performs the most conservative optimization, based on the lazy-loading approach of [33]: the body of each presumed dead function is removed and replaced with a stub. Figure 3(b) shows an example of code optimized with this level, where the stub dynamically fetches the original function body from the lazy loading server when invoked (lines 12-16).
* **Optimization level 2:** performs a conservative optimization, removing the function body while keeping the function declaration. The rationale for this choice is that function declarations are often used as expressions in JavaScript, in contexts where their complete removal would lead to run-time errors in the browser. Figure 3(c) shows an example of code optimized with this level, where the original function body of c has been removed (line 18) but references to it have been preserved (line 22).
* **Optimization level 3:** performs an aggressive optimization, removing the presumed dead functions entirely. This elimination strategy maximizes the benefits of dead code removal; however, it also maximizes adverse effects in the case of false positives. In Figure 3(d), we provide an example of code optimized at level 3, where the dead function c has been removed entirely (line 17), including references to it (line 22).
After applying the optimizations, \(w\)', the optimized version of the web app provided as input, is returned as output to the user. With the exception of optimization level 0, all optimizations are applied on a copy of the original web app \(w\) provided as input. In our example of Listing 1, the selected optimization, in accordance with the user-defined optimization level, would be applied only to function \(c\).
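To make the elimination phase concrete, the sketch below identifies the alive nodes by traversing \(G_w\) from the _global_ node and then applies a level-2-style transformation (emptying the body of dead functions); it is a simplified illustration with our own data layout and names, not Lacuna's implementation, and levels 1 and 3 would respectively replace the body with a lazy-loading stub or drop the function entirely.

```js
// Sketch of the Elimination phase: identify alive nodes via a traversal of Gw
// starting from "global", then treat every unvisited function as dead.
// Illustrative only; data layout and names are ours, not Lacuna's.
function identifyAlive(gw) {
  const alive = new Set(["global"]);
  const stack = ["global"];
  while (stack.length > 0) {
    const current = stack.pop();
    for (const edge of gw.edges) {
      if (edge.from === current && !alive.has(edge.to)) {
        alive.add(edge.to);
        stack.push(edge.to);
      }
    }
  }
  return alive; // the node set of Gv
}

function removeDead(functions, gw, level) {
  const alive = identifyAlive(gw);
  return functions
    .map((f) => {
      if (alive.has(f.name)) return f;            // alive: keep as-is
      if (level === 0) return f;                  // OL-0: no change
      if (level === 2) return { ...f, body: "" }; // OL-2: empty body, keep declaration
      if (level === 3) return null;               // OL-3: drop the function entirely
      return f; // OL-1 (lazy loading) would replace the body with a fetching stub
    })
    .filter((f) => f !== null);
}

// Usage sketch with the running example: function c has no incoming edge in Gw,
// so it is absent from Gv and gets optimized according to the chosen level.
```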
#### 2.2.5 Implementation and used technologies
We developed Lacuna as a Node.js application. To carry out parsing of the input web app \(w\), we adopt the Esprima [34] parser. Currently, our implementation comes with eight ready-to-use analyzers, which have already been integrated into Lacuna. Each is described in the following:
* **Static**: a static analyzer based on an approach utilizing points-to analysis [35]. It makes use of Esprima, and builds an approximate call graph, ignoring dynamic properties and context binding.
* **Dynamic:** a basic dynamic analyzer for web apps. Firstly, it instruments the web app by adding logging statements at the beginning of the body of every function definition (including anonymous and inline functions). Then, it runs the web app in a headless browser (namely, in our implementation, PhantomJS [36]), collects the logging information, and builds the call graph according to the functions executed at run-time. It does not provide any input to the web app while executing it. A simplified sketch of the instrumentation step is shown after this list.
* **Native calls:** an extension of the ACG analyzer, where we also consider native JavaScript functions (e.g., Array.prototype.map or Function.prototype.call) when building the call graph.
* **ACG:** our implementation of the field-based call graph construction algorithm proposed by Feldthaus _et al._[32]. It does not consider dynamic properties, and it does not take arrow functions into account.
* **WALA [37]:** a static analysis framework for Java and JavaScript. It builds an intermediate form of the JavaScript code being analysed, then used as a basis for pointer analysis and call graph construction. We wrapped the publicly available implementation [37] in a Lacuna module.
* **TAJS:** a dataflow analysis technique for JavaScript that infers type information and call graphs [38]. TAJS performs abstract interpretation using a customization of the monotone framework [39] tailored to precisely model JavaScript-specific constructs [40]. We wrapped the Java implementation [41] in a Lacuna module.
* **npm_cg [42]:** npm cg is a tool made to produce call graphs from JavaScript source code. It comes with a series of significant limitations: only a single JavaScript file is considered at a time and only named JavaScript functions are taken into consideration (thus no arrow functions or function expressions are considered). Minor modifications were made to its implementation to integrate it in Lacuna. The resulting implementation is available in the Lacuna repository [13] along with a patch file reflecting all changes made to the original source code.
* **Closure Compiler:** the Closure Compiler [25] is a tool from Google for making JavaScript download and run faster. Instead of compiling from a source language to machine code, it compiles from JavaScript to an improved JavaScript in which dead code is removed and live code is minimized. Behind the scenes, the Closure Compiler creates a call graph for its internal representation of the source code. By default there is no way of outputting this call graph, therefore we made some modifications to expose it. The resulting implementation is included in the Lacuna repository [13] along with a patch file that reflects the changes made to the original source code.
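As an illustration of the instrumentation performed by the Dynamic analyzer (see the corresponding bullet above), the sketch below injects a logging statement at the beginning of every function body of a script by using Esprima's source ranges; it is a simplified, self-contained variant and not the analyzer's actual code.

```js
// Sketch of the Dynamic analyzer's instrumentation: add a logging call at the
// beginning of every function body so that executed functions can be recorded
// while the web app runs in a headless browser. Simplified illustration only.
const esprima = require("esprima");

function instrument(src) {
  const insertions = [];
  esprima.parseScript(src, { range: true }, (node) => {
    if (
      node.type === "FunctionDeclaration" ||
      node.type === "FunctionExpression"
    ) {
      const name = node.id ? node.id.name : "<anonymous>";
      // node.body.range[0] is the position of the opening "{" of the body.
      insertions.push({
        at: node.body.range[0] + 1,
        text: `console.log("CALLED:${name}");`,
      });
    }
  });
  // Apply insertions from the end so earlier offsets stay valid.
  insertions.sort((a, b) => b.at - a.at);
  let out = src;
  for (const ins of insertions) {
    out = out.slice(0, ins.at) + ins.text + out.slice(ins.at);
  }
  return out;
}

// Usage: instrument("function a(){ b(); }")
// -> 'function a(){console.log("CALLED:a"); b(); }'
```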
It is worth mentioning that the Static, Dynamic, and WALA analyzers were already integrated into the previous version of Lacuna [6]. On the other hand, we integrated ACG, TAJS, npm cg, and Closure Compiler into Lacuna because these analyzers were empirically assessed in the comparative study by Antal _et al_. [27], who concluded that combining more analyzers, rather than using them individually, can lead to more accurate JavaScript call graphs. Finally, we included Native calls since it is an extension of ACG.
#### 2.2.6 Novel features and extensions
An initial version of Lacuna was presented at the 2018 IEEE International Conference on Software Analysis, Evolution, and Reengineering [6]. With respect to the previous paper, novel features of Lacuna presented in this journal version include:
* the new subsystem for the removal of dead code according to four different optimization levels, previously described in Section 2.2.4;
* the support for externally hosted JavaScript code. Externally hosted JavaScript files are now downloaded locally during the parsing phase, as to enable a correct and complete analysis of the application under scrutiny. This enables the analysis of web
apps partially hosted on a public Content Delivery Network (CDN);
* the support for JavaScript code embedded into non-JavaScript files--i.e., the HTML files of the web app under analysis are now considered during the parsing phase, to identify JavaScript code referenced or embedded by them;
* the integration of five new third-party analysis techniques into Lacuna, namely ACG [32], TAJS [38], npm cg[42], Native calls, and Closure Compiler [25];
* improvements to the JavaScript call graph representation. Specifically, in the new version of Lacuna, each call graph node (i.e., code function) is annotated with supplemental information (e.g., source file name, starting code line, ending code line) to allow for easier integration of the output of different analysis techniques;
* the new version of Lacuna is made available as a stand-alone NodeJS module, which can be imported into any NodeJS project. This allows for easier integration of Lacuna into a development pipeline.
#### 2.2.7 Correctness, Completeness, and Accuracy of Lacuna
We empirically assessed Lacuna in terms of correctness, completeness, and accuracy of the detected JavaScript dead functions. To do so, we replicated our previous experiment [6] on a wider dataset and by considering more instances of Lacuna--each instance either uses a single analyzer to build JavaScript call graphs or a combination of analyzers. The dataset of this experiment consists of 39 web apps developed by independent web developers in the context of the TodoMVC project--as compared to our previous experiment on Lacuna [6], we included 10 more web apps. Then, for each web app, we built the ground truth (i.e., we determined which functions are actually dead or alive), executed each instance of Lacuna on it, and then gathered the functions detected as dead by that instance. In total, we ran 127 different instances of Lacuna, each integrating one to seven analyzers--in our previous experiment on Lacuna [6], we ran only three instances of Lacuna: Static, Dynamic, and their combination. The analyzers we executed in this replicated experiment are those listed in Section 2.2.5 with the exception of WALA.5 Finally, we quantified the correctness, completeness, and accuracy of the detected JavaScript dead functions by using the _precision_, _recall_, and _F-score_ measures from the Information Retrieval (IR) field [43]. The results of this experiment suggest that: _(i)_ combining two or more analyzers leads to improvements in terms of correctness, completeness, and accuracy of the detected JavaScript dead functions and _(ii)_ the best instance of Lacuna is the one based on the joint use of Dynamic and TAJS. This instance achieves the highest accuracy level (average F-score = 87.9%), thus well balancing correctness (average precision = 82.5%) and completeness (average recall = 97.2%). While F-score is a trade-off measure between precision and recall, the average values of precision and recall reported above can be interpreted as follows: on average, 82.5% of the functions this instance of Lacuna detects as dead are correct (i.e., on average, only 17.5% of the functions are wrongly detected as dead); on the other hand, this instance of Lacuna detects 97.2% of all the dead functions available in a web app (i.e., on average, it misses less than 3% of the dead functions available in a web app). It is important to note that having a precision of 82.5% might be acceptable for many projects, but it might not be enough
Fig. 3: Optimization levels offered by Lacuna: (a) original example code, (b) after applying optimization level 1, (c) after applying optimization level 2 and (d) after applying optimization level 3.
for some other projects (e.g., those where the incorrectly removed dead code performs critical functionalities of the web app); in the latter case we suggest that users of Lacuna adopt optimization level 1, where the body of incorrectly-removed functions is lazily loaded and executed from a server [33]. We also suggest that users of Lacuna experiment with other combinations of analyzers, which might lead to a precision-recall combination that better fits the requirements of their project and organization. For example, in our experiment the combination of the Dynamic, Closure Compiler, npm cg, TAJS, and ACG analyzers led to a higher precision (88.1%) than the one obtained via the Dynamic-TAJS combination (82.5%); however, the higher precision came with a high cost in terms of recall, which was only 54.3%, thus leading to a much lower F-score (64.8%). Thanks to the extensible architecture of Lacuna, in those cases where the already-existing analyzers do not perform well, developers can still integrate into Lacuna a new analyzer with their own project- or organization-specific algorithms for building more accurate call graphs. Nevertheless, at the time of writing, as we will report in Section 2.2.8, the aggregated F-score values achieved by Lacuna with the Dynamic-TAJS combination are the highest when compared to those of other state-of-the-art approaches.
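For reference, the standard definitions of these measures from the IR field, instantiated for dead-function detection, are given below, where \(TP\) is the number of functions correctly detected as dead, \(FP\) the number of functions wrongly detected as dead, and \(FN\) the number of dead functions that are not detected:

\[\text{precision}=\frac{TP}{TP+FP},\qquad\text{recall}=\frac{TP}{TP+FN},\qquad\text{F-score}=2\cdot\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}\]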
Finally, to give an idea about the impact of the improvements to Lacuna, we summarize in Table I the average values of the F-score, precision, and recall measures reported in the previous Lacuna paper [6], where three instances of the old Lacuna version were studied (i.e., Static, Dynamic, and their combination). It is easy to grasp that the new Lacuna version, based on the combination of Dynamic and TAJS, led to improvements in terms of correctness, completeness, and accuracy of the detected JavaScript dead functions. For details about the replicated experiment briefly described in this section, we redirect the interested reader to our online appendix [44].
#### 2.2.8 External Evaluation of Lacuna
Before focusing on the assessment of the run-time overhead of JavaScript dead code, it is important to be reasonably confident that Lacuna is the right instrument for the detection and removal of JavaScript dead code. We carried out a small-scale experiment to evaluate Lacuna against state-of-the-art tools that are currently able to detect (and remove, in some cases) JavaScript dead code. In this section, we report the results of such an experiment.
We first identify an initial set of analysis tools that are currently able to detect JavaScript dead code. This step is carried out by: _(i)_ performing a lightweight search on Google Scholar, and _(ii)_ by analysing the scientific publications cited and citing the studies we already identified as related to our work (see Section 6). This activity leads to the following 6 promising tools: Qiong _et al._[45], UFFRemover [33], JSLIM [46], Muzeel [47], Goel _et al._[48], Google LightHouse6. Three researchers assessed the applicability of each potentially-usable tool (e.g., a functioning implementation of the tool must be publicly available). This analysis led to the identification of two tools that are usable in our study: UFFRemover and Muzeel. For the sake of space, the details of such analysis are included in the replication package of the study. The main distinguishing factors of the selected tools with respect to Lacuna are: _(i)_ both UFFRemover and Muzeel detect dead code via dynamic analysis, whereas Lacuna can combine static and dynamic analyses; _(ii)_ UFFRemover performs a preliminary static analysis to identify required JavaScript modules and to instrument them for logging the JavaScript functions executed during the dynamic analysis; _(iii)_ the dynamic analysis of UFFRemover can execute various parts of the web app under analysis by executing test cases (if available) or via (user-defined) interaction scripts; _(iv)_ Muzeel complements the initial loading of the web app with the emulation of all possible interactions within the web app (interaction points are identified during a preliminary pass via dynamic analysis); _(v)_ Lacuna is a meta-tool, i.e., it allows the integration of additional 3rd-party analyzers in its pipeline; and _(vi)_ Lacuna is the only tool supporting different optimization levels, where one of them - level 1 - is the one provided by UFFRemover [33].
Footnote 6: [https://developer.chrome.com/docs/lighthouse/performance/unused-javascript](https://developer.chrome.com/docs/lighthouse/performance/unused-javascript)
We execute the UFFRemover and Muzeel tools on all the 39 TodoMVC web apps we used for the internal evaluation of Lacuna; for the sake of completeness, we execute two different configurations of UFFRemover, where the first one focusses exclusively on the initial load of the analysed web app (we call it _UFFRemover (L)_) and the second one considers (scripted) interaction scenarios covering all functionalities of the analysed web app (we call it _UFFRemover (I)_). Finally, we consider the outputs of the three tools (i.e., Muzeel, UFFRemover (L), and UFFRemover (I)) over all 39 TodoMVC apps, we compute their precision, recall, and F-score, and finally we compare them against the same metrics we collected for the Dynamic-TAJS instance of Lacuna (see Section 2.2.7).
TABLE I: Average values regarding the correctness, completeness, and accuracy of the old Lacuna version—the values are those reported in the previous Lacuna paper [6]

| Variable | Static | Dynamic | Static + Dynamic |
| --- | --- | --- | --- |
| Precision | 56% | 57% | 63% |
| Recall | 49% | 77% | 40% |
| F-score | 49% | 64% | 47% |
TABLE II: Descriptive statistics for the external evaluation of Lacuna against Muzeel, UFFRem. (L), and UFFRem. (I).

| Tool | Min. | Max. | Median | Mean | SD | CV |
| --- | --- | --- | --- | --- | --- | --- |
| **Precision** | | | | | | |
| Lacuna | 0.207 | 0.992 | 0.870 | 0.825 | 0.181 | 21.988 |
| Muzeel | **0** | | | 0.632 | 0.346 | 54.763 |
| UFFRem. (L) | 0.200 | 1 | 0.965 | 0.877 | 0.178 | 20.313 |
| UFFRem. (I) | 0.421 | 1 | 1 | 0.949 | 0.143 | 15.113 |
| **Recall** | | | | | | |
| Lacuna | 0.688 | 1 | 1 | 0.972 | 0.065 | 6.721 |
| Muzeel | **0** | 1 | 0.749 | 0.685 | 0.216 | 31.552 |
| UFFRem. (L) | 0.053 | 1 | 1 | 0.833 | 0.287 | 34.494 |
| UFFRem. (I) | 0.053 | 1 | 0.975 | 0.791 | 0.302 | 38.146 |
| **F-score** | | | | | | |
| Lacuna | 0.344 | 0.996 | 0.918 | 0.879 | 0.138 | 15.655 |
| Muzeel | **0** | | 0.714 | 0.594 | 0.302 | 50.831 |
| UFFRem. (L) | 0.101 | 1 | 0.891 | 0.801 | 0.247 | 30.904 |
| UFFRem. (I) | 0.101 | 1 | 0.955 | 0.810 | 0.255 | 31.430 |
Table II shows the descriptive statistics for the external evaluation of Lacuna against Muzeel, UFFRemover (L), and UFFRemover (I). UFFRemover (I) is the tool with the highest **precision** (mean=0.949). This result is expected since the interaction scripts we developed for interacting with the subjects cover all primary functionalities of the analysed TodoMVC web apps; this result is also highlighted by the fact that the median precision of UFFRemover (I) is 1, i.e., for at least 50% of the subjects all the functions the tool detects as dead are actually dead. UFFRemover (L) (mean=0.877) and Lacuna (mean=0.825) perform similarly in terms of average precision; we conjecture that this result is primarily due to the fact that UFFRemover's detection algorithm, when executed on only the page loading phase of the subject, is the same as the Dynamic analyzer of Lacuna (we trace the small difference in terms of precision to the fact that the two tools use a different library for parsing the JavaScript code, which might have led to some functions not being detected by the parser).
Lacuna is the tool with the highest **recall** (mean=0.972), followed by UFFRemover (L) (mean=0.833), UFFRemover (I) (mean=0.791), and Muzeel (mean=0.685). As described in Section 2.2.7, having a high recall is fundamental for our experiment on the run-time overhead of dead code (see Section 3), since a high recall makes us reasonably confident that (on average) Lacuna is able to detect 97.2% of all dead functions in a given web app. We conjecture that Lacuna performs better than all the other tools since our Dynamic-TAJS instance of Lacuna also includes a static analysis component, allowing our tool to reach parts of the JavaScript call graph that are not reached via either _(i)_ the pure dynamic analysis performed by UFFRemover (L) and UFFRemover (I) or _(ii)_ the dynamic analysis combined with the traversal of the event listeners statically identified by Muzeel.
When looking at the F-score combined metric, Lacuna is again the best-performing tool (mean=0.879), followed by UFFRemover (I) (mean=0.810), UFFRemover (L) (mean=0.801), and finally Muzeel (mean=0.594). The fact that Lacuna is the most accurate tool overall (i.e., it has the highest F-score) makes us reasonably confident in using it for detecting and removing JavaScript dead code in the subjects used when assessing the run-time overhead of JavaScript dead code (see next section).
## 3 Experiment on the Run-time Overhead of JavaScript Dead Code
In this section, we describe the main aspects of the design of the experiment on the run-time overhead of JavaScript dead code. This experiment has been designed and conducted by following well-known guidelines for experimentation and data analysis in empirical software engineering [49, 50, 51, 52, 53]. We refer the reader to the replication package of the experiment [14] for further details on the experiment execution, used tools, and collected data. The replication package contains all the information for independent verification and replication of the study, namely: _(i)_ the Python scripts for executing the experiment, _(ii)_ the raw data measures collected during the execution of the experiment, _(iii)_ the R scripts for analysing the collected data, and _(iv)_ a detailed guide for replicating the experiment.
### _Goal and Research Questions_
In this context, we use Lacuna to eliminate dead code from the subjects of the experiment according to the four optimization levels of Lacuna (see Section 2.2.4). By following the GQM (Goal-Question-Metric) template [54], the goal of this experiment is formulated as:
_Analyze the presence of JavaScript dead code for the purpose of empirically assessing its run-time overhead with respect to energy consumption, performance, network usage, and resources usage from the point of view of researchers, developers, and users in the context of mobile web apps._
The goal presented above is achieved by answering the four research questions listed below. The main motivation for having the four research questions is to investigate the overall overhead that JavaScript dead code can have on mobile web apps at run-time. We define a research question for each of the main perspectives under which having a run-time overhead might be relevant either for the user experience or for the (technical, ecological) sustainability of mobile web apps.
**RQ1.** What is the overhead of JavaScript dead code on the _energy consumption_ of mobile web apps?
It is known that mobile web apps consume different amounts of energy while being loaded [55, 56] and that improving their energy efficiency might lead to consistent savings in terms of electricity [57]. So, answering RQ1 will help both web developers and researchers understand to what extent removing JavaScript dead code might be a useful instrument for improving mobile web apps from the perspective of energy consumption.
**RQ2.** What is the overhead of JavaScript dead code on the _performance_ of mobile web apps?
For what concerns RQ2, the performance of mobile web apps is a crucial factor for their success. Users expect mobile web apps to load within a reasonable time [58]; having mobile web apps with poor performance can potentially impact profits and/or lead to users' abandonment, especially on mobile devices where hardware and connectivity are constrained [59]. By answering RQ2 we aim to objectively assess to what extent the removal of JavaScript dead code might support _(i)_ web developers in improving the performance of their mobile web apps and _(ii)_ researchers in better understanding the relationship between the presence of (dead) JavaScript code and the performance of mobile web apps.
**RQ3.** What is the overhead of JavaScript dead code on the _network usage_ of mobile web apps?
It has been empirically confirmed that networking is the most relevant bottleneck for mobile web apps [5]. Also, the network conditions under which a mobile device operates can be limited depending on factors such as the network coverage at a specific location, the connectivity subscription of the user, the type of cellular network supported by the mobile device (e.g., 4G, 5G), etc. So, reducing the amount of network traffic required by a mobile web app to fully load is a relevant factor for improving its performance or
even its loading itself. By answering RQ3 we aim at getting empirical evidence about the impact of JavaScript dead code and the network traffic required to load a mobile web app. Such results support both web developers and researchers in understanding if removing JavaScript dead code is a viable tool for reducing the requirements of mobile web apps in terms of network traffic.
**RQ4.** What is the overhead of JavaScript dead code on the _resources usage_ of mobile web apps?
Mobile devices tend to have limited hardware resources, such as CPUs, GPUs, and memory. Also, the browser engine shares such resources with other apps running on the user's device and, when such resources are overused, the device might become slow and the operating system might even decide to forcibly shut down some of the running apps to free resources for the other ones. By answering RQ4 we aim to empirically assess to what extent the presence of JavaScript dead code impacts the usage of hardware resources of the mobile device. Our results can support web developers and researchers in understanding if removing JavaScript dead code might help to reduce the amount of resources needed to run a mobile web app, thus leading to an overall better user experience for their users.
### _Subjects Selection and Planning_
For this experiment, we consider a total of 30 web apps that have been independently developed by third-party web developers. The 30 web apps are divided into two different families: 15 _in-the-lab_ web apps and 15 _in-the-wild_ web apps.
The 15 in-the-lab subjects were randomly sampled from the TodoMVC project. This project aims to help developers choose the MV* framework most suitable for structuring and organizing their JavaScript Web apps. To that end, TodoMVC consists of different implementations of the "same" Todo web app, each of which uses a different MV* framework, so that developers can inspect the codebase and then compare the different MV* frameworks. The Todo app is a manager for to-do lists, which includes the following features: _(i)_ adding a to-do item, _(ii)_ removing a to-do item, _(iii)_ modifying an existing to-do item, and _(iv)_ marking a to-do item as completed. We refer to each sampled web app by the name of the used MV* framework. The sampled in-the-lab subjects are listed in Table III.
Despite the in-the-lab web apps allowing us to study a _large_ and _heterogeneous_ set of MV* frameworks that real-world JavaScript Web apps can rely on, they share the same functionalities. This might negatively influence the external validity of the experiment, making our results less generalizable. In order to mitigate this potential bias, we decided to complement the 15 in-the-lab subjects with 15 additional in-the-wild subjects. The subjects are sampled from the Tranco list [12], which aggregates the rankings from the lists provided by Alexa, Umbrella, Majestic, and Quantcast. Starting from the first 150 web apps in the Tranco list, we iteratively download and manually analyze each candidate web app against a set of selection criteria we defined a priori, reaching a final set of 15 web apps satisfying all the selection criteria. The selection criteria, their rationale, and the results of their application are reported below:
S1 _- The web app should not redirect to another domain_. The rationale for this criterion is that there are mobile web apps that redirect the user to a different domain, such as Apple (aaplimg.com → apple.com) and Twitter (t.co → twitter.com); these pages could redirect to duplicate domains within the list or to domains that are not part of the Tranco list at all. The application of S1 led to the identification of 24 web apps redirecting to another domain, which were discarded from the initial 150 web apps.
S2 _- The web app must be accessible without user authentication_. The rationale for this criterion is that there are mobile web apps in which the actual page content is available for authenticated users only, such as Twitter and Instagram. After applying this criterion we identified 8 web apps requiring user authentication, leading to a set of 118 potentially-usable web apps.
S3 _- Lacuna must be able to successfully remove JavaScript dead code from the web app without errors_. The rationale for this criterion is that we need to be sure that for each subject of the experiment, we can successfully run Lacuna to obtain its dead-code-free version for all Lacuna optimization levels. When applying Lacuna to the 118 selected web apps we encountered two main situations where it was not successful: _(i)_ 96 web apps included external JavaScript scripts we did not manage to properly download locally on our server (i.e., some scripts were imported dynamically and the browser blocked their request due to Cross-Origin Resource Sharing errors, the HTML code of the web app was referencing scripts which were not available anymore at the referenced URLs, etc.) and _(ii)_ for 7 web apps TAJS failed since at the time of executing the experiment it did not support the following ES6 features: the let keyword, arrow functions, and template literals.
The 15 selected in-the-wild subjects resulting from this procedure are listed in Table III. These subjects are heterogeneous from different perspectives (e.g., application domain, functionalities, size, amount of JavaScript code), making them good candidates for complementing the in-the-lab subjects and achieving more generalizable results in our experiment.
Once we obtain the final set of 30 individual subjects, we apply Lacuna four times to each of them, each time with a different optimization level (OL-0 as the baseline, then OL-1, OL-2, OL-3). This leads to four versions of each web app. In Table III, we report the number of functions detected as dead by Lacuna when it is executed on each web app--it is worth recalling that such a number is the same across the different Lacuna optimization levels.
Regardless of the web app and optimization level, we configured Lacuna so that it combined the results of two third-party analysis techniques: Dynamic and TAJS. This design choice was taken empirically. That is, we performed a preparatory experiment thanks to which we concluded that the best configuration of Lacuna was the one based on the joint use of Dynamic and TAJS (see Section 2.2.7).
### _Variables and Statistical Hypotheses_
This experiment has the same **independent variable** for all research questions, i.e., the Lacuna _optimization level_ applied to each of the subjects. According to the currently available
dead code elimination procedure of Lacuna described in Section 2.2.4, this variable has four levels: OL-0, OL-1, OL-2, and OL-3.
All **dependent variables** are measured in the time frame between the first GET request issued by the browser to the server hosting the currently-measured web app and the web app's page load time. The dependent variables of this experiment are described below:
* _Energy_ (RQ1): the energy consumed by the mobile device to load the web app in mJ (milli-joule). Energy values are computed by following a sampling-based approach widely used in software engineering studies [60, 61, 62, 63], that is: _(i)_ sampling the instantaneous power consumed by the browser app running on the Android device (in microWatts), _(ii)_ applying the \(E = P \cdot t\) formula, where \(P\) is the measured power and \(t\) is the page load time of the web app (see next dependent variable), and _(iii)_ solving the integral of \(P\) over \(t\) (in our case by applying the trapezoidal method [64]). A minimal sketch of this computation is provided after this list.
* _Page load time_ (RQ2): the timestamp in milliseconds (ms) in which the web app is fully loaded in the browser [65]. More specifically, page load time is defined as the time from the start of a user-initiated page request (the initial GET request issued by the browser in our case) to the time the entire page content is loaded, including all dependent resources like CSS stylesheets, JavaScript code, or images; this time is collected by recording the timestamp in which the load event is fired by the browser engine.
* _First contentful paint_ (RQ2): the timestamp in milliseconds when the browser first renders any text, image, non-white canvas, or SVG of the web app [65]. Intuitively, it is the first time when the user can start consuming the content of the web app. According to the Paint Timing W3C specification [66], the First contentful paint metric and the First paint one (see below) complement Page load time since they provide a user-oriented assessment of the performance of the web app.
* _First paint_ (RQ2): the timestamp in milliseconds when the browser renders the first pixels to the screen of the mobile device, rendering anything that is visually different from what was on the screen prior to navigation [65]. Intuitively, it is the time when the user is aware that "something is happening" in the browser after they decided to navigate to the URL of the mobile web app.
* _HTTP requests_ (RQ3): the number of HTTP(S) requests issued by the browser engine while loading the currently-measured web app. We include this variable since our RQ3 concerns the overhead of JavaScript dead code in terms of network usage, mainly due to the additional network traffic caused by the additional JavaScript files retrieved by the web app (even if they are not executed since they contain dead code).
* _Transferred bytes_ (RQ3): the sum of the size, in kilobytes (Kb), of the payloads of all HTTP(S) requests issued by the browser engine while loading the currently-measured web app. Similarly to the previous variable, we are measuring the number of transferred bytes in order to quantify how much additional (and unused) JavaScript code is transferred from the servers to the web app when dealing with JavaScript dead code.
* _CPU usage_ (RQ4): the average of the percentage of CPU consumed while loading the currently-measured web app. We include this variable since RQ4 deals with the overhead imposed by JavaScript dead code in terms of computational resources, which are typically the processor, GPU, and memory (see the description of the next two dependent variables).
* _GPU usage_ (RQ4): the average of the percentage of GPU consumed while loading the currently-measured web app. Similarly to the previous variable, we include this variable in order to measure what is the added overhead of JavaScript dead code in terms of GPU usage.
* _Memory usage_ (RQ4): the average amount of memory consumed by the Android device while loading the currently-measured web app in megabytes (Mb). Similarly to CPU usage, we include this variable in order to measure what is the memory overhead imposed by JavaScript dead code.
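As a clarification of how the Energy values are obtained from the sampled power (see the Energy bullet above), the following sketch applies the trapezoidal rule to a series of (time, power) samples; it is an illustrative computation with simplified units (milliwatts and seconds), not the actual analysis script of the experiment.

```js
// Sketch: compute the energy consumed during page load by integrating sampled
// instantaneous power over time with the trapezoidal rule. Illustrative only.
// samples: array of { t: seconds, p: milliWatts } ordered by time.
function energyFromSamples(samples) {
  let energyMilliJoule = 0;
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].t - samples[i - 1].t;              // seconds
    const avgPower = (samples[i].p + samples[i - 1].p) / 2;  // milliWatts
    energyMilliJoule += avgPower * dt;                       // mW * s = mJ
  }
  return energyMilliJoule;
}

// Usage sketch: a constant 500 mW over 2 s of page load yields 1000 mJ.
// energyFromSamples([{ t: 0, p: 500 }, { t: 1, p: 500 }, { t: 2, p: 500 }]) === 1000
```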
For each dependent variable listed above and each family of subjects (i.e., in-the-lab vs. in-the-wild ones), we formulate the following parameterized null hypothesis:
\[H0_{var}: \text{There is no statistically significant difference in the values of the dependent variable } var \text{ (e.g., energy, page load time, etc.) between the optimization levels of Lacuna.}\]
TABLE III: Number of dead functions detected by Lacuna for each subject.

| Subject | # Dead Functions |
| --- | --- |
| **In-the-lab subjects** | |
| angularjs require | 32 |
| backbone | 542 |
| canjs | 492 |
| dijon | 410 |
| dojo | 411 |
| enyo backbone | 6 |
| gwt | 17 |
| jquery | 420 |
| jsblocks | 459 |
| knockoutjs require | 35 |
| mithril | 55 |
| polymer | 6 |
| reagent | 3,357 |
| vanillajs | 59 |
| vue | 266 |
| **In-the-wild subjects** | |
| apache.org | 437 |
| aws.amazon.com | 409 |
| m.youtube.com | 1,812 |
| nl.paddy.com | 19 |
| stackexchange.com | 457 |
| stackoverflow.com | 491 |
| www.amazon.com | 144 |
| www.bbc.com | 345 |
| www.bbc.com | 1,90 |
| www.buzzfeed.com | 353 |
| www.mozilla.org | 436 |
| www.office.com | 616 |
| www.paypal.com | 639 |
| www.theguardian.com | 16 |
| www.wikipedia.org | 46 |
The alternative hypothesis for \(H0_{var}\) (i.e., \(H1_{var}\)) admits that there is a statistically significant difference. For example, if \(H0_{energy}\) is rejected, we can accept the alternative hypothesis \(H1_{energy}\) stating that _there is a statistically significant difference in the values of energy between the optimization levels of Lacuna_.
### _Experiment execution_
In Figure 4, we present the measurement infrastructure for running the experiment. The experiment involves two main hardware nodes: a laptop acting as a base station and an Android smartphone for running and measuring the subjects. The laptop has an Intel Core i7-4710HQ processor, 12 GB of memory, and runs Ubuntu 20.04 as the operating system. The Android device is an LG G2 smartphone with a Qualcomm MSM8974 Snapdragon 800 processor, 2 GB of memory, a 5.2" LCD display, and running the Android 6.0.1 operating system. The main rationale for using two separate hardware nodes is to keep the Android device as lightweight as possible, so as to not influence the measurements [67, 68]. As shown in the right-hand side of the figure, the Android device is running only two apps: _(i)_ the Google Chrome browser, which is used for loading the web apps and _(ii)_ Trepn, a software-based profiler for Android devices. Trepn is widely used in empirical studies on energy-efficient software [69, 70, 71] and it has been reported as sufficiently accurate with respect to hardware-based power measurement (e.g., the Monsoon Power Monitor), with an accuracy of about 99% [72]. Trepn also supports the collection of CPU, GPU, and memory usage.
The laptop and the Android device are connected to the same WiFi network. To reduce as much as possible the influence of the network conditions on the experiment, the WiFi network does not have any other connected devices.
All four versions of each of the 30 subjects of the experiment are hosted on the laptop and served via a dedicated Web server. To collect the values of the HTTP requests and Transferred bytes dependent variables, all HTTP(S) traffic between the smartphone and the laptop passes through an instance of _mitmproxy_ [73], which records all HTTP(S) requests and locally stores them in the form of network logs.
The experiment is orchestrated via Android Runner, a framework for defining and executing measurement-based experiments targeting Android (web) apps [67]. Android Runner allows us to define the experiment in a descriptive manner via a JSON file and then it automatically takes care of the complete execution of the experiment. Specifically, for each experiment run, Android Runner uses the Android Debug Bridge tool (ADB [74]) to interact with the smartphone, e.g., to collect Android system logs, to activate/deactivate the profiling features of Trepn, to instruct the Google Chrome app on the smartphone to load the currently-measured subject, to enable/disable the USB charging of the smartphone, etc. For this experiment, we use two plugins of the Android Runner tool: _(i)_ Trepn, for collecting data via the Trepn profiler and _(ii)_ PerfumeJS, to collect web performance metrics via the Perfume.js library [75], such as the page load time, first contentful paint, and first paint.
In order to mitigate possible threats to the internal validity of the experiment and to facilitate its replicability, we take the following precautions while executing it: _(i)_ the measurement of each experiment trial (i.e., a subject-OL pair) is repeated 20 times, leading to a total of 2,400 individual runs (i.e., 4 treatments x 30 subjects x 20 repetitions), _(ii)_ the order of execution of the 2,400 experiment runs is randomized, _(iii)_ between each run the smartphone and the laptop remain idle for 2 minutes so as to take into account tail energy usage [76], _(iv)_ the Chrome app is cleared before each run so as to reset its cache, persisted data, and configuration, and _(v)_ the USB charging of the smartphone is disabled during the execution of each run.
### _Data Analysis_
We first perform a _data exploration_ step where we inspect and get an overview of the collected data via box plots and summary tables. In this step, we also check if the assumption that in-the-lab and in-the-wild subjects exhibit different values holds for the considered metrics. Since normality of the collected data is the underlying assumption of parametric statistical tests [49], as part of the data exploration step, we check if the distribution of the data collected for each dependent variable follows a normal distribution, both globally and between in-the-lab and in-the-wild subjects. We assess normality by means of three complementary methods: _(i)_ by applying the Shapiro-Wilk statistical test with \(\alpha=0.05\), _(ii)_ by producing and visually analysing the density plots of every dependent variable, and _(iii)_ by producing and visually analysing QQ-plots.
We anticipate that all the collected data do not follow a normal distribution. Based on this fact, in our statistical analysis, we apply non-parametric statistical tests and effect size measures. Specifically, _for each dependent variable and each family of subjects_ (i.e., in-the-lab and in-the-wild) we do the following:
Fig. 4: Measurement infrastructure
1. We apply the Kruskal-Wallis test (with \(\alpha=0.05\)), a non-parametric test for testing whether the collected measures come from populations with identical distributions; the application of this test gives an initial indication about whether the Lacuna optimization levels lead to statistically significant differences in terms of, e.g., energy consumption, memory usage, etc.
2. If the p-value of the Kruskal-Wallis test is not greater than \(\alpha\), then we assess the magnitude of the detected differences by applying the Eta squared effect size measure based on the H-statistic [77]. Eta squared is a non-parametric effect size measure compatible with the Kruskal-Wallis statistical test [77] (its formula is recalled right after this list). The values of the obtained effect size measures are interpreted according to threshold values commonly used in the literature, namely: \(\eta^{2}<0.06\) (small effect - S), \(0.06\leq\eta^{2}<0.14\) (moderate effect - M), and \(\eta^{2}\geq 0.14\) (large effect - L).
3. Having a statistically significant result for the Kruskal-Wallis test also allows us to investigate which _pairs_ of optimization level exhibit statistically-significant differences. We do so by applying the Dunn Test as post-hoc analysis [78] to each pair of optimization level. Since we are applying multiple statistical tests, to reduce the chance of Type-I error we adjust the obtained p-values via the Benjamini-Hochberg correction [79].
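For clarity, the eta squared effect size based on the H-statistic mentioned in step 2 is typically computed as follows, where \(H\) is the Kruskal-Wallis statistic, \(k\) is the number of groups (in our case, the four optimization levels), and \(n\) is the total number of observations:

\[\eta^{2}_{H}=\frac{H-k+1}{n-k}\]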
## 4 Results
In this section, we present the results of the experiment.
### _Data exploration_
Table IV gives an overview of the measures we collected for all dependent variables across all in-the-wild and in-the-lab subjects. We observe that the measures collected for in-the-lab subjects tend to have different central values (i.e., mean and median) with respect to those collected from in-the-wild subjects; this phenomenon is especially prominent for page load time, first contentful paint, first paint, HTTP requests, transferred bytes, CPU usage, and memory usage. Also, when the central values are different, in-the-wild subjects tend to consistently perform worse with respect to in-the-lab subjects; for example, the average page load time of in-the-wild subjects is 4.229s, whereas it is 1.137s for in-the-lab subjects. The differences in the obtained measures for in-the-lab and in-the-wild subjects further validate our design choice of considering the type of subject as a blocking factor for our experiment. Indeed, we expected such a kind of difference since the purpose and context in which those two families of subjects are developed are completely different; in-the-lab subjects have a relatively small size (both in terms of provided features and source code) and are developed on a voluntary basis, whereas in-the-wild subjects are fully-fledged web apps developed either by _(i)_ companies like Google or Amazon or _(ii)_ large-scale organizations like the Wikimedia Foundation.
The collected data for each dependent variable exhibits values within the expected ranges. For example, energy consumption is between 343.39mJ and 1,819.67mJ, which are acceptable values if we consider that the average page load time of the measured subjects is relatively short (i.e., 2.6 seconds). Overall, there is a high relative variance in the data, especially for network-related variables (i.e., HTTP requests and transferred bytes) for in-the-wild subjects and memory usage for all subjects, but also for performance-related metrics (i.e., page load time, first contentiful paint, and first paint) for in-the-wild subjects. Such variance is not a surprise if we consider that the time span in which measures are collected is relatively short (it raises the chances of having outlier values) and that we are including in-the-wild subjects in the experiment.
The Shapiro-Wilk normality test reveals that all the data exhibit a non-normal distribution. This result is further confirmed visually via density plots and QQ-plots (all available in the replication package of this study). Since the normality of the data is one of the assumptions of the ANOVA statistical test, we resort to the Kruskal-Wallis test as a non-parametric statistical test in the remainder of our data analysis procedure.
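To make the analysis procedure concrete, the following minimal Python sketch chains the normality check and the non-parametric tests described above for one dependent variable. The data-frame column names (`measure`, `optimization_level`) and the use of the `scipy` and `scikit_posthocs` packages are our illustrative assumptions; this is a sketch of the procedure, not the script used for the experiment.

```python
import pandas as pd
from scipy import stats
import scikit_posthocs as sp  # assumed available for the Dunn post-hoc test


def analyze_dependent_variable(df: pd.DataFrame, alpha: float = 0.05) -> None:
    """Shapiro-Wilk normality check, Kruskal-Wallis test, eta-squared effect
    size based on the H-statistic, and Dunn post-hoc test with
    Benjamini-Hochberg correction, grouped by Lacuna optimization level."""
    groups = [g["measure"].to_numpy()
              for _, g in df.groupby("optimization_level")]

    # Normality check (Shapiro-Wilk) on the collected measures
    _, p_normal = stats.shapiro(df["measure"])
    print(f"Shapiro-Wilk p = {p_normal:.4f} (normality accepted if > {alpha})")

    # Step 1: Kruskal-Wallis test across the four optimization levels
    h_stat, p_kw = stats.kruskal(*groups)
    print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {p_kw:.4g}")

    if p_kw <= alpha:
        # Step 2: eta squared effect size based on the H-statistic
        n, k = len(df), len(groups)
        eta_squared = (h_stat - k + 1) / (n - k)
        print(f"eta^2 = {eta_squared:.3f}")

        # Step 3: Dunn post-hoc test, p-values adjusted via Benjamini-Hochberg
        pairwise = sp.posthoc_dunn(df, val_col="measure",
                                   group_col="optimization_level",
                                   p_adjust="fdr_bh")
        print(pairwise)
```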
In the next sections, we analyze the data related to each research question of the experiment.
### _Overhead on Energy Consumption (RQ1)_
As shown on the left-hand side of Figure 5, eliminating JavaScript dead code from the in-the-lab subjects results in slightly more energy-efficient web apps. Indeed, the median energy consumption of the original web apps (OL-0 in Lacuna) is 1,406.35mJ, against 1,367.59mJ, 1,374.34mJ, and 1,370.83mJ for the other Lacuna optimization levels. However, this result is not statistically significant (p-value: 0.268, see the first row of Table V).
The results are similar for in-the-wild subjects, with the exception that the median energy consumption remains approximately the same for the OL-1 and OL-3 Lacuna optimization levels. The energy consumption of the OL-2 optimization level is even slightly higher than that of the others, but still without statistical significance; we speculate that this result is due to the intrinsic variability of the experiment execution.
Overall, the obtained p-values do not allow us to reject \(H0_{energy}\) for both in-the-lab and in-the-wild subjects, so we cannot claim that different Lacuna optimization levels have an impact on the energy consumption of in-the-lab or in-the-wild web apps.
### _Overhead on Performance (RQ2)_
The results on page load time show that having an approach with different strategies for eliminating JavaScript dead code paid off in terms of page load time and that having a more comprehensive (but risky) strategy for dead code elimination leads to significantly better results in terms of page load time.
Differently from page load time, the measures of first contentful paint (Figure 7) and first paint (Figure 8) do not show any noticeable improvement across the various Lacuna optimization levels. As shown in Table V, we obtained p-values higher than our significance threshold for both first contentful paint and first paint, thus we cannot reject the corresponding null hypotheses for both families of subjects. That is, we cannot claim that different Lacuna optimization levels impact differently the time of the first contentful paint and first paint of web apps (both in-the-lab and in-the-wild subjects).
### _Overhead on Network Usage (RQ3)_
As shown in Figure 9 and Table V, the elimination of JavaScript dead code leads to noticeable (and statistically significant, p-value: 1.48x10\({}^{-40}\)) differences in terms of the number of HTTP requests only for in-the-lab subjects. Moreover, by looking at Table VI, it can be noticed that such differences are statistically significant for all pairs involving the OL-3 optimization level of Lacuna. In any case, the observed effect size is 0.155, i.e., _small_. The situation is different for in-the-wild subjects, where we do not observe any relevant difference among the various Lacuna optimization levels. Summing up, only for in-the-lab subjects (not for in-the-wild ones) can we reject \(H0_{HTTP\ requests}\), concluding that the number of HTTP requests is not the same across different Lacuna optimization levels.
The amount of transferred bytes is considerably smaller when the various Lacuna optimization levels are applied to both in-the-lab and in-the-wild subjects (see Figure 10). The differences in the transferred bytes are statistically significant for both in-the-lab subjects (p-value: 2x10\({}^{-14}\), effect size: 0.054 - _small_) and in-the-wild subjects (p-value: 2.19x10\({}^{-20}\), effect size: 0.077 - _moderate_). So, we can reject \(H0_{transferred\ bytes}\) for both families of subjects, allowing us to claim that different Lacuna optimization levels have an impact on the transferred bytes from the server to the client web apps (both in-the-lab and in-the-wild). As shown in Table VI, the observed differences are statistically significant for almost all pairs of Lacuna optimization levels.
### _Overhead on Resources Usage (RQ4)_
The CPU usage remains stable for in-the-lab subjects (left-hand side of Figure 11), with an average close to 58% and a p-value of 0.552. Differently, the CPU usage for in-the-wild subjects is reduced when applying Lacuna (p-value: 135x10\({}^{-9}\)), with statistically-significant results for all pairs of optimization levels, but not for the OL-0_OL-1 one. The obtained p-values tell us that we cannot reject \(H0_{cpu\ usage}\) for in-the-lab subjects, but we can reject such a null hypothesis for in-the-wild subjects--i.e., there is a difference in the percentage of CPU usage across different Lacuna optimization levels when considering in-the-wild subjects.
The scenario is more stable when considering GPU usage (see Figure 12), with an average close to 25% and 24% for in-the-lab and in-the-wild subjects, respectively. The
Fig. 8: First paint of the web apps
Fig. 7: First contentful paint of the web apps
Fig. 9: Number of HTTP requests
application of the Kruskal-Wallis test reveals a statistically-significant difference only for in-the-lab subjects (p-value: 0.002 in Table V), specifically for the OL-1_OL-3 and the OL-2_OL-3 pairs of optimization levels. So, we can reject \(H0_{gpu\ usage}\) for in-the-lab web apps (i.e., different Lacuna optimization levels impact differently the percentage of GPU usage), but we cannot reject the same hypothesis for in-the-wild ones.
Memory usage remains relatively stable for in-the-lab subjects (see Figure 13), whereas it exhibits an observable improvement when considering in-the-wild subjects, with a statistically-relevant difference (p-value: 0.0159 in the last row of Table V), further confirmed for the OL-0_OL-3 and the OL-1_OL-3 pairs of optimization levels (see the last row of Table VI). Given the obtained p-values, we can reject \(H0_{memory\ usage}\) for in-the-wild web apps (i.e., there is a difference in the memory usage of the web apps across different Lacuna optimization levels), but we cannot reject the same null hypothesis for in-the-lab web apps.
Despite the observed statistical differences, the effect size for CPU usage, GPU usage, and memory usage remains _small_ (see Table V).
\begin{table}
\begin{tabular}{l l l l l l l l l}
**Variable** & **Subject type** & **RQ** & **OL-0\_OL-1** & **OL-0\_OL-2** & **OL-0\_OL-3** & **OL-1\_OL-2** & **OL-1\_OL-3** & **OL-2\_OL-3** \\ \hline
Page load time (ms) & Lab & RQ2 & 0.367 & 0.119 & 1.21x10\({}^{-8}\) (*) & 0.464 & 1.97x10\({}^{-6}\) (*) & 4.45x10\({}^{-4}\) (*) \\
Page load time (ms) & Wild & RQ2 & 0.691 & 0.737 & 0.049 (*) & 0.737 & 0.114 & 0.063 \\
HTTP requests & Lab & RQ3 & 0.205 & 0.163 & 8.97x10\({}^{-24}\) (*) & 0.813 & 4.48x10\({}^{-5}\) (*) & 6.31x10\({}^{-31}\) (*) \\
Transferred bytes (Kb) & Lab & RQ3 & 0.028 (*) & 1.67x10\({}^{-4}\) (*) & 1.45x10\({}^{-14}\) (*) & 0.109 & 4.66x10\({}^{-8}\) (*) & 1.001x10\({}^{-4}\) (*) \\
Transferred bytes (Kb) & Wild & RQ3 & 1.80x10\({}^{-4}\) (*) & 7.44x10\({}^{-14}\) (*) & 1.24x10\({}^{-17}\) (*) & 1.80x10\({}^{-4}\) (*) & 1.71x10\({}^{-6}\) (*) & 0.258 \\
CPU usage (\%) & Wild & RQ4 & 0.462 & 0.031 (*) & 1.26x10\({}^{-6}\) (*) & 5.99x10\({}^{-3}\) (*) & 5.72x10\({}^{-6}\) (*) & 0.027 (*) \\
GPU usage (\%) & Lab & RQ4 & 0.201 & 0.061 & 0.197 & 0.433 & 0.012 (*) & 1.44x10\({}^{-3}\) (*) \\
Memory usage (Mb) & Wild & RQ4 & 0.871 & 0.532 & 0.021 (*) & 0.534 & 0.021 (*) & 0.106 \\ \hline
\end{tabular}
\end{table} TABLE VI: Pairwise comparison across Lacuna optimization levels for variables with statistically-significant differences - the (*) symbol denotes cases with statistically-significant differences (with Benjamini-Hochberg correction)
Fig. 11: CPU usage of the mobile device
Fig. 12: GPU usage of the mobile device
Fig. 10: Bytes transferred over the network
### _Impact of Dead Function Removal_
In the following, we present the results of a further analysis to study the potential correlations between the number of dead functions detected by Lacuna and the improvements due to the elimination of these functions (i.e., by applying OL-1, OL-2, OL-3, respectively) in terms of _energy consumption_, _performance_, _network usage_, and _resources usage_. To do so, we run a correlation test between the number of dead functions detected by Lacuna and the saving achieved after applying each Lacuna optimization level (except for OL-0) with respect to each measure listed in Section 3.3. In particular, given a measure and an optimization level (except for OL-0), the saving on each subject is computed by averaging the 20 measurements for OL-0 (i.e., no optimization) and then subtracting the average of the 20 measurements for the considered optimization level. We use the Kendall correlation coefficient because it does not require the data to be normally distributed. Moreover, by averaging the data, we meet the data independence assumption. If the p-value of the correlation test is not greater than \(\alpha=0.05\), then there is a statistically-significant correlation. In this case, we then report the Kendall correlation coefficient (Tau), which provides an indication of how strong a statistically-significant correlation is. The values of this correlation coefficient are interpreted according to threshold values commonly used in the literature, namely: \(\tau<0.1\) (no correlation), \(0.1\leq\tau<0.4\) (weak correlation - W), \(0.4\leq\tau<0.7\) (moderate correlation - M), and \(\tau\geq 0.7\) (strong correlation - S).
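A minimal sketch of this correlation analysis is given below; the function names and the way the per-subject savings are passed in are illustrative assumptions on our side, while the statistical ingredients (Kendall's tau via `scipy` and the interpretation thresholds above) follow the procedure just described.

```python
from scipy import stats


def interpret_tau(tau: float) -> str:
    """Interpretation thresholds for the Kendall correlation coefficient."""
    tau = abs(tau)
    if tau < 0.1:
        return "no correlation"
    if tau < 0.4:
        return "weak correlation (W)"
    if tau < 0.7:
        return "moderate correlation (M)"
    return "strong correlation (S)"


def correlate_saving(dead_functions, savings, alpha: float = 0.05) -> str:
    """Kendall correlation between the number of dead functions detected by
    Lacuna and the per-subject saving (average of the OL-0 runs minus the
    average of the runs for the considered optimization level)."""
    tau, p_value = stats.kendalltau(dead_functions, savings)
    if p_value <= alpha:
        return f"tau = {tau:.3f} ({interpret_tau(tau)}), p = {p_value:.4g}"
    return f"no statistically-significant correlation (p = {p_value:.4g})"
```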
In Table VII, we show the results of this further analysis. For most of the measures, we cannot show that there are statistically-significant correlations with respect to the number of dead functions Lacuna detected. Among the most important outcomes is the one concerning network usage. In particular, we find positive, moderate, and statistically-significant correlations between the number of dead functions and the saving on transferred bytes for all optimization levels (p-values ranging from 2.94x10\({}^{-6}\) to 4.11x10\({}^{-4}\), correlation coefficients ranging from 0.456 to 0.603). That is, regardless of the Lacuna optimization level, when the number of dead functions increases, the saving on transferred bytes tends to increase as well.
The other statistically-significant correlations regard the saving in terms of performance (i.e., page load time) and resources usage (i.e., CPU usage). In particular, we have a positive and statistically-significant correlation between the number of dead functions and saving on page load time when applying OL-3 (p-value: 0.009, correlation coefficient: 0.336 - _weak_). That is, as the number of dead functions removed with OL-3 increases, the saving on page load time tends to increase. Finally, we find a positive, _weak_, and statistically-significant correlation between the number of dead functions and saving on CPU usage when applying OL-2 (p-value: 0.022, correlation coefficient: 0.295). That is, the more the number of dead functions removed after applying OL-2, the higher the saving on CPU usage.
## 5 Discussion
In this section, we discuss the obtained results in terms of implications from the perspectives of users, researchers, and web developers. We conclude the section by presenting possible threats that could affect the validity of the obtained results, including countermeasures we applied for mitigating them.
### _Implications_
We observed that **page load time** is significantly lower when JavaScript dead code is completely removed (i.e., Lacuna optimization level 3) whatever the family of web apps is. Taking into account that users tend to abandon a web app when it takes too much time to load pages [59], we can postulate that our outcome has practical implications from the perspective of _users_. For example, users could appreciate the page load time of mobile web apps without dead code, thus positively affecting the user experience7 of these web apps. This outcome has also implications from the perspective of _researchers_ because they could be interested in studying to what extent the user experience is affected by the presence, or not, of dead code in a given mobile web app. From the _web developer_ perspective, it could be relevant to integrate a tool, like Lacuna, in the user experience design process since the presence of dead code could significantly affect page load time, possibly affecting user experience.
Footnote 7: The overall experience of a person using a mobile web app, a website, or a computer application, especially in terms of how easy or pleasing it is to use.
Regardless of the family of web apps, the amount of **transferred bytes** from the server to the client is significantly lower when the various Lacuna optimization levels are applied, especially when using the more aggressive optimization levels.
\begin{table}
\begin{tabular}{l l c c}
**Variable** & **Optimization level** & **P-value** & **Corr. Coeff.** \\ \hline
\multicolumn{4}{c}{**Number of dead functions and saving in terms of energy consumption**} \\
Energy (mJ) & OL-1 & 0.239 & - \\
 & OL-2 & 0.318 & - \\
 & OL-3 & 0.830 & - \\ \hline
\multicolumn{4}{c}{**Number of dead functions and saving in terms of performance**} \\
Page load time (ms) & OL-1 & 0.475 & - \\
 & OL-2 & 0.125 & - \\
 & OL-3 & 0.009 (*) & 0.336 (W) \\ \hline
First cont. paint (ms) & OL-1 & 0.432 & - \\
 & OL-2 & 0.335 & - \\
 & OL-3 & 0.775 & - \\ \hline
First paint (ms) & OL-1 & 0.187 & - \\
 & OL-2 & 0.134 & - \\
 & OL-3 & 0.972 & - \\ \hline
\multicolumn{4}{c}{**Number of dead functions and saving in terms of network usage**} \\
HTTP requests & OL-1 & 0.312 & - \\
 & OL-2 & 0.808 & - \\
 & OL-3 & 0.781 & - \\ \hline
Transferred bytes (Kb) & OL-1 & 4.11x10\({}^{-4}\) (*) & 0.456 (M) \\
 & OL-2 & 2.94x10\({}^{-6}\) (*) & 0.603 (M) \\
 & OL-3 & 8.16x10\({}^{-6}\) (*) & 0.575 (M) \\ \hline
\multicolumn{4}{c}{**Number of dead functions and saving in terms of resources usage**} \\
CPU usage (\%) & OL-1 & 0.886 & - \\
 & OL-2 & 0.022 (*) & 0.295 (W) \\
 & OL-3 & 0.225 & - \\ \hline
GPU usage (\%) & OL-1 & 0.643 & - \\
 & OL-2 & 0.592 & - \\
 & OL-3 & 0.116 & - \\ \hline
Memory usage (Mb) & OL-1 & 0.803 & - \\
 & OL-2 & 0.108 & - \\
 & OL-3 & 0.125 & - \\
\end{tabular}
\end{table} TABLE VII: Results of the further analysis – the (*) symbol denotes statistically-significant correlations
With this result, we provide evidence that removing JavaScript dead code is effective in reducing the network transfer of web apps. We can postulate that this outcome is relevant for _users_ and _web developers_ since transferring fewer bytes is definitely valuable when the network is either a scarce resource (e.g., due to low available bandwidth) or subject to paid/limited subscription plans (i.e., for 4G/5G connectivity). _Web developers_ could find this outcome interesting also because a small saving in the bytes transferred from the Cloud to a single client translates into a large saving (in terms of transferred bytes) when millions of clients are connected to the Cloud. _Researchers_ could be interested in studying to what extent the mentioned saving affects the _end-to-end_ energy consumption, i.e., the total energy needed to transfer bytes from the Cloud to the clients, including the consumption of network devices (e.g., switches and routers). This proposed line of research aims at deepening the results of RQ1, where we could not find evidence that the removal of JavaScript dead code affects the energy consumption of mobile devices (i.e., when considering the client side only).
As for the **number of HTTP requests and CPU, GPU, and memory usage**, we observed mixed outcomes between in-the-lab and in-the-wild web apps--e.g., we found that different Lacuna optimization levels have an impact on the number of HTTP requests when considering in-the-lab web apps, but not when considering in-the-wild ones. This suggests the possible existence of moderating variables that can diminish or hamper the impact of the Lacuna optimization levels on HTTP requests and CPU, GPU, and memory usage. This could be relevant to _researchers_, who could plan and execute further studies on such moderating variables (also by exploiting our replication package [14]).
Finally, by manually inspecting the source code of the subjects, we conjecture that the emerging results for **HTTP requests and transferred bytes** are due to the fact that web developers tend to statically import and declare JavaScript scripts in their web apps. Specifically, the vast majority of imported/declared scripts are still requested by the browser engine, even though they contain less source code and have a smaller transfer size (due to the dead code being eliminated). This observation highlights the importance of the _bundling technique_, where the number of HTTP requests made to the server is reduced by merging multiple JavaScript files; we advise _web developers_ to use existing tools for bundling the JavaScript code of their web apps, such as Webpack [80], gulp-bundle [81], and Browserify [82].
### _Threats to Validity_
We discuss the threats to validity according to Cook and Campbell's categorization [83].
#### 5.2.1 Construct validity
We mitigated potential construct validity threats by following well-known guidelines for experimentation in empirical software engineering [49, 50, 51, 52, 53] and by defining all details related to the design of the experiment (e.g., the goal, research questions, tools, variables, statistical, analysis procedures) before starting its execution.
#### 5.2.2 Conclusion validity
Since all the collected data do not follow a normal distribution, we utilized non-parametric tests. Additionally, we perform the Benjamini-Hochberg correction procedure to account for potential Type-I errors. Finally, we provide a publicly available replication package for independent verification of our findings.
#### 5.2.3 Internal validity
A possible threat to the internal validity of the experiment comes from the "maturation" of test subjects, leading them to behave differently across different experiment runs. To mitigate this possible threat, the following precautions have been adopted: (i) the measurement for each experiment trial (i.e., a subject-OL pair) has been repeated 20 times; (ii) the order of execution of the experiment runs has been randomized; (iii) the Chrome app has been cleared and reset before each run so to clean its cache, persisted data, and configuration; (iv) the USB charging of the smartphone is disabled during the execution of each run; (v) between each run the smartphone and the laptop remain idle for 2 minutes to account for tail energy usage [76]. Another potential threat to the internal validity comes from the usage of a software power profiler rather than a hardware measurement tool, potentially introducing errors in the measurements. However, the accuracy of the Trepn power profiler has been reported to be close to 99% [72].
#### 5.2.4 External validity
To ensure that our experimental subjects are representative of real-world web apps, _in-the-lab_ subjects have been complemented with _in-the-wild_ subjects. The latter have been sampled from the Tranco list, and constitute a sample of popular real-world websites that are heterogeneous from different perspectives (e.g., application domain, functionalities, size). At the same time, the sample of _in-the-lab_ subjects constitutes a varied set of JavaScript development frameworks. Another potential threat to the external validity comes from the usage of a single smartphone device, equipped with an older Android version, for the experiment execution, potentially harming the generalizability of obtained results. This was a forced choice, as the Trepn power profiler does not support newer Android versions. Nonetheless, Trepn is widely used in empirical studies on energy-efficient software [69, 70, 71] which provides confidence in the generalizability of its measurements. Finally, the results of our experiment are obtained by integrating into Lacuna two analyzers: Dynamic and TAJS. The experiment results cannot hold if we use other analyzers for call graph constructions. However, this is true for any choice of analyzer/s to be integrated into Lacuna. Since the experiment on the run-time overhead of JavaScript dead code is very expensive--i.e., 4 optimization levels x 30 subjects x 20 repetitions, in total 2,400 experiment runs--, it is unfeasible to test all the 127 configurations of Lacuna (resulting from the combinations of one to seven analyzers). Therefore, we considered just one Lacuna configuration (i.e., the one based on the joint use of Dynamic and TAJS), suggested by the results of the preparatory experiment (see Section 2.2.7). This choice, taken empirically, should allow for maximizing the benefits resulting from the removal of the dead code.
## 6 Related Work
Dead code (also known as unused code [17], unreachable code [20], and lava flow [16]) has been included in several code-smell catalogs [17, 16, 18] since it is claimed to have negative effects on source code comprehensibility and maintainability [20]. Researchers have investigated the claimed effects of dead code. For example, Romano _et al._[84] conducted a controlled experiment where part of the participants had to comprehend and then maintain a Java codebase containing dead code, while another part had to do the same in a codebase deprived of dead code. The authors found that dead code hinders source code comprehensibility, while they could not demonstrate the negative effects of dead code on source code maintainability. Later, Romano _et al._[19] replicated that experiment three times. The results confirm that dead code penalizes source code comprehensibility; also, they found that dead code negatively affects source code maintainability when developers work on unfamiliar source code. The most important difference between our paper and those introduced just before is that we focus here on the detection and elimination of JavaScript dead code from web apps, while these papers mostly focused on the detection of this smell and on studying its effects on Java desktop apps. Eder _et al._[8] conducted a case study on the modifications to dead methods in an industrial web app developed in .NET. Differently from us, the authors only considered dynamic information to detect dead code. In particular, Eder _et al._ monitored the execution of methods in a given time frame, and those methods not executed in that time frame were considered dead. In their case study, these authors observed that 48% of the modifications to dead methods were unnecessary (e.g., because dead methods were removed later). A similar finding was reported by Cassieri _et al._[85] in the context of Java desktop apps hosted on GitHub. This study is different from ours because the authors studied the presence of dead code in Java desktop apps and how developers deal with dead code (e.g., modify and remove it in software evolution tasks).
Researchers have also proposed dead code detection techniques to support developers who aim to remove dead code for refactoring reasons. Chen _et al._[86] proposed a data model for C++ software repositories supporting reachability analysis and dead code detection. Fard and Mesbah [20] presented JSNOSE, a metric-based technique for detecting smells, including dead code, in JavaScript code. JSNOSE marks a code block as dead if the EXEC metric or the RCH one is equal to zero. The EXEC metric relies on dynamic analysis to count the times a given code block is executed, while the RCH metric measures, by leveraging static analysis, the reachability of a given code block. Boomsma _et al._[7] proposed a dynamic technique for detecting dead code (dead files, in particular) in web apps written in PHP. This technique monitors the execution of a web app in a given time span to determine the usage of PHP files. A file is deemed as dead if it is not used in that period. The authors applied their technique on a subsystem, allowing the developers to remove 2,740 dead files (i.e., about 30% of the subsystem files). Romano _et al._[87] proposed DUM, a static technique for detecting dead code (dead methods, in particular) in Java desktop apps, which is based on a call-graph representation where nodes correspond to methods while directed edges correspond to _caller-callee_ relationships. The authors implemented DUM in an Eclipse plug-in, named DUM-Tool [88]. Romano and Scanniello [89] explored the use of RTA, an algorithm for call graph construction that is known to be fast and to approximate virtual method calls well [90], to detect dead code (dead methods, in particular) in Java desktop apps. To this end, they developed a tool, DCF, and evaluated its performance against the one of JTombstone, CodePro AnalytiX, and DUM-Tool. The results of this evaluation show that DCF outperforms the other tools in terms of precision and f-measure of the detected dead methods. As for the recall, DCF is comparable to DUM-Tool. Alabwaini _et al._[91] proposed a model, based on program slicing, for automatically removing dead code. In particular, they applied a program slicing technique to identify the slices of a program--any code involved in a slice was considered alive. The slices were then merged and any code not involved in a slice was discarded because it was considered dead. The research discussed just before approaches dead code from a refactoring perspective, while we are interested in evaluating the run-time overhead of JavaScript dead code in terms of energy consumption, performance, network usage, and resource usage in the context of web apps.
Researchers have also investigated dead code detection by taking an optimization perspective. Sunitha and Kumar [92] proposed a technique that combines copy propagation and dead code elimination by using hash-based value numbering to avoid executing unnecessary code--e.g., instructions that compute values not used in any execution path starting from them. Karer _et al._[93] conceived a dead code elimination technique for Java apps based on two steps. First, they converted Java source code into an SSA form--in this form, each variable is assigned exactly once statically. Second, they identified DU-chains to find variables with a definition but without any use during program execution. The found variables are then removed since they are considered dead. Kim _et al._[94] proposed a technique to efficiently remove dead code in SSA forms, hence obtaining faster and lighter Java bytecode. Wang _et al._[95] conceived a framework for detecting dead code based on the LLVM compiler infrastructure. The framework consists of three steps. It first translates the source code of the program into an LLVM intermediate representation, then a symbolic execution technique is applied to generate test cases. Finally, the framework combines static and dynamic slicing--the program is analyzed dynamically through the generated test cases--to detect dead code (in particular, dead statements). The proposed framework can be applied to programs written in any programming language as long as it is supported by the LLVM compiler infrastructure. The authors showed that, on five C programs, their framework detected, on average, about 94% of dead statements. Differently from these papers, we present here evidence also about how the presence of JavaScript dead code impacts web apps on Android devices in terms of energy efficiency, loading time, number and payload of HTTP requests, CPU, and memory usage. Vazquez _et al._[33] proposed a technique called UFFRemover, based on dynamic analysis, to aid developers in identifying and then removing dead functions from the dependencies of JavaScript apps. On the other hand, Lacuna
supports both static and dynamic analyses and it is also extensible. Vazquez _et al._ first gathered execution traces of the app being analyzed--for this purpose, the app can be run via its tests in the development environment or via user-app interactions in the production environment--so as to identify the functions that do not belong to any execution trace. These functions are then suggested to developers for removal because they are deemed dead. The authors applied their technique to 22 JavaScript apps and found that around 70% of the functions in the dependencies were dead.
In summary, we contribute in this paper to advance the state of the art on JavaScript dead code identification and elimination in several ways. We can summarize our most important contributions as follows: _(i)_ we designed and implemented an extensible approach for JavaScript dead code elimination on which third-party analysis techniques can be reused and integrated and _(ii)_ we provide evidence about how JavaScript dead code impacts web apps on Android devices in terms of energy efficiency (slight positive impact), loading time (statistically-significant positive impact), number and payload of HTTP requests (statistically-significant positive impact), CPU and memory usage (mixed results).
## 7 Conclusions and Future Work
In this paper, we present Lacuna, an approach for automatically eliminating JavaScript dead code from web apps. By building on Lacuna, we conducted an empirical evaluation of the run-time overhead of JavaScript dead code in terms of energy consumption, performance, network usage, and resource usage in the context of 30 third-party web apps running on a real Android smartphone. The obtained results lead to relevant implications for users, researchers, and web developers.
As future work, we are planning to extend the formalization of the CG so as to distinguish (and treat differently) between edges that are surely navigated at run-time (e.g., those identified via dynamic analysis) and those that are navigated with a certain probability (e.g., those identified by a static analyzer). We will also expand the CG with the notion of JavaScript module to distinguish between internal, imported, and exported functions and treat them differently while building the CG. Finally, we will integrate additional analysis techniques and tools into Lacuna.
## Acknowledgments
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 871342 "uDEVOPS".
We would like to thank Christos Petalotis and Luka Krumpak, both students of the Vrije Universiteit Amsterdam, for their invaluable help in the external evaluation of Lacuna.
|
2309.06613 | Unsupervised Learning of Nanoindentation Data to Infer Microstructural
Details of Complex Materials | In this study, Cu-Cr composites were studied by nanoindentation. Arrays of
indents were placed over large areas of the samples resulting in datasets
consisting of several hundred measurements of Young's modulus and hardness at
varying indentation depths. The unsupervised learning technique, Gaussian
mixture model, was employed to analyze the data, which helped to determine the
number of "mechanical phases" and the respective mechanical properties.
Additionally, a cross-validation approach was introduced to infer whether the
data quantity was adequate and to suggest the amount of data required for
reliable predictions -- one of the often encountered but difficult to resolve
issues in machine learning of materials science problems. | Chen Zhang, Clémence Bos, Stefan Sandfeld, Ruth Schwaiger | 2023-09-12T21:45:33Z | http://arxiv.org/abs/2309.06613v1 | # Unsupervised Learning of Nanoindentation
###### Abstract.
In this study, Cu-Cr composites were studied by nanoindentation. Arrays of indents were placed over large areas of the samples resulting in datasets consisting of several hundred measurements of Young's modulus and hardness at varying indentation depths. The unsupervised learning technique, Gaussian mixture model, was employed to analyze the data, which helped to determine the number of "mechanical phases" and the respective mechanical properties. Additionally, a cross-validation approach was introduced to infer whether the data quantity was adequate and to suggest the amount of data required for reliable predictions - one of the often encountered but difficult to resolve issues in machine learning of materials science problems.
unsupervised learning, cross-validation, Gaussian mixture model, CuCr composite, mechanical properties, nanoindentation
## 1. Introduction
Nanoindentation is a powerful technique for characterizing the mechanical properties of materials at small length scales (Oliver and Pharr, 2004; Liu et al., 2018). For analyzing nanoindentation data, data analysis and machine learning techniques have gained importance in recent years. A CNN-based classifier (Kossman and Bigerelle, 2021) was developed to identify whether pop-in events are present in the load-displacement curves from nanoindentation tests. The classifier achieved an accuracy of approximately 93%, which helped understand the process that created pop-ins. In another study (Vignesh et al., 2019), the phase level features were extracted from spatial hardness and elastic modulus maps using a deconvolution method based on the k-means clustering algorithm. Furthermore, a Gaussian mixture model (GMM) (Sorelli et al., 2008) was employed to develop the fiber-reinforced ultra-high performance concrete. To characterize the nano-mechanical properties of the phases governing the microstructure, the experimental probability distribution functions (PDFs) of indentation modulus and indentation hardness were deconvolved; this unsupervised learning approach has also proven beneficial for mapping the mechanical properties of
complex surfaces in two dimensions by deconvolving the experimental cumulative distribution functions of indentation modulus and hardness for naval brass, cast iron, Ti64-10TiC alloy, and M3 high speed steel (Randall et al., 2009).
The GMM technique can be implemented in a wide range of materials science fields. It has been applied to the automated analysis and visualization of continuum fields in atomistic simulations, where individual grain's distributions of total strain, elastic strain, and rotation are extracted as key features (Prakash and Sandfeld, 2022). It has also been used to analyze the so-called _grain orientation spread_ from electron back-scattered diffraction, which measures intragranular lattice distortion. Three zones were separated from the GMM results, representing the recrystallized zone, mixture zone and relict zone, eliminating the ambiguity in the selection of the cut-off threshold when identifying the recrystallized grains (Yeo et al., 2023). It has also been applied to high resolution high-angle annular dark-field scanning transmission electron microscopy data on counting the number of atoms of monotype crystalline nanostructures, assuming that the total scattered intensity is proportional to the number of atoms per atom column (De Backer et al., 2013). In addition, it has been utilized in X-ray diffraction investigations, enabling automatic extraction of charge density wave order parameters and detection of intraunit cell ordering and its fluctuations from a series of high-volume X-ray diffraction measurements taken at multiple temperatures (Venderley et al., 2022). Using nanoindentation data to investigate the distribution of heterogeneous materials has a great potential, provided the GMM technique is applied properly and a sufficient amount of data is available.
In this work, the properties of the phases of two different Cu-Cr composites and the volume fractions of the constituent phases were explored using a clustering technique based on the Gaussian mixture model, and the model results were validated. Initial tests in this study were conducted at a depth of 1 \(\upmu\)m on single-phase pure Cu and pure Cr. Our methodology was then applied at the same depth to Cu-Cr composites with a nominal Cr content of 25 and 60 wt%. Additional datasets were included in the analysis to investigate the effect of data size on model performance.
## 2 Materials and Methodology
### Experimental details
In this study, we have evaluated different Cu-Cr composites containing 25 wt% Cr and 60 wt% Cr corresponding to 29.95 at% and 64.40 at% Cr, respectively, as well as Cu and Cr as reference samples. All materials were produced via field-assisted sintering technique (FAST) as described in detail in (von Klinski-Berger, 2015). Briefly, Cu powder with 99.9 at% purity and technically pure Cr powder (99.5 at%) were used and compacted at a temperature of 950 °C and a pressure of 40 MPa, except for the Cr sample that was compacted at a temperature of 1450 °C. The composite samples (Figure 1) will be referred to as CuCr25 and CuCr60 according to their nominal compositions. For indentation testing, the sample surfaces were prepared using standard grinding and polishing techniques using SiC paper and diamond suspensions with decreasing grain size down to 0.1 \(\upmu\)m.
A nanoindenter G200 XP (Agilent/Keysight Technologies, Inc., CA, USA) equipped with a diamond Berkovich tip was used to investigate the mechanical properties of the composite samples. The samples were indented to different depths using the so-called Express Test option. Arrays of indents covering areas up to 500 \(\upmu\)m x 500 \(\upmu\)m were made in different locations on the sample surface. The indentation depths ranged from 300 nm to 1000 nm and the distance between individual indents was between 20 and 23 \(\upmu\)m for all depths. Hardness \(H\) and Young's modulus \(E\) were determined assuming 1141 GPa and 0.07 for Young's modulus and Poisson's ratio, respectively, of the diamond tip.
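For clarity, we recall how the diamond-tip parameters enter the analysis: in the standard Oliver-Pharr evaluation, the sample modulus \(E\) is obtained from the measured reduced modulus \(E_{r}\) via the textbook relation (this reminder is ours and requires an assumed Poisson's ratio \(\nu\) for the sample)

\[\frac{1}{E_{r}}=\frac{1-\nu^{2}}{E}+\frac{1-\nu_{i}^{2}}{E_{i}},\qquad E_{i}=1141\,\mathrm{GPa},\ \nu_{i}=0.07,\]

where the subscript \(i\) refers to the diamond indenter.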
Figure 2 shows an example distribution of hardness and Young's modulus of CuCr60 for an indentation depth of 1 \(\upmu\)m. The marginal hardness distribution is a bimodal distribution, while a trimodal distribution can be seen for the marginal histogram of Young's modulus.
### Clustering based on the Gaussian mixture model
The Cu-Cr composites consisted of two distinct phases with average Cr particle diameters of approximately 30 \(\upmu\)m on the indented surface; thus, upon indenting the surface we assume that either one or the other element dominates or the properties of a mixture of both phases are measured. However, the "mixture of elements" might well be more than "just the sum of its parts" since additional effects, e.g., related to the presence of interfaces, might occur during indentation. Furthermore, we assume that similar
Figure 1: Optical micrographs of Cu-Cr composites produced by field-assisted sintering technique with (a) 25 wt% Cr, (referred to as CuCr25), and (b) 60 wt% Cr (referred to as CuCr60) were investigated applying the grid indentation technique.
Figure 2: Scatter plot with marginal histograms of all obtained values for Young’s modulus and hardness measured for CuCr60.
local microstructural or chemical properties lead to similar measurement data and that the mechanical properties exhibit gradual changes over the surface.
Thus, the data scientific task is to analyze a number of data records consisting of the given feature variable \(E\) (Young's modulus) and the feature \(H\) (hardness), given for four different materials and at different depths, i.e., 300 nm, 400 nm and 1000 nm. The whole dataset \(\mathcal{D}\) consisting of \(N\) data records can be written as the set of pairs
\[\mathscr{D}=\left\{\left(E_{i},H_{i}\right)\right\}_{i=1\ldots N}. \tag{1}\]
The goal is to perform a clustering analysis, during which data points with similar elastic properties are grouped together, i.e., for each pair \(\left(E_{i},H_{i}\right)\) in the feature space, we determine the categorical variable \(y_{i}\) that gives the number of the respective cluster. Additionally, the total number of different clusters needs to be determined. Since annotated training data does not exist, this has to be done in an unsupervised manner.
For the clustering, we use the Gaussian mixture model (GMM), which is a robust and well-established probabilistic clustering model in the statistics literature (see, e.g., (Bishop and Nasrabadi, 2006; Reynolds et al., 2009; Reynolds and Rose, 1995)). Each different "phase" is assumed to correspond to an individual Gaussian distribution of Young's Modulus and hardness. These materials "phases" are commonly referred to as _components_ in the context of machine learning. We assume that the distribution of experimental data was generated by a combination of Gaussian processes, which are represented by the probability density functions (PDFs) \(\mathcal{N}\) for each component \(j\). The resulting superposition is then given by
\[p\left(\mathscr{D};\boldsymbol{\Phi}\right)=\sum_{j=1}^{k}\alpha_{j}\mathcal{N }\left(\mathscr{D}\ \mid\boldsymbol{\theta}_{j}\right)\, \tag{2}\]
where \(k\) is the number of components of the model, \(\alpha_{j}>0\) are the weights of each component \(j\), the \(\boldsymbol{\theta}_{j}\) are the vectors of parameters for the Gaussian and \(\boldsymbol{\Phi}=\left\{\alpha_{1},...,\alpha_{k},\boldsymbol{\theta}_{1},...,\boldsymbol{\theta}_{k}\right\}\) is a short notation for the whole set of parameters governing the Gaussian mixture model. For a multivariate Gaussian, the component \(j\) of the superimposed function is given by the parameters \(\boldsymbol{\theta}_{j}=\left\{\boldsymbol{\mu}_{j},\boldsymbol{\Sigma}_{j}\right\}\), where \(\boldsymbol{\mu}_{j}\) and \(\boldsymbol{\Sigma}_{j}\) are the mean value and the covariance matrix, respectively. \(\boldsymbol{\mu}_{j}\) describes the location of the component \(j\) in the feature space, while the covariance matrix \(\boldsymbol{\Sigma}_{j}\) characterizes the \(j\)-th component data distributed around \(\boldsymbol{\mu}_{j}\). The objective of the training process is to estimate the values for the model parameters of the Gaussian distribution that best align with the training data \(\mathscr{D}^{train}\subset\mathscr{D}\). The model parameters \(\boldsymbol{\Phi}_{k}\) are iteratively determined while assuming the number of clusters \(k\) to be predefined. Typically, the superposition of Gaussians is computed using the maximum likelihood method based on a set of candidate models that differ in the number of clusters generated using the expectation maximization algorithm. For further details of the algorithm used, please refer to the appendix section 1.
The Bayesian Information Criterion (\(\mathrm{BIC}\)) is employed as a selection criterion to identify the optimal model. It is widely recognized and applied in model selection (Gideon, 1978), providing a measure for assessing the accuracy of the unsupervised GMM:
\[\mathrm{BIC}=-2\ln\mathcal{L}+d\ln N \tag{3}\]
where \(d\) denotes the number of parameters of the model, and \(\mathcal{L}\) is the maximum likelihood achieved by the model used. The first term represents the maximized likelihood of a model, and the second term introduces a penalty for the number of parameters to mitigate the risk of overfitting. The model with the lowest \(\mathrm{BIC}\)
value indicates the highest likelihood and better predictive capability for the observed data. In this study, the GMM has been implemented using the open-source Python package scikit-learn (Pedregosa et al., 2011).
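A minimal sketch of this model-selection loop with scikit-learn is shown below; the function name, the candidate range of components, and the fixed random seed are illustrative assumptions rather than the exact script used for the analysis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_best_gmm(data: np.ndarray, k_max: int = 8, seed: int = 0):
    """Fit Gaussian mixture models with 1..k_max components and return the
    model with the lowest BIC together with all BIC values.
    `data` has shape (N, 1) for the 1D case (E or H) and (N, 2) for the
    joint (E, H) case."""
    candidates, bics = [], []
    for k in range(1, k_max + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=seed).fit(data)
        candidates.append(gmm)
        bics.append(gmm.bic(data))
    best = candidates[int(np.argmin(bics))]
    return best, np.asarray(bics)


# Example usage, where E and H are 1D arrays of cleaned measurements:
# best_model, bic_values = fit_best_gmm(np.column_stack([E, H]))
# weights, means = best_model.weights_, best_model.means_
```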
## 3 Results and Discussion
In the following, we will start with the data cleaning process and the analysis of pure metals using a 1D Gaussian mixture model. Then, the CuCr25 and CuCr60 composites are investigated using both 1D and 2D GMM for an indentation depth of 1 \(\upmu\)m, with a detailed discussion of the impact of sample size on model robustness. Finally, we will evaluate the effect of indentation depth on the 1D GMM results for the depth range of 200 nm to 600 nm.
### Preparation of the datasets
Figure 3**A** illustrates the original data sets comprising the measured \(E\) and \(H\) values of the four materials tested. Due to variations in the height of compacted particles on the polished surface, the actual recorded maximum indentation depths ranged from 600 nm to 2000 nm, deviating from the nominal 1000 nm. We focused our analysis on depths ranging from 800 nm to 1200 nm. Subsequently, the data underwent cleaning and filtering processes to remove measurement errors, particularly outliers with unrealistically high values as well as other invalid data. For CuCr25, data within the ranges of \(100\leq E\leq 400\,\mathrm{GPa}\) and \(0.8\leq H\leq 4.5\,\mathrm{GPa}\) were retained, while for CuCr60 the data range was \(100\leq E\leq 500\,\mathrm{GPa}\) and \(1.0\leq H\leq 5.0\,\mathrm{GPa}\). The resulting cleaned data, comprising approximately 98% of the original data, is presented in Figure 3**B**. Notably, the distributions of the two pure metals exhibit lower variances compared to the Cu-Cr composites. Further preprocessing of the data, such as standardization, was found to have no significant impact on the training results and were therefore not included in this study.
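The depth and property filters described above can be expressed compactly; the following snippet is a sketch of this cleaning step, assuming the raw measurements are stored in a pandas data frame with columns `depth_nm`, `E_GPa`, and `H_GPa` (these column names are ours, chosen for illustration).

```python
import pandas as pd


def clean_measurements(df: pd.DataFrame, e_range, h_range,
                       depth_range=(800, 1200)) -> pd.DataFrame:
    """Keep only indents within the analyzed depth window and within the
    plausible ranges of Young's modulus and hardness."""
    mask = (
        df["depth_nm"].between(*depth_range)
        & df["E_GPa"].between(*e_range)
        & df["H_GPa"].between(*h_range)
    )
    return df[mask]


# CuCr25: 100 <= E <= 400 GPa and 0.8 <= H <= 4.5 GPa
# cucr25_clean = clean_measurements(cucr25_raw, (100, 400), (0.8, 4.5))
# CuCr60: 100 <= E <= 500 GPa and 1.0 <= H <= 5.0 GPa
# cucr60_clean = clean_measurements(cucr60_raw, (100, 500), (1.0, 5.0))
```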
Figure 3: The experimentally obtained distributions of Young’s modulus and hardness for 0 wt%, 25 wt%, 60 wt% and 100 wt% Cr content. (**A**) The experimental data in its original form, (**B**) The cleaned and preprocessed data set used in our analysis. The rectangle in **A** represents the region illustrated in **B**.
### Mechanical properties of the pure Cu and pure Cr specimens
The mechanical properties of pure Cu and pure Cr were analyzed for the indentation depth of 1 \(\upmu\)m. The PDF plots of \(E\) and \(H\) are depicted in Figure 4**A-E** showing the mean values of \(E=118.80\,\mathrm{GPa}\) and \(H=0.91\,\mathrm{GPa}\) for pure Cu, and \(E=371.24\,\mathrm{GPa}\) and \(H=3.21\,\mathrm{GPa}\) for pure Cr. The bin size was selected to ensure an approximately equal number of bins covering the range of all PDFs. To utilize the GMM, it is important to demonstrate that the distributions are roughly normally distributed, despite the GMM's inherent robustness. Thus, the Shapiro-Wilk test was employed to assess the normality of the distributions (Oztuna et al., 2006), resulting in \(p\)-values of 0.95 for \(E\) and 0.14 for \(H\) in pure Cu, and 0.95 for \(E\) and 0.14 for \(H\) in pure Cr. The test accepts the normality hypothesis when the \(p\)-value exceeds 0.05, confirming that the data are sufficiently normally distributed. The optimal number of components was 1 for both the Cu and Cr specimens (as shown in Figure 4**C** and Figure 4**F**), as determined by the BIC analysis. Regarding the determination of the number of components, the result of the 1D GMM for \(E\), with its high \(p\)-values, is therefore more reliable than the fit of \(H\). Nonetheless, because \(H\) is strongly correlated with local microstructure details, such as grain boundaries and dislocation pile-ups (Oliver and Pharr, 1992, 2004), the fitting result of \(H\) can be used to infer local microstructural characteristics. This can also be seen from the small variation at the beginning of the BIC plot for hardness in Figure 4**F**, where the BIC values for \(k=2\) differ only marginally from the one at \(k=1\) that we had identified as the optimal one.
### Mechanical properties of the CuCr25 and CuCr60 specimens
The histograms of \(E\) and \(H\), together with their best-fit models are shown in Figure 5**A** and Figure 5**D** (left panel) and Figure 5**B** and Figure 5**E** (middle panel). The right panel (Figure 5**C** and Figure 5**F**) shows the BIC as a function of the number of components.
Figure 4: Probability density functions (left and middle column) and plot of BIC (right column) of pure Cu and Cr. The BIC values are shown for both, Young’s modulus (left “\(y\)” axis) and hardness (right “\(y\)” axis). The top row shows the data for Cu, the bottom row for Cr.
The BIC values for both the modulus and the hardness cover a range of around \(500\), and even \(\approx 200\) for hardness, excluding \(k=1\). Though not obvious, this is an important difference. Taking CuCr60 for instance, there are \(N\approx 300\) valid data points for \(E\) determined at a depth of 1 \(\upmu\)m. Assuming an optimal number of clusters of \(k=3\), the parameters to be estimated in a 1-dimensional analysis are (i) three mean values (\(\mu_{1},\mu_{2},\mu_{3}\)), (ii) three variance values (\(\sigma_{1}^{2},\sigma_{2}^{2},\sigma_{3}^{2}\)), and (iii) two coefficients that determine the relative weights of the three Gaussians (\(\alpha_{1},\alpha_{2}\)). Given that the sum of all weights should be 1, \(\sum_{j=1}^{k}\alpha_{j}=1\), and \(\alpha_{3}=1-(\alpha_{1}+\alpha_{2})\), the number of parameters to be determined is \(d=8\).
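More generally, the parameter count of a 1D Gaussian mixture with \(k\) components follows the same bookkeeping (a short generalization added here for clarity):

\[d=\underbrace{k}_{\text{means}}+\underbrace{k}_{\text{variances}}+\underbrace{k-1}_{\text{weights}}=3k-1,\]

which reproduces \(d=8\) for \(k=3\) and enters the BIC through the penalty term \(d\ln N\).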
Are the differences in BIC values large or small? To answer this question, we need to understand how much variation results from a small change in the dataset. The calculated values of the logarithm of \(\mathcal{L}\) (cf. Equation (3)) of this dataset are between \(-10\) and \(-4\). According to Equation (3), the first term varies between \(-2\times-4=8\) and \(-2\times-10=20\) and the second term is \(8\times\ln 300\approx 45\). The sum of the two terms lies between 53 and 65. This indicates, assuming a minor variation in 1 out of 300 data points, that the BIC variation should fall between 53 and 65. In other words, the BIC very effectively captures changes in the data. If the BIC difference between the two models for this dataset is greater than \(\approx 50\), it already suggests that the corresponding number of components \(k\) is indeed the more likely one. In this example, the difference between a BIC value for \(k\) and \(k+1\) is slightly smaller. However, taking a look at larger values of \(k\), there is a very clear trend that points to the minimum. In conclusion, our 1D GMM analysis of Young's modulus reveals that both alloys have three mechanical phases at 1 \(\upmu\)m depth. Table 1 summarizes the results of the Gaussian mixture models evaluated for the four different materials.
As stated above, our GMM analysis of CuCr25 shows the presence of three mechanical phases as well as differences in the properties of nominally identical phases. In CuCr25, for example, the Cu-rich phase accounts for 64.5 vol% (defined by \(E_{1}\) as the lowest value, closest to the value of pure Cu), while \(E_{2}\) and \(E_{3}\) account for a total of 35.5 vol%. These results indeed coincide well with the 35.5 vol% Cr estimated by
Figure 5: 1D GMM results of CuCr25 and CuCr60 at 1 \(\upmu\)m indentation depth. CuCr25: **(A)** histogram of \(E_{i}\) and the best fit (solid line), **(B)** histogram of \(H_{i}\) and the best fit (solid line), **(C)** BIC of \(H\) and \(E\); CuCr60: **(D)** histogram of \(E_{i}\) and the best fit model, **(E)** histogram of \(H_{i}\) and the best fit model, **(F)** BIC of \(H\) and \(E\).
optical microscopy (Bos, 2019). The lowest modulus value for the Cu phase was found in the compacted Cu sample (i.e. \(E_{1}\)), followed by \(E_{1}\) of CuCr60 and CuCr25. \(E_{3}\) is Young's modulus of the Cr-rich phase (with \(E_{3}\) being the highest, closest to the modulus value of pure Cr). The highest modulus of the Cr phase (i.e. \(E_{1}\)) was found for the compacted Cr sample, followed by the \(E_{3}\) values of the CuCr60 and the CuCr25 samples.
These differences in the mechanical properties fitted by GMM are likely related to the diffusion of one phase into the other, the presence of foreign particles or pores, or the influence of the surrounding material. Assuming that a small amount of Cr, i.e. 0.4 at% - 3 at%, can be dissolved in the Cu matrix (Jacob et al., 2000; Chakrabarti and Laughlin, 1984), the Cr solid solution likely contributes to the difference of the modulus of the Cu phase in the composite samples. In addition, the presence of Cr nanoparticles of 100 - 200 nm in size was reported (von Klinski-Berger, 2015), which as well results in a higher Young's modulus value of the Cu phase in CuCr25 and CuCr60. Finally, considering the relative densities of 99.4 % and 98.3 % for CuCr25 and CuCr60, respectively, compared to 97.7 % for the pure Cu sample, a Young's modulus value with an estimated reduction up to \(9\%\)(Lebedev et al., 1995) can be expected. The reduction of the modulus of the Cr phase in CuCr25 (i.e. \(E_{3}\)) can be explained by the surrounding softer Cu phase. While in general also the hardness fitting of CuCr25 supports the presence of three mechanical phases, the percentage results are different. This is not surprising, though, since hardness is a local property that is strongly related to the microstructure including grain and phase boundaries as well as dislocations.
After discussing the 1D GMM analysis, we now take a look at the outcomes of the 2D GMM with independent feature variables \(E\) and \(H\) shown in Figure 6. The distribution of three and four components or mechanical phases of CuCr25 and CuCr60, respectively, are shown in Figure 6**A**-**B** and Figure 6**D**-**E**. The red points in the graphs represent the average value of each component, while the concentric ellipses
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
Cr [wt\%] & F\({}_{3}\)/F\({}_{2}\)/F\({}_{1}\)/F\({}_{0}\) & \multicolumn{4}{c}{1D} \\ \cline{3-6}
 & & Mechanical property & Average [GPa] & Standard deviation & Percentage [\%] \\ \hline
0 (pure Cu) & 30/30/33/35 & \(E_{1}\) & 118.80 & 9.45 & 100 \\
 & & \(H_{1}\) & 0.91 & 0.06 & 100 \\ \hline
25 (CuCr25) & 513/520/555/675 & \(E_{1}\) & 145.55 & 14.12 & 64.5 \\
 & & \(E_{2}\) & 226.50 & 38.42 & 22.6 \\
 & & \(E_{3}\) & 337.02 & 31.69 & 12.9 \\
 & & \(H_{1}\) & 1.14 & 0.09 & 50.1 \\
 & & \(H_{2}\) & 1.47 & 0.23 & 22.6 \\
 & & \(H_{3}\) & 2.92 & 0.62 & 27.3 \\ \hline
60 (CuCr60) & 366/376/427/675 & \(E_{1}\) & 177.35 & 24.17 & 52.2 \\
 & & \(E_{2}\) & 272.17 & 29.75 & 22.4 \\
 & & \(E_{3}\) & 379.32 & 34.34 & 25.4 \\
 & & \(H_{1}\) & 1.45 & 0.22 & 50.0 \\
 & & \(H_{2}\) & 2.96 & 0.56 & 50.0 \\ \hline
100 (pure Cr) & 31/35/35/35 & \(E_{1}\) & 371.24 & 10.54 & 100 \\
 & & \(H_{1}\) & 3.21 & 0.10 & 100 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: 1D mechanical property fitting based on the optimal BIC results at 1 \(\upmu\)m depth
with different orientations (covariance) represent the different components. The model selection criteria BIC are shown as a function of the number of components in the right panel (Figure 6**C** and Figure 6**F**).
Based on Figure 6**C**, one could say that four mechanical phases are the ideal match for the CuCr25 composite. Given the anomaly in the upper left corner of Figure 6**B**, though, the best assumption remains at three, which will be further discussed in Section 3.4. As shown in Table 2, in the 2D GMM, which combines both \(E\) and \(H\), the estimated amount of Cr in CuCr25 was 38.9 vol%, which is close to the actual experimental findings (Bos, 2019).
The 2D GMM analysis (in Figure 6**F**) reveals that also CuCr60 contains three mechanical phases at 1 \(\upmu\)m depth. The fitted result for the volume of Cr in 1D is 47.8 vol% based on the modulus values, which is less than the nominal value (i.e., 65 vol% Cr). By contrast, the 2D GMM result indicates a Cr volume fraction of 56.6 vol% (Table 2). The difference between the 1D GMM (\(E\)) and 2D GMM results is related to the \(H\), while the difference between the fitting results and the experimental data is due to the amount of data and
\begin{table}
\begin{tabular}{c c c c c} \hline \multirow{2}{*}{Cr [wt\%]} & \multirow{2}{*}{F\({}_{3}\)/F\({}_{2}\)/F\({}_{1}\)/F\({}_{0}\)} & \multicolumn{3}{c}{2D} \\ \cline{3-5} & & Average \(E\) & Average \(H\) & Percentage \\ & & [GPa] & [GPa] & [\%] \\ \hline \multirow{3}{*}{25 (CuCr25)} & \multirow{3}{*}{513/520/555/675} & 144.91 & 1.17 & 61.1 \\ & & 220.92 & 2.25 & 27.1 \\ & & 340.52 & 3.18 & 11.8 \\ \hline \multirow{3}{*}{60 (CuCr60)} & \multirow{3}{*}{366/376/427/675} & 172.32 & 1.42 & 43.4 \\ & & 262.63 & 2.67 & 34.6 \\ \cline{1-1} & & 383.35 & 3.02 & 22.0 \\ \hline \end{tabular}
\end{table}
Table 2: 2D mechanical property fitting based on the optimal BIC results at 1 \(\upmu\)m depth
Figure 6: 2D Gaussian mixture model clustering of CuCr25 and CuCr60. CuCr25: (A) Three components. (B) Four components and **(C)** 2D BIC; CuCr60: (D) Three components (E) Four components and (F) 2D BIC. The ellipses in A, B, D, and E are isolines of the Gaussian distributions and the red points represent the average values of the different components.
variation of microstructures over the samples. Note that the number of datapoints for CuCr60 is 40% less than that for CuCr25; insufficient data can be the source of inaccuracies, which we are addressing in the following.
In the above analysis, the dataset comprised 1,844 datapoints collected over an area of 500 x 500 \(\upmu\)m\({}^{2}\). To increase the size of the dataset, we merged it with two more nanoindentation areas (100 x 100 \(\upmu\)m\({}^{2}\) and 300 x 300 \(\upmu\)m\({}^{2}\)) and analyzed the data using the same procedures described above. The results are summarized in Table 3 and Table 4. The merged dataset covers indentation depths ranging from 500 nm to almost 2000 nm, with 97.8% of the data lying between 800 and 1200 nm. As shown in Figure 7**A**, the distribution ranges of the data coincide well, indicating that the microstructure of the material was comparable over the different areas indented.
As shown in Figure 7**B**, the most likely number of phases determined by analyzing the Young's modulus is three, while two phases are most probable when analyzing the hardness. The 2D fit, though, also indicates \(k=3\), as can be seen in Figure 7**C**. The individual 2D fits for the 300 x 300 \(\upmu\)m\({}^{2}\) and 500 x 500 \(\upmu\)m\({}^{2}\) areas each reflect three mechanical phases, with 67.1 vol% Cr and 59.5 vol% Cr, respectively. Not unexpectedly, the merged data then yielded a Cr fraction of 61.5 vol%. The somewhat different results reflect the variation of the microstructure over the sample surface and underscore the importance of identifying characteristic microstructures or increasing the size of the dataset.
### Cross-validation of GMM
Data size is a common concern for clustering algorithms in machine learning. In contrast to the model selection based on the BIC used for the GMM above, the following discussion is intended to verify the robustness and validity of the clustering results. Cross-validation can be used to evaluate the effect of data size on the clustering results and to identify the amount of experimental data required to achieve the same level of performance. We applied the following procedure:
Figure 7: (A) Distribution of the CuCr60 data over the depth range of 500 nm - 2000 nm (combined datasets for indentation arrays of 100 \(\upmu\)m x 100 \(\upmu\)m, 300 \(\upmu\)m x 300 \(\upmu\)m, and 500 \(\upmu\)m x 500 \(\upmu\)m size) **(B)** 1D BIC for \(E\) and \(H\), and **(C)** 2D BIC for \(E\) and \(H\).
1. Given the whole dataset \(\mathcal{D}\), a clustering algorithm \(A_{k}\) generates a categorical variable \(y_{i}\) for each datapoint, i.e. it constructs a solution \(\mathbf{Y}:=A_{k}(\mathcal{D})\) with \(\mathbf{Y}=\{y_{1},...,y_{N}\}\), where \(y_{i}\in\{1,...,k\}\) is the cluster assigned to datapoint \(i\). In our case, labels \(y_{i}\) were generated using the KMeans and GMM clustering algorithms, with the prediction of the optimal GMM algorithm serving as the ground truth.
| Indentation area [\(\upmu\)m x \(\upmu\)m] | Average \(E\) [GPa] | Average \(H\) [GPa] | Percentage [%] |
| --- | --- | --- | --- |
| 100x100 | 252.19 | 2.46 | 78.4 |
| | 393.06 | 3.27 | 21.6 |
| 150x150 | 159.47 | 1.39 | 32.8 |
| | 229.10 | 2.29 | 31.9 |
| | 337.91 | 3.51 | 35.2 |
| 250x250 | 172.25 | 1.41 | 40.5 |
| | 267.99 | 2.74 | 38.1 |
| | 383.12 | 3.01 | 21.4 |
| merged data | 170.49 | 1.42 | 38.5 |
| | 264.73 | 2.78 | 40.4 |
| | 379.28 | 3.70 | 21.1 |

Table 4: 2D mechanical property fitting of CuCr60 based on the optimal BIC results at 1 \(\upmu\)m depth
2. \(k\)-fold cross-validation is conducted on the dataset \(\mathcal{G}\): first, divide the dataset into \(k\) equal parts (the "folds"), then choose \((k-1)\) folds for training and the remaining fold for testing. Train the model on the training set \(\mathcal{G}^{train}\) and validate it on the test set \(\mathcal{G}^{test}\). Perform \(k\) rounds of cross-validation so that each fold serves as the test set once.
3. The adjusted Rand score is used to ensure the robustness and validity of the clustering results. A score of 1 for the adjusted Rand score indicates complete agreement between the two clusters (Chacon and Rastrojo, 2023).
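The procedure above maps directly onto standard library routines. The following Python sketch illustrates steps 1-3 with scikit-learn; the file name, feature columns, and range of candidate component numbers are placeholder assumptions for illustration and do not reproduce the exact analysis settings used here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

# Placeholder input: one row per indent, columns (E [GPa], H [GPa]) at ~1 um depth.
data = np.loadtxt("cucr60_EH.txt")  # hypothetical file name

# Step 1: reference labels from the BIC-optimal GMM.
candidates = range(1, 8)
bic = [GaussianMixture(n_components=k, random_state=0).fit(data).bic(data) for k in candidates]
k_opt = candidates[int(np.argmin(bic))]
y_ref = GaussianMixture(n_components=k_opt, random_state=0).fit(data).predict(data)

# Steps 2-3: k-fold cross-validation scored with the adjusted Rand score against the
# reference labels (the adjusted Rand score is invariant under permutations of labels).
def cv_scores(make_model, n_splits=5):
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(data):
        model = make_model().fit(data[train_idx])
        scores.append(adjusted_rand_score(y_ref[test_idx], model.predict(data[test_idx])))
    return np.array(scores)

gmm_scores = cv_scores(lambda: GaussianMixture(n_components=k_opt, random_state=0))
km_scores = cv_scores(lambda: KMeans(n_clusters=k_opt, n_init=10, random_state=0))
print("GMM   :", gmm_scores.mean(), "+/-", gmm_scores.std())
print("KMeans:", km_scores.mean(), "+/-", km_scores.std())
```

Varying the size of `data` (e.g. by subsampling in increments of 50 datapoints) and the number of folds reproduces the type of convergence study shown in Figure 8.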
Figure 8**A** shows the scores for CuCr60 obtained through \(k\)-fold cross-validation using two clustering algorithms with 366 datapoints at 1 \(\upmu\)m depth. GMM outperformed KMeans in predicting the individual phase content of the CuCr composite. Figure 8**B** shows the scores for varying data sizes after the \(k\)-fold cross-validation (\(k=2,3,...,7\)) with the GMM model for CuCr60 data at 1 \(\upmu\)m depth. The total number of datapoints was increased in increments of 50 until 550 datapoints were reached. The standard deviation of the scores decreased and their mean increased as the number of valid datapoints grew, indicating improved accuracy. In our case, the model's fit accuracy was considered reliable only when the lowest score was above 0.95 and the standard deviation smaller than 0.05. This observation explains the higher accuracy of the CuCr60 results using 574 datapoints compared to 366 datapoints, confirming the viability of the two-dimensional GMM model for predicting the Cr content.
Figure 9 shows the outcomes of \(k\)-fold cross-validation of CuCr25 with different clusters. Based on the 2D GMM analysis described above, the optimal model was \(k=3\). However, we also notice that the BIC values determined for \(k=4\) and \(k=5\) are comparable to those determined for \(k=3\) (Figure 6**C**). When the BIC values do not indicate a significant difference, the \(k\)-fold cross-validation can certainly provide a hint as to which model is superior. In the case of \(k=3\), the adjusted Rand scores increased with the amount of data, whereas with other numbers of clusters the scores did not exhibit an upward trend in conjunction with predicted scores lower than 80%. As a supplement to BIC, the \(k\)-fold cross-validation method can be used to determine the number of clusters and, more importantly, whether the amount of data is sufficient for training.
Figure 8: Cross-validation results of CuCr60 with different data size. **(A)** CuCr60 at 1 \(\upmu\)m indentation depth with 366 data points. **(B)** CuCr60 at 1 \(\upmu\)m depth with 574 data points
## 4 Conclusion
Cu-Cr composites were studied by indentation as a model material to evaluate the ability to determine the properties of individual phases as well as the number of phases present. 1D and 2D Gaussian mixture models were trained and the most likely number of components was identified based on the BIC. Using cross-validation we could show that the GMM gave more accurate results than KMeans clustering. Investigating the dependence of the cross-validation results on the size of the datasets helped to understand what a reasonable amount of data for training such models might be.
## 1 Appendix
### Formulation of the Maximum Likelihood Estimate for the Gaussian Model
In the following, we summarize the most important equations and relations required for the BIC criterion for the one-dimensional case. The generalization towards higher dimensions follows the same arguments, cf. (Neath and Cavanaugh, 2012). Given a mixture of \(k\) Gaussian distributions, we start from Equation (2) and use it to measure the likelihood of a single data point \(x_{i}\):
\[p\left(x_{i}\mid\Phi\right)=\sum_{j=1}^{k}\alpha_{j}\mathcal{N}\left(x_{i}\mid \mu_{j},\sigma_{j}\right)\, \tag{4}\]
where \(\Phi\) is a vector that contains for each component the weight \(\alpha_{j}\) and the Gaussian parameter \(\boldsymbol{\theta}_{j}=\{\mu_{j},\sigma_{j}\}\). With this we can define the likelihood function \(\mathcal{L}\) of the entire data set \(\mathcal{D}\) as the product of
Figure 9: Cross-validation results of CuCr25 at 1 μm depth with different data size (in total 513 data points). **(A)** k=2 **(B)** k=3 **(C)** k=4 **(D)** k=5
probabilities for each individual value:
\[\mathcal{L}=\prod_{i=1}^{N}p\left(x_{i}\mid\mathbf{\Phi}\right) \tag{5}\]
where \(\{x_{1},...,x_{N}\}\) are the data samples. \(\mathcal{L}\) acts as the joint probability resulting from the likelihood of all individual data points and is a function of the data and the model. Since the data set is fixed, the variables on which \(\mathcal{L}\) depends are therefore the model parameters \(\mathbf{\Phi}\). The optimal parameters are those that maximize the likelihood of the underlying Gaussian model for the given data set
\[\boldsymbol{\mu}^{*}=\mathrm{argmax}_{\mu}\,\mathcal{L}(\mathcal{D}| \boldsymbol{\theta})\qquad\text{and}\qquad\boldsymbol{\sigma}^{*}=\mathrm{ argmax}_{\boldsymbol{\sigma}}\,\mathcal{L}(\mathcal{D}|\boldsymbol{\theta}). \tag{6}\]
To find the maxima of \(\mathcal{L}\), the partial derivatives of \(\ln\mathcal{L}\) with respect to each parameter are calculated
\[\frac{\partial\ln\mathcal{L}}{\partial\mu_{j}}=0,\frac{\partial\ln\mathcal{L }}{\partial\sigma_{j}}=0,\text{ and }\frac{\partial\ln\mathcal{L}}{\partial\alpha_{j}}=0\quad\text{ where}\quad\ln\mathcal{L}=\sum_{i=1}^{N}\ln\left[\sum_{j=1}^{k}\alpha_{j}\mathcal{N} \left(x_{i}\mid\mu_{j},\sigma_{j}\right)\right] \tag{7}\]
To solve this system of equations, the expectation maximization (EM) algorithm is used. First, we assume that the model parameters (\(\mu_{j},\sigma_{j}\)) are fixed for each specific model number \(j\) (note that a model \(j\) represents a cluster \(j\)). The probability that data \(x_{i}\) belongs to cluster \(j\) is denoted by \(p\left(j\mid x_{i}\right)\). An often used way to obtain the maximum is to take the derivative of the logarithm of the likelihood (note that this does not change the location of the maximum as the logarithm function increases monotonically):
\[\frac{\partial\ln\mathcal{L}}{\partial\theta_{j}}=-\sum_{i=1}^{N}p\left(j\mid x _{i}\right)\frac{\partial}{\partial\theta_{j}}\left[\ln\sigma_{j}+\frac{\left( x_{i}-\mu_{j}\right)^{2}}{2\sigma_{j}^{2}}\right] \tag{8}\]
By setting each of the derivatives to 0, we obtain the new estimators:
\[\mu_{j}=\frac{\sum_{i=1}^{N}p\left(j\mid x_{i}\right)x_{i}}{\sum _{i=1}^{N}p\left(j\mid x_{i}\right)}\qquad\text{and}\qquad\sigma_{j}^{2}= \frac{\sum_{i=1}^{N}p\left(j\mid x_{i}\right)\left(x_{i}-\mu_{j}\right)^{2}}{ \sum_{i=1}^{N}p\left(j\mid x_{i}\right)} \tag{9}\] \[\text{with}\qquad\alpha_{j}=\frac{1}{N}\sum_{i=1}^{N}p\left(j \mid x_{i}\right). \tag{10}\]
In the EM algorithm, the estimation step (E) refers to the calculation and update of \(p\left(j\mid x_{i}\right)\) in each iteration, whereas the maximization step (M) refers to the calculation (Equation (9) and Equation (10)) of the updated model parameters, which approach the local maximum until convergence.
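The E and M steps in Equations (9) and (10) translate directly into code. The following NumPy sketch is a minimal one-dimensional illustration with a fixed number of iterations and no safeguards against degenerate components; the initialization and the synthetic test data are arbitrary choices for demonstration only.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def em_gmm_1d(x, k, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    mu = rng.choice(x, size=k, replace=False)   # initialize means from the data
    sigma = np.full(k, x.std())
    alpha = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E step: responsibilities p(j | x_i) from Eq. (4), normalized over components j
        dens = alpha * normal_pdf(x[:, None], mu, sigma)        # shape (n, k)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: updated weights, means, and standard deviations, Eqs. (9) and (10)
        nj = resp.sum(axis=0)
        alpha = nj / n
        mu = (resp * x[:, None]).sum(axis=0) / nj
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nj)
    return alpha, mu, sigma

# Synthetic two-component example (values loosely inspired by the Cu and Cr moduli)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(170.0, 20.0, 300), rng.normal(380.0, 30.0, 200)])
print(em_gmm_1d(x, k=2))
```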
|
2309.10071 | Not even 6 dB: Gaussian quantum illumination in thermal background | In analyses of target detection with Gaussian state transmitters in a thermal
background, the thermal occupation is taken to depend on the target
reflectivity in a way which simplifies the analysis of the symmetric quantum
hypothesis testing problem. However, this assumption precludes comparison of
target detection performance between an arbitrary transmitter and a vacuum
state transmitter, i.e., ``detection without illumination'', which is relevant
in a bright thermal background because a target can be detected by its optical
shadow or some other perturbation of the background. Using a target-agnostic
thermal environment leads to the result that the oft-claimed 6 dB possible
reduction in the quantum Chernoff exponent for a two-mode squeezed vacuum
transmitter over a coherent state transmitter in high-occupation thermal
background is an unachievable limiting value, only occurring in a limit in
which the target detection problem is ill-posed. Further analyzing quantum
illumination in a target-agnostic thermal environment shows that a weak
single-mode squeezed transmitter performs worse than ``no illumination'', which
is explained by the noise-increasing property of reflected low-intensity
squeezed light. | T. J. Volkoff | 2023-09-18T18:36:43Z | http://arxiv.org/abs/2309.10071v2 | # Not even 6 dB: Gaussian quantum illumination in thermal background
###### Abstract
In analyses of target detection with Gaussian state transmitters in a thermal background, the thermal occupation is taken to depend on the target reflectivity in a way which simplifies the analysis of the symmetric quantum hypothesis testing problem. However, this assumption precludes comparison of target detection performance between an arbitrary transmitter and a vacuum state transmitter, i.e., "detection without illumination", which is relevant in a bright thermal background because a target can be detected by its optical shadow or some other perturbation of the background. Using a target-agnostic thermal environment leads to the result that the oft-claimed 6 dB possible reduction in the quantum Chernoff exponent for a two-mode squeezed vacuum transmitter over a coherent state transmitter in high-occupation thermal background is an unachievable limiting value, only occurring in a limit in which the target detection problem is ill-posed. Further analyzing quantum illumination in a target-agnostic thermal environment shows that a weak single-mode squeezed transmitter performs worse than "no illumination", which is explained by the noise-increasing property of reflected low-intensity squeezed light.
## 1 Introduction
Quantum illumination (QI) is a symmetric hypothesis testing problem in which a decision is made between "target absent" and "target present" after transmitting many copies of a quantum subsystem to the target [1, 2]. The full quantum system may have quantum memory registers which are not transmitted to the target. The model of the target includes a small reflectivity \(\kappa\ll 1\) beamsplitter uniformly coupling the transmitted registers and corresponding thermal modes of energy \(N_{B}\), and, in analyses of the purely information-theoretic QI problem, has no other interesting property like spatial or temporal dynamics, thermal or active emission, absorption, etc. Such properties are important for realistic analyses of QI problem involving target properties or specific receivers [3, 4, 5, 6, 7, 8], but they are not considered in the present work. In Gaussian QI, the full state of the transmitted register \(T\) and the quantum memory register \(Q\) is a continuous-variable (CV) Gaussian state \(|\psi\rangle_{TQ}\). One can, of course, consider alternative information-theoretic settings such as asymmetric hypothesis testing setting [9], or first-photon radar [10], but the present work is concerned with the traditional setting.
For QI in an athermal environment (number of thermal environment photons \(N_{B}=0\)) and a target modeled by a quantum-limited attenuator channel, Nair's "no-go" theorem establishes that the optimal error probability for the optimal transmitter is \(1/2\) of the optimal error probability for a coherent state transmitter in the limit of small reflectivity, \(\kappa\ll 1\)[11]. The factor of \(1/2\) is merely an additive increase of the quantum Chernoff exponent. In contrast, in a bright thermal background (\(N_{B}\gg 1\) per mode), and assuming a certain model of the target, it has been shown that a two-mode squeezed state (TMSS) transmitter with transmitted intensity \(N_{S}\) attains about a \(6\) dB (specifically, a factor of exactly \(4\)) smaller quantum Chernoff exponent than a coherent state transmitter \(|\alpha\rangle_{T}\) with \(\alpha=\sqrt{N_{S}}\). The supporting calculation is based on the comparison of the Bhattacharyya bounds on the optimal error probability, which is justified in the parameter domains considered. However, and most importantly, the thermal background mode is assumed to depend on the target's parameter \(\kappa\) in such a way that, regardless of the value of \(\kappa\), the thermal environment increases the noise in each quadrature of the reflected state by an additive amount \(N_{B}+\frac{1}{2}\). The assumption is implemented by taking \(N_{B}\mapsto\frac{N_{B}}{1-\kappa}\) after the beamsplitter transformation is made. Such a description of the target is still a bosonic Gaussian channel, but it is not a unitary beamsplitter. Beyond the mathematical convenience of this assumption, it is sometimes justified by the restriction that the target must be _illuminated_ for target detection to occur (a vacuum transmitter does not give a well-defined QI problem under this model of the target- the states corresponding to the "target present" and "target absent" hypotheses would be exactly the same.). The present work dispenses with this assumption and its ensuing mathematical convenience, instead taking the target to be defined in a simple way by the thermal attenuator channel involving a beamsplitter of reflectivity \(\sqrt{\kappa}\) and an environment consisting of a bare thermal background \(\rho:=\sum_{n=0}^{\infty}\left(\frac{N_{B}}{N_{B}+1}\right)^{n}|n\rangle \langle n|_{E}\). Such a definition is a straightforward generalization of the quantum-limited attenuator channel used in the noiseless target detection problem- just put a thermal state on the beamsplitter instead of vacuum. The present model of the target has a major consequence: the QI problem is well-posed even for vacuum transmitters because the quadrature noises are different between the "target present" and "target absent" hypotheses. As shown in Section 2, the possibility of vacuum detection also leads to corrections to the coherent state transmitter QI error probability. The main results of the present work are in Sections 3, 4 which show, respectively, that nonclassical Gaussian transmitters can perform worse than coherent state transmitters, and that the oft-claimed \(6\) dB advantage of two-mode squeezed state transmitters over classical transmitters for Gaussian QI in a bright thermal background is strictly unachievable.
Like Ref.[12], and most other analyses of QI, we will assume \(0<\kappa\ll 1\) for the "target present" hypothesis throughout this work. Our convention for the formalism for CV Gaussian states follows Ref.[13]. Specifically, \(R=(q_{1},p_{1},\ldots,q_{M},p_{M})\) is the row vector of canonical operators (\([q_{j},p_{j^{\prime}}]=i\delta_{j,j^{\prime}}\)) of an \(M\)-mode CV system, \(a_{j}=\frac{q_{j}+ip_{j}}{\sqrt{2}}\) is an annihilation operator, and we use the symplectic form \(\Delta\) on the phase space \(\mathbb{R}^{2M}\) associated with matrix \(\begin{pmatrix}0&1\\ -1&0\end{pmatrix}^{\oplus M}\). The reader can learn about symmetric hypothesis testing and quantum Chernoff bound in [14], CV Gaussian states in [13, 15], and quantum illumination in [1].
Gaussian quantum illumination with displaced vacuum
In Ref.[1], the time-bandwidth product is used to define the \(M\) modes of the transmitter register \(T\) (or \(2M\) modes of the transmitter-plus-quantum-memory register \(TQ\) in the case of an entangled transmitter). Each of the \(M\) modes, which we associate with annihilation operator \(a_{j}\), \(j\in[M]\), is assumed to reflect from the target with probability \(\kappa\). Therefore, the target is modeled as a beamsplitter (\(\theta:=\cos^{-1}\sqrt{\kappa}\), \(\theta\in[0,\frac{\pi}{2}]\))
\[U_{\kappa}:=e^{-\theta\sum_{j=0}^{M-1}\left(a_{j}b_{j}^{\dagger}-h.c.\right)} \tag{1}\]
uniformly interfering each \(a_{j}\) with its corresponding mode \(b_{j}\) of the environment register \(E\). Each environment mode is assumed thermal with intensity \(N_{B}\). For a coherent state transmitter, we will consider \(N_{S}\) photons in expectation in each mode and take zero phase without loss of generality, leading to the parametrized model
\[\rho_{\kappa}=\mathrm{tr}_{E}\left[U_{\kappa}(D_{\tilde{0}}(\sqrt{N_{S}}) \otimes\mathbb{I}_{E})|\mathrm{VAC}\rangle\langle\mathrm{VAC}|_{T}\otimes\rho _{\beta}^{\otimes M}(D_{\tilde{0}}(-\sqrt{N_{S}})\otimes\mathbb{I}_{E})U_{ \kappa}^{\dagger}\right] \tag{2}\]
where \(\beta:=\ln\frac{1+N_{B}}{N_{B}}\) is the reciprocal of the effective temperature of each environment mode, and the Fourier mode \(j\) of register \(T\) is \(\tilde{a}_{j}:=\frac{1}{\sqrt{M}}\sum_{k=0}^{M-1}e^{\frac{2\pi ijk}{M}}a_{k}\), which is associated with a CV displacement \(D_{\tilde{j}}(z):=e^{z\tilde{a}_{j}^{\dagger}-\overline{z}\tilde{a}_{j}}\), \(z\in\mathbb{C}\). Written in terms of Fourier modes, both \(U_{\kappa}\) and \(\rho_{\beta}^{\otimes M}\) are unchanged in form, so \(U_{\kappa}\) couples the \(\tilde{a}_{0}\) mode to (and only to) the \(\tilde{b}_{0}^{\dagger}\) mode. Taking the partial trace over all but the \(\tilde{b}_{0}\) mode of \(E\) gives
\[\rho_{\kappa}=\mathrm{tr}_{\tilde{E}_{0}}\left[e^{-\theta\left(\tilde{a} \tilde{b}_{0}^{\dagger}-h.c.\right)}\tilde{D}_{0}(\sqrt{N_{S}})|\mathrm{VAC} \rangle\langle\mathrm{VAC}|_{T}\tilde{D}_{0}(\sqrt{N_{S}})^{\dagger}\otimes \rho_{\beta}e^{\theta\left(\tilde{a}\tilde{b}_{0}^{\dagger}-h.c.\right)}\right] \tag{3}\]
which shows that \(\rho_{\kappa}\) is described by a single-mode bosonic Gaussian channel, specifically, the thermal attenuator channel [13]. The quantum channel that describes the reflection process in Ref.[10] is also a bosonic Gaussian channel, but actually involves amplification, which we view as an exotic type of target, not the passive reflective target of the originally envisioned QI task. When analyzing Gaussian quantum illumination in a thermal background, we thus restrict to a single-mode transmitter mode \(T\) (potentially entangled with a single-mode memory \(Q\)), and a single-mode thermal environment \(E\), no longer explicitly identifying Fourier modes of a register defined by the time-bandwidth product. This simplification is also made in Ref.[12].
The covariance matrix of \(\rho_{\kappa}\) (the state under hypothesis "target present") is given by
\[\Sigma_{\rho_{\kappa}}=\frac{1}{2}\begin{pmatrix}1+2N_{B}(1-\kappa)&0\\ 0&1+2N_{B}(1-\kappa)\end{pmatrix}, \tag{4}\]
its mean vector is given by \(m_{\rho_{\kappa}}=(\sqrt{2\kappa N_{S}},0)\). The covariance matrix of \(\rho_{0}\) (the state under hypothesis "target absent") is \(\Sigma_{\rho_{0}}\), and the mean vector is \(m_{\rho_{0}}\). It is important to notice that even if the transmitter register \(T\) were vacuum, i.e., \(N_{S}=0\), so that \(m_{\rho_{0}}=m_{\rho_{\kappa}}=0\), the hypotheses \(\rho_{0}\) and \(\rho_{\kappa}\) could still be distinguished due to the different power in the quadrature noise. This comparison is not possible using the method of Ref.[12]. Physically, one can consider the difference as fully taking into account the optical shadow of the target.
In QI, the task is to distinguish optimally between \(\rho_{0}^{\otimes N}\) and \(\rho_{\kappa}^{\otimes N}\), \(\kappa\neq 0\) as \(N\rightarrow\infty\) with these hypotheses having equal prior probability (i.e., symmetric hypothesis testing). To evaluate the
asymptotic error probability for this task, one needs to compute the quantum Chernoff exponent \(\xi\), where
\[\xi(\rho_{0},\rho_{\kappa}) := -\log\left(\inf_{0\leq s\leq 1}Q_{s}(\rho_{0},\rho_{\kappa})\right)\] \[Q_{s}(\rho_{0},\rho_{\kappa}) := {\rm tr}\rho_{0}^{s}\rho_{\kappa}^{1-s} \tag{5}\]
for Gaussian states [16], and use the fact that \(p_{\rm err}\sim\frac{1}{2}e^{-N\xi}\) as \(N\to\infty\)[14]. The quantity \(2(1-Q_{s})\) is a special case of a relative \(g\)-entropy, or Petz-Renyi relative quasi-entropy, or a quantum \(g\)-divergence corresponding to the operator convex function \(g(t)=2(1-t^{s})\), \(0\leq s\leq 1\)[17, 18, 19, 20, 21]. The specific quantity \(2(1-Q_{1/2})\) is the quantum analogue of the Hellinger distance, with \(Q_{1/2}\) (referred to as the quantum affinity [22, 23]) appearing in the Bhattacharyya upper bound to the optimal discrimination error, \(p_{\rm err}\leq\frac{1}{2}Q_{1/2}^{N}\) (in classical statistics, _affinity_ is another name for the Bhattacharyya coefficient). Note that by invoking the one-to-one correspondence [24] between relative \(g\)-entropies (specifically, \(g(t)=2(1-\sqrt{t})\)) and monotone Riemannian metrics on the quantum state space (specifically, the one defined by the Chentsov-Morozova function \(c(x,y)=\frac{2}{(\sqrt{x}+\sqrt{y})^{2}}\), sometimes called the Wigner-Yanase monotone metric [25]), one can obtain the \(O(\kappa^{2})\) approximation to \(\inf_{0\leq s\leq 1}Q_{s}(\rho_{0},\rho_{\kappa})\)[26, 27]
Because \(\rho_{0}\) and \(\rho_{\kappa}\) are Gaussian states, Theorem 2 of Ref.[16] or Theorem 18 of Ref.[28] can be used according to convenience to compute \(Q_{s}\). In the latter reference, \(Q_{s}\) depends in a more or less simple way on the mean vectors and covariance matrices of \(\rho_{0}\) and \(\rho_{\kappa}\), whereas the former reference computes \(Q_{s}\) from data of the symplectic diagonalizations (Williamson theorem) applied to \(\Sigma_{\rho_{0}}\) and \(\Sigma_{\rho_{\kappa}}\). The vector of canonical operators in Ref.[16] is given by \(\sqrt{2}R\), so their symplectic eigenvalues are multiplied by 2 compared to the ones in the present work, and their mean vectors are multiplied by \(\sqrt{2}\) compared to the ones in the present work. Using the definitions from Ref.[16] for the real-valued functions \(G_{p}(x)\) and \(\Lambda_{p}(x)\), where \(x\geq 1\), \(p\geq 0\), and noting that \(\Sigma_{\rho_{0}}\) and \(\Sigma_{\rho_{\kappa}}\) are symplectically diagonalized by the \(2\times 2\) identity matrix \(\mathbb{I}_{2}\), one finds that
\[Q_{s} = \frac{2G_{s}(2N_{B}+1)G_{1-s}(2(1-\kappa)N_{B}+1)e^{-\frac{2\kappa N_{S}}{\Lambda_{s}(2N_{B}+1)+\Lambda_{1-s}(2(1-\kappa)N_{B}+1)}}}{\sqrt{\det\left(\Lambda_{s}(2N_{B}+1)+\Lambda_{1-s}(2(1-\kappa)N_{B}+1)\right)}} \tag{6}\] \[= \frac{\exp\left[-4\kappa N_{S}\left(\frac{(N_{B}+1)^{s}((1-\kappa)N_{B}+1)^{1-s}-N_{B}(1-\kappa)^{1-s}}{((N_{B}+1)^{s}-N_{B}^{s})((1-\kappa)N_{B}+1)^{1-s}-(1-\kappa)^{s}N_{B}^{1-s}}\right)\right]}{(1+N_{B})\left(1-\frac{\kappa N_{B}}{1+N_{B}}\right)^{1-s}-N_{B}(1-\kappa)^{1-s}}.\]
If the minimum over \(s\) that defines the quantum Chernoff exponent \(\xi\) in (5) were achieved near \(s=1/2\) in some limit of the parameters \(\kappa\) and \(N_{B}\), it would imply that the Bhattacharyya upper bound \(\frac{1}{2}Q_{1/2}^{N}\) to the optimal error probability is tight. For \(N_{S}\gg 0\), the minimal value of \(Q_{s}\) is approximately determined by the minimal value of \(\Lambda_{s}(2N_{B}+1)+\Lambda_{1-s}(2(1-\kappa)N_{B}+1)\) due to the exponentially vanishing factor. With \(N_{B}\gg 0\), this has the following expansion
\[\Lambda_{s}(2N_{B}+1)+\Lambda_{1-s}(2(1-\kappa)N_{B}+1)=\frac{2N_{B}(1-\kappa s )+1}{s(1-s)}+O(1/N_{B}) \tag{7}\]
and the leading order term is minimized at \(s=\frac{1}{2}+\frac{\kappa N_{B}}{4(2N_{B}+1)}+o(\kappa)\) for \(\kappa\ll 1\). Therefore, for \(\kappa\ll 1\) and \(N_{B}\gg 1\), the Bhattacharyya bound describes the optimal error probability well. By
contrast, for \(N_{B}\ll 1\) the exponential factor has no dependence on \(s\), as can be seen from the asymptotic
\[\Lambda_{s}(2N_{B}+1)+\Lambda_{1-s}(2(1-\kappa)N_{B}+1)=2+2(N_{B}^{s}+N_{B}^{1-s })+O(N_{B})+O(N_{B}^{1-s}\kappa). \tag{8}\]
One then looks to maximize the denominator of (6) to minimize \(Q_{s}\). Expanding the derivative of this denominator to first order in \(N_{B}\) and third order in \(\kappa\), one obtains a quadratic equation for the critical point with solution \(\frac{1}{2}+\frac{\kappa}{24}+o(\kappa^{2})\). Therefore, for \(N_{B}\ll 1\) and \(\kappa\ll 1\), the Bhattacharyya bound again describes the optimal error probability well regardless of \(N_{S}\).
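The location of the minimum can also be checked numerically without the Gaussian formalism, by evaluating (5) directly in a truncated Fock basis: under "target absent" the return mode is thermal with occupation \(N_{B}\), and under "target present" it is a displaced thermal state with occupation \((1-\kappa)N_{B}\) and mean \((\sqrt{2\kappa N_{S}},0)\), cf. (4). The Python sketch below is purely illustrative; the truncation dimension and parameter values are arbitrary choices and not part of the analysis in this work.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

dim = 60                                             # Fock-space truncation (adequate for small occupations)
a = np.diag(np.sqrt(np.arange(1, dim)), 1)           # annihilation operator

def thermal(nbar):
    p = (nbar / (nbar + 1.0)) ** np.arange(dim)
    return np.diag(p / p.sum())                      # renormalize after truncation

def displaced_thermal(alpha, nbar):
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)
    return D @ thermal(nbar) @ D.conj().T

def mat_pow(rho, s):                                 # fractional power via eigendecomposition
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 1e-300, None)
    return (v * w**s) @ v.conj().T

def Q(s, rho0, rho1):
    return float(np.real(np.trace(mat_pow(rho0, s) @ mat_pow(rho1, 1.0 - s))))

NB, NS, kappa = 0.05, 1.0, 0.01                      # small-N_B regime, illustrative values
rho_absent = thermal(NB)
rho_present = displaced_thermal(np.sqrt(kappa * NS), (1.0 - kappa) * NB)

res = minimize_scalar(lambda s: Q(s, rho_absent, rho_present),
                      bounds=(1e-3, 1.0 - 1e-3), method="bounded")
print("s* =", res.x, "   quantum Chernoff exponent =", -np.log(res.fun))
```

For parameters such as these, the minimizing \(s\) should sit very close to \(1/2\), consistent with the expansions quoted above.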
We now examine the Bhattacharyya bounds in the parameter domains described above. For \(N_{B}\gg 0\), the Bhattacharyya bound
\[\frac{1}{2}Q_{1/2}^{N} =\frac{1}{2}\left(1-\frac{(N_{B}-1)\kappa^{2}}{8N_{B}}+O(\frac{ \kappa^{2}}{N_{B}^{2}})\right)^{N}e^{-\frac{N\kappa N_{S}}{2((2-\kappa)N_{B}+ 1)+O(N_{B}^{-1})}}\] \[\leq\frac{1}{2}e^{-N\left(\frac{\kappa^{2}(N_{B}-1)}{8N_{B}}+ \frac{\kappa N_{S}}{2((2-\kappa)N_{B}+1)}\right)} \tag{9}\]
holds for sufficiently large \(N_{B}\), where the \(N_{S}\) dependent exponential factor is found by taking \(s=1/2\) in (7); this factor agrees with the result of Ref.[12] in the limit \(N_{B}\rightarrow\infty\), \(\kappa\to 0\). The first term in the exponent can be considered as a reduction of optimal error probability due to the vacuum contribution, and is absent from Ref.[12]. It is negligible compared to the second term in the limit \(\kappa\to 0\). In the parameter domain \(N_{B}\ll 1\), the Bhattacharyya bound is
\[\frac{1}{2}Q_{1/2}^{N} =\frac{1}{2}\left(1-\frac{N_{B}\kappa^{2}}{8}+O(N_{B}^{2}\kappa^ {2})\right)^{N}e^{-\frac{N\kappa N_{S}}{1+2\sqrt{N_{B}+O(N_{B})+O(\kappa\sqrt {N_{B}})}}}\] \[\leq\frac{1}{2}e^{-N\left(\frac{N_{B}\kappa^{2}}{8}+\frac{\kappa N _{S}}{1+2\sqrt{N_{B}}}\right)} \tag{10}\]
where the inequality is valid for sufficiently small \(N_{B}\). The vacuum contribution to (10) is shown only to indicate its existence. Not only is it negligible as \(N_{B}\to 0\) or \(\kappa\to 0\), but for large transmitter intensity \(N_{S}>1\) it is not even the dominant term at order \(\kappa^{2}\), because the \(O(\kappa\sqrt{N_{B}})\) term contributes in the denominator of the non-vacuum contribution. In the QI setting of the present work, the Bhattacharyya bounds (9), (10) are the fiducial values to which optical transmitters of intensity \(N_{S}\) will be compared in the respective large \(N_{B}\) and small \(N_{B}\) regimes.
Before moving on, we note that for \(N_{B}=0\), coherent states minimize \(p_{\rm err}\) over all single-mode states of a fixed energy in the limit \(\kappa\to 0\)[3, 27].
## 3 Gaussian quantum illumination with squeezed vacuum
Gaussian QI with the \(N_{B}\) rescaling of Ref.[12] was considered for a general single mode Gaussian state in Ref.[6], with only a negligible \(O(\kappa^{2})\) benefit of squeezing observed for the optimal error probability exponent quantified by signal-to-noise ratio of an on-off receiver or photon number-resolving receiver in the domain \(\kappa\ll 1\), \(N_{B}\gg 1\). The vacuum contributions to the optimal error probability derived in Section 2 imply that replacing \(N_{B}\mapsto N_{B}/(1-\kappa)\) in the covariance matrix of the reflected Gaussian state leads to an underestimate of the success probability of QI. One consequence of our not carrying out this replacement is that it becomes meaningful to compare
the QI performance of a vacuum transmitter to a squeezed vacuum transmitter. Although a squeezed vacuum transmitter indeed illuminates the target and, for sufficiently small \(\kappa\), squeezed photons are reflected from the target, it is not obvious that a squeezed vacuum transmitter should result in a lower optimal error probability compared to bare vacuum (zero intensity transmitter) because the relation between quadrature noise anisotropy and detection is not clear. Fig. 1 compares the fidelity between reflected squeezed vacuum ("target present") and a thermal state (over a range of \(N_{S}\)) to the fidelity between transmitted thermal state and a thermal state (no \(N_{S}\) dependence because no photons are transmitted).
There is a domain of squeezing strengths for which the reflected squeezed state is more similar to the thermal environment than is the state resulting from sending no transmitter. Quadrature noise provides some insight into this fact. One can note that taking a convex combination of the covariance matrices corresponding to slightly squeezed vacuum and thermal noise results in a quadrature ellipse that is, on angular average, less distinguishable from thermal noise than an isotropic contraction of the thermal noise covariance by a factor \(1-\kappa\). This fact motivates a counterintuitive hypothesis: there may be a domain of squeezing strengths for which a squeezed transmitter state is detrimental for QI compared to sending no transmitter. We prove this hypothesis by showing the validity of the Bhattacharyya bound in this case and comparing the bounds.
The covariance matrix of reflected squeezed vacuum transmitter \(\rho_{\kappa}\) is given by
\[\Sigma_{\rho_{\kappa}}=\frac{1}{2}\begin{pmatrix}\kappa e^{-2r}+(1-\kappa)(2N _{B}+1)&0\\ 0&\kappa e^{2r}+(1-\kappa)(2N_{B}+1)\end{pmatrix}=\nu_{1}SS^{T} \tag{11}\]
where \(S\) is a \(2\times 2\) symplectic matrix (which turns out to simply describe squeezing of a certain strength) and \(\nu_{1}:=\frac{1}{2}\sqrt{(\kappa e^{2r}+(1-\kappa)(2N_{B}+1))(\kappa e^{-2r}+(1-\kappa)(2N_{B}+1))}\) is the symplectic
eigenvalue. The mean vector of \(\rho_{\kappa}\) is \((0,0)\). With (11) in hand, we aim to justify the statement that \(s=1/2\) is a good approximation of the minimum of \(Q_{s}\) when \(\kappa\ll 1\). Given zero-mean, one-mode Gaussian states \(\rho_{0}\), \(\rho_{1}\) with covariance matrices having symplectic eigenvalues \(\nu_{0}\), \(\nu_{1}\) and symplectic diagonalizations \(S_{0}\), \(S_{1}\), respectively, introduce the functions
\[F_{0}(s,\nu_{0},\nu_{1}):=\frac{\left((2\nu_{0}+1)^{s}+(2\nu_{0}-1)^{s} \right)\left((2\nu_{1}+1)^{1-s}-(2\nu_{1}-1)^{1-s}\right)}{4}\]
\[F_{1}(s,\nu_{0},\nu_{1}):=F_{0}(1-s,\nu_{1},\nu_{0}) \tag{12}\]
so that
\[Q_{s}=2e^{-\frac{1}{2}\mathrm{tr}\log\left[F_{0}S_{0}S_{0}^{T}+F_{1}S_{1}S_{1 }^{T}\right]}. \tag{13}\]
For \(\nu_{0},\nu_{1}\gg 1/2\), \(F_{0}(s,\nu_{0},\nu_{1})\sim(1-s)\nu_{0}^{s}\nu_{1}^{-s}\) and \(F_{1}(s,\nu_{0},\nu_{1})\sim F_{0}(1-s,\nu_{0},\nu_{1})\), so that in this limit, \(Q_{s}\) is invariant under \(s\mapsto 1-s\) if \(S_{0}=S_{1}\). Recall that \(Q_{s}\) has a unique global minimum due to convexity with respect to \(s\)[14]. Therefore, in the limit \(\nu_{0},\nu_{1}\gg 1/2\), the expression (13) implies that deviations of the symplectic matrices \(S_{0}\), \(S_{1}\) from each other are responsible for deviations of the critical point from \(s=1/2\). In the present analysis, we take \(N_{B}\gg 0\) and \(\kappa\ll 1\) which together imply \(\nu_{0},\nu_{1}\gg 1/2\) and \(S_{0}=\mathbb{I}_{1}=S_{1}+O(\frac{\kappa N_{S}}{N_{B}})X\), where \(X\) is a matrix with \(\|X\|=1\).
Taking \(\kappa\ll N_{S}^{-1}\), \(N_{B}\gg 0\), and \(2N_{S}\ll N_{B}\), we obtain the Bhattacharyya bound
\[\frac{1}{2}Q_{1/2}^{N}=\frac{1}{2}\left(1-\frac{\kappa^{2}(N_{B}-1-2N_{S})}{8 N_{B}}+o\left(\frac{\kappa^{2}N_{S}}{N_{B}^{2}}\right)\right)^{N} \tag{14}\]
which is greater than the vacuum transmitter limit of (9). Therefore, in this parameter domain, optimal detection with a single-mode squeezed transmitter is less useful than optimal detection without illumination. Fig. 1b) shows this phenomenon in a parameter domain with large thermal noise. By contrast, if one considers \(\kappa\ll N_{S}^{-1}\), \(N_{B}\gg 0\), and \(N_{B}<N_{S}\) the Bhattacharyya bound becomes
\[\frac{1}{2}Q_{1/2}^{N}=\frac{1}{2}\left(1-\frac{\kappa^{2}(N_{B}-1)}{8N_{B}}- \frac{\kappa^{2}N_{S}(N_{S}-N_{B})}{4N_{B}^{2}}+O\left(\frac{\kappa^{2}N_{S}} {N_{B}^{2}}\right)\right)^{N} \tag{15}\]
which is less than the optimal error probability of detection without illumination.
It turns out that for \(N_{B}\ll 1\), \(Q_{s}\) for a squeezed state transmitter is not generally minimized near \(s=1/2\), so the Bhattacharyya bound does not accurately represent the optimal error. Numerically optimizing \(Q_{s}\) over \(s\) and comparing to (10), one finds that the ratio of error probability between the squeezed state transmitter and coherent state transmitter is monotonically increasing with transmitter intensity \(N_{S}\). One concludes that single-mode squeezing is not a resource for advantage in QI for \(N_{B}\ll 1\), including optical QI.
## 4 Not even 6 dB
Symmetries of the target detection problem imply that it suffices to consider two-mode CV states of the form \(\sum_{n=0}^{\infty}c_{n}|n\rangle_{T}\otimes|n\rangle_{Q}\) for optimal QI [3, 30]. Restricting to Gaussian states puts one on the \(U(1)\times U(1)\) orbit of two-mode squeezed states (TMSS). The TMSS of the transmitter and quantum memory is defined by
\[|\psi\rangle_{TQ}=\frac{1}{\sqrt{N_{S}+1}}\sum_{n=0}^{\infty}\left(\frac{N_{S} }{N_{S}+1}\right)^{\frac{n}{2}}|n\rangle_{T}|n\rangle_{Q}, \tag{16}\]
with this parametrization chosen so that \(\langle a_{T}^{\dagger}a_{T}\rangle=N_{S}=\langle a_{Q}^{\dagger}a_{Q}\rangle\). The covariance matrix \(\Sigma_{\rho_{\kappa}}\) of the two-mode state obtained by reflection of the \(T\) mode in a thermal environment is equal to the upper \(4\times 4\) block of the full \(6\times 6\) covariance matrix \(\Sigma_{U_{\kappa}^{TE}|\psi\rangle\langle\psi|_{TQ}\otimes(\rho_{\beta})_{ E}U_{\kappa}^{TE}|}\) given by
\[\begin{pmatrix}\kappa(2N_{S}+1)+(1-\kappa)(2N_{B}+1)&0&2\sqrt{N_{S}\kappa(N_{S}+1)}&0&2\sqrt{\kappa(1-\kappa)}(N_{S}-N_{B})&0\\ 0&\kappa(2N_{S}+1)+(1-\kappa)(2N_{B}+1)&0&-2\sqrt{N_{S}\kappa(N_{S}+1)}&0&2\sqrt{\kappa(1-\kappa)}(N_{S}-N_{B})\\ 2\sqrt{N_{S}\kappa(N_{S}+1)}&0&2N_{S}+1&0&2\sqrt{N_{S}(1-\kappa)(N_{S}+1)}&0\\ 0&-2\sqrt{N_{S}\kappa(N_{S}+1)}&0&2N_{S}+1&0&-2\sqrt{N_{S}(1-\kappa)(N_{S}+1)}\\ 2\sqrt{\kappa(1-\kappa)}(N_{S}-N_{B})&0&2\sqrt{N_{S}(1-\kappa)(N_{S}+1)}&0&\kappa(2N_{B}+1)+(1-\kappa)(2N_{S}+1)&0\\ 0&2\sqrt{\kappa(1-\kappa)}(N_{S}-N_{B})&0&-2\sqrt{N_{S}(1-\kappa)(N_{S}+1)}&0&\kappa(2N_{B}+1)+(1-\kappa)(2N_{S}+1)\end{pmatrix} \tag{17}\]
The symplectic eigenvalues \(\gamma_{1}\) and \(\gamma_{2}\) of \(\Sigma_{\rho_{\kappa}}\) can be obtained using the same method as Ref.[12] and for \(\kappa\ll 1\) are given by
\[\gamma_{1} =(1+2N_{B})-\frac{2(N_{B}(1+N_{B}))\kappa}{1+N_{S}+N_{B}}+o(\kappa)\] \[\gamma_{2} =(1+2N_{S})-\frac{2(N_{S}(1+N_{S}))\kappa}{1+N_{S}+N_{B}}+o(\kappa). \tag{18}\]
The first terms in the respective expressions dominate if \(\kappa\) is further taken to satisfy \(\kappa\ll N_{B}^{-1}\), \(\kappa\ll N_{S}^{-1}\), respectively. In the equation \(S\Sigma_{\rho_{\kappa}}S^{T}=\gamma_{1}\mathbb{I}_{2}\oplus\gamma_{2}\mathbb{ I}_{2}\), one finds that \(S=\mathbb{I}_{4}+O(\sqrt{\kappa})(\mathbb{I}_{2}\otimes Z)\) with \(Z=\mathrm{diag}(1,-1)\). The "target absent" hypothesis \(\rho_{0}\) is now a state on \(TQ\) and has covariance matrix \(\Sigma_{\rho_{0}}=\beta_{1}\mathbb{I}_{2}\oplus\beta_{2}\mathbb{I}_{2}\) with \(\beta_{1}:=2N_{B}+1\), \(\beta_{2}:=2N_{S}+1\) (so \(\Sigma_{\rho_{0}}\) is symplectically diagonalized by the identity in \(Sp(4,\mathbb{R})\)). The structure of the covariance matrix \(\Sigma_{\rho_{0}}\) arises from two phenomena: 1. the first two diagonal elements describe total loss of the transmitter mode (the reflected transmitter mode is replaced by the thermal environment), 2. the second two diagonal elements arise from losing all information in the transmitter mode, resulting in a thermal state of the quantum memory \(Q\) with \(N_{S}\) photons in expectation. In full, the "target absent" hypothesis corresponds to the quantum channel \(\rho_{\beta}\otimes\mathrm{tr}_{T}\) mapping the set of states of \(TQ\) to itself.
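The expansion (18) is straightforward to verify numerically from the upper \(4\times 4\) block of (17): the symplectic eigenvalues of a covariance matrix \(V\) are the moduli of the eigenvalues of \(i\Delta V\), with \(\Delta\) the symplectic form fixed in the introduction. The short script below performs this comparison; the parameter values are arbitrary illustrative choices.

```python
import numpy as np

def symplectic_eigenvalues(V):
    n = V.shape[0] // 2
    Delta = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))  # symplectic form
    ev = np.abs(np.linalg.eigvals(1j * Delta @ V))
    return np.sort(ev)[::2]                     # each symplectic eigenvalue appears twice

NS, NB, kappa = 0.1, 20.0, 1.0e-3
A = kappa * (2 * NS + 1) + (1 - kappa) * (2 * NB + 1)   # return-mode diagonal entry
B = 2 * NS + 1                                           # idler diagonal entry
C = 2 * np.sqrt(kappa * NS * (NS + 1))                   # return-idler correlation
V = np.array([[A, 0.0, C, 0.0],
              [0.0, A, 0.0, -C],
              [C, 0.0, B, 0.0],
              [0.0, -C, 0.0, B]])

gamma1 = (1 + 2 * NB) - 2 * NB * (1 + NB) * kappa / (1 + NS + NB)   # O(kappa) expansion, Eq. (18)
gamma2 = (1 + 2 * NS) - 2 * NS * (1 + NS) * kappa / (1 + NS + NB)
print("numerical:", symplectic_eigenvalues(V))
print("expansion:", sorted([gamma1, gamma2]))
```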
Because the 6 dB advantage discussed in Ref.[12] is obtained in the parameter domain \(N_{S}\ll 1\), \(N_{B}\gg 1\), \(\kappa\ll 1\), we first note that this domain corresponds to \(\beta_{1},\gamma_{1}\gg 1\) and \(\beta_{2},\gamma_{2}\approx 1\) and symplectic diagonalizations that differ by \(O(\sqrt{\kappa})\), so that the rigorous justification of using the Bhattacharyya bound proceeds very similarly to the analysis of (13). To compare the Bhattacharyya bound of the TMSS transmitter (16) to the coherent state result (9) with equal
\(N_{S}\) in both cases, we compute \(Q_{1/2}\) for the TMSS transmitter to be
\[Q_{1/2}^{\rm(TMSS)} =1-\frac{(N_{S}-2N_{S}^{3/2}+3N_{S}^{2})\kappa}{N_{B}}-\frac{(N_{B} -1)\kappa^{2}}{8N_{B}}\] \[-\frac{(\frac{5}{4}N_{S}-3N_{S}^{3/2}+9N_{S}^{2})\kappa^{2}}{N_{B} }+O(\kappa^{3})+O(N_{B}^{-3/2})+O(N_{S}^{5/2}) \tag{19}\]
where we kept the \(O(\kappa^{2})\) contribution because the vacuum contribution to (9) is of that order. In terms of the ratio of the Bhattacharyya approximations to the quantum Chernoff exponents, we compare
\[\frac{\log 2p_{\rm err}^{\rm(TMSS)}}{\log 2p_{\rm err}^{\rm(coherent)}}\stackrel{{ M \rightarrow\infty}}{{\sim}}\frac{\frac{(N_{S}-2N_{S}^{3/2}+3N_{S}^{2})}{N_{B}} +\frac{(N_{B}-1)\kappa}{8N_{B}}+O\left(\frac{N_{S}\kappa^{2}}{N_{B}}\right)} {\frac{N_{S}}{2((2-\kappa)N_{B}+1)}+\frac{(N_{B}-1)\kappa}{8N_{B}}+O\left( \frac{N_{S}\kappa}{N_{B}^{2}}\right)} \tag{20}\]
The \(N_{B}\gg 0\) asymptotic shown in (20) is not a continuous function of \((\kappa,N_{S})\) at \((0,0)\), and the 6 dB advantage of the TMSS transmitter (i.e., the factor of 4 in the ratio of the quantum Chernoff exponents) is obtained on paths associated with the limit \(\lim_{N_{S}\to 0}\lim_{\kappa\to 0}\). This order of limits gives an unachievable value because the target detection problem is not defined for \(\kappa=0\). By contrast, there is no advantage on the \(\lim_{\kappa\to 0}\lim_{N_{S}\to 0}\) paths. For \(N_{S}\) going to a fixed positive intensity, as for a realistic transmitter, followed by \(\lim_{\kappa\to 0}\), the advantage is a factor strictly less than 4. Under the model of the target which corresponds to making the substitution \(N_{B}\mapsto N_{B}/(1-\kappa)\), the ratio of the Bhattacharyya approximations to the quantum Chernoff exponents for the TMSS transmitter and coherent state transmitter is a continuous function of \(N_{S}\) and \(\kappa\) when \(N_{B}\gg 0\), the limit being exactly 4.
For completeness, we show in Fig.2 the numerically identified critical \(s\) value over orders of magnitude in both \(N_{S}\) and \(N_{B}\) for a fixed \(\kappa=10^{-2}\). Numerical minimization of \(Q_{s}\) becomes challenging for small \(\kappa\) because the function becomes constant with respect to \(s\). For the same \(\kappa\), Fig.2 also shows a maximal factor of 2.23 advantage (3.48 dB) of the TMSS transmitter over the coherent state transmitter, quantified by the ratio of the Chernoff exponents \(\xi\) for the respective transmitters. For \(N_{B}\ll 1\), closer to the parameter regime of QI at daytime optical frequencies, one finds a domain of transmitter intensities (\(N_{S}\gg 1\)) for which the TMSS transmitter is disadvantageous compared to coherent state. This conclusion can also be arrived at by carrying out an \(N_{B}\ll 1\), \(\kappa\ll 1\) expansion of \(Q_{1/2}^{\rm(TMSS)}\) and comparing to (10). Similar to the result in Fig.1 in which the increased error probability of a single-mode squeezed vacuum transmitter relative to vacuum state transmitter in a parameter domain was corroborated by a corresponding increased
fidelity of the alternatives \(\rho_{\kappa}\) and \(\rho_{0}\) in that parameter domain, one finds that the parameter domain for a disadvantageous TMSS transmitter is also concomitant with the increased fidelity of \(\rho_{\kappa}\) and \(\rho_{0}\) compared to the fidelity of the alternatives in the coherent state transmitter QI problem.
## Acknowledgements
The author thanks N. Dallmann, R. Newell, K. Meier, D. Dalvit, and P. Milonni for discussions, and acknowledges the LDRD program at Los Alamos National Laboratory. Los Alamos National Laboratory is managed by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. 89233218CNA000001.
|
2310.02270 | Comparative Evaluation of Transfer Learning for Classification of Brain
Tumor Using MRI | Abnormal growth of cells in the brain and its surrounding tissues is known as
a brain tumor. There are two types, one is benign (non-cancerous) and another
is malignant (cancerous) which may cause death. The radiologists' ability to
diagnose malignancies is greatly aided by magnetic resonance imaging (MRI).
Brain cancer diagnosis has been considerably expedited by the field of
computer-assisted diagnostics, especially in machine learning and deep
learning. In our study, we categorize three different kinds of brain tumors
using four transfer learning techniques. Our models were tested on a benchmark
dataset of $3064$ MRI pictures representing three different forms of brain
cancer. Notably, ResNet-50 outperformed other models with a remarkable accuracy
of $99.06\%$. We stress the significance of a balanced dataset for improving
accuracy without the use of augmentation methods. Additionally, we
experimentally demonstrate our method and compare with other classification
algorithms on the CE-MRI dataset using evaluations like F1-score, AUC,
precision and recall. | Abu Kaisar Mohammad Masum, Nusrat Badhon, S. M. Saiful Islam Badhon, Nushrat Jahan Ria, Sheikh Abujar, Muntaser Mansur Syed, Naveed Mahmud | 2023-09-24T03:46:38Z | http://arxiv.org/abs/2310.02270v1 | # Comparative Evaluation of Transfer Learning for Classification of Brain Tumor Using MRI
###### Abstract
Abnormal growth of cells in the brain and its surrounding tissues is known as a brain tumor. There are two types: one is benign (non-cancerous) and the other is malignant (cancerous), which may cause death. The radiologists' ability to diagnose malignancies is greatly aided by magnetic resonance imaging (MRI). Brain cancer diagnosis has been considerably expedited by the field of computer-assisted diagnostics, especially in machine learning and deep learning. In our study, we categorize three different kinds of brain tumors using four transfer learning techniques. Our models were tested on a benchmark dataset of \(3064\) MRI pictures representing three different forms of brain cancer. Notably, ResNet-50 outperformed other models with a remarkable accuracy of \(99.06\%\). We stress the significance of a balanced dataset for improving accuracy without the use of augmentation methods. Additionally, we experimentally demonstrate our method and compare with other classification algorithms on the CE-MRI dataset using evaluations like F1-score, AUC, precision and recall.
Transfer Learning, MRI, Brain Cancer.
## I Introduction
Diseases can be caused by external sources like infections and internal dysfunctions, and they are frequently identified by distinct signs and symptoms. Cancer is regarded as the most dangerous and life-threatening of these illnesses. There are almost \(2\) million new cases of brain tumors per year in Bangladesh, which has a population of \(165\) million [1][2].
Brain tumors differ in type depending on their location, size, and shape within this vital organ [3]. Whether a brain tumor is benign or malignant affects therapy and prognosis, together with variables like location, patient age, and general health [4]. Moreover, brain tumor incidence is rising in Bangladesh, which may be related to environmental, lifestyle, or hereditary causes. To account for malignancy [5], the World Health Organization (WHO) divides brain tumors into two classes. The origin, location, and benign or malignant status of the tumor determine its classification. Astrocytomas, oligodendrogliomas, and ependymomas are frequent gliomas that develop from glial cells [6].
Brain tumors are diagnosed using a mix of physical examinations, patient history reviews, and medical imaging techniques like CT scans, MRIs, and biopsies [7][8]. Surgery, radiation therapy, chemotherapy, and other medical procedures are all available as treatments for brain tumors in Bangladesh. Radiologists are favoring medical imaging modalities more and more due to their effectiveness and patient safety. Particularly MRI provides rich soft tissue information through intricate multidirectional imaging, making it a key technique for finding brain malignancies. However, the technique has some drawbacks such as the inability to handle huge datasets, automate classification, or handle non-linear relationships between inputs and outputs [9].
People suffer every year as a result of misdiagnosing the kind of tumor in the early stages or of unanticipated tumor discoveries during the initial test [10]. Machine learning techniques have been used by numerous authors to categorize various tumor types [11][12][13][14]. Despite substantial advances in image segmentation and classification, machine learning still has some drawbacks [15][16]. Especially with diverse datasets, traditional machine learning algorithms fail to learn complicated image characteristics and frequently rely on hand-engineered features that might not capture all relevant information. In contrast to traditional methods, Convolutional Neural Networks (CNNs) [17], which are built for image analysis, excel at recognizing complex patterns in medical images that can otherwise be missed. This allows for a more accurate classification of brain tumors.
This study is an effort to improve brain tumor classification by utilizing four well-known pre-trained models (ResNet50, VGG16, Inception-V3, and MobileNet-V2) and comparing their evaluation scores. These models were fine-tuned and trained on a large dataset of brain tumor MRI images. The goal of the study is to better understand the performance indicators and the comparative performance of the four pre-trained models in the specific domain of brain tumor classification. By enabling better diagnosis and treatment planning for people with brain tumors, this research advances the field.
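For readers unfamiliar with the workflow, the fine-tuning pattern shared by all four backbones can be sketched in a few lines of PyTorch. The directory layout, class names, and hyperparameters below are illustrative placeholders and do not correspond to the exact training configuration used in our experiments; only ResNet-50 is shown, and the other backbones can be swapped in analogously.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical folder layout: data/train/{glioma,meningioma,pituitary}/*.png
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet statistics
])
train_ds = datasets.ImageFolder("data/train", transform=tfm)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)  # pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 3)                           # new head for the three tumor classes

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):                                                 # illustrative number of epochs
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

Evaluation on a held-out split with accuracy, precision, recall, F1-score, and AUC then follows the metrics listed in the abstract.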
## II Related Work
Guan et al. proposed [18] a model to improve the visual quality of images by utilizing contrast optimization and non |
2307.16812 | Even-point Multi-loop Unitarity and its Applications: Exponentiation,
Anomalies and Evanescence | We identify novel structure in newly computed multi-loop amplitudes and
quantum actions for even-point effective field theories, including both the
nonlinear sigma model (NLSM) and double-copy gauge theories such as Born-Infeld
and its supersymmetric generalizations. We exploit special properties of all
even-point theories towards efficient unitarity based amplitude construction.
In doing so, we find evidence that the leading IR divergence of NLSM amplitudes
exponentiates when the symmetry group is $\mathbb{CP}^1\cong SU(2)/U(1)$. We
then systematically compute the two-loop anomalous behavior of Born-Infeld, and
find that the counterterms needed to restore $U(1)$ invariant behavior at
loop-level can be constructed via a symmetric-structure double-copy. We also
demonstrate that the divergent part of the one-minus $(-+++)$ two-loop anomaly
vanishes upon introducing an evanescent operator. In addition to these pure
photon counterterms, we verify through explicit calculation that the anomalous
matrix elements that violate $U(1)$ duality invariance can be alternatively
cancelled by summing over internal $\mathcal{N}=4$ DBIVA superfields. Finally
we find that $\mathcal{N}=4$ Dirac-Born-Infeld-Volkov-Akulov (DBIVA) amplitudes
permit double-copy construction through two-loop order by reproducing our
unitarity based result with a double copy between color-dual $\mathcal{N}=4$
super-Yang-Mills and our two-loop NLSM amplitudes. This result supports the
possibility of color-dual representations for NLSM beyond one-loop. We conclude
with an overview of how $D$-dimensional four-photon counterterms can be
constructed in generality with the symmetric-structure double-copy, and outline
a convenient way of counting evanescent operators using Hilbert series as
generating functions. | John Joseph M. Carrasco, Nicolas H. Pavao | 2023-07-31T16:29:17Z | http://arxiv.org/abs/2307.16812v1 | # Even-point Multi-loop Unitarity and its Applications: Exponentiation, Anomalies and Evanescence
###### Abstract
We identify novel structure in newly computed multi-loop amplitudes and quantum actions for even-point effective field theories, including both the nonlinear sigma model (NLSM) and double-copy gauge theories such as Born-Infeld and its supersymmetric generalizations. We exploit special properties of all even-point theories towards efficient unitarity based amplitude construction. In doing so, we find evidence that the leading IR divergence of NLSM amplitudes exponentiates when the symmetry group is \(\mathbb{CP}^{1}\cong SU(2)/U(1)\). We then systematically compute the two-loop anomalous behavior of Born-Infeld, and find that the counterterms needed to restore \(U(1)\) invariant behavior at loop-level can be constructed via a symmetric-structure double-copy. We also demonstrate that the divergent part of the one-minus \((-+++)\) two-loop anomaly vanishes upon introducing an evanescent operator. In addition to these pure photon counterterms, we verify through explicit calculation that the anomalous matrix elements that violate \(U(1)\) duality invariance can be alternatively cancelled by summing over internal \(\mathcal{N}=4\) DBIVA superfields. Finally we find that \(\mathcal{N}=4\) Dirac-Born-Infeld-Volkov-Akulov (DBIVA) amplitudes permit double-copy construction through two-loop order by reproducing our unitarity based result with a double copy between color-dual \(\mathcal{N}=4\) super-Yang-Mills and our two-loop NLSM amplitudes. This result supports the possibility of color-dual representations for NLSM beyond one-loop. We conclude with an overview of how \(D\)-dimensional four-photon counterterms can be constructed in generality with the symmetric-structure double-copy, and outline a convenient way of counting evanescent operators using Hilbert series as generating functions.
###### Contents
* 1 Introduction
* 2 Review
* 2.1 Color-dressed and ordered amplitudes
* 2.2 Color-Kinematics Duality and the Double-Copy
* 2.3 4D Spinor Helicity vs. \(D\)-dimensions
* 2.4 One-loop integral basis and tensor reduction
* 2.5 Even-point Effective Field Theories
* 2.6 On-shell Unitarity methods
* 3 Even-point Multi-loop Unitarity
* 3.1 Multi-loop recursive integrals
* 3.2 Two-loop tensor reduction
* 3.3 Gauge-invariant on-shell basis
* 4 Loop-level results
* 4.1 NLSM via EMU
* 4.1.1 One-loop
* 4.1.2 Two-loop
* 4.2 DBIVA via EMU
* 4.2.1 One-loop DBIVA
* 4.2.2 Two-loop Born-Infeld
* 4.2.3 Two-loop \({\cal N}=4\) DBIVA
* 4.3 DBIVA via double copy
* 4.3.1 One-loop \({\cal N}=4\) DBIVA
* 4.3.2 Two-loop \({\cal N}=4\) DBIVA
* 5 Effective Actions
* 5.1 Anomaly cancellation
* 5.1.1 One-loop
* 5.1.2 Two-loop
* 5.2 Double copy construction
* 5.2.1 Symmetric-structure double-copy
* 5.2.2 Higher-spin \(\otimes\) Adler Zero
* 5.3 Evanescent operator counting
* 6 Conclusions
Introduction
Recent decades have seen significant advances in our ability to compute scattering amplitudes to higher orders in perturbation theory. At the epicenter of this explosion in the generation of sharp \(S\)-matrix data is the unitarity method [1; 2; 3], which bypasses the standard Feynman diagram approach by constructing higher order amplitudes directly from lower order on-shell information. At one-loop, the unitarity method allows one to extract amplitudes by directly computing a series of unitarity cuts [4]. On-shell methods have furthermore unveiled novel amplitude-level structure, like the duality between color and kinematics [5] and associated double-copy construction [6], which are frequently obscured by the traditional Lagrangian description of the theories. While much of the research in perturbative calculations thus far has focused on gauge theory and gravity, recent literature has investigated to what extent the \(S\)-matrix of effective field theory is constrained by on-shell data [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25].
In this paper we make the quantum leap to the multi-loop sector of effective field theory. Namely we study even-point (EP) effective field theory (EFT) at the multi-loop level using a combination of generalized unitarity [1; 2; 3] and the double-copy construction [5; 6]. The EFTs that we consider are the nonlinear sigma model (NLSM) and a related family of gauge theories involving Born-Infeld theory and supersymmetric generalizations known as Dirac-Born-Infeld-Volkov-Akulov (DBIVA) theories. These families are known at tree-level to be double-copies between NLSM and Yang-Mills theories with varying amounts of supersymmetry. While we focus on these specific examples, the methods we develop can be used for perturbative calculations in any even-point effective field theory. The motivation for this work is three-fold.
1. First, perturbative calculations in NLSM and DBIVA effective field theories are in many ways significantly simpler than the typical multi-loop calculations of more phenomenological theories, like quantum-chromodynamics (QCD). As we will show, the multi-loop amplitudes for these theories allow us to recycle many of the \(D\)-dimensional integration tools that are so powerful at one-loop order. To approach this problem, we expand on the now standard procedure of Forde [4], which computes one-loop amplitudes directly from 4D unitarity cuts. In section 3 we'll show that there are only two basis integrals needed at two-loop four-point for even-point effective field theory amplitudes. In the spirit of one-loop unitarity, the integral coefficients can be directly extracted from Even-point Multi-loop Unitarity (EMU) and a tensor reduction algorithm. Our method of EMU is a \(D\)-dimensional approach to integrand construction. This allows us to compute a large catalog of \(D\)-dimensional two-loop amplitudes, which provides fertile ground for cultivating insights about the multi-loop structure of effective field theories. The perturbative depth of our calculations sheds light on the exponential structure of IR divergences for \(\mathbb{CP}^{1}\) NLSM amplitudes through two-loops, the anomalous behavior of \(U(1)\) duality invariance through two-loop order in DBIVA theories, as well as the emergence of evanescent operators relevant for anomaly cancellation in pure Born-Infeld theory. We believe these surprisingly rich physical structures could serve as a theoretical laboratory for future studies of the interplay between effective field theory operators and perturbative calculations in quantum field theory.
2. The second motivation is that very little is known about the duality between color and kinematics at multi-loop level for generic models in the web of theories [26]. For a comprehensive review of color-dual representations, see Refs. [26; 27; 28] and references therein. While there has been tremendous success in applying the color-kinematics duality to \(\mathcal{N}=4\) super-Yang-Mills, for which color-dual integrands are known through four-point four-loop [29; 30; 31], five-point through three-loops, and to seven-point at one-loop [32; 33; 34], there are many obstructions for generic gauge/gravity theories. Presently, the state-of-the-art for nonlinear sigma model (NLSM) color-dual numerators comes from the _XYZ_ model of Cheung and Shen [35], while \(D\)-dimensional integrands for pure Yang-Mills have only been identified through five-point one-loop [36]. At present, there are no known \(D\)-dimensional representations for either of these theories at two-loop that globally manifest the duality between color and kinematics, despite attempts in the literature [37; 38; 39; 40]. Furthermore, similar bottlenecks exist for less-than-maximal \(\mathcal{N}<2\) sYM [41]. While there have been a number of recent developments in constructing manifestly color-dual Feynman rules [42; 43; 44; 45; 46; 47; 48], a precise definition of the kinematic algebra off-shell remains elusive. Considering the known tree-level double-copy relationships between NLSM and DBIVA and higher-derivative corrections [49; 50], all of the amplitudes we compute should in principle participate in a double-copy formulation of our results. At the very least, the amplitudes we compute will serve as important checks on future breakthroughs in multi-loop studies of color-dual representations. In addition to traditional double-copy construction, we find that the quantum effective actions generated by loop effects permit a rather compact construction in terms of symmetric-structure double-copy, which was introduced in recent work by the authors [51].
3. Finally, DBIVA effective field theories touch a wide range of research areas at the forefront of high energy physics. Of more formal interest is the presence of quantum anomalies that violate the \(U(1)\) duality invariance, which the theory enjoys at tree-level [52]. The presence of these \(U(1)\) anomalies in gravity is closely linked to ultraviolet divergences computed at loop-level [53; 54; 55; 56]. It has been argued that cancelling these anomalous matrix elements with \(R^{n}\) counterterms can lead to enhanced UV cancellations [57; 58; 59]. This relationship has been demonstrated both for pure Einstein-Hilbert gravity [60; 61; 62], and also for less than maximal \(\mathcal{N}\leq 4\) supergravity [63]. While DBIVA theories are themselves ultraviolet divergent in \(D=4\), due to their simplicity they serve as an essential laboratory for probing anomaly cancellation at high loop order. Moreover, in addition to their formal theory relevance, DBI theories have garnered wide phenomenological interest in cosmology, both for sourcing non-gaussianities in CMB bispectrum [64; 65; 66], and for their compatibility with the observed CMB tensor mode suppression [67; 68; 69; 70; 71]. Thus, with inflationary data on the horizon, understanding the perturbative structure of DBI could be particularly relevant for modeling early universe quantum fluctuations.
The outline of the paper is as follows: in section 2, we will provide a review of the on-shell methods and integration techniques needed to probe the multi-loop physics studied in this paper. Then in section 3, we introduce the method of Even-point Multi-loop Unitarity (EMU) and compute the requisite two-loop tensor integrals. In section 4, we present our
results, beginning with a warm-up calculation in section 4.1 where we construct integrands for NLSM through two-loop order, and compute the fully integrated amplitudes. We show that in \(D=2-2\epsilon\), the leading IR divergences of the \(\mathbb{CP}^{1}\) model exponentiate. With the scaffolding of \(D\)-dimensional integration in hand, we then compute two-loop amplitudes for \(\mathcal{N}=4\) DBIVA theory and pure-photon Born-Infeld theory in section 4.2. There we compute the anomalies present beyond one-loop order. Then, using the NLSM integrands of section 4.1, we perform a multi-loop double-copy to \(\mathcal{N}=4\) DBIVA observables with the color-dual basis numerators available in the literature in section 4.3. After computing the catalog of two-loop amplitudes, we study the construction of quantum effective actions in section 5 as they relate to the anomalous matrix elements present in two-loop Born-Infeld amplitudes. We demonstrate in section 5.1 that the one-minus anomaly at two-loop requires the introduction of an evanescent operator at \(\mathcal{O}(\alpha^{\prime 4})\). Then in section 5.2 we take the opportunity to discuss the application of symmetric-structure double copy to cancel these anomalies, along with their relationship to higher-spin modes as studied in a recent work by the authors. Finally, in section 5.3 we lay out a Hilbert series framework for counting evanescent operators at general orders in mass-dimension. To conclude, in section 6 we discuss many directions of future work and summarize the insights gained from this study.
## 2 Review
### Color-dressed and ordered amplitudes
Here we provide an overview of the amplitudes nomenclature and organizational principles we use throughout the work. When working with scattering amplitudes, \(\mathcal{A}\), in an on-shell framework it is convenient to introduce the following graphical description at general multiplicity, \(n\), and loop order, \(L\),
\[\mathcal{A}_{n,L}=\int\prod_{i=1}^{L}\frac{d^{D}l_{i}}{(2\pi)^{D}}\sum_{g} \frac{1}{S_{g}}\frac{\mathcal{N}_{g}}{D_{g}}\,. \tag{1}\]
As written above, \(S_{g}\) are the internal symmetry factors for a Feynman graph, \(g\), the propagator structure is captured by the denominator function, \(D_{g}\), and the theory-dependent interaction vertices determine numerator functions, \(\mathcal{N}_{g}\). As written, the numerator functions carry all non-propagator kinematic information and any color-data that would be encoded in the interaction Feynman rules of the theory. By color-data we mean the typical weighting of Feynman diagrams by the representation of the particles described. Most of the theories we describe are defined in the adjoint, so the color-data involves the usual dressing of vertices with antisymmetric structure constants, \(f^{abc}\). As we will see, though, it will also be useful to describe certain counterterms in terms of symmetric \(d^{abc}\) color-weights.
For gauge theories, it is often useful to further decompose the full, or color-dressed, amplitudes of eq. (1) into purely kinematic building blocks where the color information is stripped away. For example, at tree level, gauge theory \(n\)-point amplitudes can be re-expressed
in terms of a trace basis decomposition as follows:
\[\mathcal{A}_{n,\text{tree}}=\sum_{\sigma\in S^{n-1}}\text{Tr}(T^{1}T^{\sigma(2)} \cdots T^{\sigma(n)})A_{n}(1,\sigma(2),...,\sigma(n))\,, \tag{2}\]
where the sum is taken over the inequivalent orderings of group theory generator traces. A similar decomposition applies to one-loop gauge theory amplitudes, which we will describe in section 4. The color-ordered functions, \(A_{n}\), are called partial amplitudes, as they only contain on-shell information for a particular color ordering. However, like full amplitudes, they factor on poles to lower-point partial amplitudes. For vector theories partial amplitudes are independently gauge invariant since they weight linearly independent color traces.
If we consider the kinematic gauge-invariant ordered amplitudes themselves, many gauge theories reveal hidden redundancy. The \((n-1)!\) partial amplitudes for a large catalog [26] of adjoint gauge theories, including Yang-Mills and NLSM, are linearly related to a smaller set of basis amplitudes. The first such set of partial amplitude relations, identified by Kleiss and Kuijf [72] and known formally as KK relations, relates amplitudes with different color orderings and signatures to a basis of \((n-2)!\) partial amplitudes:
\[A(1,\alpha,n,\beta)=(-1)^{|\beta|}\sum_{\sigma\in\alpha\sqcup\beta^{T}}A(1, \sigma,n)\,, \tag{3}\]
where \(\beta^{T}\) denotes the inverse ordering of \(\beta\), and \(\sqcup\) is the shuffle product that runs over ordered permutations of \(\alpha\) and \(\beta^{T}\). More recently, Bern, Johansson, and one of the authors (BCJ) further identified a set of kinematic relations that reduced the size of the \(n\)-point partial amplitude basis down to \((n-3)!\) via BCJ relations [5]:
\[\sum_{i=2}^{n-1}k_{1}\cdot(k_{2}+\cdots+k_{i})\,A(2,\ldots,i,1,i+1,\ldots,n)=0\,, \tag{4}\]
Underlying both sets of amplitude relations are hidden graphical principles that manifest the amplitude level redundancy of eq. (3) and eq. (4), which we describe in the next section. Before moving on, we emphasize that everything described thus far applies in arbitrary spacetime dimensions. As such, these building-blocks are well suited for constructing loop integrands compatible with dimensional regularization.
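To make the shuffle combinatorics of eq. (3) concrete, here is a minimal sketch in Python (our own helper names, purely symbolic leg labels, no kinematics evaluated) that expands an ordered amplitude into basis orderings via the Kleiss-Kuijf relation:

```python
from itertools import combinations

def shuffle(alpha, beta):
    """All interleavings of alpha and beta that preserve the internal
    ordering of each list (the shuffle product appearing in eq. (3))."""
    n, m = len(alpha), len(beta)
    for slots in combinations(range(n + m), n):
        merged, ia, ib = [], 0, 0
        for pos in range(n + m):
            if pos in slots:
                merged.append(alpha[ia]); ia += 1
            else:
                merged.append(beta[ib]); ib += 1
        yield merged

def kk_expand(alpha, beta, n_label):
    """Signed orderings reproducing A(1, alpha, n, beta) via eq. (3):
    (-1)^{|beta|} times the sum over alpha shuffled with reversed(beta)."""
    sign = (-1) ** len(beta)
    return [(sign, (1, *sigma, n_label))
            for sigma in shuffle(list(alpha), list(reversed(beta)))]

# Example: expand the 6-point ordering A(1,2,3,6,5,4) as a signed sum of A(1,...,6)
for sgn, ordering in kk_expand((2, 3), (5, 4), 6):
    print(sgn, ordering)
```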
### Color-Kinematics Duality and the Double-Copy
To apprehend this graph-based structure for color-dressed theories, we first rewrite the full amplitude of eq. (1) as a sum over cubic graphs, as follows:
\[\mathcal{A}_{n,L}=\int\prod_{i=1}^{L}\frac{d^{D}l_{i}}{(2\pi)^{D}}\sum_{g\in \Gamma^{(3)}_{n,L}}\frac{1}{S_{g}}\frac{C_{g}N_{g}}{D_{g}}\,, \tag{5}\]
where \(N_{g}\) and \(C_{g}\) are the kinematic numerators and color factors, respectively, and \(\Gamma^{(3)}_{n,L}\) denotes the set of cubic diagrams at \(n\)-point \(L\)-loop. In this formulation of the amplitude,
contact diagrams are absorbed into cubic graphs by multiplying by inverse propagators. The procedure of assigning contact diagrams to cubic graphs is referred to as generalized gauge freedom [5]. In practice, there are many different ways to assign contact diagrams to cubic graphs.
The gauge theories that we study in this paper have sufficient generalized gauge freedom to assign contact diagrams to kinematic numerators such that at every multiplicity at tree level one can write the amplitude in terms of \(N_{g}\) that obey the same algebraic relations as the color factors, \(C_{g}\). Color factors composed of adjoint structure constants, \(f^{abc}\), are antisymmetric around each vertex and related by Jacobi identities about each edge. For \(SU(N_{c})\) we can take \(f^{abc}\propto\text{Tr}(T^{a}[T^{b},T^{c}])\) for arbitrary representation \(T\) of the gauge group.
We will also consider generalization beyond the adjoint to color-weights that can include \(d^{abc}\) and associated algebraic relations. For \(SU(N_{c})\) we can take \(d^{abc}\propto\text{Tr}[T^{a}\{T^{b},T^{c}\}]\). Symmetric-structure color factors obey symmetric algebraic relations rather than anti-symmetric ones.
When a gauge theory can be expressed in a form such that the color and kinematic factors obey the same algebraic relations, we call such theories _color-dual_. For ordered amplitudes, the ability to find kinematic numerators that obey antisymmetry and Jacobi relations is one-to-one with the ordered amplitudes satisfying KK and BCJ relations, respectively.
An important consequence of the duality between color and kinematics is the ability to double-copy color-dual numerators with each other [5], while simultaneously preserving factorization and gauge invariance of scattering amplitudes. Because cubic-graph color-weights are not linearly independent, gauge-invariance of the full amplitude is encoded in algebraic relation between the color factors. Thus, equipped with a set of color-dual numerators, \(\tilde{N}_{g}\), that obey the same algebraic relations as \(C_{g}\), we can make the simple replacement, \(C_{g}\to\tilde{N}_{g}\), giving a new gauge invariant amplitude, \(\mathcal{M}\), as follows:
\[\mathcal{M}_{n,L}=\int\prod_{i=1}^{L}\frac{d^{D}l_{i}}{(2\pi)^{D}}\sum_{g\in \Gamma^{(3)}_{n,L}}\frac{1}{S_{g}}\frac{\tilde{N}_{g}N_{g}}{D_{g}}\,. \tag{6}\]
Note that \(\mathcal{M}\) is colorless. It satisfies manifest gauge invariance on the \(N_{g}\) side of the double copy due to the algebraic properties of \(\tilde{N}_{g}\). In the event that \(\tilde{N}_{g}\) also belongs to a vector theory, then \(\mathcal{M}\) describes amplitudes in a gravitational theory. In such a case, the gauge invariance of constituent \(\tilde{\mathcal{A}}\) and \(\mathcal{A}\) in eq. (5) conspire to generate linearized diffeomorphism invariance in eq. (6). As long as both kinematic factors obey the same algebraic constraints as the color algebra, this double copy construction works for integrands at the multi-loop level since it globally manifests color-kinematics on all possible unitarity cuts [5].
Throughout the text we will reserve \(\mathcal{M}\) to denote double-copy amplitudes. As shorthand, when the kinematic weights of two theories, \(X\) and \(Y\), participate in the double copy
construction of \(\mathcal{M}^{XY}\) we will use the outer-product to mean double copy construction as in,
\[\mathcal{M}^{\rm XY}=X\otimes Y\equiv\int\prod_{i=1}^{L}\frac{d^{D}l_{i}}{(2\pi) ^{D}}\sum_{g\in\Gamma^{(3)}_{n,L}}\frac{1}{S_{g}}\frac{N_{g}^{X}N_{g}^{Y}}{D_{ g}}\,. \tag{7}\]
At tree-level, such double-copy construction in the adjoint case can be understood equivalently [73; 74; 75; 76; 77; 78] in terms of a KLT or momentum kernel matrix, \(S(a|b)\), and a BCJ spanning set of ordered amplitudes for each of the \(X\) and \(Y\) theories,
\[X\otimes Y=\sum_{a,b\in S^{n-3}_{(2,\ldots,n-2)}}A^{X}(1,\{a\},n,n-1)S(a|b)A^{Y }(1,\{b\},n-1,n)\,. \tag{8}\]
The sum runs over both sets, \(a\) and \(b\), of \((n-3)!\) distinct ordered amplitudes. No such momentum kernel has yet been constructed for the symmetric case [51].
As we will see, double-copy construction between symmetric color-dual kinematic numerators [51] is quite natural for capturing Born-Infeld counterterms needed for anomaly cancellation beyond one loop. Indeed, many of the higher derivative counterterms captured by double-copying symmetric kinematic factors are inaccessible1 to the traditional double copy between adjoint kinematics. We will describe this non-adjoint color-dual double-copy in more detail in section 5.2, where we will explicate the tension between local symmetric numerators and higher-spin adjoint numerators.
Footnote 1: As identified in Ref. [51] they can technically be described in terms of adjoint double-copy but between presumably unphysical theories that require factorization involving higher-spin exchange.
### 4D Spinor Helicity vs. \(D\)-dimensions
All the amplitude methods we have discussed thus far are valid in arbitrary dimension. This has a number of advantages that we will touch upon when reviewing integrand construction towards dimensionally regulated amplitudes in section 2.6. While most of our calculations will be carried out completely agnostic to the spacetime dimension, the vector amplitudes we compute will have 4D symmetries that are hidden in \(D\)-dimensional kinematics. It is useful therefore to also consider explicitly four-dimensional kinematics.
When working with bosonic kinematics in arbitrary dimensions, we will employ formal Lorentz covariant polarizations, \(\varepsilon_{a}^{\mu}\), and momenta, \(k_{a}^{\mu}\). The mostly minus signature will be used throughout. The only restrictions we will place on physical momenta and polarizations are the standard on-shell constraints: momentum conservation and the null-momenta condition,
\[\sum_{a}k_{a}^{\mu}=0\,,\qquad\qquad k_{a}^{2}=0\,. \tag{9}\]
Furthermore, we will take external polarizations \(\varepsilon_{a}\) formal and transverse throughout:
\[\varepsilon_{a}\cdot k_{a}=0\,. \tag{10}\]
Dimensions will generically be away from four, in \(D=4-2\epsilon\), with \(\epsilon\) arbitrary. As noted, it will be convenient to take \(D\to 4\) in certain circumstances after integration. In such cases we will employ appropriate spinor-helicity variables. Here we apply the same conventions of Ref. [79], which we quote now. For massless momenta, \(k_{a}\) and \(k_{b}\), we have
\[s_{ab}=(k_{a}+k_{b})^{2}=\langle ab\rangle[ba]\,, \tag{11}\]
where the component definitions of our spinor brackets, consistent with the conventions above, are
\[\langle ab\rangle =\frac{(a_{1}+ia_{2})(b_{0}+b_{3})-(b_{1}+ib_{2})(a_{0}+a_{3})}{ \sqrt{(a_{0}+a_{3})(b_{0}+b_{3})}}\,, \tag{12}\] \[[ab] =\frac{(b_{1}-ib_{2})(a_{0}+a_{3})-(a_{1}-ia_{2})(b_{0}+b_{3})}{ \sqrt{(a_{0}+a_{3})(b_{0}+b_{3})}}\,, \tag{13}\]
where the \(a_{i}\) are component values of the four-vector, \(k_{a}^{\mu}=(a_{0},a_{1},a_{2},a_{3})\). Four-dimensional polarization dot products with fixed helicity states can be mapped as follows:
\[\begin{split} k_{a}\cdot\varepsilon_{b}^{(+)}&= \frac{\langle qa\rangle[ab]}{\sqrt{2}\langle qb\rangle}\,,& k_{a}\cdot\varepsilon_{b}^{(-)}=-\frac{[qa]\langle ab \rangle}{\sqrt{2}[qb]}\,,\\ \varepsilon_{a}^{(-)}\cdot\varepsilon_{b}^{(+)}&= -\frac{\langle qa\rangle[qb]}{[qa]\langle qb\rangle}\,,& \varepsilon_{a}^{(\pm)}\cdot\varepsilon_{b}^{(\pm)}=0\,,\end{split} \tag{14}\]
where \(q^{\mu}\) is some null reference momentum that all polarizations are projected along. While all of the four-dimensional equivalences relayed in this paper can be verified analytically, in practice it is often much more convenient to check them numerically.
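As an example of such a numerical check, the following sketch (ours, using numpy, and assuming generic real null momenta with \(a_{0}+a_{3}>0\)) implements the component brackets of eqs. (12)-(13) and verifies \(s_{ab}=\langle ab\rangle[ba]\) from eq. (11):

```python
import numpy as np

def random_null_momentum(rng):
    """A random massless four-vector k = (E, E*n) in mostly-minus signature."""
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    E = rng.uniform(1.0, 2.0)
    return np.array([E, *(E * n)])

def angle(a, b):
    """<ab> bracket, component form of eq. (12)."""
    return ((a[1] + 1j*a[2])*(b[0] + b[3]) - (b[1] + 1j*b[2])*(a[0] + a[3])) \
           / np.sqrt((a[0] + a[3])*(b[0] + b[3]) + 0j)

def square(a, b):
    """[ab] bracket, component form of eq. (13)."""
    return ((b[1] - 1j*b[2])*(a[0] + a[3]) - (a[1] - 1j*a[2])*(b[0] + b[3])) \
           / np.sqrt((a[0] + a[3])*(b[0] + b[3]) + 0j)

def minkowski(a, b):
    """Mostly-minus Lorentz product a.b."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

rng = np.random.default_rng(0)
ka, kb = random_null_momentum(rng), random_null_momentum(rng)
s_ab = minkowski(ka + kb, ka + kb)
# eq. (11): s_ab = <ab>[ba] for massless momenta
print(np.isclose(s_ab, (angle(ka, kb) * square(kb, ka)).real))
```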
### One-loop integral basis and tensor reduction
It is well known that one-loop amplitudes are spanned by a small basis of scalar integrals [80; 4; 81]. This property can be exposed via a \(D\)-dimensional integral reduction algorithm due to Passarino and Veltman [82]. At one-loop, the basis of irreducible scalar products (ISPs) that include loop momenta grows in lock step with the number of propagators. Explicitly, an \(N\)-gon integral can have \(N-1\) factors of \((k_{i}\cdot l)\), due to momentum conservation, and exactly one factor of \(l^{2}\); this matches the number of \(N\)-gon propagators.
Furthermore, factors of \((\varepsilon_{i}\cdot l)\) can be mapped to tensor structures of external kinematics with factors of \((\varepsilon_{i}\cdot k_{j})(k_{i}\cdot l)/(k_{i}\cdot k_{j})\) using polarization completeness relations [57]. Thus, any \(N\)-gon tensor integral permits a partial fraction decomposition in terms of inverse propagators, and thus can be mapped to a scalar basis of integrals:
\[T_{\mu_{1}\mu_{2}\ldots\mu_{r}}I_{N}^{\mu_{1}\mu_{2}\ldots\mu_{r}}=\sum_{M=1} ^{N}C_{M}I_{M}\,, \tag{15}\]
where \(T_{\mu_{1}\mu_{2}\ldots\mu_{r}}\) and \(C_{M}\) are functions exclusively of external kinematics. This relationship holds \(D\)-dimensionally for any \(r\)-rank \(N\)-gon one-loop integral. Thus, any one-loop amplitude can be expressed completely in terms of a basis of one-loop scalar integrals. This special
property of one-loop integration is not the case for generic loop order, where the basis of ISPs grows faster than the number of inverse propagators. As such, any universal integral basis beyond one-loop must include integrals with non-unit powers of the propagators.
The remarkable simplicity of one-loop amplitudes is even more dramatic in a fixed space-time dimension. In \(D=4\), the kinematic projection to a basis of scalar integrals saturates at the box integral due to the appearance of Gram determinants. Roughly speaking, \(N\)-gon integral coefficients of eq. (15) come dressed with a factor of \({\cal G}_{N}\),
\[{\cal G}_{N}=\det(k_{i}\cdot k_{j})\,, \tag{16}\]
where \(k_{i}\) are the external momenta flowing into the \(N\)-gon integral. In fixed spacetime dimension, \(D\), the momenta are \(D\)-component vectors and thus the Lorentz product matrix \((k_{i}\cdot k_{j})\) must have a null space for \(N>D\). As a result, \(C_{M}\neq 0\), only when \(M\leq D+1\). Furthermore, in \(D=4-2\epsilon\) dimensions, any scalar pentagon integral can be rewritten up to \({\cal O}(\epsilon)\) in terms of five pinched scalar box integrals [83]. All the information of one-loop 4D amplitudes can therefore be completely determined by evaluating up to box integrals, regardless of the multiplicity. We will demonstrate in the next section how this can be used in 4D integral construction.
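A quick numerical illustration of the statement that the Lorentz product matrix \((k_{i}\cdot k_{j})\) develops a null space for \(N>D\) (a sketch of ours with numpy, using five generic massless momenta in \(D=4\)):

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])           # mostly-minus metric

def null_momentum():
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    E = rng.uniform(1.0, 2.0)
    return np.array([E, *(E * n)])

# five generic massless momenta in D = 4
ks = [null_momentum() for _ in range(5)]
gram = np.array([[ki @ eta @ kj for kj in ks] for ki in ks])

# five four-component vectors are linearly dependent, so the 5x5 Gram matrix is singular
print(np.linalg.matrix_rank(gram, tol=1e-8))     # 4, not 5
print(abs(np.linalg.det(gram)) < 1e-10)          # True: the determinant vanishes in D = 4
```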
The last important feature of one-loop basis integrals is that many of them can be evaluated in arbitrary dimension for a very general set of parameters appearing in the exponents of the propagators. Take for example the \(D\)-dimensional massless triangle and bubble integrals [84] that we will use throughout the text:
\[\begin{split} I^{(\alpha_{1},\alpha_{2})}_{2,(K)}& =\int\frac{d^{D}l}{(2\pi)^{D}}\frac{1}{[l^{2}]^{\alpha_{1}}[(l+K) ^{2}]^{\alpha_{2}}}\\ &=i\left[-\frac{K^{2}}{4\pi}\right]^{D/2}\frac{\Gamma(D/2-\alpha_ {1})\Gamma(D/2-\alpha_{2})\Gamma(\alpha_{12}-D/2)}{[K^{2}]^{\alpha_{12}}\Gamma (\alpha_{1})\Gamma(\alpha_{2})\Gamma(D-\alpha_{12})}\,,\end{split} \tag{17}\]
\[\begin{split} I^{(\alpha_{1},\alpha_{2},\alpha_{3})}_{3,(K_{12})}& =\int\frac{d^{D}l}{(2\pi)^{D}}\frac{1}{[l^{2}]^{\alpha_{1}}[(l+K_ {1})^{2}]^{\alpha_{2}}[(l+K_{12})^{2}]^{\alpha_{3}}}\\ &=i\left[-\frac{K_{12}^{2}}{4\pi}\right]^{D/2}\frac{\Gamma(D/2- \alpha_{12})\Gamma(D/2-\alpha_{23})\Gamma(\alpha_{123}-D/2)}{[K_{12}^{2}]^{ \alpha_{123}}\Gamma(\alpha_{1})\Gamma(\alpha_{3})\Gamma(D-\alpha_{123})}\,, \end{split} \tag{18}\]
where we have introduced the subscript notation for indexed sums \(X_{12\ldots n}=X_{1}+X_{2}+\cdots+X_{n}\). Throughout the text we will suppress the regularization scale that appears in the argument of the logarithms as they can be restored by dimensional analysis. The above integral expressions hold for massless external kinematics, \(K_{i}^{2}=0\), which applies to all the integrals needed for the physical processes described in the text.
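For later convenience, here is a minimal sympy transcription of the bubble formula in eq. (17), with the overall \(i\,[-1/(4\pi)]^{D/2}\) normalization stripped off (an assumption of this sketch); the familiar \(1/\epsilon\) pole of the unit-power bubble comes out as a check.

```python
import sympy as sp

D, eps = sp.symbols('D epsilon')
K2 = sp.symbols('K2', positive=True)

def bubble(a1, a2, dim=D):
    """Massless bubble I_2^{(alpha1,alpha2)}(K) of eq. (17), with the overall
    factor i[-1/(4 pi)]^{D/2} stripped off."""
    a12 = a1 + a2
    return K2**(dim/2 - a12) * (
        sp.gamma(dim/2 - a1) * sp.gamma(dim/2 - a2) * sp.gamma(a12 - dim/2)
        / (sp.gamma(a1) * sp.gamma(a2) * sp.gamma(dim - a12)))

unit_bubble = bubble(1, 1, dim=4 - 2*eps)
# residue of the 1/eps pole of the standard alpha_1 = alpha_2 = 1 bubble
print(sp.limit(eps * unit_bubble, eps, 0))       # 1
```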
The existence of closed form expressions, like those in eq. (17), while common for one-loop integrals, are incredibly rare for higher loop order outside of very specialized kinematic regimes. Luckily, the integral basis at four-point two-loop order for even-point theories can be reconstructed from the one-loop expressions in eq. (17). This is a special property of
what we refer to as _recursively one-loop_ amplitudes. We can exemplify this property in the simple context of scalar \(\varphi^{2k}\) theory.
Consider the interactions needed to construct two loop amplitudes for the generic arbitrarily weighted \(\varphi^{2k}\)-scalar theory:
\[\mathcal{L}_{\varphi^{2k}}=\frac{1}{2}(\partial\varphi)^{2}+c_{4}\varphi^{4}+c _{6}\varphi^{6}+c_{8}\varphi^{8}+\cdots \tag{18}\]
The two-loop amplitude for \(\varphi^{2k}\) theory is simply generated by evaluating the following set of scalar 1-particle-irreducible (1PI) integrals:
\[\mathcal{M}_{\varphi^{2k}}^{\text{2-loop}}=\frac{c_{4}^{3}}{4}\,\big(\text{double-bubble diagram}\big)\;+\;\cdots\]
### Even-point Effective Field Theories

Several of the theories we consider, in particular the supersymmetric extensions of Born-Infeld, have natural formulations in terms of vierbeins that depend on fermionic degrees of freedom. For such theories, it is useful to distinguish between the coordinate-independent actions, \(S\), and the coordinate-dependent Lagrangian densities, \(\mathcal{L}\),
\[S=\int d^{D}x\,\mathcal{L}\,. \tag{21}\]
**Nonlinear Sigma Model.** The simplest effective field theory that we will consider is the nonlinear sigma model (NLSM). The chiral NLSM Lagrangian density with diagonal symmetry group \(G=SU(N_{c})\) can be expressed in terms of the chiral current, \(j_{\mu}=U^{\dagger}\partial_{\mu}U\), as follows,
\[\mathcal{L}^{\rm NLSM}=\frac{1}{2}\text{tr}[(\partial_{\mu}U)^{\dagger}( \partial^{\mu}U)]=\frac{1}{2}\text{tr}\left[\frac{(\partial_{\mu}\pi)(\partial ^{\mu}\pi)}{(1-f_{\pi}^{-2}\pi^{2})^{2}}\right]\,, \tag{22}\]
where the trace is over the color indices of the gauge group, \(\pi\equiv\pi^{a}T^{a}\). On the right hand side we have applied the Cayley parameterization to the \(SU(N_{c})\) group elements, \(U=\frac{1+\pi/f_{\pi}}{1-\pi/f_{\pi}}\), where \(f_{\pi}\) is the dimensionful pion decay constant. By construction, this model has a left-right global symmetry, \(G\times G\), that is nonlinearly realized, and a linearly realized diagonal subgroup, \(G\). This theory is the unique two-derivative EFT that is invariant under constant shifts of the pion field in color space:
\[\pi^{a}\to\pi^{a}+c^{a}\,. \tag{23}\]
We could also consider higher derivative generalizations of the NLSM that are invariant under the shift symmetry of eq. (23). Such EFTs correspond to the so-called chiral limit of chiral perturbation theory [85; 86; 87; 88], describing massless pion scattering below the chiral symmetry breaking scale of QCD:
\[\mathcal{L}^{\chi\rm PT}=\frac{1}{2}\text{tr}[(\partial_{\mu}U)^{\dagger}( \partial^{\mu}U)]+\frac{\beta_{1}}{f_{\pi}^{2}}\text{tr}[(\partial_{\mu}U)^{ \dagger}(\partial^{\mu}U)]^{2}+\frac{\beta_{2}}{f_{\pi}^{2}}\text{tr}[( \partial_{\mu}U)^{\dagger}(\partial_{\nu}U)]\text{tr}[(\partial^{\mu}U)^{ \dagger}(\partial^{\nu}U)]+\cdots \tag{24}\]
By construction, each of the higher derivative operators appearing above is invariant under the shift symmetry of eq. (23). These higher derivative generalizations of NLSM have been studied both in the context of the \(S\)-matrix bootstrap [89; 90; 91], and soft theorem bootstraps of Refs. [8; 9; 10; 11; 12]. Given the relevance of eq. (24) in Standard Model pion scattering, the amplitudes for \(G=SU(2)\) isospin pions have been computed through two-loop order [92; 93; 94]. In this work, we add to the literature by computing the two-loop amplitudes of eq. (22) for arbitrary \(SU(N_{c})\) color structure. We then use these amplitudes to compute \(\mathcal{N}=4\) DBIVA amplitudes via the double copy in section 4.3.
In addition to the chiral NLSM described above, our methods apply equally well to a target space model describing pion dynamics on a coset manifold \(G/H\), with a nonlinearly realized global symmetry \(G\) broken down to a linearly realized isometry group \(H\). Recent literature that studies the universal soft behavior of these models can be found in Refs. [95; 96; 97; 98]. One such model that we will study in the text is the \(\mathbb{CP}^{N}\) model where the global \(SU(N+1)\) is spontaneously broken down to a local \(U(N)\) symmetry on the target space. The Lagrangian
for the theory is given below:
\[\mathcal{L}^{\text{NLSM}}_{\mathbb{CP}^{N}}=\frac{1}{2}P(z,\bar{z})^{ij}(\partial _{\mu}\bar{z}_{i})(\partial^{\mu}z_{j})=\frac{1}{2}\frac{(f_{\pi}^{2}+\bar{z}z) \delta^{ij}-z^{i}\bar{z}^{j}}{(f_{\pi}^{2}+\bar{z}z)^{2}}(\partial_{\mu}\bar{z }_{i})(\partial^{\mu}z_{j})\,, \tag{2.25}\]
where \(z=(z_{1},z_{2},...,z_{N})\) is a complex vector in the fundamental representation of \(U(N)\), and \(P(z,\bar{z})\) is the Fubini-Study metric on \(\mathbb{CP}^{N}\). The full \(SU(N+1)\) symmetry of the theory is realized due to the embedding of \(\mathbb{CP}^{N}\) in a complex space one dimension higher, \(\mathbb{C}^{N+1}\). In section 4.1, we will demonstrate through explicit calculation that the leading IR divergences of the \(\mathbb{CP}^{1}\) model exponentiate in \(D=2\), the critical dimension of the theory.
**Born-Infeld Photons and Dirac Scalars.** The second even-point EFT we will study is Born-Infeld theory [99]. This theory describes the dynamics of open string endpoints with \(U(1)\) charge attached to a \(D\)-dimensional spacetime with Dirichlet boundary conditions. Given the stringy dynamics orthogonal to the spacetime, the electric field generated by the charged endpoints has a maximum field strength of order the inverse string tension, \(1/\alpha^{\prime}\). The effective Lagrangian for this theory is
\[\alpha^{\prime\,2}\mathcal{L}^{\text{BI}}=1-\sqrt{\det(\eta_{\mu\nu}+\alpha^{ \prime}F_{\mu\nu})}\,. \tag{2.26}\]
A close cousin of Born-Infeld theory is the dimensional reduction to Dirac scalars, which describes a \(D\)-dimensional spacetime propagating in a \((D+1)\)-dimensional background:
\[\alpha^{\prime\,2}\mathcal{L}^{\text{DBI}}=1-\sqrt{\det(\eta_{\mu\nu}+\alpha^ {\prime\,2}\partial_{\mu}\varphi\partial_{\nu}\varphi)}\,, \tag{2.27}\]
where \(\varphi\) is the spacetime coordinate in the orthogonal space. This theory is uniquely determined by considering the most general polynomial of \(X=(\partial\varphi)^{2}\), \(P(X)\), constrained to be invariant under the field redefinition [10]:
\[\varphi\rightarrow\varphi+c+b^{\mu}(x_{\mu}+\varphi\partial_{\mu}\varphi)\,. \tag{2.28}\]
When the transformation law leaves the polynomial \(P(X)\) theory invariant, there is an additional set of \(D+1\) conserved charges, and the Poincare symmetry of \(P(X)\) EFT is promoted from \(D\) to \((D+1)\) dimensions. One can verify that applying eq. (2.28) to eq. (2.27) shifts the Lagrangian by a total derivative [100]. Similar to the soft theorem constraints on the building blocks of NLSM and \(\chi\)PT, the DBI action can be bootstrapped by requiring the amplitudes vanish at \(\mathcal{O}(p^{2})\) when taking external momentum, \(p\), soft [10].
**Volkov-Akulov Fermions and Supersymmetry.** Now we will describe the supersymmetric extensions of the Born-Infeld action given above in eq. (2.26). We start by defining the theory of shift-symmetric fermions first written down by Volkov and Akulov (VA) [101]. The easiest way to define this shift-symmetric VA theory is by constructing a volume form in the vierbein frame out of manifestly supersymmetric 1-forms. That is, define a 1-form,
\(\omega^{m}=e^{m}_{\mu}dx^{\mu}\), where the vierbeins (frame fields), \(e^{m}_{\mu}\), are expressed in terms of the fermion fields, \(\lambda\), as follows:
\[e^{m}_{\mu}=\delta^{m}_{\mu}+i\bar{\lambda}\Gamma^{m}\overset{ \leftrightarrow}{\partial}_{\mu}\lambda=\delta^{m}_{\mu}+i(\bar{\lambda} \Gamma^{m}\partial_{\mu}\lambda-\partial_{\mu}\bar{\lambda}\Gamma^{m}\lambda)\,. \tag{29}\]
This frame field is manifestly invariant under the superspace shift:
\[\delta\lambda\to\eta\qquad\delta\bar{\lambda}\to\bar{\eta}\qquad \delta x^{\mu}\to x^{\mu}-i(\bar{\lambda}\,\Gamma^{\mu}\eta-i\bar{\eta}\, \Gamma^{\mu}\lambda)\,. \tag{30}\]
The Volkov-Akulov action in the vierbein frame is then simply the fermionic world-volume integral over the \(D\)-form [102],
\[S^{\rm VA}=\int\omega^{1}\wedge\omega^{2}\wedge\cdots\wedge\omega^{D}\,. \tag{31}\]
This is not dissimilar from the construction of the pure scalar DBI action above. Likewise, we can write the Lagrangian in the spacetime coordinate frame in parallel to eq. (26) and eq. (27). Transforming back into spacetime coordinates yields the following Lagrangian density for self-interacting VA fermions up to four-point interactions:
\[\alpha^{\prime\,2}{\cal L}^{\rm VA}=\det(e^{m}_{\mu})=i\bar{\lambda}\overset{ \leftrightarrow}{\partial}\lambda+\frac{1}{2}(\bar{\lambda}\Gamma^{\mu} \partial^{\nu}\lambda)(\bar{\lambda}\Gamma_{\mu}\partial_{\nu}\lambda)+{\cal O }(\partial^{4}\lambda^{6})\,. \tag{32}\]
Furthermore, a convenient choice of field redefinition was proposed by Komargodski and Seiberg (KS) [103; 104], which defines a new fermion field, \(\psi=\psi(\lambda,\bar{\lambda})\), in a way that is perturbatively equivalent to the VA fermions. This construction is equivalent to a theory of nilpotent chiral superfields [105; 106; 107], relevant for the construction of \(\alpha\)-attractor models of inflation [108]. Applying the KS nonlinear field redefinition yields the considerably simpler Lagrangian for VA theory:
\[\alpha^{\prime\,2}{\cal L}^{\rm VA}=i\bar{\psi}\not{\partial}\psi+\frac{1}{2 }\bar{\psi}^{2}\partial^{2}\psi^{2}+\frac{1}{4}\psi^{2}\bar{\psi}^{2}\partial ^{2}\psi^{2}\partial^{2}\bar{\psi}^{2}\,. \tag{33}\]
With the world-volume formulation of VA theory in hand, we can also trivially construct the action for maximal \({\cal N}=1\) supersymmetric Born-Infeld theory in \(D=10\)[109]. We simply replace the flat space background of eq. (26) with the fermionic background of eq. (32),
\[S^{\rm DBIVA}_{{\cal N}=1}=\int\omega^{1}\wedge\omega^{2}\wedge\cdots\wedge \omega^{10}\sqrt{\det(\eta_{mn}+\alpha^{\prime}F_{mn})}\,. \tag{34}\]
It is worth noting that the fermion-vector interaction is introduced via the non-minimal coupling between the field strength and the fermionic spacetime metric [110]. This differs from the linearly realized supersymmetry of super-Yang-Mills theory, which minimally couples the vector mode to the chiral fermion. This non-minimal coupling is a requirement of supersymmetry due to the even-point nature of abelian vector theories like Born-Infeld.
The action above in eq. (34) can be similarly re-expressed in terms of our canonical spacetime coordinates, giving us the following flat-space Lagrangian for maximally supersymmetric Born-Infeld theory in \(D=10\):
\[\alpha^{\prime\,2}{\cal L}^{\rm DBIVA}=\sqrt{\det\left(\eta_{\mu\nu}-\alpha^ {\prime}F_{\mu\nu}+\alpha^{\prime\,2}(\bar{\lambda}\Gamma_{\mu}\partial_{\nu} \lambda)-\alpha^{\prime 4}(\bar{\lambda}\Gamma^{\rho}\partial_{\mu}\lambda)(\bar{ \lambda}\Gamma_{\rho}\partial_{\nu}\lambda)\right)}\,. \tag{35}\]
To recover the \(D=4\) theory, we simply dimensionally reduce in a way that preserves the maximum number of supercharges, which is \(\mathcal{N}=4\) in \(D=4\). Implementing a dimensional reduction that preserves the supercharges of eq. (2.35) is rather straightforward in an on-shell framework:
\[\mathcal{M}^{\rm DBIVA}=A^{\rm NLSM}\otimes A^{\rm sYM}\,. \tag{2.36}\]
Since the double copy construction holds \(D\)-dimensionally at tree-level [26], double-copying maximal sYM with NLSM will yield the analogous spectrum of maximally supersymmetric DBIVA theory, whether in \(D=10\), as described above, or in \(D=4\).
**Abelian Open String.** Similar to the amplitudes of DBIVA theories described above, the tree-level amplitudes for the open superstring (OSS) [111, 112] likewise permit a double copy construction [113]. To construct open-superstring amplitudes we simply double copy maximally supersymmetric super-Yang-Mills (sYM) and Chan-Paton dressed \(Z\)-theory amplitudes [114, 50, 50], as follows,
\[A^{\rm OSS}=Z\otimes A^{\rm sYM}\,, \tag{2.37}\]
where \(\otimes\) implements the field theory double copy [116, 5] defined in eq. (2.7). Moreover, the amplitudes of DBIVA emerge from the above construction as the field theory limit of the abelian open superstring [117]. To recover the abelian sector, one simply sums over all Chan-Paton color orderings on the side of the bi-colored \(Z\)-theory amplitudes that obey string monodromy relations. In the field theory limit, the observables for so-called abelian \(Z\)-theory are simply the amplitudes generated by the NLSM Lagrangian [50]. That is, given an \(n\)-point amplitude in \(Z\)-theory, with field theory ordering \(a=(a_{1},a_{2},...,a_{n})\) and string theoretic color ordering \(A=(A_{1},A_{2},...,A_{n})\), one finds the following relation:
\[A^{\rm NLSM}(a_{1},a_{2},...,a_{n})\equiv\lim_{\alpha^{\prime}\to 0}(\alpha^{ \prime})^{2-n}Z_{\times}(a_{1},a_{2},...,a_{n})\,, \tag{2.38}\]
where \(Z_{\times}\) are abelian \(Z\)-theory amplitudes of [50], defined by summing all possible orderings of Chan-Paton factors:
\[Z_{\times}(a_{1},a_{2},...,a_{n})\equiv\sum_{A\in S^{n-1}}Z_{(A_{1},...,A_{n- 1},n)}(a_{1},a_{2},...,a_{n})\,. \tag{2.39}\]
Since the field theory limit of abelian \(Z\)-theory produces NLSM amplitudes, the field theory limit of the abelian OSS gives rise to DBIVA observables at leading order in \(\alpha^{\prime}\):
\[\begin{split}\lim_{\alpha^{\prime}\to 0}Z_{\times}& =A^{\rm NLSM}\otimes A^{\rm BAS}\equiv A^{\rm NLSM}\,,\\ \Rightarrow&\lim_{\alpha^{\prime}\to 0}A^{\rm OSS} _{\times}&=A^{\rm NLSM}\otimes A^{\rm sYM}\equiv A^{\rm DBIVA} \,.\end{split} \tag{2.40}\]
Additional details on the bi-colored \(Z\)-theory amplitudes and these string-theoretic double-copy constructions can be found in [118, 50, 113, 114, 50] and references therein. We emphasize that while NLSM and DBIVA are the two theories that we will focus on, the methods that we develop in the next section apply much more broadly to the full catalog of even-point effective field theories.
**Duality Invariance and Higher Derivatives.** The four-dimensional amplitudes of DBIVA theories have an additional special property that we will study in this work. At tree-level, the amplitudes generated by eq. (26) and eq. (34) exhibit \(U(1)\) _duality invariance_ [119; 120]. A theory is said to exhibit duality invariance when the effect of rotating field strengths, \(F^{\mu\nu}\), into the dual fields, \(G^{\mu\nu}\), defined below,
\[G^{\mu\nu}=\epsilon^{\mu\nu\rho\sigma}\frac{\partial\mathcal{L}}{\partial F^{ \rho\sigma}}\,, \tag{41}\]
leaves the equations of motion invariant [121; 122]. This dynamical symmetry gives rise to a 4D helicity selection rule [120] whereby tree-level matrix elements vanish on-shell outside the aligned-helicity sector,
\[\mathcal{M}^{\text{DBIVA}}_{(n_{-},n_{+})}=0\qquad\Leftrightarrow\qquad n_{- }\neq n_{+}\,, \tag{42}\]
where \(n_{(+)}\) and \(n_{(-)}\) are the numbers of external positive and negative helicity photons, respectively. For \(\mathcal{N}=0\) pure Born-Infeld theory [99; 123], this symmetry is violated at four-point one-loop in the all-plus \((++++)\) helicity sector [124]. In the text, we will show how this anomaly is propagated to two-loop, and demonstrate that \(U(1)\) duality invariance is further broken in the \((-+++)\) configuration beyond one-loop.
Before proceeding, we comment briefly on the higher derivative corrections one might expect to appear in duality invariant effective field theories. In the case of supersymmetric DBIVA at four-points, \(U(1)\) symmetry is promoted to an \(R\)-symmetry, and is thus protected perturbatively to all orders by supersymmetric Ward Identities [125]. We show this explicitly in the case of \(\mathcal{N}=4\) DBIVA through two-loop in section 4.2. However, this supersymmetric enhancement of \(U(1)\) duality does not in general apply to higher multiplicity amplitudes.
While the leading order in \(\alpha^{\prime}\) must be duality invariant at all multiplicity for DBIVA [21], beyond the leading order, \(R\)-symmetry permits non-vanishing matrix elements outside the split helicity sector. The low energy effective action of the OSS is one such exemplar of an EFT that respects \(U(1)\) duality at leading order in \(\alpha^{\prime}\), but produces duality violating matrix elements at higher-orders above four-point [50].
### On-shell Unitarity methods
Above we have reviewed the known behavior of one-loop integral bases in \(D=4\) spacetime dimensions. In this section we will briefly demonstrate why traditional 4D unitarity methods are so powerful - and why we must go beyond them to accommodate the multi-loop calculations of interest here.
**Integrands from 4D Cuts.** Applying one-loop integral reduction in 4D tells us that any one-loop massless amplitude, for which tadpoles are scaleless, can be expressed in terms of box, triangle, and bubble diagrams, with each integral weighted by dimensionally-dependent functions of kinematics:
\[\mathcal{A}^{\text{1-loop}}=\sum_{N=2}^{4}C_{N}(D)I_{N}^{D}=\sum_{N=2}^{4}C_{ N}(4)I_{N}^{D}+\mathcal{R}\,, \tag{43}\]
where the scalar basis-integrals, \(I_{N}^{D}\), are evaluated in \(D=4-2\epsilon\), and \({\cal R}\) is a rational term that emerges from series expanding around \(\epsilon=(4-D)/2\). To construct integral coefficients, \(C_{N}\), we make use of the Optical Theorem for the \(S\)-matrix, which states that,
\[S^{\dagger}S=\mathbb{1}\quad\Rightarrow\quad 2\,{\rm Im}(T)=T^{\dagger}T\,, \tag{44}\]
where \(S=\mathbb{1}+iT\). Applying this relation to eq. (43) allows us to identify the imaginary part of \({\cal A}^{\text{1-loop}}\) with the imaginary parts of each basis integral weighted by the respective 4D coefficient, \(C_{N}(4)\):
\[{\rm Im}({\cal A}^{\text{1-loop}})=\sum_{N=2}^{4}C_{N}(4)\,{\rm Im }(I_{N}^{D})\,. \tag{45}\]
Using the optical theorem to constrain scattering amplitudes is known as the _unitarity method_[1; 3]. In order to isolate each \(C_{N}(4)\) appearing in eq. (43), we must place particular internal legs on-shell. That is, for each internal loop propagator that we wish to take on-shell, \(1/P_{i}^{2}\), we make the following replacement inside the integral:
\[\frac{i}{P_{i}^{2}+i\varepsilon}\to 2\pi\delta^{(+)}(P_{i}^{2}+i \varepsilon)\,. \tag{46}\]
By replacing internal propagators with positive-root delta functions, one can isolate the imaginary parts needed to determine the integral coefficients. This approach of taking individual legs on-shell is known as the method of _generalized unitarity_ - and it can be systematically applied to construct all one-loop amplitudes directly from 4D unitarity cuts [4]. Using generalized unitarity, the integral coefficients correspond to the following set of iterated cuts:
\[C_{4}^{(P_{i}P_{j}P_{k})}\sim\text{(iterated quadruple cut)}\,, \tag{47}\]
\[C_{3}^{(P_{i}P_{j})}\sim\text{(iterated triple cut, minus box contributions)}\,, \tag{48}\]
\[C_{2}^{(P_{i})}\sim\text{(iterated double cut, minus triangle and box contributions)}\,, \tag{49}\]
where exposed legs are summed over all internal states crossing the cut, and the labels \(P_{i}\) are external momentum scales flowing into the specified cuts. The subtractions account for the overlapping information in the respective \(N\)-gon cuts. By momentum conservation, the momentum flow in the unlabeled vertex is completely determined by the other \(N-1\) vertices.
As a concrete example, consider the 4D cut construction of the bubble coefficient for pure Born-Infeld theory:
\[\text{Cut}_{s_{12}}\big|_{(1^{-}2^{-}|3^{+}4^{+})}=\sum_{h_{i}\in\text{states}}\mathcal{M}^{\text{BI}}(1^{-},2^{-},l_{2}^{h_{2}},l_{1}^{h_{1}})\,\mathcal{M}^{\text{BI}}(-l_{1}^{\bar{h}_{1}},-l_{2}^{\bar{h}_{2}},3^{+},4^{+}) \tag{51}\]
\[=\langle 12\rangle^{2}[l_{2}l_{1}]^{2}\langle l_{1}l_{2}\rangle^{2}[34]^{2} \tag{52}\]
\[=[(l_{1}+l_{2})^{2}]^{2}\langle 12\rangle^{2}[34]^{2} \tag{53}\]
\[=s_{12}^{2}\langle 12\rangle^{2}[34]^{2}\,, \tag{54}\]
where in the last step we have made the replacement \((l_{1}+l_{2})=-(k_{1}+k_{2})\) by momentum conservation. Determining the leading behavior of the 1-loop Born-Infeld amplitude thus amounts to specifying all possible 4D cuts.
While this method is incredibly powerful, as we can see from eq. (45) it misses a key piece of the amplitude - namely, the rational terms, \(\mathcal{R}\). For some theories like \(\mathcal{N}=4\) super-Yang-Mills in \(D=4\) [2], rational terms are absent because the loop power counting for an \(N\)-gon cut is bounded to be less than \(N-2\). However, for generic theories of interest in this paper, like Born-Infeld, rational terms are present and necessary for extracting anomalous matrix elements and higher-loop sub-divergences. While there are methods developed to extract these rational terms at one-loop using \(\mu\)-integrals [80], at generic loop order evaluating these \(\mu\)-integrals can become fairly complicated. For this reason, to perform the multi-loop calculations in this paper, we will employ cut construction in general dimensions [126; 127].
**Integrands from \(D\)-dimensional Cuts.** The presence of 4D rational terms in the amplitude is due to the appearance of \((D-4)\) factors. By constructing integrands \(D\)-dimensionally, we can guarantee that our integral coefficients are sensitive to these overall factors. Rather than summing over internal 4D states, we will instead compute \(D\)-dimensional cuts using the formal polarizations described in section 2.3. The \(D\)-dimensional analog of eq. (51) asserts that the one-loop integrand \(\mathcal{I}_{\text{1-loop}}\) of pure Born-Infeld theory should satisfy the following constraint:
\[\text{Cut}(\mathcal{I}_{\text{1-loop}})=\sum_{h_{i}\in\text{states}}\mathcal{M}_{4}^{\text{BI}}(1,2,l_{1}^{h_{1}},-l_{2}^{\bar{h}_{2}})\,\mathcal{M}_{4}^{\text{BI}}(3,4,l_{2}^{h_{2}},-l_{1}^{\bar{h}_{1}})\,, \tag{55}\]
where \(\text{Cut}(\mathcal{I}_{\text{1-loop}})\) extracts the kinematic numerator by imposing a maximal cut on the one-loop integrand
\[\text{Cut}(\mathcal{I}_{\text{1-loop}})\equiv l_{1}^{2}l_{2}^{2}(\mathcal{I}_ {\text{1-loop}})\Big{|}_{l_{1}^{2},l_{2}^{2}\to 0}\,. \tag{56}\]
To carry out the sum over formal polarization states in eq. (55) we apply the following \(D\)-dimensional completeness relation for on-shell polarizations,
\[\sum_{\text{states}}\varepsilon^{\mu}_{(l)}\varepsilon^{\nu}_{(-l)}=\eta^{\mu \nu}-\frac{l^{\mu}q^{\nu}+l^{\nu}q^{\mu}}{l\cdot q}, \tag{57}\]
with null reference momentum, \(q^{2}=0\). The dependence on \(q\) is a gauge choice that disappears when on-shell kinematic constraints are imposed. At loop level, the state-sum of eq. (56) can potentially lead to dimension dependence in the \(D\)-dimensional integrand. This is due to terms that contain products of internal polarizations:
\[\begin{split}\mathcal{I}_{\text{1-loop}}&\supset \sum_{\text{states}}\varepsilon^{\mu}_{(l)}\varepsilon^{\nu}_{(-l)}\eta_{\mu\nu }\\ &=\left(\eta^{\mu\nu}-\frac{l^{\mu}q^{\nu}+l^{\nu}q^{\mu}}{l\cdot q }\right)\eta_{\mu\nu}\\ &=D_{s}-2\,,\end{split} \tag{57}\]
where \(D_{s}\) is the spin-dimension of the internal photons, and thus \(D_{s}-2\) just counts the number of photon states. To calculate our loop-level amplitudes, we use dimensional regularization to regulate the integrals. We will adopt the conventions of [128; 129] and take the spin-dimension, \(D_{s}\), to be the same as the dimension appearing in dimensional regularization, \(D=4-2\epsilon\). We use them interchangeably throughout. The appearance of these \(D\)-dependent factors in the state sum are the source of rational terms in the Born-Infeld \(S\)-matrix. Throughout the text, we will use the generalized unitarity cut condition of eq. (54) to fix our multiloop integrands. With this in hand, in the spirit of Forde's one-loop cut construction, we will now systematize the procedure for capturing all the cut information needed for even-point theories at the multiloop order.
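As a small sanity check of this state counting, the sketch below (ours, done in a fixed integer dimension rather than in dimensional regularization) contracts the completeness relation with the metric and recovers \(D_{s}-2\):

```python
import numpy as np

D = 6                                            # fixed spin dimension for illustration
eta = np.diag([1.0] + [-1.0]*(D - 1))            # eta_{mu nu}, mostly-minus metric
eta_up = np.linalg.inv(eta)                      # eta^{mu nu}; equals eta for a diagonal +-...- metric

rng = np.random.default_rng(2)
def null_vector():
    n = rng.normal(size=D - 1); n /= np.linalg.norm(n)
    E = rng.uniform(1.0, 2.0)
    return np.array([E, *(E * n)])

l, q = null_vector(), null_vector()
l_dot_q = l @ eta @ q

# completeness relation: sum over states of eps^mu(l) eps^nu(-l)
state_sum = eta_up - (np.outer(l, q) + np.outer(q, l)) / l_dot_q

# contracting with eta_{mu nu} counts the physical photon states
contracted = np.einsum('mn,mn->', state_sum, eta)
print(np.isclose(contracted, D - 2))             # True: D_s - 2 states
```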
## 3 Even-point Multi-loop Unitarity
In this section we will describe the methods we have developed to compute the multi-loop results of this paper. We begin by introducing the notion of Even-point Multi-loop Unitarity (EMU), which is an organizational principle at the foundation of our unitarity-based integrand construction. EMU is an extension of the method of maximal cuts [130], which is a hierarchical approach to perturbative calculations [131]. EMU is a constructive algorithm aimed at capturing all the perturbative information needed at general loop order and multiplicity. We describe the algorithm below, and provide a 2-loop 4-point example that we choose to study in this paper.
**Even-point Multi-loop Unitarity (EMU)**
* At \(n\)-point \(l\)-loop, enumerate all 1-particle-irreducible (1PI) graphs constructed from \((n+2l-2)/2\) four-point vertices. We call these diagrams the maximal cut (MC) diagrams. Graphs with higher-point blobs are grouped into the N\({}^{k}\)MC category, where \(k\) is the number of collapsed internal propagators relative to the MC graphs.
* Culling from N\({}^{k}\)MC \(\to\partial\)N\({}^{k}\)MC:
  * **Step 1A** - Discard all diagrams at the given order N\({}^{k}\)MC that capture scaleless behavior from **internal** kinematics. For massless theories, this amounts to throwing out diagrams that contain one of the scaleless internal node topologies: \[\text{(scaleless internal nodes, shown diagrammatically)} \tag{3.1}\]
  * **Step 1B** - Discard all diagrams at the given order N\({}^{k}\)MC that capture scaleless behavior from **external** kinematics. At \(n\)-point, this amounts to rejecting diagrams that contain \(n-1\) external edges attached to a single blob: \[\text{(a single blob carrying }n-1\text{ external edges)} \tag{3.2}\]
* **Step 2** - Collapsing from \(\partial\)N\({}^{k}\)MC \(\to\) N\({}^{k+1}\)MC: If the culling procedure of Step 1 produces an empty set of graphs, \(\partial\)N\({}^{k}\)MC \(=\varnothing\), then the routine terminates. If not, collapse one of the internal propagators for all diagrams in all topologically distinct ways. Collapsing a propagator simply means merging the two nodes connected by that propagator's edge and removing that edge altogether, so the resulting graph has one less internal edge and one less internal node. This collapsing step takes the \(\partial\)N\({}^{k}\)MC set of diagrams to N\({}^{k+1}\)MC. Once there, repeat Steps 1A and 1B until the routine terminates. (A minimal enumeration sketch of these steps follows the two-loop four-point example below.)

This procedure for collecting all the cut information using EMU can be represented diagrammatically as a sequence of culling and collapsing as follows:
\[\text{MC}_{(l,n)}\overset{\text{S1}}{\to}\partial\text{MC}_{(l,n)}\overset{\text{S2}}{\to}\dots\overset{\text{S2}}{\to}\text{N}^{k}\text{MC}_{(l,n)}\overset{\text{S1}}{\to}\partial\text{N}^{k}\text{MC}_{(l,n)}\equiv\varnothing \tag{3.3}\]
After applying EMU and collecting all the diagrams up to some order N\({}^{k}\)MC\({}_{(l,n)}\), the set of cuts we use to fully constrain the \(n\)-point \(l\)-loop integrand, \(\Omega_{(l,n)}\), is the intersection of this sequence of unitarity cuts:
\[\Omega_{(l,n)}=\bigcap_{i=0}^{k}\partial\text{N}^{i}\text{MC}_{(l,n)} \tag{3.4}\]
Taking the intersection accounts for the overlapping information stored in the cut diagrams at different orders in N\({}^{k}\)MC. As a concrete example, let's consider the case of two-loop four-point of interest in this paper. First, we obtain the following set of diagrams at the MC level built from three four-point vertices:
\[\text{MC}_{(2,4)}=\big\{\,\text{three candidate topologies built from three four-point blobs (shown diagrammatically)}\,\big\} \tag{3.5}\]
Each blob corresponds to an on-shell tree-level amplitude from the even-point theory of interest. The third diagram is scaleless on support of dimensional regularization, and thus we discard it in Step 1A. This gives the following restricted set of MC diagrams:
\[\partial\text{MC}_{(2,4)}=\big\{\,\text{the two bubble-insertion topologies shown in Fig. 1}\,\big\} \tag{3.6}\]
Since \(\partial\text{MC}_{(2,4)}\) is not empty, we proceed to Step 2. After throwing out the scaleless diagram in \(\text{MC}_{(2,4)}\), this leads to the following set of \(\text{N}^{1}\text{MC}\) diagrams for two-loop four-point:
\[\text{N}^{1}\text{MC}_{(2,4)}=\big\{\,\text{two diagrams, each with one six-point and one four-point blob (shown diagrammatically)}\,\big\} \tag{3.7}\]
Now the first diagram is discarded in Step 1A, and the second diagram is discarded in Step 1B. Thus, we need not consider any next-to-maximal cuts for two-loop four-point amplitudes in even-point theories. Since \(\partial\text{N}^{1}\text{MC}_{(2,4)}=\varnothing\), this concludes the EMU cut construction.
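The enumeration and culling just illustrated are simple to script. The sketch below (our own Python, with no isomorphism reduction, so it counts labeled candidates rather than the three topologies of eq. (3.5)) enumerates two-loop four-point MC multigraphs; since the scaleless internal nodes of eq. (3.1) are specified only diagrammatically, Step 1A is approximated here by discarding self-loop (tadpole) nodes.

```python
from itertools import combinations_with_replacement

N_EXT, N_LOOPS = 4, 2
N_VERT = (N_EXT + 2 * N_LOOPS - 2) // 2          # three four-point blobs

def is_connected(edges):
    """Connectivity of the internal multigraph on the N_VERT blobs."""
    adj = {v: set() for v in range(N_VERT)}
    for i, j in edges:
        adj[i].add(j); adj[j].add(i)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == N_VERT

def candidate_mc_graphs():
    """Distribute external legs and internal edges over four-point blobs.
    Vertices are labeled; external legs are treated as interchangeable.
    A dedicated 1PI (bridge) test is omitted: at this multiplicity and loop
    order the connected, bridged candidates are removed by the culling below."""
    graphs = []
    for ext in combinations_with_replacement(range(N_VERT), N_EXT):
        ext_count = [ext.count(v) for v in range(N_VERT)]
        int_deg = [4 - e for e in ext_count]
        if min(int_deg) < 0:
            continue
        pairs = [(i, j) for i in range(N_VERT) for j in range(i, N_VERT)]
        for edges in combinations_with_replacement(pairs, sum(int_deg) // 2):
            deg = [0] * N_VERT
            for i, j in edges:
                deg[i] += 2 if i == j else 1
                if i != j:
                    deg[j] += 1
            if deg == int_deg and is_connected(edges):
                graphs.append((tuple(ext_count), edges))
    return graphs

def cull(graphs):
    """Steps 1A and 1B. Step 1A here only drops tadpole (self-loop) nodes,
    a stand-in for the scaleless internal nodes of eq. (3.1); Step 1B drops
    blobs carrying n-1 external edges."""
    kept = []
    for ext_count, edges in graphs:
        if any(i == j for i, j in edges):                  # Step 1A (partial)
            continue
        if max(ext_count) >= N_EXT - 1:                    # Step 1B
            continue
        kept.append((ext_count, edges))
    return kept

mc = candidate_mc_graphs()
print(len(mc), "labeled MC candidates;", len(cull(mc)), "survive culling")
```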
As we can see from the diagrams that contribute at two-loop four-point in eq. (3.6), even-point theories can be fully constructed from convolutions of one-loop integrals. As described earlier, we call such integrals recursively one-loop since we can iteratively apply one-loop unitarity and tensor reduction methods in order to recover the full multi-loop structure. In the next two sections, we first illustrate the recursive behavior of convolution integrals, and then use their properties to develop multi-loop tensor reduction methods that we use throughout the paper.
### Multi-loop recursive integrals
As we found above in our example of EMU, the only diagrams that contribute at two-loop four-point are those given in Fig. 1. Due to the bubble integral insertions, both of these integrals can be evaluated in terms of the one-loop basis integrals of eq. (17). This is a direct consequence of the single-scale nature of bubble insertions. Since internal bubbles are single-scale, they will only contribute additional powers of inverse propagators. This has the effect of shifting an \(L\)-loop convolution integral to \((L-1)\)-loop order, with a propagator power determined by the mass-dimension of the evaluated bubble integral. Explicitly, we can evaluate a \(D\)-dimensional nested bubble with internal momentum, \(q^{\mu}\), flowing into the diagram,
\[I^{(\alpha_{1},\alpha_{2})}_{2,(q)}\;\sim\;[q^{2}]^{D/2-\alpha_{1}-\alpha_{2}}\,, \tag{3.8}\]
where \(\alpha_{1}\) and \(\alpha_{2}\) are the degrees of the denominators appearing in the bubble integral. For example, consider a banana integral diagram constructed from six-point contacts that would appear in a three-loop application of EMU with external momentum scale \(q^{\mu}=(k_{1}+k_{2})^{\mu}\).
Evaluating the simplest case where all \(\alpha_{i}=1\), one finds that recursively applying eq. (3.8) yields the following sequence:
[Diagrammatic reduction sequence: starting from the banana with external scale \(k_{12}^{\mu}\) and unit powers \(\alpha_{i}=1\), each application of eq. (3.8) collapses one internal bubble into an effective propagator of power \(\alpha_{12}-D/2\), which feeds into the next bubble, until a single one-loop integral remains.]
hese linear relations can be used to line up the coefficients with distinct tensor structures, which gives the following:
\[0 =\sum a_{(m,k)}\left[K^{2}\mathcal{T}_{\rm bub}^{(m-1,k)}+(m+1) \mathcal{T}_{\rm bub}^{(m+1,k-1)}+\frac{K^{2}}{2}\mathcal{T}_{\rm bub}^{(m,k)}\right] \tag{3.17}\] \[=\sum\left[a_{(m+2,k)}K^{2}+a_{(m,k+1)}(m+1)+a_{(m+1,k)}\frac{K^ {2}}{2}\right]\mathcal{T}_{\rm bub}^{(m+1,k)}\,, \tag{3.18}\]
and similarly so for the metric contraction:
\[0 =\sum a_{(m,k)}\left[K^{2}\mathcal{T}_{\rm bub}^{(m-2,k)}+\left[ D+2(m+k-1)\right]\mathcal{T}_{\rm bub}^{(m,k-1)}\right] \tag{3.19}\] \[=\sum\left[a_{(m+2,k)}K^{2}+a_{(m,k+1)}\left[D+2(m+k)\right] \right]\mathcal{T}_{\rm bub}^{(m,k)}\,. \tag{3.20}\]
Treating the distinct tensor structures as basis elements, we thus conclude the following set of linear relations between the coefficients for the rank-\(n\) tensor bubble:
\[0 =K^{2}a_{(m+2,k)}+[D+2(m+k)]a_{(m,k+1)}\,, \tag{3.21}\] \[0 =K^{2}a_{(m+2,k)}+(m+1)a_{(m,k+1)}+\frac{1}{2}K^{2}a_{(m+1,k)}\,, \tag{3.22}\]
where \(D=\eta_{\mu\nu}\eta^{\mu\nu}\). These constraints can be rearranged to give the following recursive definition for the coefficients \(a_{(m,k)}\):
\[a_{(m,k)} =-\left[\frac{K^{2}}{D+2(m+k-1)}\right]a_{(m+2,k-1)} \tag{3.23}\] \[a_{(m,0)} =-\left[\frac{D+2(m-2)}{2(D+m-3)}\right]a_{(m-1,0)}\] \[a_{(0,0)} =I_{2}(K)\]
The base step is simply the scalar bubble integral, \(a_{(0,0)}=I_{2}(K)\). Constructing \(a_{(m,k)}\) from this recursive definition, the one-loop integrand of eq. (2.54) that contains factors of \((\varepsilon_{i}\!\cdot\!l)\) and \((k_{i}\!\cdot\!l)\) can be expressed completely in terms of the bubble integral, \(I_{2}(K)\), weighted by dimension dependent numerical factors and external vector kinematics.
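Since the recursion terminates after finitely many steps, it is straightforward to automate. The short sympy sketch below (the symbols `D`, `K2`, and `I2` are our own placeholders for the spacetime dimension, \(K^{2}\), and the scalar bubble \(I_{2}(K)\)) simply transcribes eq. (3.23); running it reproduces the familiar rank-one coefficient \(a_{(1,0)}=-\tfrac{1}{2}I_{2}(K)\), as well as the rank-two coefficients \(a_{(2,0)}=\tfrac{D}{4(D-1)}I_{2}(K)\) and \(a_{(0,1)}=-\tfrac{K^{2}}{4(D-1)}I_{2}(K)\):

```python
import sympy as sp
from functools import lru_cache

D, K2, I2 = sp.symbols('D K2 I2')   # spacetime dimension, K^2, and the scalar bubble I_2(K)

@lru_cache(maxsize=None)
def a_bub(m, k):
    """Coefficient a_(m,k) of the symmetrized tensor T_bub^(m,k), per the recursion of eq. (3.23)."""
    if m == 0 and k == 0:
        return I2                                               # base step: the scalar bubble
    if k > 0:
        # a_(m,k) = -[ K^2 / (D + 2(m+k-1)) ] a_(m+2, k-1)
        return sp.cancel(-K2 / (D + 2*(m + k - 1)) * a_bub(m + 2, k - 1))
    # a_(m,0) = -[ (D + 2(m-2)) / (2(D + m - 3)) ] a_(m-1, 0)
    return sp.cancel(-(D + 2*(m - 2)) / (2*(D + m - 3)) * a_bub(m - 1, 0))

print(a_bub(1, 0))                  # rank 1:  -I2/2
print(a_bub(2, 0), a_bub(0, 1))     # rank 2:  D*I2/(4(D-1))  and  -K2*I2/(4(D-1))
```

The rank-one output is the standard Passarino-Veltman statement \(I_{2}^{\mu}(K)=-\tfrac{1}{2}K^{\mu}I_{2}(K)\) for the massless bubble, which provides a quick sanity check on the recursion.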
We note that these coefficients are also sufficient to evaluate the contribution from the double-bubble integral whose propagator structure is sketched in Fig. 2. This is best demonstrated with an explicit example. Consider the following integral, \(I_{2\rm bub}^{\rm ex.}\), that functionally
Figure 2: Graphical depiction of the double-bubble integral. Every exposed internal edge represents a propagator in the integrand.
captures terms that could appear in the cut construction of the double-bubble integral at two-loop:
\[I^{\rm ex.}_{\rm 2bub}\equiv\int\frac{d^{D}l_{1}d^{D}l_{2}}{(2\pi)^{2D}} \frac{(l_{1}\!\cdot\!l_{2})^{2}(l_{1}\!\cdot\!v_{1})(l_{2}\!\cdot\!v_{2})}{l_{1 }^{2}(l_{1}+k_{12})^{2}l_{2}^{2}(l_{2}+k_{12})^{2}}\,, \tag{3.24}\]
where \(v_{i}\) is a stand-in for external kinematics, \(\varepsilon_{i}\) or \(k_{i}\). While the numerator mixes factors of \(l_{1}\) and \(l_{2}\), the denominator can be separated. This allows us to re-express the above integral completely in terms of iterated tensor bubbles of eq. (2.17):
\[I^{\rm ex.}_{\rm 2bub}=I_{2}^{\alpha\beta\gamma}(s_{12})I_{2}^{\mu\nu\rho}(s_ {12})\eta_{\alpha\beta}\eta_{\mu\nu}v_{1\gamma}v_{2\rho}\,. \tag{3.25}\]
Then, by applying eq. (3.11), and plugging in the expressions for \(a_{(m,k)}\), the kinematic numerator of \(I^{\rm ex.}_{\rm 2bub}\) no longer mixes loop momenta. Thus, the integral is separable and can be expressed as a product of scalar bubbles integrated over \(l_{1}\) and \(l_{2}\).
A similar procedure can be applied to the ostrich type diagrams of Fig. 3. However, the integration procedure is a little more delicate than the \(I^{\rm ex.}_{\rm 2bub}\) of eq. (3.24). As \(I^{\rm ex.}_{\rm 2bub}\) could be expressed as an iterated bubble, we only needed to consider integer powers of the denominators when constructing the recursion relations. In contrast, ostrich integrals will lead to non-integer, \(\epsilon\)-dependent powers of loop propagators. We are therefore interested in performing an \(x\)-dependent tensor reduction on the triangle integral,
\[I^{\mu_{1}\ldots\mu_{n}}_{3,x}(K_{12}) =\int\frac{d^{D}l}{(2\pi)^{D}}\frac{l^{\mu_{1}}l^{\mu_{2}}\ldots l ^{\mu_{n}}}{l^{2}(l+K_{1})^{2x}(l+K_{12})^{2}} \tag{3.26}\] \[=\sum_{m+l+2k=n}a_{(m,l,k)}^{x}{\cal T}^{(m,l,k)}_{\rm tri}\,, \tag{3.27}\]
with \(K_{12}=K_{1}+K_{2}\) introduced as shorthand notation. We note that \(x\) is a non-integer value that will inherit dependence on \(\epsilon\) from integrating over the internal bubble, \(I_{2}^{4-2\epsilon}(l+K_{1})\sim[(l+K_{1})^{2}]^{-\epsilon}\). The symmetrized triangle tensor takes the following definition:
\[{\cal T}^{(m,l,k)}_{\rm tri}\equiv K_{1}^{(\mu_{1}}\cdots K_{1}^{\mu_{m}}K_{2}^{\mu_{m+1}}\cdots K_{2}^{\mu_{m+l}}\,\eta^{\mu_{m+l+1}\mu_{m+l+2}}\cdots\eta^{\mu_{n-1}\mu_{n})}\,. \tag{3.28}\]
Figure 3: Graphical depiction of the ostrich integral. Every exposed internal edge represents a propagator in the integrand.
Given this definition, when we perform the tensor contractions over \(K_{1}\) and \(K_{2}\), the degree of \(x\) will get shifted:
\[K_{1\mu_{1}}I^{\mu_{1}\dots\mu_{n}}_{3,x} =\int\frac{d^{D}l}{(2\pi)^{D}}\frac{(K_{1}\cdot l)l^{\mu_{2}}\dots l ^{\mu_{n}}}{l^{2}(l+K_{1})^{2x}(l+K_{12})^{2}}=\frac{1}{2}I^{\mu_{2}\dots\mu_{n }}_{3,x-1}\,, \tag{3.29}\] \[K_{2\mu_{1}}I^{\mu_{1}\dots\mu_{n}}_{3,x} =\int\frac{d^{D}l}{(2\pi)^{D}}\frac{(K_{2}\cdot l)l^{\mu_{2}}\dots l ^{\mu_{n}}}{l^{2}(l+K_{1})^{2x}(l+K_{12})^{2}}=-\frac{1}{2}\left[I^{\mu_{2} \dots\mu_{n}}_{3,x-1}+K_{12}^{2}I^{\mu_{2}\dots\mu_{n}}_{3,x}\right]\,. \tag{3.30}\]
For our purposes, external momenta are taken to be null, \(K_{1}^{2}=K_{2}^{2}=0\). By construction, contracting with the metric will yield a scaleless integral, giving us the following constraint:
\[\eta_{\mu_{1}\mu_{2}}I^{\mu_{1}\dots\mu_{n}}_{3,x}=\int\frac{d^{D}l}{(2\pi)^{D }}\frac{l^{\mu_{2}}\dots l^{\mu_{n}}}{(l+K_{1})^{2x}(l+K_{12})^{2}}=0\,. \tag{3.31}\]
Note that in the above contractions with \(K_{1}\) and \(K_{2}\), the degree of the denominator gets shifted from \(x\to x-1\). For normal one-loop triangle integral reductions, for which \(x=1\), this would lead to a tensor bubble that we have computed in the previous section,
\[K_{1\mu_{1}}I^{\mu_{1}\dots\mu_{n}}_{3,x=1} =\frac{1}{2}I^{\mu_{2}\dots\mu_{n}}_{2}\,, \tag{3.32}\] \[K_{2\mu_{1}}I^{\mu_{1}\dots\mu_{n}}_{3,x=1} =-\frac{1}{2}\left[I^{\mu_{2}\dots\mu_{n}}_{2}+K^{2}_{12}I^{\mu_{ 2}\dots\mu_{n}}_{3,x=1}\right]\,. \tag{3.33}\]
However, since the two-loop integration leads to \(\epsilon\)-dependence, this will now induce additional factors of the scalar \(I_{3,\epsilon-n}\) in the final tensor reduction. Proceeding with the computation, these integral constraints can be similarly applied to our symmetrized tensors, yielding the following set of contractions:
\[K_{1}\cdot\mathcal{T}^{(m,l,k)}_{\text{tri}} =\frac{1}{2}K_{12}^{2}\mathcal{T}^{(m,l-1,k)}_{\text{tri}}+(m+1) \mathcal{T}^{(m+1,l,k-1)}_{\text{tri}}\,, \tag{3.34}\] \[K_{2}\cdot\mathcal{T}^{(m,l,k)}_{\text{tri}} =\frac{1}{2}K_{12}^{2}\mathcal{T}^{(m-1,l,k)}_{\text{tri}}+(l+1) \mathcal{T}^{(m,l+1,k-1)}_{\text{tri}}\,,\] (3.35) \[\eta\cdot\mathcal{T}^{(m,l,k)}_{\text{tri}} =K_{12}^{2}\mathcal{T}^{(m-1,l-1,k)}_{\text{tri}}+\left[D+2(m+k+l -1)\right]\mathcal{T}^{(m,l,k-1)}_{\text{tri}}\,. \tag{3.36}\]
Keeping this in mind, we obtain the following linear relations between the coefficients,
\[0 =2(m+1)a^{x}_{(m,l,k+1)}+s_{12}a^{x}_{(m+1,l+1,k)}-a^{x-1}_{(m+1, l,k)}\,, \tag{3.37}\] \[0 =2(l+1)a^{x}_{(m,l,k+1)}+s_{12}[a^{x}_{(m+1,l+1,k)}+a^{x}_{(m,l+1,k)}]+a^{x-1}_{(m,l+1,k)}\,,\] \[0 =[D+2(m+l+k)]a^{x}_{(m,l,k+1)}+s_{12}a^{x}_{(m+1,l+1,k)}\,.\]
Just as was done for the scalar bubble, these functional expressions can be rearranged to construct a family of recursion relations for the tensor coefficients:
\[a^{x}_{(m,l,k)} =-\left[\frac{s_{12}}{D+2(m+l+k-1)}\right]a^{x}_{(m+1,l+1,k-1)} \tag{3.38}\] \[a^{x}_{(m,l,0)} =-\left[\frac{D+2(m+l-2)}{D+2(m-2)}\right]\left[\frac{1}{s_{12}}a^{x-1}_{(m-1,l,0)}+a^{x}_{(m-1,l,0)}\right]\] \[a^{x}_{(0,l,0)} =\frac{1}{s_{12}}a^{x-1}_{(0,l-1,0)}\] \[a^{x}_{(0,0,0)} =I_{3,x}(K_{12})\]
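The only new ingredient relative to the bubble recursion is the bookkeeping of the non-integer power \(x\), which shifts as the \(m\)- and \(l\)-lowering steps are applied. A minimal sympy transcription of the recursion above (our own shorthand: `I3(x - j)` stands for the scalar triangle \(I_{3,x-j}(K_{12})\), and `s12` for \(K_{12}^{2}=s_{12}\)) makes this explicit, with the final answer landing on a sum of scalar triangles carrying different shifted indices:

```python
import sympy as sp
from functools import lru_cache

D, s12, x = sp.symbols('D s12 x')
I3 = sp.Function('I3')               # I3(x - j) denotes the scalar triangle I_{3, x-j}(K_12)

@lru_cache(maxsize=None)
def a_tri(m, l, k, j=0):
    """Coefficient a^{x-j}_{(m,l,k)} of T_tri^{(m,l,k)}, per the recursion of eq. (3.38)."""
    if k > 0:
        return -s12/(D + 2*(m + l + k - 1)) * a_tri(m + 1, l + 1, k - 1, j)
    if m > 0:
        return -(D + 2*(m + l - 2))/(D + 2*(m - 2)) * (a_tri(m - 1, l, 0, j + 1)/s12
                                                       + a_tri(m - 1, l, 0, j))
    if l > 0:
        return a_tri(0, l - 1, 0, j + 1)/s12
    return I3(x - j)                  # base step: a scalar triangle with shifted power

# e.g. the coefficient of eta^{mu nu} in the rank-2 triangle, a^x_(0,0,1):
print(sp.simplify(a_tri(0, 0, 1)))
```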
This system of equations uniquely fixes the rank-\(n\) massless triangle tensor integral in eq. (3.26) with non-integer powers in the denominator. Using these integral reduction formulae, both the one-loop and two-loop integrals can be expressed completely in terms of scalar one-loop bubble and scalar triangle integrals, which were provided in eq. (2.17) of the previous section. Before proceeding to our results, we will introduce a bit of notation that will be relevant for capturing loop-level contributions to the vector amplitudes.
### Gauge-invariant on-shell basis
The final bit of machinery we introduce for the multi-loop calculations of this paper is a spanning set of \(D\)-dimensional four-photon on-shell tensor structures. All of the vector integrands constructed in the following sections will be fixed on \(D\)-dimensional cuts, and projected to a basis of \(D\)-dimensional photon tensors.
Being agnostic to dimension allows for a particularly algorithmic approach to identifying rational terms. In addition, by projecting to a \(D\)-dimensional basis of gauge-invariant tensor structures, we can more easily track the one-loop divergences that propagate to two-loop order. This is due to the existence of evanescent operators, those which vanish when plugging in 4D states, but are non-vanishing in general dimensions. As we will see in the pure Born-Infeld amplitudes of section 4.2, these so-called evanescent operators are the cause of two-loop divergences that are obscured when looking at the one-loop behavior in \(D=4\) exclusively. Tracking divergences introduced by evanescent operators has been an active area of research in Standard Model effective field theory (SMEFT) [140; 141; 142; 143; 144], in the UV behavior of quantum gravity [145; 57; 146], and more generally as an area of formal theory interest [147; 148; 149; 150; 151]. Below we will give a concrete example of an evanescent operator expressed with notation used in the text to provide some justification for our \(D\)-dimensional tensor basis.
First, we will use the pair of \(D\)-dimensional tensor structures, \(f_{ij}f_{kl}\) and \(f_{ijkl}\), defined previously in [51],
\[f_{ij}=\frac{1}{2}\text{tr}[F_{i}F_{j}]\,,\qquad f_{ijkl}=\text{tr}[F_{i}F_{j} F_{k}F_{l}]\,, \tag{3.39}\]
where \(F_{i}^{\mu\nu}=k_{i}^{\mu}\varepsilon_{i}^{\nu}-k_{i}^{\nu}\varepsilon_{i}^{\mu}\) are linearized field strengths, and the trace is taken over spacetime indices. With these vector building blocks, there are exactly four \(D\)-dimensional four-photon
matrix elements one can write down at \({\cal O}(k^{8})\):
\[\begin{split}{\cal T}^{F^{2}F^{2}}_{(2,0)}&=s_{12}^{2}f_{12}f_{34}+ \text{cyc}(2,3,4)\,,\\ {\cal T}^{F^{4}}_{(2,0)}&=s_{12}^{2}f_{1324}+\text{cyc}(2,3,4)\,,\\ {\cal T}^{F^{2}F^{2}}_{(0,1)}&=s_{13}s_{14}f_{12}f_{34}+\text{cyc}(2,3,4)\,, \\ {\cal T}^{F^{4}}_{(0,1)}&=s_{13}s_{14}f_{1324}+\text{cyc}(2,3,4)\,, \end{split} \tag{3.40}\]
where the Mandelstam invariants are defined as \(s_{ij}=(k_{i}+k_{j})^{2}\). The index subscripts correspond to powers of Mandelstam invariants that are easily generalized to span arbitrarily high mass-dimension,
\[{\cal T}^{F^{2}F^{2}}_{(x,y)} \equiv s_{12}^{x}(s_{13}s_{14})^{y}f_{12}f_{34}+\text{cyc}(2,3,4)\,, \tag{3.41}\] \[{\cal T}^{F^{4}}_{(x,y)} \equiv s_{13}^{x}(s_{12}s_{14})^{y}f_{1234}+\text{cyc}(2,3,4)\,,\] \[{\cal T}^{F^{3}}_{(x,y)} \equiv\sigma_{3}^{x}\sigma_{2}^{y}[stA^{F^{3}}_{(s,t)}]\,,\]
with \(\sigma_{3}=s_{12}s_{13}s_{14}/8\) and \(\sigma_{2}=(s_{12}^{2}+s_{13}^{2}+s_{14}^{2})/8\). The important takeaway from eq. (3.40) is that one can write down four independent Lorentz invariant photon tensor structures at \({\cal O}(k^{8})\) in general dimension. However, this does obscure the 4-dimensional freedom available at this mass dimension. In fact, there are only three distinct helicity structures in \(D=4\). One particular basis for these helicity structures, exploiting the 4D spinor-helicity products reviewed in section 2.3, can be written as follows,
\[{\cal T}^{\text{4D}}_{(++++)} =(s_{12}^{4}+s_{13}^{4}+s_{14}^{4})\frac{[12][34]}{\langle 12 \rangle\langle 34\rangle}\,, \tag{3.42}\] \[{\cal T}^{\text{4D,1}}_{(--++)} =(s_{13}^{2}+s_{14}^{2})\langle 12\rangle^{2}[34]^{2}\,,\] (3.43) \[{\cal T}^{\text{4D,2}}_{(--++)} =s_{12}^{2}\langle 12\rangle^{2}[34]^{2}\,. \tag{3.44}\]
Immediately we can see that the full \(D\)-dimensional basis must have a non-trivial null space when projected down to \(D=4\). Given that the 4D helicity space is overdetermined, we are able to define the following evanescent amplitude \({\cal T}^{\text{ev.}}\) that vanishes when constrained to any 4D states:
\[{\cal T}^{\text{ev.}}\equiv{\cal T}^{F^{2}F^{2}}_{(2,0)}-{\cal T}^{F^{2}F^{2}} _{(0,1)}+{\cal T}^{F^{4}}_{(0,1)}\,. \tag{3.45}\]
Using the helicity projection rules of eq. (2.14), one can verify that \({\cal T}^{\text{ev.}}\) indeed vanishes in \(D=4\), while it is clearly non-vanishing in general dimension. This behavior will be particularly relevant for interpreting the loop-level results where \(D=4-2\epsilon\). Indeed, it is critically important that we track \(D\)-dimensional contributions when probing for divergences at multi-loop order. Consider a two-loop integral for which we need to integrate the following quantity:
\[=\sum_{\rm states}\int\frac{d^{D}l}{(2\pi)^{D}}\frac{\mathcal{T}^{\rm ev.}_{(1,2,\bar{l}_{1},\bar{l}_{2})}\mathcal{T}^{\rm arb.}_{(l_{1},l_{2},3,4)}}{l ^{2}(l+k_{12})^{2}}\,, \tag{100}\]
where \(\bar{l}_{1}=-l_{1}=l\) and \(\bar{l}_{2}=-l_{2}=-(l+k_{12})\). We have dressed the darker vertex with the evanescent contribution, \(\mathcal{T}^{\rm ev.}\), which could be the result of an unspecified loop integral, and take a generic arbitrary vector structure \(\mathcal{T}^{\rm arb.}\) to dress the lighter vertex. Exposed legs are taken to be on-shell in the numerator of this expression.
To see what can go wrong with relying only on four-dimensional cut information, we can consider a particularly simple case. Take the vector insertion on the right-hand side to be \(\mathcal{T}^{\rm arb.}_{(1234)}\equiv f_{12}f_{34}\). By applying the state sum and tensor reduction formulae from the previous sections, this can be evaluated explicitly. The result is given by
\[= \frac{(D_{s}-4)(D_{s}-3)}{8(D_{s}-1)}s_{12}^{4}I_{2}(k_{12})f_{12 }f_{34}\,, \tag{101}\] \[= -\frac{i}{192\pi^{2}}s_{12}^{4}f_{12}f_{34}+\mathcal{O}(\epsilon )\,, \tag{102}\]
where \(I_{2}(k_{12})\) is the scalar bubble integral in the \(s_{12}\)-channel, and in the second line we have plugged in \(D_{s}=4-2\epsilon\). Thus far, this is exactly what we should expect. Since the amplitude \(\mathcal{T}^{\rm ev.}\) vanishes in \(D=4\), the 4-dimensional cut above must vanish. The Optical Theorem thus disallows imaginary parts from logarithms that would appear post integration. This physical constraint is imposed by the factor of \((D-4)\), which absorbs the divergence of \(I_{2}(k_{12})\) and pushes the logarithm to be subleading at \(\mathcal{O}(\epsilon)\).
However, suppose that \(\mathcal{T}^{\rm ev.}\) came dressed with a \(1/\epsilon\)-divergence from some nested integral in a full two-loop calculation. That is, suppose the operator insertion came from the leading divergence of a one-loop integral, such that,
\[\mathcal{M}^{\rm 1\text{-}loop}\big{|}_{\rm div.}\sim\frac{1}{\epsilon}\,\mathcal{T}^{\rm ev.}\,.\]
When such a divergent evanescent insertion is sewn into the cut above, the \(1/\epsilon\) it carries multiplies the finite rational remainder left over after the \((D_{s}-4)\) suppression, promoting it to a genuine two-loop divergence, precisely the kind of contribution that is invisible to purely four-dimensional cuts.
It is straightforward to see that the three \(D\)-dimensional structures given in eq. (3.41) span the predictions of four-photon effective operators at arbitrary mass-dimension. The first two \({\cal T}^{F^{2}F^{2}}_{(x,y)}\), and \({\cal T}^{F^{4}}_{(x,y)}\), span the external \((\pm\pm\pm\pm)\) and \((\pm\pm\mp\mp)\) helicity sectors. The third \({\cal T}^{F^{3}}_{(x,y)}\) captures the \((\pm\pm\pm\mp)\) helicity configurations.
As an aside it is worth noting that the single \(F^{3}\)-insertion vector permutation invariant, \(stA^{F^{3}}_{(s,t)}\), can be expressed concisely in terms of \({\cal T}^{F^{2}F^{2}}_{(x,y)}\) as follows,
\[stA^{F^{3}}_{(s,t)}=\frac{{\cal T}^{F^{2}F^{2}}_{(0,2)}-g_{1}g_{2}g_{3}g_{4}}{ s_{12}s_{13}s_{14}}\,, \tag{3.51}\]
where \(g_{i}\equiv 2k^{\mu}_{i-1}F^{\mu\nu}_{i}k^{\nu}_{i+1}\). While not obvious when expressed in this form, the numerator of the above tensor structure is proportional to the permutation invariant in the denominator, \({\cal T}^{F^{2}F^{2}}_{(0,2)}-g_{1}g_{2}g_{3}g_{4}\propto\sigma_{3}\), and thus \(stA^{F^{3}}_{(s,t)}\) is local by construction.
As eq. (3.41) represents a set of tensor structures which completely spans the space of predictions of \(D\)-dimensional permutation-invariant photon effective operators, we can write an on-shell effective amplitude parameterized by numeric Wilson coefficients, \(a_{(x,y)}\), as follows,
\[{\cal M}^{\text{photon-EFT}}_{4}=\sum_{x,y}a^{F^{2}F^{2}}_{(x,y)}{\cal T}^{F^{ 2}F^{2}}_{(x,y)}+a^{F^{4}}_{(x,y)}{\cal T}^{F^{4}}_{(x,y)}+a^{F^{3}}_{(x,y)}{ \cal T}^{F^{3}}_{(x,y)}\,. \tag{3.52}\]
In Ref. [51] it was demonstrated that this photonic effective field theory (EFT) amplitude permits a double-copy construction, either by introducing a set of \(d^{abc}\)-type symmetric algebraic structures, or with more traditional adjoint \(f^{abc}\)-type kinematics at the cost of factorizing over higher-spin modes. While we will comment on the double-copy properties of this amplitude in section 5.2, for now we will just stress that these \(D\)-dimensional operators will serve as a useful basis for capturing divergences and anomalies in the Born-Infeld \(S\)-matrix. With that, we are prepared to proceed to the loop level results.
## 4 Loop-level results
We now apply the EMU approach introduced in section 3 to construct two-loop amplitudes in NLSM and DBIVA theories and reduce them to an appropriate basis of integrals. We then evaluate the \(D\)-dimensional scalar integrals in the dimension of interest. We will focus primarily on \(D=4-2\epsilon\) for DBIVA amplitudes, but will also consider \(D=2-2\epsilon\), which is the dimension for which NLSM is critical. We will then project \(D\)-dimensional tensor structures along defined 4D spinor helicity states using the conventions described in section 2.3. This will clarify where anomalous matrix elements contribute for non-supersymmetric BI, which has a classically conserved \(U(1)\) symmetry.
We begin with an instructive example of NLSM pions through the two-loop order. These amplitudes can serve as the scaffolding for constructing \({\cal N}=4\) DBIVA integrands using the
double copy even without explicitly finding a color-dual representation. An exciting realization of our results is that the loop-level double-copy construction of \(\mathcal{N}=4\) DBIVA is consistent with the results that we compute directly via unitarity. This is the first application of the double-copy construction to multi-loop amplitudes in non-gravitational theories. Furthermore, it suggests that there should exist a color-dual representation of two-loop NLSM integrands, a representation that has yet to be constructed explicitly.
### NLSM via EMU
We begin with the color-dressed NLSM tree amplitudes that will serve as the kinematic building blocks to appear in our unitarity construction. For our purposes, a convenient color basis is the so-called half-ladder3 basis of Del Duca, Dixon and Maltoni (DDM) [152],
Footnote 3: Called multiperipheral diagrams in [152], such half-ladder graphs are also referred to as comb graphs in the literature, see e.g. refs. [153; 154].
\[{\cal A}_{n}^{\text{NLSM}}=\sum_{\sigma\in S^{n-2}}C_{(1\sigma n)}^{\text{H.L }}A_{(1\sigma n)}^{\text{NLSM}}\,. \tag{4.1}\]
The half-ladder color factors are defined in terms of the antisymmetric structure constants as,
\[C_{(1\sigma n)}^{\text{H.L.}}\equiv f^{1\sigma_{2}\beta_{2}}f^{\beta_{2}\sigma _{3}\beta_{3}}\ldots f^{\beta_{n-1}\sigma_{n-1}n}. \tag{4.2}\]
In the above expression, each half-ladder color factor, associated with a particular channel-graph, dresses a color-ordered amplitude \(A_{(1\sigma n)}^{\text{NLSM}}\). At four-point, it is well known that the \((ijkl)\) ordered amplitude for NLSM is simply,
\[A_{(ijkl)}^{\text{NLSM}}=f_{\pi}^{-2}\,s_{ik}\,. \tag{4.3}\]
As the \(s_{23}\)-channel color factor satisfies a Jacobi identity with the \(s_{12}\)- and \(s_{13}\)-channel color factors, \(C_{(2341)}^{\text{H.L.}}=C_{(1234)}^{\text{H.L.}}-C_{(1324)}^{\text{H.L.}}\), the color-dressed four-point amplitude can be expressed completely in terms of the \(s_{12}\)-channel and \(s_{13}\)-channel color factors:
\[{\cal A}_{(1|23|4)}^{\text{NLSM}}=f_{\pi}^{-2}\left(C_{(1234)}^{\text{H.L.}}s _{13}+C_{(1324)}^{\text{H.L.}}s_{12}\right)\,. \tag{4.4}\]
The color-dressed amplitude is of course entirely Bose-symmetric, but we use the subscript \((1|23|4)\) to emphasize a functional choice of the \((1|\sigma|4)\) half-ladder basis.
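The Jacobi rearrangement used above is also easy to verify by brute force. As a toy check (the choice of gauge group, \(\mathfrak{su}(2)\) with \(f^{abc}=\epsilon^{abc}\), is ours and purely illustrative), the four-point half-ladder color factors indeed satisfy \(C^{\text{H.L.}}_{(2341)}=C^{\text{H.L.}}_{(1234)}-C^{\text{H.L.}}_{(1324)}\) for every assignment of adjoint labels:

```python
import numpy as np
from itertools import product

# su(2) structure constants: f^{abc} = epsilon^{abc}
f = np.zeros((3, 3, 3))
for a, b, c in product(range(3), repeat=3):
    f[a, b, c] = np.linalg.det(np.eye(3)[[a, b, c]])   # sign of the permutation, or 0

def C_HL(a1, a2, a3, a4):
    """Four-point half-ladder color factor C^{H.L.}_(a1 a2 a3 a4) = f^{a1 a2 b} f^{b a3 a4}."""
    return f[a1, a2, :] @ f[:, a3, a4]

# verify the color Jacobi relation C_(2341) = C_(1234) - C_(1324) for all labels
print(all(np.isclose(C_HL(a2, a3, a4, a1),
                     C_HL(a1, a2, a3, a4) - C_HL(a1, a3, a2, a4))
          for a1, a2, a3, a4 in product(range(3), repeat=4)))   # True
```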
#### 4.1.1 One-loop
At one-loop, the unitarity construction of gauge theory amplitudes is particularly simple, since the color factors can be decomposed into a DDM-like ordered color basis. That is, all color factors, \(C_{g}\), associated with the following cubic representation of the integrand:
\[{\cal A}_{n}^{\text{1-loop}}=\int\prod_{i=1}^{L}\frac{d^{D}l_{i}}{(2\pi)^{D}} \sum_{g\in\Gamma^{(3)}}\frac{1}{S_{g}}\frac{C_{g}N_{g}}{D_{g}}\,, \tag{4.5}\]
can be expressed uniquely as a sum over a canonical ordering of color factors by iteratively applying the color Jacobi identity. In doing so, all one-loop color factors, \(C_{g}^{\text{1-loop}}\), can be expressed as a sum over \(n\)-gon integrand factors, \(C_{(a_{1}a_{2}\dots a_{n})}^{n\text{-gon}}\), in the following way,
\[C_{g}^{\text{1-loop}}=\sum_{\sigma\in S^{n-1}}\beta_{g}^{(\sigma)}C_{(\sigma_{ 1}\sigma_{2}\dots\sigma_{n-1}n)}^{n\text{-gon}}\,, \tag{101}\]
where \(\beta_{g}^{(\sigma)}\) are factors of \(\{-1,0,1\}\) depending on the color structure of the diagram, \(C_{g}^{\text{1-loop}}\). The \(n\)-gon basis diagrams take the following definition in terms of adjoint structure constants, \(f^{abc}\):
\[C_{(a_{1}a_{2}\dots a_{n})}^{n\text{-gon}}\equiv f^{b_{1}a_{1}b_{2}}f^{b_{2}a_ {2}b_{3}}\dots f^{b_{n}a_{n}b_{1}}\,. \tag{102}\]
This allows us to express the one-loop amplitudes in terms of a sum over the \((n-1)!\) distinct color factors, weighted by the contributions from the color-ordered Feynman diagrams:
\[\mathcal{A}_{n}^{\text{1-loop}}=\sum_{\sigma\in S^{n-1}}C_{(\sigma n)}^{n \text{-gon}}A_{(\sigma n)}^{\text{1-loop}}\,. \tag{103}\]
As this is a minimal basis we will consider cuts that allow a targeted identification of each color-weight's coefficients.
**Construction.** Using the four-point tree-level NLSM amplitude with the reasoning of section 3, it is straightforward to write down an off-shell integrand associated with the one-loop bubble:
\[\mathcal{I}^{\text{1-loop}}_{\text{NLSM}}=\frac{\sum_{\text{states}}\mathcal{A}^{\text{NLSM}}_{(q_{1}|12|q_{2})}\,\mathcal{A}^{\text{NLSM}}_{(\bar{q}_{2}|34|\bar{q}_{1})}}{l^{2}(l+k_{12})^{2}}\,,\qquad q_{1}=l\,,\;\;q_{2}=-(l+k_{12})\,,\]
**Reduction.** Since there are up to two powers of loop momenta appearing in the integrand, it is clear that there will be dimensional dependence when applying the Passarino-Veltman integral reduction of eq. (3.23). In doing so, we can write the one-loop pion amplitude completely in terms of scalar kinematics and scalar bubble integrals:
\[\mathcal{A}^{\text{NLSM}}_{\text{1-loop}}=f_{\pi}^{-4}C^{\text{box}}_{(1234)} \bigg{[}\frac{s_{12}I_{2}^{D}(k_{12})}{4}\left(s_{12}+\frac{s_{14}-s_{13}}{D- 1}\right)+(1\leftrightarrow 3)\bigg{]}+\text{cyc}(2,3,4)\,. \tag{4.13}\]
At this point, this is a dimensionally agnostic one-loop amplitude for NLSM. The next step is to plug particular values of \(D\) into the evaluated analytic expressions of \(I_{2}^{D}\) provided in eq. (2.17).
**Integration.** In the study of NLSM loop-level amplitudes, we provide two examples for the integration dimension, \(D=4-2\epsilon\) and \(D=2-2\epsilon\), since the critical dimension of NLSM is \(D=2\). In an \(\overline{\text{MS}}\) renormalization scheme, the \(\epsilon\)-expanded bubble integrals for these two dimension choices are as follows:
\[I_{2}^{4-2\epsilon}(k_{ij}) =\frac{i}{16\pi^{2}}\left[\frac{1}{\epsilon}-\ln(-s_{ij})\right] +\mathcal{O}(\epsilon)\,, \tag{4.14}\] \[I_{2}^{2-2\epsilon}(k_{ij}) =\frac{i}{2\pi s_{ij}}\left[\frac{1}{\epsilon}-\ln(-s_{ij})\right] +\mathcal{O}(\epsilon)\,. \tag{4.15}\]
Plugging these into our \(D\)-dimensional expressions, and dropping scheme dependent rational terms at subleading order, yields the following color-ordered 1-loop pion amplitudes for each respective dimension:
\[A_{(1234)}^{4-2\epsilon} =\frac{i}{48\pi^{2}}f_{\pi}^{-4}\left[\frac{4\sigma_{2}}{\epsilon }+\left(s_{12}(s_{13}-s_{12})\frac{\ln(-s_{12})}{2}+(1\leftrightarrow 3) \right)+\frac{1}{6}(s_{13}^{2}+2s_{12}s_{23})\right]+\mathcal{O}(\epsilon) \tag{4.16}\] \[A_{(1234)}^{2-2\epsilon} =\frac{i}{2\pi}f_{\pi}^{-4}\left[\frac{s_{13}}{\epsilon}-s_{13} \frac{\ln(-s_{12})+\ln(-s_{23})-3}{2}\right]+\mathcal{O}(\epsilon)\,, \tag{4.17}\]
where as defined in section 3.3, \(\sigma_{2}=(s_{12}^{2}+s_{13}^{2}+s_{14}^{2})/8\), and the full amplitude is recovered by summing over cyclic permutations of (2,3,4),
\[\mathcal{A}^{\text{NLSM}}_{\text{1-loop}}=C^{\text{box}}_{(1234)}A^{D}_{(1234 )}+\text{cyc}(2,3,4)\,. \tag{4.18}\]
It is worth emphasizing that while the divergence in \(D=4-2\epsilon\) is an ultraviolet divergence, the one in \(D=2-2\epsilon\) is a logarithmic IR divergence, akin to that of \(\mathcal{N}=4\) super-Yang-Mills (sYM) in the critical dimension of \(D=4-2\epsilon\). Since all the particle states above are scalars, there are no helicity structures to map into, and thus we will bypass the final integration step of projection. Equipped with this one-loop warmup, we are now prepared to take on a two-loop example.
#### 4.1.2 Two-loop
The two-loop calculation is exactly the same procedure as one-loop, with some additional details that need to be accounted for when performing the integration step. We begin just as above with integrand construction via \(D\)-dimensional unitarity.
**Construction.** The color decomposition becomes slightly more complicated at two-loop than the procedure outlined in the previous section at one-loop. Just as before, we will still fix the integrand on color-dressed cuts; however, now we can no longer simply decompose the full color-dressed amplitude in terms of integrated color-stripped objects. At two-loop, a convenient basis of adjoint color factors happens to be the double-box and the cross-box.
The double-box and cross-box color factors, \(C^{\rm 2box}_{(12|34)}\) and \(C^{\rm Xbox}_{(12|34)}\), are defined graphically, with every cubic vertex of the corresponding two-loop graph dressed by an adjoint structure constant \(f^{abc}\). The physical information of the integrand is then fixed on the color-dressed double-bubble and ostrich cuts,
where the internal momenta appearing in the NLSM operators, \(\bar{p}_{i}=-p_{i}\) and \(\bar{q}_{i}=-q_{i}\), are defined in terms of the two loop momenta as follows:
\[p_{1}=l_{2}+k_{12}\qquad p_{2}=-l_{2}\qquad p_{3}=l_{1}\qquad p_{4 }=-(l_{1}+k_{12})\,, \tag{111}\] \[q_{1}=l_{2}+k_{12}\qquad q_{2}=-l_{2}\qquad q_{3}=l_{1}\qquad q_{ 4}=(l_{1}+l_{2}+k_{1})\,. \tag{112}\]
Due to the simplicity of scalar theories like NLSM, the state sum for the double-bubble numerator is quite simple, and can be expressed concisely below:
\[\text{Cut}\left(\begin{array}{c}2\\ 1\end{array}\right) =\sum_{\text{states}}\mathcal{A}^{\text{NLSM}}_{(p_{4}|12|p_{3}) }\mathcal{A}^{\text{NLSM}}_{(\bar{p}_{1}|\bar{p}_{4}\bar{p}_{3}|\bar{p}_{2})} \mathcal{A}^{\text{NLSM}}_{(p_{2}|34|p_{1})} \tag{113}\] \[=C^{\text{2box}}_{(12|34)}\left[\tau_{3}^{(1)}\tau_{13}\tau_{1}^ {(3)}+\tau_{3}^{(1)}\tau_{23}\tau_{2}^{(3)}+\tau_{3}^{(2)}\tau_{23}\tau_{1}^{( 3)}+\tau_{3}^{(2)}\tau_{13}\tau_{2}^{(3)}\right]\] \[+C^{\text{2box}}_{(12|43)}\left[\tau_{3}^{(2)}\tau_{23}\tau_{2}^{ (3)}+\tau_{3}^{(2)}\tau_{13}\tau_{1}^{(3)}+\tau_{3}^{(1)}\tau_{23}\tau_{1}^{(3) }+\tau_{3}^{(1)}\tau_{13}\tau_{2}^{(3)}\right]\]
where we have introduced the notation above that combines internal and external momenta, \(\tau_{j}^{(i)}=(k_{i}+p_{j})^{2}\) and \(\tau_{ij}=(p_{i}+p_{j})^{2}\). Plugging in the color-dressed NLSM amplitudes, the same can be done for the ostrich-diagram cut, giving us
\[\text{Cut}\left(\begin{array}{c}1\\ 3\end{array}\right) =\sum_{\text{states}}\mathcal{A}^{\text{NLSM}}_{(2|q_{4}\bar{q}_{ 3}|\bar{q}_{1})}\mathcal{A}^{\text{NLSM}}_{(1|\bar{q}_{4}q_{3}|\bar{q}_{2})} \mathcal{A}^{\text{NLSM}}_{(q_{2}|34|q_{1})} \tag{114}\] \[=C^{\text{Xbox}}_{([12|34)}\left[\tau_{1}^{(3)}+\tau_{2}^{(3)} \right]\left[\tau_{3}^{(2)}\tau_{3}^{(1)}+\tau_{4}^{(2)}\tau_{4}^{(1)}\right]\] \[-\left[C^{\text{2box}}_{(12|34)}\tau_{1}^{(3)}+C^{\text{2box}}_{ (12|43)}\tau_{2}^{(3)}\right]\left[\tau_{3}^{(2)}\tau_{4}^{(1)}+\tau_{4}^{(2) }\tau_{3}^{(1)}\right]\,.\]
The kinematic variables \(\tau_{j}^{(i)}\) and \(\tau_{ij}\) are the same, except we have made the replacement \(p_{i}\to q_{i}\). The full two-loop NLSM amplitude can thus be computed by summing over the distinct labels of the resulting integrals, each weighted by suitable internal symmetry factors:
\[\mathcal{A}^{\text{NLSM}}_{\text{2-loop}}=\frac{1}{4}\left[\begin{array}{c} 2\\ 1\end{array}\right]+\frac{1}{2}\left[\begin{array}{c}1\\ 3\end{array}\right]+\,4\,\begin{array}{c}3\\ 2\end{array}\right]+\text{cyc}(2,3,4)\,. \tag{115}\]
We stress that the integrals appearing above are complicated by tensor integrals with many powers of loop momenta. However, as noted in the introduction, since both integrals are recursively one-loop, we can again apply a two-loop generalization of Passarino-Veltman for both the bubble and triangle integrals, stated in eq. (3.23) and eq. (3.38), respectively. With this, we proceed to the next step in the calculation.
**Reduction.** Just as with one-loop, we will now reduce the integrals of eq. (4.28) to a basis of \(D\)-dimensional scalar integrals using the EMU loop reduction. Applying the iterated bubble reduction of eq. (3.23) to the double-bubble contribution, and the \(x\)-dependent triangle reduction of eq. (3.38) to the ostrich contribution, expresses the full two-loop NLSM amplitude in terms of products of scalar bubbles, \(I_{2}(k_{ij})I_{2}(k_{ij})\), and nested triangle-bubble integrals, \([I_{3}\circ I_{2}](k_{ij})\), which can then be evaluated in the dimension of interest. A particularly clean application is the \(\mathbb{CP}^{1}\) model, where the pions are organized into (anti)-holomorphic target-space coordinates and the role of the adjoint color factors is played by delta functions on the target space.
The delta function indices select out (anti)-holomorphic pion fields that live on the \(\mathbb{CP}^{1}\) target space. Plugging in explicit values for the "color" indices, we obtain the following tree-level amplitude:
\[\mathcal{A}_{\text{tree}}^{\mathbb{CP}^{1}}(Z_{1},\bar{Z}_{2},Z_{3},\bar{Z}_{4}) =f_{\pi}^{-2}s_{13}\,. \tag{111}\]
Using this tree-level contact as the seed in our EMU construction, we can directly compute the one- and two-loop amplitudes from eq. (109) and eq. (110). Furthermore, the leading divergences for the loop integrals in \(D=2-2\epsilon\) dimensions are the following:
\[I_{2}^{2-2\epsilon}(k_{ij})I_{2}^{2-2\epsilon}(k_{ij}) =-\frac{1}{s_{ij}^{2}}\frac{1}{(2\pi\epsilon)^{2}}+\mathcal{O}( \epsilon^{-1})\,, \tag{112}\] \[\left[I_{3}\circ I_{2}\right]^{2-2\epsilon}(k_{ij}) =-\frac{3}{8s_{ij}}\frac{1}{(2\pi\epsilon)^{2}}+\mathcal{O}( \epsilon^{-1})\,. \tag{113}\]
The analogous divergences for \(D=4-2\epsilon\) can be extracted directly from the integral expressions in eq. (17). Plugging these values in for the scattering process \(\mathcal{A}(Z_{1},\bar{Z}_{2},Z_{3},\bar{Z}_{4})\), we can extract the full leading logarithmic divergence from the one-loop color dressed amplitude in eq. (109),
\[\mathcal{A}_{\text{1-loop}}^{\mathbb{CP}^{1}}=-\left[\frac{if_{\pi}^{-2}}{4\pi \epsilon}\right]f_{\pi}^{-2}s_{13}+\mathcal{O}(\epsilon^{0})\,, \tag{114}\]
and likewise for the two-loop result upon evaluating the integrals in \(D=2-2\epsilon\),
\[\mathcal{A}_{\text{2-loop}}^{\mathbb{CP}^{1}}=\frac{1}{2}\left[\frac{if_{\pi} ^{-2}}{4\pi\epsilon}\right]^{2}f_{\pi}^{-2}s_{13}+\mathcal{O}(\epsilon^{-1})\,. \tag{115}\]
We write the two-loop amplitude in this suggestive form to emphasize that the logarithmic divergence present at one-loop in eq. (114) appears to exponentiate! More concretely, we have demonstrated through explicit calculation that through two-loop order, the leading divergence of the full color dressed amplitude goes as follows:
\[\mathcal{A}_{\text{2-2\epsilon}}^{\mathbb{CP}^{1}}\bigg{|}_{\text{div.}}= \mathcal{A}^{\text{tree}}\left(1+\frac{\mathcal{A}^{\text{1-loop}}}{\mathcal{ A}^{\text{tree}}}+\frac{1}{2}\left[\frac{\mathcal{A}^{\text{1-loop}}}{ \mathcal{A}^{\text{tree}}}\right]^{2}+\cdots\right)\,. \tag{116}\]
This iterative form suggests that the IR divergence of the \(\mathbb{CP}^{1}\) model can be neatly extracted from the perturbative series as an overall factor, \(e^{\Omega}\), where \(\Omega\) is the divergent piece of the one-loop amplitude. Similar exponentiation of the IR has a long history in the context of gravity amplitudes [155], for which IR divergences appear due to soft graviton emissions that mediate long range interactions. Considering the established relationship [156; 157; 158; 159] between coset manifolds (target spaces) and supergravity theories, there's a possibility that the exponentiation of the \(\mathbb{CP}^{1}\) model in eq. (116) is related to the analogous behavior of supergravity amplitudes [160; 161; 162; 163; 164].
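As a quick algebraic cross-check (a sketch only; the symbols below are our own stand-ins for \(s_{13}\), \(f_{\pi}\), and \(\epsilon\)), the quoted one- and two-loop divergences do satisfy the iterated relation term by term, including the relative sign:

```python
import sympy as sp

s13, fpi, eps = sp.symbols('s13 f_pi epsilon', positive=True)

A_tree  = fpi**-2 * s13                                                       # CP^1 tree amplitude
A_1loop = -(sp.I*fpi**-2/(4*sp.pi*eps)) * fpi**-2 * s13                       # quoted one-loop divergence
A_2loop = sp.Rational(1, 2)*(sp.I*fpi**-2/(4*sp.pi*eps))**2 * fpi**-2 * s13   # quoted two-loop divergence

# iterated pattern: A_2loop / A_tree = (1/2) (A_1loop / A_tree)^2
print(sp.simplify(A_2loop/A_tree - sp.Rational(1, 2)*(A_1loop/A_tree)**2))    # -> 0
```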
In addition to the exponential structure of eq. (116) that we have computed for the \(\mathbb{CP}^{1}\) model, there is potentially a parallel story for the planar limit of the chiral NLSM amplitudes. Indeed, if one takes \(D\to 2\) in eq. (104), we find that the leading divergence of the double
bubble in \(D=2-2\epsilon\) is precisely one half the square of the one-loop divergence in eq. (4.17). However, while the cross-box term appearing in eq. (4.30) is subleading when \(N_{c}\to\infty\), there are still relevant contributions from the double-boxes appearing in the ostrich integral. In principle one could add additional operators [165] or particle states [166] to eq. (2.22) that cancel the additional double-box contributions from the ostrich integral, while preserving the soft behavior needed for our EMU construction. Doing so would extend the exponential behavior to the subleading logarithms in the planar limit, similar to what was found in [167; 168] for planar \(\mathcal{N}=4\) sYM in \(D=4-2\epsilon\). The iterated structure of planar \(\mathcal{N}=4\) has been linked to the integrability of the theory [169; 170; 171; 172; 173; 174; 175; 176; 177; 178]. We leave identifying such an extended theory consistent with exponentiation as a compelling direction of future study.
Now that we have walked through our procedure for computing two-loop even-point EFT amplitudes with NLSM as an exemplar, we are prepared to proceed to the main results of the text. While the construction of DBIVA integrands is more involved than constructing the simple expressions of NLSM, the general procedure is exactly the same, the only differences being additional gauge-independent state sums and the generally more complex tensor reduction.
### DBIVA via EMU
In the remainder of this section, we will construct four-photon matrix elements in DBIVA theories through two-loop order. While we will consider a number of different internal matter states inside the loops, whose interactions are not captured by the DBIVA Lagrangian of eq. (2.35), we will require that all the external states are vectors. This will allow us to map our loop-level results to the basis of gauge-invariant tensors described in section 3.3. Furthermore, in the last section, we will investigate how the loop-level contributions turn on new operators in the EFT written in eq. (2.26).
As we noted in the introduction, DBIVA tree-level amplitudes, which appear in the field theory limit of the abelianized open superstring, can be realized as a double copy between NLSM and sYM [49],
\[\mathcal{M}^{\rm DBIVA}=\mathcal{A}^{\rm NLSM}\otimes\mathcal{A}^{\rm sYM}\,. \tag{4.39}\]
To carry out this construction at the (multi-)loop level, we would simply replace the color factors of NLSM with color-dual loop-level numerators, and then integrate the result. In the case of \(\mathcal{N}=4\) sYM, color-dual representations are known through four-loop [6; 29]. Indeed, in section 4.3 we will use the double copy to verify the \(\mathcal{N}=4\) DBIVA results we construct here via unitarity. If we had access to color-dual representations for NLSM at two-loop, we could additionally plug those into the sYM integrands constructed from simplified methods of supersymmetric sums [179].
Just as in the previous section where we computed NLSM amplitudes, the recursively one-loop behavior will allow us to only consider the four-point contacts when constructing the physical parts of the two-loop integrand. We will start by reproducing the known one-loop
results from a completely \(D\)-dimensional framework, and then move onto our novel two-loop results.
#### 4.2.1 One-loop DBIVA
Analogous to our procedure in the previous section where we defined \(\mathcal{A}^{\rm NLSM}_{(1|23|4)}\), first we will define a set of four-point operators for each distinct set of particle interactions. The \(D\)-dimensional contacts needed for DBIVA amplitudes at one-loop are as follows:
\[\mathcal{M}(1_{\gamma},2_{\gamma},3_{\gamma},4_{\gamma}) =2{\rm tr}(F_{1}F_{2}F_{3}F_{4})-\frac{1}{2}{\rm tr}(F_{1}F_{2}) {\rm tr}(F_{3}F_{4})+{\rm cyc}(1,2,3)\equiv t_{8}F^{4}\,, \tag{112}\] \[\mathcal{M}(1_{\lambda},2_{\gamma},3_{\gamma},4_{\overline{ \lambda}}) =s_{13}\bar{u}_{1}(\not{\varepsilon}_{2}\not{\bar{k}}_{12}\not{ \varepsilon}_{3})\bar{v}_{4}+s_{12}\bar{u}_{1}(\not{\varepsilon}_{3}\not{ \bar{k}}_{13}\not{\varepsilon}_{2})\bar{v}_{4}\,,\] (113) \[\mathcal{M}(1_{X},2_{\gamma},3_{\gamma},4_{\overline{X}}) =2(k_{1}F_{2}F_{3}k_{1})+2(k_{4}F_{3}F_{2}k_{4})\,, \tag{114}\]
where we have introduced notation \((k_{a}F_{b}F_{c}k_{a})\equiv k_{a}^{\mu}F_{b}^{\mu\nu}F_{c}^{\nu\rho}k_{a}^{\rho}\) for the mixed scalar-vector amplitude. The particle content is labeled by \(\gamma\) for the BI photons, \(\lambda\) for the VA fermions, and \(X\) for the Dirac scalars. With these tree-level amplitudes in hand, we define the following set of matrix-elements needed for integrand construction:
\[\mathcal{M}^{\gamma\gamma\gamma}_{(1|23|4)} =\mathcal{M}(1_{\gamma},2_{\gamma},3_{\gamma},4_{\gamma})\,, \tag{115}\] \[\mathcal{M}^{\lambda\gamma\gamma\bar{\lambda}}_{(1|23|4)} =\mathcal{M}(1_{\lambda},2_{\gamma},3_{\gamma},4_{\overline{ \lambda}})\,,\] (116) \[\mathcal{M}^{X\gamma\gamma\bar{X}}_{(1|23|4)} =\mathcal{M}(1_{X},2_{\gamma},3_{\gamma},4_{\overline{X}})\,. \tag{117}\]
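As a basic sanity check on the pure-photon contact, the \(t_{8}F^{4}\) combination defined above is fully Bose symmetric in the four photons, which is easy to confirm numerically (random momenta and polarizations of our own choosing; no on-shell conditions are needed for this particular property):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def F(k, e):
    """Linearized field strength F^{mu nu} = k^mu eps^nu - k^nu eps^mu."""
    return np.outer(k, e) - np.outer(e, k)

def tr(*Fs):
    """Trace of a product of field strengths, Lorentz indices contracted with eta."""
    M = np.eye(4)
    for Fi in Fs:
        M = M @ Fi @ eta
    return np.trace(M)

def t8F4(F1, F2, F3, F4):
    """2 tr(F1 F2 F3 F4) - (1/2) tr(F1 F2) tr(F3 F4) + cyc(1,2,3)."""
    return sum(2*tr(a, b, c, F4) - 0.5*tr(a, b)*tr(c, F4)
               for a, b, c in [(F1, F2, F3), (F2, F3, F1), (F3, F1, F2)])

k = rng.normal(size=(4, 4))
e = rng.normal(size=(4, 4))
Fs = [F(k[i], e[i]) for i in range(4)]
base = t8F4(*Fs)
print(all(np.isclose(t8F4(*[Fs[p] for p in perm]), base)
          for perm in permutations(range(4))))   # True
```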
While these are a subset of the four-point operators we will use in the two-loop construction, they are sufficient for all the one-loop external-photon amplitudes. To construct the integrands, we will use the \(D\)-dimensional state sums for vectors and fermions, which we provide below:
\[\sum_{\rm states}\varepsilon^{\mu}_{(l)}\varepsilon^{\nu}_{(-l)} =\eta^{\mu\nu}-\frac{l^{\mu}q^{\nu}+l^{\nu}q^{\mu}}{l\cdot q}, \tag{118}\] \[\sum_{\rm states}u_{(l)}\bar{v}_{(-l)} =\frac{1}{2}(1\pm\Gamma_{5})\Gamma_{\mu}l^{\mu}\,, \tag{119}\]
where \(q^{2}=0\) is a null-reference momentum. Here, \(\Gamma_{\mu}\) are \(D\)-dimensional gamma matrices endowed with all the normal Clifford algebraic relations. Since we want to keep this as \(D\)-dimensional as possible, we define \(\Gamma_{5}\) as the symbol representing the \((D+1)\)-th gamma matrix that anti-commutes with all other \(\Gamma_{\mu}\). Using this definition of \(\Gamma_{5}\), we could also define a chiral projection state operator \(P_{\pm}=\frac{1}{2}(1\pm\Gamma_{5})\) in a dimension agnostic form. Chiral operators become relevant for our external photon amplitudes when computing internal fermion contributions at two-loop. To see why, note that since we are only computing external photon amplitudes, the matrix elements must be parity even. This means that if there is a single chiral trace, the \(\Gamma_{5}\) contribution must integrate to zero:
\[2{\rm tr}_{\pm}[\cdots]\equiv{\rm tr}[(1\pm\Gamma_{5})\cdots]\xrightarrow{ \int d\Pi_{\rm loop}}{\rm tr}[\cdots]\,. \tag{120}\]
This property significantly simplifies the two-loop reduction in the presence of a single chiral trace. However, at two-loop we can also get multi-trace contributions. In this case, the parity-odd contribution (sourced by odd powers of \(\Gamma_{5}\)) must vanish after integration as follows:
\[4\text{tr}_{\pm}[\cdots]\text{tr}_{\pm}[\cdots]\xrightarrow{\int d\Pi_{\text{ loop}}}\text{tr}[\cdots]\text{tr}[\cdots]+\text{tr}[\Gamma_{5}\cdots]\text{tr}[ \Gamma_{5}\cdots]\,. \tag{111}\]
The second term is also parity even, and will contribute to \(D\)-dimensional Gram determinants in the presence of two internal fermion loops. While in principle this term is relevant for two-loop diagrams with internal fermions, due to the simplicity of \(\mathcal{N}=4\) DBIVA state sums, we won't need to account for it in our analysis at two-loop.
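Returning to the vector state sum quoted earlier, a small numerical check (a sketch in \(D=4\) with our own randomly chosen null momenta) confirms that the physical-state projector annihilates both \(l\) and the reference vector \(q\), and that its trace counts the \(D-2\) physical polarizations:

```python
import numpy as np

rng = np.random.default_rng(7)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
D = 4

def random_null():
    v = rng.normal(size=3)
    return np.concatenate(([np.linalg.norm(v)], v))   # p^0 = |p|  =>  p^2 = 0

l, q = random_null(), random_null()
ldotq = l @ eta @ q

# sum over states of eps^mu eps^nu:  P^{mu nu} = eta^{mu nu} - (l^mu q^nu + q^mu l^nu)/(l.q)
P = eta - (np.outer(l, q) + np.outer(q, l)) / ldotq

print(np.allclose(P @ eta @ l, 0.0))          # P^{mu nu} l_nu = 0   (transversality)
print(np.allclose(P @ eta @ q, 0.0))          # P^{mu nu} q_nu = 0
print(np.isclose(np.trace(eta @ P), D - 2))   # eta_{mu nu} P^{mu nu} = D - 2 states
```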
Before constructing the integrands, below we provide our conventions for the \(D\)-dimensional spinors and gamma matrices. We will normalize the gamma matrices as follows:
\[\text{Tr}(\Gamma_{\mu}\Gamma^{\mu})=2^{D/2-1}D\,. \tag{112}\]
Furthermore, since we will eventually be evaluating our matrix elements in \(D=4\) after integration, we assume that the \(D\)-dimensional generalization of the Majorana condition holds throughout the calculation. That is, the \(\bar{u}\) and \(v\) spinors obey the following relationship:
\[\bar{u}=v^{T}\mathcal{C}\qquad v=-\mathcal{C}\bar{u}^{T}\,, \tag{113}\]
where the \(D\)-dimensional charge conjugation operator can be defined in terms of the gamma matrices, \(\Gamma_{\mu}\), as follows:
\[\Gamma_{\mu}=-\mathcal{C}^{-1}\Gamma_{\mu}^{T}\,\mathcal{C}\,. \tag{114}\]
From this, the spinor strings obey an additional identity beyond the normal Clifford algebra relations [180]:
\[\bar{u}(\Gamma_{\mu_{1}}\cdots\Gamma_{\mu_{n}})v=(-1)^{n}\bar{u}(\Gamma_{\mu_ {n}}\cdots\Gamma_{\mu_{1}})v\,. \tag{115}\]
This relationship essentially just imposes Fermi statistics on the identical Majorana fermions [181]. Now we are prepared to construct the integrand with the state sums and operators defined above.
**Construction.** Just as before, the first step in our procedure is to construct the integrand with all of the internal loop dependence present. Taking the operators defined above and applying the appropriate state sums yields the \(s_{12}\)-channel bubble cuts for internal photons, fermions, and complex scalars, with exposed internal particles taken to be on-shell and internal momenta defined as \(q_{1}=l\) and \(q_{2}=-(l+k_{12})\). The vector and fermion contributions are rather complicated; however, to get a sense of what pops out of this \(D\)-dimensional construction, we provide the scalar cut below as an example:
\[\text{Cut}(\mathcal{I}^{\text{1-loop}})=16(q_{1}F_{1}F_{2}q_{1})(q_{1}F_{3}F_{ 4}q_{1})\,. \tag{4.57}\]
Constructing the full amplitude is then just a matter of summing over all cut contributions, each weighted by symmetry factors \(S_{\alpha}\) and the number, \(N_{\alpha}\), of \(\alpha\)-type particles:
\[\mathcal{M}^{\text{1-loop}}=\sum_{\alpha}\frac{N_{\alpha}}{S_{\alpha}}\int \frac{d^{D}l}{(2\pi)^{D}}\frac{\text{Cut}(\mathcal{I}^{\text{1-loop}}_{N_{ \alpha}})}{l^{2}(l+k_{12})^{2}}+\text{cyc}(2,3,4)\,. \tag{4.58}\]
Since the scalars are complex, and the fermions are oriented, they come dressed with symmetry factors of \(S_{X}=-S_{\lambda}=1\). The minus sign is due to the presence of a single fermion loop. Furthermore, since the photons are indistinguishable, they carry an internal symmetry factor of \(S_{\gamma}=2\). Now we are prepared to carry out the integral reduction and evaluate each contribution to the one-loop amplitude above.
**Reduction.** While the full \(D\)-dimensional integrals at one-loop are rather complicated for the vectors and fermions, the tensor-reduced integrals are actually quite simple, and it is worth noting what results from applying eq. (3.23) to the integrands above in eq. (4.54). After reducing the tensor integrals that appear in eq. (4.58), each internal-state contribution, complex scalar, fermion, and vector alike, collapses onto the scalar bubble \(I_{2}(k_{12})\), weighted by dimension-dependent rational coefficients and the gauge-invariant four-photon tensor structures of section 3.3.
figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfig/figfigfigfig/figfigfigfig/figfigfig/figfigfigfig/figfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfig/figfigfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfig/figfigfigfig/figfigfig/figfigfig/figfig/figfigfig/figfigfig/figfigfig/figfigfig/
Plugging \(D=4-2\epsilon\) into the above evaluated one-loop integrals, the EFT expansion above forms a basis for all the algebraic (non-transcendental) parts of the one-loop integral. Explicitly, the Wilson coefficients above take on the following values when we expand the integral in \(D=4-2\epsilon\) dimensions,
\[a_{(2,0)}^{F^{2}F^{2}} =\frac{i}{(4\pi)^{2}}\frac{1}{\epsilon}\left[\frac{N_{\gamma}}{75 }(-60+61\epsilon)+\frac{N_{X}}{225}(30+47\epsilon)-\frac{N_{\lambda}}{75}(15+ \epsilon)\right]+\mathcal{O}(\epsilon) \tag{111}\] \[a_{(0,1)}^{F^{2}F^{2}} =\frac{i}{(4\pi)^{2}}\frac{1}{\epsilon}\left[\frac{N_{\gamma}}{3 }(6+4\epsilon)-\frac{N_{\lambda}}{3}\epsilon\right]+\mathcal{O}(\epsilon)\] (112) \[a_{(2,0)}^{F^{4}} =\frac{i}{(4\pi)^{2}}\frac{1}{\epsilon}\left[\frac{N_{\gamma}}{7 5}(105+17\epsilon)+\frac{N_{X}}{225}(15+16\epsilon)+\frac{N_{\lambda}}{150}( 15-19\epsilon)\right]+\mathcal{O}(\epsilon)\] (113) \[a_{(0,1)}^{F^{4}} =\frac{i}{(4\pi)^{2}}\frac{1}{\epsilon}\left[\frac{N_{\gamma}}{25 }(-20+22\epsilon)-\frac{N_{X}}{225}(30+32\epsilon)-\frac{N_{\lambda}}{25}(5+2 \epsilon)\right]+\mathcal{O}(\epsilon) \tag{114}\]
We remind the reader that while there are four spanning \(D\)-dimensional four-photon structures, they have a non-trivial null space in four-dimensions where only three structures span the non-vanishing space. Since all of the above Wilson coefficients come dressed with a leading \(1/\epsilon\) divergence for pure photon amplitudes, this indicates the emergence of a divergent evanescent operator that will contribute at two-loop order. We will see the consequences of this when calculating the anomalous matrix elements for pure BI at the next loop order.
**Projection.** While the expressions above capture the full behavior of the photon amplitudes, they obscure the 4D physics captured by spinor helicity variables. Rather than plugging in explicit values for \(D\) in the integral, it can be more informative to first project down onto a 4D basis of states, leaving the internal dimensional dependence untouched. Indeed, below we can see the effect of plugging in all-plus helicity configurations into the evaluated one-loop integrals above:
[Diagrammatic equations: the all-plus projections of the evaluated one-loop integrals, each carrying an overall factor of \((D-4)\) and a common bracketed factor weighted by the internal particle content.]
Here we can see two characteristic properties of the anomaly cancellation that takes place when we introduce scalars and fermions into our spectrum.
* First, all integrals carry a factor of \((D-4)\). This reflects the property that in \(D=4\), all the tree-level amplitudes that contribute to the cut must vanish outside the MHV sector. Thus, any contribution to the all plus matrix element must be suppressed by a factor of \(\epsilon=(4-D)/2\) in order to push the logarithms to \(\mathcal{O}(\epsilon)\).
* Second, the common bracketed factor in all three integrals is weighted differently depending on whether the internal particle is a boson or a fermion. Bosonic contributions are weighted by the number of particles in the loop (in this case, 2 real scalars or \((D-2)\) helicity states), whereas the fermion weight is dependent on the dimension of the gamma matrix representation, where \(\text{tr}[\Gamma_{\mu}\Gamma^{\mu}]=2^{D/2-1}D\). In \(D=4\), all of these contribute equal magnitude to the full amplitude.
Putting this all together, we obtain the following non-vanishing algebraic parts of the one-loop matrix elements:
\[\mathcal{M}^{\text{DBIVA}}_{(--++)} =\frac{1}{\epsilon}\frac{i}{(4\pi)^{2}}\left[\frac{N_{\gamma}}{2} s_{12}^{2}+\left(\frac{N_{\gamma}}{5}+\frac{N_{\lambda}}{20}+\frac{N_{X}}{30} \right)(s_{13}^{2}+s_{14}^{2})\right][12]^{2}\langle 34\rangle^{2}+\mathcal{O}( \epsilon^{0}) \tag{111}\] \[\mathcal{M}^{\text{DBIVA}}_{(-+++)} =0\] (112) \[\mathcal{M}^{\text{DBIVA}}_{(++++)} =-\frac{i}{(4\pi)^{2}}\frac{1}{60}\left(N_{\gamma}+N_{X}-N_{ \lambda}\right)(s_{12}^{4}+s_{13}^{4}+s_{14}^{4})\frac{[12][34]}{\langle 12 \rangle\langle 34\rangle}+\mathcal{O}(\epsilon^{1}) \tag{113}\]
The \((-+++)\) matrix element is identically zero due to \(D\)-dimensional four-point kinematics -- the only available helicity structure is \(\langle 1|2|3]^{2}[24]^{2}\) which must be weighted by a mass-dimension 2 permutation invariant. The only such permutation invariant is \(s+t+u=0\), which vanishes regardless of the integration dimension. As a check, above we have reproduced the results found in [52], which used four-dimensional spinor helicity and dimension-shifting relations when performing the integration. This serves as a nice verification of our \(D\)-dimensional methods. Now we will proceed with the two-loop calculation.
#### 4.2.2 Two-loop Born-Infeld
For all of the two-loop calculations, we will drop the header labelling which step in the process we are presenting. While we will omit these markers, our procedure is still the same: **construction**, **reduction**, **integration** and then **projection**. Due to the formidably large expressions that result from doing this calculation completely covariantly, most of our amplitudes will be presented after projecting down to 4D helicity states.
To begin, we will compute the pure Born-Infeld two-loop amplitude. Like pions, there are only two diagrams that contribute at this loop order. Including symmetry factors, the full Born-Infeld amplitude at two-loop can be expressed as follows:
[Diagrammatic equation (114): the two-loop Born-Infeld amplitude written as the sum of the double-bubble and ostrich-diagram topologies, each weighted by its symmetry factor.]
As in the previous section, exposed legs implicitly sum over internal states. The grey blobs above represent \(D\)-dimensional \(t_{8}F^{4}\) operator insertions from the four-point BI tree amplitudes, which we labelled as \(\mathcal{M}^{\gamma\gamma\gamma\gamma}_{(1234)}\). We give the internal loop momenta the same internal labels as we did the pions in eq. (4.26) and eq. (4.30), giving us the following integrals to evaluate for two-loop pure BI,
\[=\sum_{\text{states}}\int\frac{d^{D}l_{1}d^{D}l_{2}}{(2\pi)^{2D}} \frac{\mathcal{M}^{\gamma\gamma\gamma\gamma}_{(p_{4}12p_{3})}\mathcal{M}^{ \gamma\gamma\gamma\gamma}_{(\bar{p}_{1}\bar{p}_{4}\bar{p}_{3}\bar{p}_{2})} \mathcal{M}^{\gamma\gamma\gamma\gamma}_{(p_{2}34p_{1})}}{l_{1}^{2}(l_{1}+k_{1 })^{2}l_{2}^{2}(l_{2}+k_{12})^{2}}\,, \tag{4.72}\] \[=\sum_{\text{states}}\int\frac{d^{D}l_{1}d^{D}l_{2}}{(2\pi)^{2D}} \frac{\mathcal{M}^{\gamma\gamma\gamma\gamma}_{(2q_{4}\bar{q}_{3}\bar{q}_{1}) }\mathcal{M}^{\gamma\gamma\gamma\gamma}_{(1\bar{q}_{4}q_{3}\bar{q}_{2})} \mathcal{M}^{\gamma\gamma\gamma\gamma}_{(q_{2}34q_{1})}}{l_{1}^{2}(l_{1}+l_{2} +k_{1})^{2}l_{2}^{2}(l_{2}+k_{12})^{2}}\,. \tag{4.73}\]
where the internal momenta obey the same convetion, \(\bar{p}_{i}=-p_{i}\) and \(\bar{q}_{i}=-q_{i}\). Performing the tensor reduction on the internal loop momenta and projecting to 4D helicity states yields divergent quantities for all helicity configurations. In the interest of projecting to a \(D\)-dimensional basis of operators, and analyzing the \(U(1)\) anomaly present at two-loop, we will just focus on the leading divergences for each helicity configuration.
**Leading \((++++)\) divergence.** As we saw above, the one-loop matrix element has a rational all-plus contribution. This manifestly breaks the \(U(1)\) duality invariance present at tree-level. Moreover, this will contribute to a non-vanishing 4D cut of the form:
\[\text{Cut}\left[\mathcal{M}^{\text{BI,2-loop}}_{(++++)}\right]^{D=4}=\mathcal{ M}^{\text{BI,1-loop}}_{(++++)}\times\mathcal{M}^{\text{BI,tree}}_{(--++)}\,. \tag{4.74}\]
This cut should source a logarithmic discontinuity, and thus the leading contribution to the two-loop all-plus matrix element should diverge as \(1/\epsilon\). Indeed, this is what we find:
\[\mathcal{M}^{\text{BI,2-loop}}_{(++++)}=\frac{1}{\epsilon}\frac{29}{600}\frac {1}{(4\pi)^{4}}(s_{12}^{6}+s_{13}^{6}+s_{14}^{6})\frac{[12][34]}{\langle 12\rangle\langle 34 \rangle}+\mathcal{O}(\epsilon^{0})\,. \tag{4.75}\]
In principle, this divergence could be cancelled by the addition of a counterterm at \(\mathcal{O}(\alpha^{\prime 4})\) inserted into a one-loop matrix element with \(t_{8}F^{4}\) at \(\mathcal{O}(\alpha^{\prime\,2})\). We will explore the effect of such anomaly cancelling counterterms in the next section, and we will see that this alone is not sufficient to cancel the divergence above. As we noted previously, there are additional 4D operators that appear at one-loop that will vanish when plugging in \((++++)\) physical states. These too could secretly contribute to the divergence expressed above. We can see this more clearly for the \((-+++)\) result below.
**Leading \((-+++)\) divergence.** The same cut construction above suggests a different story for \((-+++)\) at two-loop. Indeed, the following cut should vanish when taken on-shell in \(D=4\):
\[\text{Cut}\left[\mathcal{M}^{\text{BI,2-loop}}_{(-+++)}\right]^{D=4}=\mathcal{ M}^{\text{BI,1-loop}}_{(-+++)}\times\mathcal{M}^{\text{BI,tree}}_{(--++)}\,, \tag{4.76}\]
However, we found that \({\cal M}^{\rm BI,1-loop}_{(-+++)}=0\) due to \(D\)-dimensional four-point kinematics. Despite this, we find that, similarly to the all-plus helicity configuration above in eq. (110), the one-minus matrix element also carries a leading order \(1/\epsilon\) divergence:
\[{\cal M}^{\rm BI,2-loop}_{(-+++)}=-\frac{1}{\epsilon}\frac{1}{75}\frac{1 }{(4\pi)^{4}}(s_{12}^{3}+s_{13}^{3}+s_{14}^{3})\langle 1|2|3]^{2}[24]^{2}+{\cal O}( \epsilon^{0})\,. \tag{111}\]
At first glance, this appears to be a violation of the Optical Theorem. However, this divergence is sourced by one-loop evanescent operators that vanish in \(D=4\), but which carry a \(1/\epsilon\) divergence in \(D=4-2\epsilon\). In section 5.1, we will demonstrate this in more detail, and show how higher derivative four-photon operators can be used to cancel the \(U(1)\) anomaly. In doing so, we can construct a \(D\)-dimensional quantum effective action that satisfies duality invariance in \(D=4\) through two-loop order in perturbation theory.
**Leading \((--++)\) divergence.** Finally, we express below the leading divergence for the aligned-helicity matrix elements. After reducing to the two-loop scalar integral basis and projecting along 4D helicity states, we obtain the following expression at leading order in the \(\epsilon\)-expansion:
\[{\cal M}^{\rm BI,2-loop}_{(--++)}=-\frac{1}{\epsilon^{2}}\frac{\langle 12 \rangle^{2}[34]^{2}}{(4\pi)^{4}}\left[\frac{19}{60}s_{12}^{4}+\frac{17}{150}( s_{13}^{4}+s_{14}^{4})\right]+{\cal O}(\epsilon^{-1})\,. \tag{112}\]
Similar to the one-loop result, we can see that \(s_{12}^{4}\langle 12\rangle^{2}[34]^{2}\) and \((s_{13}^{4}+s_{14}^{4})\langle 12\rangle^{2}[34]^{2}\) helicity sectors are asymmetrically weighted in pure photon amplitudes. In the next section, we will show that these two operators carry equal weight when introducing additional states consistent with maximal supersymmetry.
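As a quick arithmetic check of this claim (a minimal sketch using only the one-loop coefficients of eq. (111) and eq. (113); the multiplet content \(N_{\gamma}=1\), \(N_{\lambda}=4\), \(N_{X}=3\) is the \(D=4\) maximal case quoted later in the text):

```python
from fractions import Fraction as F

def one_loop_weights(Ng, Nl, Nx):
    """Coefficients read off from the one-loop matrix elements:
    the two (--++) helicity structures and the (++++) anomaly."""
    w_s12 = F(Ng, 2)                              # weight of s12^2 [12]^2 <34>^2
    w_s13_s14 = F(Ng, 5) + F(Nl, 20) + F(Nx, 30)  # weight of (s13^2 + s14^2) [12]^2 <34>^2
    w_all_plus = F(Ng + Nx - Nl, 60)              # prefactor of the all-plus element
    return w_s12, w_s13_s14, w_all_plus

# pure Born-Infeld: a single photon in the loop
print(one_loop_weights(1, 0, 0))   # (1/2, 1/5, 1/60): asymmetric weights, non-zero anomaly
# D=4 maximal multiplet: one vector, four Weyl fermions, three complex scalars
print(one_loop_weights(1, 4, 3))   # (1/2, 1/2, 0): equal weights, anomaly cancels
```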
Moreover, adding additional scalar and fermion states consistent with supersymmetry is the simplest way to cancel the \(U(1)\) anomaly computed above outside of the aligned helicity sectors. When summing over superfield contributions at loop level, the \(U(1)\) symmetry at tree-level is promoted to a \(U(1)_{R}\) symmetry, and thus is protected perturbatively by supersymmetric Ward identities. We demonstrate this explicitly in the next section by computing the two-loop four-photon matrix element in \({\cal N}=4\) DBIVA via our \(D\)-dimensional unitarity methods.
#### 4.2.3 Two-loop \({\cal N}=4\) DBIVA
Much of the complication in performing a two-loop calculation in pure BI theory is the proliferation of high powers of loop momenta left over after the state sum. These factors of loop momenta not only conspire with external momenta, but also mingle with external polarizations. There are a number of integral reduction algorithms [132; 133; 134; 135; 136; 137; 138; 139] that are very effective for reducing factors of \((k_{i}\cdot l)\), since they can trivially be expressed as linear combinations of inverse propagators,
\[(k_{i}\cdot l)=\frac{1}{2}\left[(l+k_{i})^{2}-l^{2}\right]\,. \tag{113}\]
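For concreteness, a minimal sympy sketch of this rewrite (the symbols below stand for the Lorentz invariants \(l^{2}\) and \(k_{i}\cdot l\), with \(k_{i}^{2}=0\)): trading the numerator factor for a difference of inverse propagators cancels one propagator in each resulting term, leaving behind simpler scalar integrals.

```python
import sympy as sp

# Lorentz invariants as independent symbols; k_i is on-shell so k_i^2 = 0.
l2, k_dot_l = sp.symbols('l2 k_dot_l')
lk2 = l2 + 2*k_dot_l                      # (l + k_i)^2 expanded with k_i^2 = 0

# the numerator rewrite: (k_i . l) = [(l + k_i)^2 - l^2] / 2
numerator = sp.Rational(1, 2)*(lk2 - l2)
assert sp.simplify(numerator - k_dot_l) == 0

# inserted over a bubble, each piece cancels one propagator:
integrand = numerator/(l2*lk2)
print(sp.apart(integrand, l2))            # 1/(2*l2) - 1/(2*(l2 + 2*k_dot_l))
```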
However, the factors of \((\varepsilon_{i}\cdot l_{j})\) can be more tedious, as they do not permit an inverse propagator expansion. This is part of the motivation for applying Passarino-Veltman to the recursively one-loop integrals present in the two-loop Born-Infeld amplitude.
Luckily, the state-sewing for maximal supersymmetry is dramatically simpler than for theories with less-than-maximal supersymmetry [3, 182]. This is most easily seen by considering the covariant operators we defined for one-loop DBIVA amplitudes. When one applies the conditions for maximal supersymmetry, stated below,
\[D=10 N_{\lambda}=1 N_{X}=0\,, \tag{112}\] \[D=6 N_{\lambda}=2 N_{X}=1\,,\] (113) \[D=4 N_{\lambda}=4 N_{X}=3\,,\] (114) \[D=3 N_{\lambda}=8 N_{X}=8\,, \tag{115}\]
then the state sum is completely independent of loop momenta and precisely reproduces the supersymmetric matrix element \(s_{12}^{2}(t_{8}F^{4})\) when cut along the \(s_{12}\)-channel. This statement holds covariantly in general dimension. Concretely, we find that for theories with maximal supersymmetry:
\[\sum_{\rm states}(t_{8}F^{4})^{\rm(max)}_{(12l_{1}l_{2})}(t_{8}F^{4})^{\rm( max)}_{(\bar{l}_{1}\bar{l}_{2}34)}=s_{12}^{2}(t_{8}F^{4})^{\rm(max)}_{(1234)}\,. \tag{116}\]
This can similarly be used in the integrand construction of \({\cal N}=4\) super-Yang-Mills, which has been computed to six-loop order [183]. In the case of maximal supersymmetry in \(D=4\), we can write the \((t_{8}F^{4})^{\rm(max)}_{(1234)}\) operator as a supersymmetric delta function with the all-plus permutation invariant introduced previously:
\[{\cal M}^{\cal N=4\rm DBIVA}_{(1234)}=\delta^{(8)}(Q)\frac{[12][34]}{\langle 1 2\rangle\langle 34\rangle}\equiv(t_{8}F^{4})^{\rm(max)}_{(1234)}\Big{|}_{D=4} \tag{117}\]
where the delta function is a Grassmann valued polynomial of spinor-helicity variables:
\[\delta^{(8)}(Q)=\prod_{a=1}^{4}\sum_{i\neq j}\langle ij\rangle\eta_{i}^{a} \eta_{j}^{a}\,. \tag{118}\]
By applying this supersymmetric state sum, we find that the integrands for two-loop \({\cal N}=4\) DBIVA are trivial to construct, even simpler than the NLSM integrands. Below we represent an internal on-shell superfield with a green line, and obtain the following integrand for the double-bubble:
\[{\rm Cut}\left(\text{double-bubble}\right)=\sum_{\rm states}(t_{8}F^{4})^{\rm(max)}_{(p_{4}12p_{3})}(t _{8}F^{4})^{\rm(max)}_{(\bar{p}_{1}\bar{p}_{4}\bar{p}_{3}\bar{p}_{2})}(t_{8}F ^{4})^{\rm(max)}_{(p_{2}34p_{1})}=s_{12}^{4}(t_{8}F^{4}) \tag{119}\]
and similarly so for the ostrich-diagram integral contribution:
\[{\rm Cut}\left(\text{ostrich-diagram}\right)=\sum_{\rm states}(t_{8}F^{4})^{\rm(max)}_{(2q_{4}\bar{q}_{3}\bar{q}_{1})}(t_{8}F^{4})^{\rm(max)}_{(1\bar{q}_{4}q_{3}\bar{q}_{2})}(t_{8}F^{4})^{\rm(max)}_{(q_{2}34q_{1})}\,. \tag{120}\]

Performing the tensor reduction and integration for the double-bubble and
the ostrich-diagram, we obtain the following expression for the two diagrams:
\[\big(\text{double-bubble}\big)=s_{12}^{4}(I_{2}^{D}(k_{12}))^{2}(t_{8}F^{4})\,, \tag{111}\] \[\big(\text{ostrich-diagram}\big)=\frac{2}{3}s_{12}^{3}[I_{3}\circ I_{2}]^{D}(k_{12})(t_{8}F^{4})\,. \tag{112}\]
This is easily evaluated in \(D=4-2\epsilon\), from which we find the following expression for the leading order divergence of the four-photon two-loop amplitude in \(\mathcal{N}=4\) DBIVA theory:
\[\mathcal{M}_{\text{2-loop}}^{\mathcal{N}=4\text{\,DBIVA}}=-\frac{1}{12 \epsilon^{2}}\frac{1}{(4\pi)^{4}}(s_{12}^{4}+s_{13}^{4}+s_{14}^{4})(t_{8}F^{4} )+\mathcal{O}(\epsilon^{-1})\,. \tag{113}\]
With this result in hand, we will now demonstrate that the same calculation performed via generalized unitarity can be reproduced using loop-level double copy construction.
### 4.3 DBIVA via double copy
The calculation above was completely agnostic to (but verified by) the known tree-level relationship of DBIVA amplitudes as a double copy of NLSM and super Yang-Mills. As we will now show, the amplitude can be equivalently produced using the two-loop color-dual numerators of \(\mathcal{N}=4\) sYM, and the NLSM integrands constructed earlier in the section. This observation provides further evidence for the consistency of double-copy construction at multi-loop order in perturbation theory, and serves as an existence proof for color-dual NLSM numerators at two-loop.
At loop-level, the double-copy construction amounts to replacing the color factors in a cubic-graph representation of the integrand with a set of color-dual numerators. That is, starting with an \(L\)-loop NLSM integrand of the form:
\[\mathcal{A}_{n}^{\text{NLSM}}=\int\prod_{i=1}^{L}\frac{d^{D}l_{i}}{(2\pi)^{D} }\sum_{g\in\Gamma^{(3)}}\frac{1}{S_{g}}\frac{C_{g}N_{g}^{\text{NLSM}}}{D_{g}}\,, \tag{114}\]
we can construct \(\mathcal{N}=4\) DBIVA by replacing the color factors, \(C_{g}\), with the kinematic numerators, \(N_{g}^{\mathcal{N}=4}\), of sYM amplitudes:
\[C_{g}\to N_{g}^{\mathcal{N}=4}\quad\Rightarrow\quad\mathcal{M}_{n}^{\text{ DBIVA}}=\int\prod_{i=1}^{L}\frac{d^{D}l_{i}}{(2\pi)^{D}}\sum_{g\in\Gamma^{(3)}} \frac{1}{S_{g}}\frac{N_{g}^{\mathcal{N}=4}N_{g}^{\text{NLSM}}}{D_{g}}\,, \tag{115}\]
where the kinematic numerators depend on a particular choice of generalized gauge. In order for this construction to work, the kinematic numerators on at least one side of the double copy must satisfy all the same algebraic relations as the color factors. Since generalized unitarity allows us to construct the integrands on-shell, this construction conjecturally holds at loop-level as long as the gauge theory is tree-level color-dual.
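Structurally, the replacement in eq. (115) amounts to swapping one factor in each cubic-graph summand; below is a minimal schematic sketch in Python (all graph data, i.e. symmetry factors, color factors, numerators, and propagator denominators, are placeholder symbols rather than the actual expressions of this paper):

```python
import sympy as sp

# Placeholder per-graph data for a cubic-graph representation of a two-loop integrand.
S_db, S_os, C_db, C_os, N_db, N_os, D_db, D_os = sp.symbols(
    'S_db S_os C_db C_os N_db N_os D_db D_os')
graphs = {
    'double-bubble': dict(sym=S_db, color=C_db, n_nlsm=N_db, denom=D_db),
    'ostrich':       dict(sym=S_os, color=C_os, n_nlsm=N_os, denom=D_os),
}

def nlsm_integrand(graphs):
    """Gauge-theory-like form: sum_g C_g N_g^NLSM / (S_g D_g)."""
    return sum(g['color']*g['n_nlsm']/(g['sym']*g['denom']) for g in graphs.values())

def double_copy_integrand(graphs, kin_numerators):
    """Double copy: replace every color factor C_g by a color-dual numerator N_g."""
    return sum(kin_numerators[name]*g['n_nlsm']/(g['sym']*g['denom'])
               for name, g in graphs.items())

N_kin = {'double-bubble': sp.Symbol('N_sYM_db'), 'ostrich': sp.Symbol('N_sYM_os')}
print(nlsm_integrand(graphs))
print(double_copy_integrand(graphs, N_kin))
```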
While there are currently no color-dual NLSM numerators identified in the literature beyond one-loop, there are color-dual representations available for \(\mathcal{N}=4\) sYM through four loops at four-point [31]. Below are the one- and two-loop basis numerators relevant for our construction:
[Diagrammatic equations: the one-loop box numerator \(N_{\mathcal{N}=4}^{\text{box}}\) and the two-loop planar and non-planar double-box numerators of \(\mathcal{N}=4\) sYM.]

#### 4.3.1 One-loop \(\mathcal{N}=4\) DBIVA

Replacing the color factors of the one-loop NLSM integrand with these numerators and integrating yields the one-loop four-photon matrix element of \(\mathcal{N}=4\) DBIVA.
This is in agreement with the leading divergence of \({\cal N}=4\) DBIVA previously found in [52], along with our result in eq. (114) when taking \(N_{\gamma}=1\), \(N_{\lambda}=4\) and \(N_{X}=3\), in concordance with \(D=4\) maximal supersymmetry.
#### 4.3.2 Two-loop \({\cal N}=4\) DBIVA
At two loops it is sufficient to show that replacing the color factors reproduces the integrals we found above with the unitarity construction. Starting with the NLSM integrands in eq. (113) and eq. (112), we make the same replacement of color factors post integration as was done at one-loop. We carry this out first for the double-bubble, and find that the full \(D\)-dimensional NLSM integral simplifies dramatically:
\[\left.\mathcal{A}^{\text{NLSM}}_{\text{double-bubble}}\right|_{C_{g}\to N_{g}^{\mathcal{N}=4}}=s_{12}^{4}\,(I_{2}^{D}(k_{12}))^{2}\,(t_{8}F^{4})\,, \tag{114}\]
The cancellation between dimension dependent factors is even more startling for the ostrich-diagram integral:
\[\left.\mathcal{A}^{\text{NLSM}}_{\text{ostrich}}\right|_{C_{g}\to N_{g}^{\mathcal{N}=4}}=\frac{2}{3}\,s_{12}^{3}\,[I_{3}\circ I_{2}]^{D}(k_{12})\,(t_{8}F^{4})\,. \tag{115}\]
This alone is sufficient to demonstrate the validity of the double-copy, as these integrated quantities are exactly the same as those produced via generalized unitarity of \({\cal N}=4\) DBIVA at two-loop in eq. (112) and eq. (113). We emphasize that based on our analysis, the double-copy at two loops clearly holds in any dimension, as all the \(D\)-dependent prefactors drop out when replacing the color factors with \({\cal N}=4\) numerators. Considering the consistency of these two constructions, this also serves as strong evidence for the existence of color-dual representations for two-loop pion integrands. We leave identifying such valid representations as an enticing direction for future investigation.
## 5 Effective Actions
In this section we demonstrate how our basis of higher-derivative four-photon operators in eq. (3.41) can be used to construct quantum effective actions that capture loop-level effects. We will distinguish between position space operators that appear in the Lagrangian, \(\mathcal{O}\), and their corresponding matrix elements, \(\mathcal{T}\), which are on-shell quantities in momentum space:
\[\mathcal{T}\equiv\langle\text{out}|\mathcal{O}|\text{in}\rangle\,. \tag{5.1}\]
Similar to the matrix elements, the operators can be expressed in both \(D\)-dimensional and 4D representations. For example, we can define \(\mathcal{O}^{F^{2}F^{2}}\) operators as follows:
\[\mathcal{O}^{F^{2}F^{2}}_{(2,0)} \sim(D_{\mu}F_{\rho\sigma}D^{\mu}F^{\rho\sigma})^{2}\,, \tag{5.2}\] \[\mathcal{O}^{F^{2}F^{2}}_{(0,1)} \sim(D_{\mu}F_{\rho\sigma}D^{\nu}F^{\rho\sigma})(D_{\nu}F_{\alpha \beta}D^{\mu}F^{\alpha\beta})\,, \tag{5.3}\]
where the spacetimes indices run from \(1,2,...,D\), and \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) are operator valued abelian field strengths. These operator valued expressions can be normalized so that for an all outgoing four-photon scattering process, \(|\text{out}\rangle=|k_{1},k_{2},k_{3},k_{4}\rangle\), they are related to the on-shell tensor basis elements defined previously in eq. (3.41) as follows:
\[\mathcal{T}^{F^{2}F^{2}}_{(2,0)} =\langle\text{out}|\mathcal{O}^{F^{2}F^{2}}_{(2,0)}|\text{in} \rangle\,, \tag{5.4}\] \[\mathcal{T}^{F^{2}F^{2}}_{(0,1)} =\langle\text{out}|\mathcal{O}^{F^{2}F^{2}}_{(0,1)}\left|\text{ in}\rangle\,. \tag{5.5}\]
Likewise we can construct field theory operators for the split helicity operators using the spinor form of the field strengths:
\[F_{\pm}=(F^{\mu\nu}\pm i\tilde{F}^{\mu\nu})\sigma^{\mu\nu}_{\pm}\,, \tag{5.6}\]
where the dual field strength is defined as \(\tilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}\), and the spin matrices are expressed as \(\sigma^{\mu\nu}_{\pm}=\frac{1}{4}\sigma^{[\mu}_{\pm}\sigma^{\nu]}_{\mp}\) in terms of the Pauli matrices as follows:
\[\sigma^{\mu}_{\pm}=(\mathbb{1},\pm\vec{\sigma})^{\mu}\,. \tag{5.7}\]
In the above expressions, the spacetime indices now run from 1 to 4. Using these manifestly 4D operators, we can similarly define operators that select out particular helicity states and derivative structures:
\[\mathcal{O}^{\text{4D,1}}_{(--++)} \sim(D_{\mu}F_{-}D_{\nu}F_{-})(D^{\mu}F_{+}D^{\nu}F_{+})\,, \tag{5.8}\] \[\mathcal{O}^{\text{4D,2}}_{(--++)} \sim(D_{\mu}F_{-}D^{\mu}F_{-})(D_{\nu}F_{+}D^{\nu}F_{+})\,, \tag{5.9}\]
where the trace is over the spinor indices of the Pauli matrices. As defined, these operators will produce the 4D helicity structures defined in eq. (3.43) and eq. (3.44) when contracted between the outgoing helicity state, \(|\text{out}\rangle=|k_{1}^{-},k_{2}^{-},k_{3}^{+},k_{4}^{+}\rangle\), and the incoming vacuum state:
\[\mathcal{T}^{\text{4D,1}}_{(--++)} =\langle\text{out}|\mathcal{O}^{\text{4D,1}}_{(--++)}|\text{in} \rangle=(s_{13}^{2}+s_{14}^{2})\langle 12\rangle^{2}[34]^{2}\,, \tag{5.10}\] \[\mathcal{T}^{\text{4D,2}}_{(--++)} =\langle\text{out}|\mathcal{O}^{\text{4D,2}}_{(--++)}|\text{in} \rangle=s_{12}^{2}\langle 12\rangle^{2}[34]^{2}\,. \tag{5.11}\]
In the duality invariant quantum actions we construct in this section, we will reserve \(\mathcal{O}\) for effective actions, and \(\mathcal{T}\) for on-shell matrix elements used in the EMU construction. However, since the field theory operators are redundant up to field redefinition and equations of motion, we'll find it convenient to work in terms of the on-shell matrix elements, and then implicitly define the operators in terms of these on-shell expressions.
Below we first study anomaly cancellation for the multi-loop photon amplitudes computed above. After this, we proceed by interpreting these higher derivative operators as double copies between NLSM and higher derivative Yang-Mills amplitudes with off-shell higher-spin modes. As was demonstrated in previous work by the authors [51], the full set of \(D\)-dimensional four-photon operators can be constructed via adjoint double-copy, but at the cost of introducing off-shell higher spin modes in the single-copy vector theory. However, these higher-spin modes can be absorbed consistently in a double-copy framework by introducing symmetric algebraic structures. We discuss future applications of this construction in section 6.
### 5.1 Anomaly cancellation
Now we begin with our study of higher-derivative extensions to BI theory. Our goal is to identify the higher derivative four-photon operators that are needed to cancel the \(U(1)\) anomalous matrix elements computed in the previous section. We will start with the one-loop corrections, captured by tree level insertions at \(\mathcal{O}(\alpha^{\prime 4})\), and then proceed with the two loop corrections, which combine both \(\mathcal{O}(\alpha^{\prime 4})\) operator insertions at one-loop and \(\mathcal{O}(\alpha^{\prime 6})\) operator insertions at tree-level. In doing so, we demonstrate that cancelling the \(U(1)\) anomaly through two loop order can be achieved with local finite counterterms if and only if we introduce an evanescent operator at \(\mathcal{O}(\alpha^{\prime 4})\) to the Born-Infeld action.
#### 5.1.1 One-loop
In section 4.2 we computed the one-loop matrix element for a general DBIVA theory. Plugging in the values \(N_{\gamma}=1\) and \(N_{\lambda}=N_{X}=0\) we obtain the following anomalous all-plus matrix element for pure Born-Infeld theory:
\[\mathcal{M}^{\text{BI,1-loop}}_{(++++)}=-\frac{i}{(4\pi)^{2}}\frac{1}{60}(s_ {12}^{4}+s_{13}^{4}+s_{14}^{4})\frac{[12][34]}{\langle 12\rangle\langle 34 \rangle}+\mathcal{O}(\epsilon)\,. \tag{110}\]
In Ref. [52], Elvang, Hadjiantonis, Jones, and Paranjape identify a candidate 4D counterterm that cancels this anomalous matrix element, whose prediction we have called \(\mathcal{T}^{\text{4D}}_{(++++)}\), thereby restoring duality invariance through one-loop. As noted in the previous section, the one-loop matrix element can be mapped to our \(D\)-dimensional operator basis. In general, all available 4D tensor structures at \(\mathcal{O}(\alpha^{\prime 4})\) map onto our \(D\)-dimensional basis. One particular map we
provide below:
\[\mathcal{T}^{\text{4D}}_{(++++)} =a_{(\text{ev.})}\mathcal{T}^{\text{ev.}}+2\mathcal{T}^{4+}\,, \tag{5.13}\] \[\mathcal{T}^{\text{4D,1}}_{(--++)} =a_{(\text{ev.})}\mathcal{T}^{\text{ev.}}+2\mathcal{T}^{F^{4}}_{(2,0)}-4\mathcal{T}^{F^{2}F^{2}}_{(0,1)}\,,\] (5.14) \[\mathcal{T}^{\text{4D,2}}_{(--++)} =a_{(\text{ev.})}\mathcal{T}^{\text{ev.}}+2\mathcal{T}^{F^{4}}_{( 2,0)}+4\mathcal{T}^{F^{2}F^{2}}_{(0,1)}\,, \tag{5.15}\]
where we have defined the following \(D\)-dimensional operator that projects down to the all-plus configuration,
\[\mathcal{T}^{4+}=2\mathcal{T}^{F^{2}F^{2}}_{(2,0)}-\mathcal{T}^{F^{4}}_{(2,0) }-2\mathcal{T}^{F^{4}}_{(0,1)}\,, \tag{5.16}\]
and all the 4D operators have the freedom to add the previously defined evanescent operator, \(\mathcal{T}^{\text{ev.}}\),
\[\mathcal{T}^{\text{ev.}}=\mathcal{T}^{F^{2}F^{2}}_{(2,0)}-\mathcal{T}^{F^{2} F^{2}}_{(0,1)}+\mathcal{T}^{F^{4}}_{(0,1)}\,. \tag{5.17}\]
Thus, we can construct the new effective photon Lagrangian, \(\mathcal{L}^{\text{BI+CT}}\), with the addition of the all-plus counter-terms to our Born-Infeld Lagrangian:
\[\mathcal{L}^{\text{BI+CT}}=\mathcal{L}^{\text{BI}}+\frac{\alpha^{\prime 4}}{(4 \pi)^{2}}\frac{1}{30}\left(\mathcal{O}^{4+}+a_{(\text{ev.})}\mathcal{O}^{ \text{ev.}}\right)\,, \tag{5.18}\]
where we have used eq. (5.1) to implicitly define the operators appearing in the quantum effective action above. While there are a number of perturbatively equivalent constructions of these operators, below we provide a couple of expressions that resemble the on-shell basis elements:
\[\mathcal{O}^{4+} \sim 2(D_{\mu}F_{\alpha\beta}D^{\mu}F^{\alpha\beta})^{2}-\eta^{\mu (\nu}\eta^{\rho\sigma)}(D_{\mu}F_{\alpha\beta}D_{\nu}F^{\gamma\delta}D_{\rho} F_{\beta\gamma}D_{\sigma}F^{\delta\alpha})\,, \tag{5.19}\] \[\mathcal{O}^{\text{ev.}} \sim(D_{\mu}F_{\alpha\beta}D^{\mu}F^{\alpha\beta})^{2}-(D_{\mu}F_ {\alpha\beta}D_{\nu}F^{\alpha\beta})(D^{\mu}F_{\alpha\beta}D^{\nu}F^{\alpha \beta})\] \[\quad+(D_{\mu}F_{\alpha\beta}D^{\mu}F^{\gamma\delta}D_{\nu}F_{ \beta\gamma}D^{\nu}F^{\delta\alpha})\,.\]
Computing the one-loop amplitudes from the Lagrangian of eq. (5.18) yields the following matrix elements at \(\mathcal{O}(\alpha^{\prime 4})\):
\[\mathcal{M}^{\text{BI+CT,1-loop}}_{(--++)}\big{|}_{\alpha^{\prime 4}} =\mathcal{M}^{\text{BI,1-loop}}_{(--++)}\,, \tag{5.20}\] \[\mathcal{M}^{\text{BI+CT,1-loop}}_{(-+++)}\big{|}_{\alpha^{\prime 4}} =0\,,\] (5.21) \[\mathcal{M}^{\text{BI+CT,1-loop}}_{(++++)}\big{|}_{\alpha^{\prime 4}} =\mathcal{O}(\epsilon)\,. \tag{5.22}\]
Thus, eq. (5.18) constitutes a duality invariant photon theory through one-loop order. With this, we can identify what additional operators will be needed to cancel the anomaly through two-loop.
#### 5.1.2 Two-loop
The first step in identifying the requisite operators needed to cancel the two-loop anomaly at \(\mathcal{O}(\alpha^{\prime 6})\) is to perform another one-loop calculation at this mass dimension, which includes the counterterms of \(\mathcal{L}^{\text{BI+CT}}\) defined above. The one-loop amplitude is constructed as follows:
[Diagrammatic equation (5.23): the \(\mathcal{O}(\alpha^{\prime 6})\) one-loop amplitude, built from a bubble cut with one Born-Infeld \(t_{8}F^{4}\) insertion and one \(\mathcal{O}(\alpha^{\prime 4})\) counterterm insertion, \(\mathcal{O}^{4+}\) or \(\mathcal{O}^{\text{ev.}}\).]
Both of these operator insertions can be evaluated using the same \(D\)-dimensional procedure used throughout the text. The all-plus counterterm yields the following contributions to \((++++)\) and \((-+++)\) helicity configurations:
[Diagrammatic equations (5.24)-(5.25): the one-loop \((++++)\) and \((-+++)\) matrix elements generated by the \(\mathcal{O}^{4+}\) counterterm insertion.]
We note that there is a distinction between the first and second \((-+++)\) expressions. The first expression is dressed with \((D-4)^{2}\), which pushes the leading contribution to \(\mathcal{O}(\epsilon)\), whereas the second term is identically zero because the 4D helicity structure carries an overall factor of \((s+t+u)=0\). In addition, since there is a non-vanishing 4D residue for the all-plus integrand, the integral has a leading order divergence in \(\epsilon\).
Below we find it instructive to show the \(D\)-dependence of the evanescent operator insertion, which yields the following matrix element contributions:
[Diagrammatic equations (5.26)-(5.28): the corresponding one-loop matrix elements generated by the evanescent operator insertion, with their explicit \(D\)-dependence.]
This will produce \(\mathcal{O}(\epsilon^{0})\) matrix elements in the \((-+++)\) helicity sector. Thus, in order to cancel the divergent part of the two-loop \((-+++)\) anomaly computed in eq. (4.77), we
must weight the evanescent operator by a numerical factor that diverges in \(D=4\). Given the particular numerical value computed in the previous section, we find the evanescent Wilson coefficient must take the following \(D\)-dependent value:
\[\mathcal{L}^{\text{BI+CT}}=\mathcal{L}^{\text{BI}}+\frac{\alpha^{\prime 4}}{(4 \pi)^{2}}\frac{1}{30}\left[\mathcal{O}^{4+}-\frac{8}{(D-4)}\mathcal{O}^{\text{ ev.}}\right]+\mathcal{O}(\alpha^{\prime 6})\,, \tag{111}\]
where the \((D-4)\) in the denominator cancels the factor in the numerator above. In order to further absorb the remaining rational terms, we must introduce an additional set of tree-level operators at \(\mathcal{O}(\alpha^{\prime 6})\). At this order in mass-dimension, there are seven distinct operators:
\[\{\mathcal{O}_{(4,0)}^{F^{2}F^{2}},\mathcal{O}_{(2,1)}^{F^{2}F^{2}},\mathcal{ O}_{(0,2)}^{F^{2}F^{2}},\mathcal{O}_{(4,0)}^{F^{4}},\mathcal{O}_{(2,1)}^{F^{4}}, \mathcal{O}_{(0,2)}^{F^{4}},\mathcal{O}_{(1,0)}^{F^{3}}\}\,. \tag{112}\]
By adding these operators to the effective Lagrangian above, we have verified that there is sufficient freedom to absorb the remaining rational terms present at two-loop. Of these available operators, only the \(F^{3}\) tensor structure is non-vanishing when projected along the \((-+++)\) helicity configuration. Furthermore, just as at \(\mathcal{O}(\alpha^{\prime 4})\), there is a single evanescent matrix element which we define below:
\[\mathcal{T}_{\alpha^{\prime 6}}^{\text{ev.}}=\mathcal{T}_{(4,0)}^{F^{2}F^{2}}-2 \mathcal{T}_{(2,1)}^{F^{2}F^{2}}+\mathcal{T}_{(0,2)}^{F^{2}F^{2}}+\mathcal{T} _{(2,1)}^{F^{4}}-\mathcal{T}_{(0,2)}^{F^{4}}\,. \tag{113}\]
Thus, the seven available \(D\)-dimensional operators project onto only six distinct 4D tensor structures. We will describe the counting of 4D versus general-dimension photon operators at all orders in \(\alpha^{\prime}\) in more depth at the end of this section.
### 5.2 Double copy construction
As we have stated in the text, it is well known that DBIVA theory can be constructed at tree-level as an adjoint double copy between NLSM and sYM amplitudes. There is now a large body of literature studying double-copy construction of higher derivative gauge theory counterterms [13; 15; 16; 17; 18; 19; 21; 22; 23; 24], like those used above to cancel \(U(1)\) anomalous matrix elements in pure Born-Infeld theory. Indeed, recent work by the authors demonstrated that all four-photon operators can be constructed consistently via the double-copy [51]. Here we briefly describe the single-copy gauge theory that when double copied with NLSM produces the higher derivative operators of the previous section.
#### 5.2.1 Symmetric-structure double-copy
To realize the double-copy construction that produces the counterterms above, Ref. [51] first decomposed NLSM pion amplitudes into symmetric structure constants using the \(U(N)\) color identity
\[f^{abe}f^{ecd}=d^{ade}d^{ebc}-d^{ace}d^{ebd}\,, \tag{114}\]
where the symmetric structure constant is defined as follows
\[d^{abc}=\text{tr}[T^{a}\{T^{b},T^{c}\}]\,. \tag{115}\]
By applying this color algebra identity to the four-point NLSM amplitudes of eq. (4.4), one finds that pions can similarly be expressed as a symmetric-structure double copy:
\[\mathcal{M}_{4}^{\rm NLSM}=\sum_{g\in\Gamma^{3}}\frac{c_{g}^{\rm dd}n_{g}^{\rm dd,\pi}}{d_{g}}=d^{abe}d^{ecd}\,s+d^{ade}d^{ebc}\,t+d^{ace}d^{ebd}\,u\,, \tag{5.34}\]
where \(c_{s}^{\rm dd}\equiv d^{abe}d^{ecd}\) and the NLSM symmetric \(s\)-channel numerator is \(n_{s}^{\rm dd,\pi}=s^{2}\). By identifying a set of gauge theory numerators that obey the same algebraic relations as the color factors, one can construct consistent double copy theories.
For example, consider the two-loop divergence for the \((++++)\) anomalous matrix element in pure BI theory. The 4D helicity structure can be captured by a symmetric structure double-copy between NLSM pion numerators and a local gauge theory contact at \(\mathcal{O}(\alpha^{\prime 5})\). The \(s\)-channel numerators for this symmetric-structure double copy are as follows
\[n_{s}^{\rm dd,\pi}=s^{2}\qquad n_{s}^{\rm dd,HD}=s^{5}\frac{[12][34]}{\langle 1 2\rangle\langle 34\rangle}\,. \tag{5.35}\]
Double-copying these kinematic numerators yields a matrix element of the form:
\[\mathcal{M}^{\rm BI+HD}=\sum_{g\in\Gamma^{3}}\frac{n_{g}^{\rm dd,HD}n_{g}^{\rm dd,\pi}}{d_{g}}=(s^{6}+t^{6}+u^{6})\frac{[12][34]}{\langle 12\rangle\langle 34 \rangle}\,. \tag{5.36}\]
While this construction lacks any algebraic relations between the four-point kinematic factors, similar symmetric numerators were found at six-point for NLSM, which obey non-trivial algebraic relations [51]. In addition to the photon counterterms constructed above, the symmetric double copy is likewise a natural description of gravitational counterterms. Rather than composing the local vector numerators, \(n^{\rm dd,HD}\), with symmetric NLSM numerators, \(n_{g}^{\rm dd,\pi}\), they can also be squared, yielding a gravitational counterterm:
\[\mathcal{M}^{\rm GR+HD}=\sum_{g\in\Gamma^{3}}\frac{(n_{g}^{\rm dd,HD})^{2}}{d_ {g}}=(s^{9}+t^{9}+u^{9})\frac{[12][34]}{\langle 12\rangle\langle 34 \rangle}\,. \tag{5.37}\]
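These channel sums are simple enough to check directly; below is a minimal sympy sketch of eqs. (5.36) and (5.37), with the helicity prefactor \([12][34]/\langle 12\rangle\langle 34\rangle\) stripped off and the Mandelstam variables treated as independent symbols.

```python
import sympy as sp

s, t, u = sp.symbols('s t u')
channels = [s, t, u]

# symmetric-structure numerators per channel (helicity factor stripped off)
n_pi = {c: c**2 for c in channels}   # NLSM pion numerators: s^2, t^2, u^2
n_hd = {c: c**5 for c in channels}   # higher-derivative vector numerators: s^5, t^5, u^5

photon_ct   = sum(n_hd[c]*n_pi[c]/c for c in channels)     # eq. (5.36)
graviton_ct = sum(n_hd[c]**2/c for c in channels)           # eq. (5.37)

assert sp.simplify(photon_ct - (s**6 + t**6 + u**6)) == 0
assert sp.simplify(graviton_ct - (s**9 + t**9 + u**9)) == 0

# recasting the symmetric HD numerator into an adjoint one, n_s^ff = n_t^dd - n_u^dd,
# reproduces the spin-5 numerator (t^5 - u^5) discussed below
print(sp.expand(n_hd[t] - n_hd[u]))
```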
In general, any four-point symmetric vector numerator can be constructed from two \(D\)-dimensional gauge theory building blocks [51],
\[n_{s}^{\rm dd,vec,1}=f_{12}f_{34}\,,\qquad n_{s}^{\rm dd,vec,2}=f_{1324}\,, \tag{5.38}\]
by composing with the symmetric scalar numerators, \(n_{s}^{\rm dd,1}=s\) and \(n_{s}^{\rm dd,2}=tu\). With these building blocks, symmetric double copy construction captures the exceptional four graviton amplitude of Ref. [185], which only considered linear combinations of gauge theory amplitudes rather than gauge theory numerators. We note that a similar construction was recently identified in Ref. [186] beyond four-point, which demonstrated that gravity amplitudes permit double copy construction in terms of gauge-invariant numerators that do not obey any algebraic relations between themselves, much like the symmetric numerators above in eq. (5.35).
We see further exploring higher multiplicity examples of symmetric double copy as an important direction of future study.
We will now show that the matrix element above in eq. (110) needed to cancel the two-loop anomaly can be constructed from an equivalent adjoint double copy, at the cost of introducing a spin-5 off-shell mode in the single-copy gauge theory. This is a special property of the symmetric double copy between NLSM numerators and symmetric vector building blocks.
#### 5.2.2 Higher-spin \(\otimes\) Adler Zero
Guided by the dual description of NLSM pion amplitudes as symmetric and adjoint double copies, one can easily cast the symmetric-structure numerators back to adjoint kinematics. Due to the duality between color and kinematic factors, we construct partner adjoint numerators using the following color relation:
\[c_{s}^{\rm ff}=c_{t}^{\rm dd}-c_{u}^{\rm dd}\quad\Leftrightarrow\quad n_{s}^{ \rm ff}=n_{t}^{\rm dd}-n_{u}^{\rm dd}\,. \tag{111}\]
Applying this identity to the symmetric vector numerator needed to reproduce the two-loop all-plus counterterm yields the following adjoint color-dual \(s\)-channel numerator:
\[n_{s}^{\rm HD,(2)}=(t^{5}-u^{5})\frac{[12][34]}{\langle 12\rangle\langle 34 \rangle}\,. \tag{112}\]
Following the argument of [21], this degree-five kinematic numerator indicates that there is a spin-5 mode on top of the \(s\)-channel pole. However, when double-copied with NLSM, the residue is suppressed by the Adler-zero-satisfying four-point contact term of the pion amplitude. Thus, this adjoint color-dual numerator serves as a consistent single-copy theory when composed with color-dual NLSM numerators.
Guided by the structure of the anomalous BI matrix elements computed through two-loop in the text, a possible guess for the single-copy HD vector numerators needed to cancel the leading \(L\)-loop divergence might go as
\[n_{s}^{\rm HD,(L)}\stackrel{{?}}{{=}}(t^{2L+1}-u^{2L+1})\frac{[1 2][34]}{\langle 12\rangle\langle 34\rangle}\,. \tag{113}\]
This all-loop guess mirrors the structure of the one-loop adjoint numerator identified in [51], and the two-loop counterterm numerator expressed above in eq. (112), in that at each loop order we would require the addition of a higher odd-integer-spin mode.
The physical picture one should have for this class of photon effective operators is that symmetric-structure double-copy and adjoint double-copy with higher spin modes are one and the same. In exchange for constructing the color-dual adjoint numerators needed for adjoint double-copy, or equivalently the KLT kernel, one must admit the addition of higher spin modes. However, as long as these higher spin modes are composed via adjoint double-copy, \(\stackrel{{\rm adj.}}{{\otimes}}\), with contact numerators, they map to the same local vector numerators one would achieve with
the symmetric double-copy kernel, \(\stackrel{{\rm sym.}}{{\otimes}}\),
\[\big(\text{higher-spin adjoint numerator}\big)\stackrel{\rm adj.}{\otimes}\big(\text{NLSM}\big)\;=\;\big(\text{local contact numerator}\big)\stackrel{\rm sym.}{\otimes}\big(\text{NLSM}\big)\,.\]

We close this section by counting the independent four-photon operators at each order in \(\alpha^{\prime}\) using Hilbert series. The counting is organized by two sequences of permutation-invariant Mandelstam structures. The first sequence counts totally symmetric invariants, \(\sigma_{3}^{x}\sigma_{2}^{y}\), which we denote as \(\mathcal{H}^{(ijkl)}\),

\[[\mathcal{H}^{(ijkl)}]=1,0,1,1,1,1,2,1,2,2,2,2,3,... \tag{110}\]
and the second sequence counts symmetric invariants, \(s^{x}_{ij}(s_{ik}s_{jk})^{y}\), which we denote as \(\mathcal{H}^{(ij)(kl)}\),
\[[\mathcal{H}^{(ij)(kl)}]=1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,... \tag{111}\]
Above, we have defined the bracket \([\mathcal{H}]\) such that it maps Hilbert series with integer coefficients into a sequence of numbers at successive orders in \(\alpha\). Both of these Hilbert series are simple rational functions of \(\alpha\), which we state below:
\[\mathcal{H}^{(ij)(kl)} =\frac{1}{(\alpha-1)^{2}(\alpha+1)}=\alpha^{0}+\alpha^{1}+2\alpha^{2}+2\alpha^{3}+3\alpha^{4}+3\alpha^{5}+4\alpha^{6}+4\alpha^{7}+\cdots \tag{112}\] \[\mathcal{H}^{(ijkl)} =\frac{1}{(\alpha-1)^{2}(\alpha+1)(\alpha^{2}+\alpha+1)}=\alpha^{0}+\alpha^{2}+\alpha^{3}+\alpha^{4}+\alpha^{5}+2\alpha^{6}+\alpha^{7}+2\alpha^{8}+\cdots \tag{113}\]
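As a quick sanity check, the series expansions of these two rational functions can be generated directly (a minimal sympy sketch):

```python
import sympy as sp

a = sp.symbols('alpha')

H_pair = 1/((a - 1)**2*(a + 1))                  # H^{(ij)(kl)}: counts s_ij^x (s_ik s_jk)^y
H_sym  = 1/((a - 1)**2*(a + 1)*(a**2 + a + 1))   # H^{(ijkl)}:   counts sigma_3^x sigma_2^y

print(sp.series(H_pair, a, 0, 9))   # 1 + a + 2a^2 + 2a^3 + 3a^4 + 3a^5 + 4a^6 + ...
print(sp.series(H_sym, a, 0, 9))    # 1 + a^2 + a^3 + a^4 + a^5 + 2a^6 + a^7 + 2a^8 + ...

# the coefficient of alpha^n counts monomials of total Mandelstam degree n:
# x + 2y = n for H_pair, and 3x + 2y = n for H_sym
```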
Using these, we can infer the operator counting for both general dimension and the \(D=4\) operators. The general dimension operator basis that we have used throughout scales as follows:
\[\mathcal{T}^{F^{2}F^{2}}_{(x,y)}\sim s^{x}_{ij}(s_{ik}s_{jk})^{y}\qquad \mathcal{T}^{F^{4}}_{(x,y)}\sim s^{x}_{ij}(s_{ik}s_{jk})^{y}\qquad\mathcal{T} ^{F^{3}}_{(x,y)}\sim\sigma^{x}_{3}\sigma^{y}_{2}\,, \tag{114}\]
where \(\mathcal{T}^{F^{3}}_{(x,y)}\) begins at \(\mathcal{O}(\alpha^{\prime 3})\) and both \(\mathcal{T}^{F^{2}F^{2}}_{(x,y)}\) and \(\mathcal{T}^{F^{4}}_{(x,y)}\) begin at \(\mathcal{O}(\alpha^{\prime\,2})\). Thus, the Hilbert series \(\mathcal{H}^{\text{gen.}D}\) can be defined as follows:
\[\mathcal{H}^{\text{gen.}D}=2\mathcal{H}^{(ij)(kl)}+\alpha\mathcal{H}^{(ijkl)}\,. \tag{115}\]
In contrast, the 4D helicity structures scale as,
\[\mathcal{T}^{(--++)}_{(x,y)}\sim s^{x}_{ij}(s_{ik}s_{jk})^{y}\qquad\mathcal{T} ^{(-+++)}_{(x,y)}\sim\sigma^{x}_{3}\sigma^{y}_{2}\qquad\mathcal{T}^{(++++)}_ {(x,y)}\sim\sigma^{x}_{3}\sigma^{y}_{2}\,. \tag{116}\]
Similar to the \(D\)-dimensional operators above, \(\mathcal{T}^{(--++)}_{(x,y)}\) starts at \(\mathcal{O}(\alpha^{\prime\,2})\) and \(\mathcal{T}^{(-+++)}_{(x,y)}\) begins the sequence at \(\mathcal{O}(\alpha^{\prime 3})\). However, the all-plus behavior is slightly abnormal relative to the other counting sequences. Rather than pushing the start of the sequence to higher orders in \(\alpha^{\prime}\), it begins the sequence specified by \(\mathcal{H}^{(ijkl)}\) at its third entry, at \(\mathcal{O}(\alpha^{\prime\,2})\). Thus, the \(D=4\) Hilbert series can be defined as follows:
\[\mathcal{H}^{D=4}=\mathcal{H}^{(ij)(kl)}+\alpha\mathcal{H}^{(ijkl)}+(1+\alpha -\alpha^{3})\mathcal{H}^{(ijkl)}\,. \tag{117}\]
Putting this all together we obtain the following expression for the general dimension and 4D four-photon operator Hilbert series:
\[\begin{split}\mathcal{H}^{\text{gen.}D}&=\frac{( \alpha+2)(\alpha+1)+\alpha^{2}}{(\alpha-1)^{2}(\alpha+1)(\alpha^{2}+\alpha+1 )}\,,\\ \mathcal{H}^{D=4}&=\frac{(\alpha+2)(\alpha+1)-\alpha ^{3}}{(\alpha-1)^{2}(\alpha+1)(\alpha^{2}+\alpha+1)}\,.\end{split} \tag{118}\]
With this we can determine the number of four-photon evanescent operators that contribute at each successive order in \(\alpha^{\prime}\). The number of evanescent operators at each order is given by the difference between the general-\(D\) and the \(D=4\) Hilbert series, \(\mathcal{H}^{\text{ev.}}=\mathcal{H}^{\text{gen.}D}-\mathcal{H}^{D=4}\). Thus we obtain the following Hilbert series for the number of evanescent four-photon operators at \(\mathcal{O}(\alpha^{\prime n+2})\):
\[\mathcal{H}^{\text{ev.}}=\frac{\alpha^{2}}{(\alpha-1)^{2}(\alpha^{2}+\alpha+1)}\,. \tag{110}\]
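As a quick cross-check of the counting above, the closed forms quoted in eqs. (112)-(118) and the evanescent series can be verified symbolically. This is a minimal sketch assuming only that sympy is available; the variable names and the expansion order are illustrative choices.

```python
# Symbolic cross-check of the four-photon Hilbert series quoted above.
# All closed forms are copied from the text; only the variable names and the
# expansion order below are illustrative choices.
import sympy as sp

a = sp.symbols('alpha')
den = (a - 1)**2 * (a + 1) * (a**2 + a + 1)

H_ij_kl = 1 / ((a - 1)**2 * (a + 1))        # H^{(ij)(kl)}, eq. (112)
H_ijkl  = 1 / den                           # H^{(ijkl)},   eq. (113)

H_genD = 2*H_ij_kl + a*H_ijkl                          # eq. (115)
H_D4   = H_ij_kl + a*H_ijkl + (1 + a - a**3)*H_ijkl    # eq. (117)

# Closed forms of eq. (118)
assert sp.cancel(H_genD - ((a + 2)*(a + 1) + a**2)/den) == 0
assert sp.cancel(H_D4   - ((a + 2)*(a + 1) - a**3)/den) == 0

# Evanescent counting, H^{ev.} = H^{gen.D} - H^{D=4}
H_ev = sp.cancel(H_genD - H_D4)
assert sp.cancel(H_ev - a**2/((a - 1)**2*(a**2 + a + 1))) == 0

# Number of evanescent four-photon operators, order by order in alpha'
print(sp.series(H_ev, a, 0, 10))
```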
It would be interesting to determine whether the three Hilbert series stated above owe their construction to some hidden geometric origin. Indeed, all of the operator counting in the SMEFT literature can be traced to the geometry of the group theory representations that underlie the Standard Model [195; 196]. We leave identifying these concealed mathematical structures as an exciting direction of future study.
## 6 Conclusions
In this manuscript, we have carried out a detailed analysis of even-point effective field theories through two-loop order in the perturbative expansion. In section 2, we provided a review of the generalized unitarity and integration methods employed throughout the text. Then in section 3 we introduced and developed the on-shell constructive method of Even-point Multi-loop Unitarity (EMU), and computed the tensor reduction for triangle and bubble integrals of arbitrary rank in \(D\) dimensions, along with a spanning set of four-photon operators needed to capture higher-loop effects. Due to the simplicity of even-point multi-loop amplitudes of the nonlinear sigma model (NLSM) and Dirac-Born-Infeld-Volkov-Akulov (DBIVA) theories, these methods allowed us to compute fully integrated two-loop amplitudes for NLSM, pure Born-Infeld, and \(\mathcal{N}=4\) DBIVA theory in section 4. Finally, in section 5 we studied the quantum effective actions that capture the aforementioned loop effects, and studied the all-order counting of higher-derivative four-photon operators using Hilbert series. In doing so, we have identified a variety of rich physical structures that we summarize below:
Exponentiation: In eq. (104) and eq. (105) we computed, in general dimension \(D\), the two contributions to NLSM two-loop amplitudes. Plugging in explicit color structures, we found that the leading divergence of NLSM amplitudes on a \(\mathbb{CP}^{1}\) target space exponentiates in \(D=2-2\epsilon\), akin to the IR exponentiation of gravity amplitudes [160; 161; 162; 163; 164; 155]. In addition, evaluating these diagrams in the planar limit, where \(N_{c}\to\infty\), we found that the double-bubble integral iterates the leading divergence and subleading logarithms in the full one-loop amplitude. Thus, identifying corrections to the NLSM Lagrangian that absorb the ostrich diagram would imbue the scale-dependent logarithms with exponential structure at loop-level. This non-trivial property is found in theories that are conjectured to be integrable [206; 207; 208], like \(\mathcal{N}=4\) super-Yang-Mills [167; 168] in \(D=4-2\epsilon\). In future work, we hope to study whether this iterative structure can be further applied beyond the \(\mathbb{CP}^{1}\) model.
Anomalies: Equipped with our \(D\)-dimensional integration methods, we performed a similar calculation at two-loop for pure Born-Infeld in section 4.2.2. In doing so, we demonstrated that the previously identified one-loop counterterm [52] is not sufficient to cancel the two-loop anomaly. In fact, the \((-+++)\) anomaly of eq. (106), which was absent at one-loop, diverges at two-loop order due to the presence of a one-loop evanescent operator in the \(D\)-dimensional
formulation of Born-Infeld theory. One resolution to this anomaly comes in the form of introducing \(\mathcal{N}=4\) DBIVA superfields in the two-loop state-sum. This protects the \(S\)-matrix from anomalies by promoting the classically conserved \(U(1)\) duality to a supersymmetric \(R\)-symmetry. The evaluated integrals that contribute to the \(\mathcal{N}=4\) DBIVA two-loop amplitudes are provided in eq. (112) and eq. (113).
As mentioned at the close of section 2.5, higher-multiplicity tree-level abelianized open-superstring amplitudes in four dimensions violate \(U(1)\) duality through higher-derivative corrections beyond the leading DBIVA predictions. It would be fascinating if these higher derivative violations of \(U(1)\) duality in the OSS spectrum are precisely those needed to cancel anomalous behavior at the quantum level. To test this would require computing a six-point one-loop amplitude in supersymmetric DBIVA. Such a calculation could in principle be performed with the double copy using the color-dual integrands recently constructed in Ref. [34]. We see this as a natural future direction and application of the methods we have developed here.
Evanescence: Another resolution to the two-loop anomaly comes in the form of higher-derivative pure-photon counterterms. As we demonstrated in section 5.1, one must introduce a divergent evanescent operator at one-loop in order to absorb the two-loop anomaly. This is similar to the anomalies of pure Einstein-Hilbert gravity [145; 146], which vanish at one-loop since the \(R^{2}\) Gauss-Bonnet term is evanescent in \(D=4\), but which diverge at two-loop order [60; 61; 62]. We have constructed the higher derivative Lagrangian through \(\mathcal{O}(\alpha^{\prime 6})\) in eq. (111) that cancels the divergent part of the anomaly, along with a spanning set of counterterms in eq. (112) needed to absorb the finite part left over at two-loop. Given the complexity of gravity calculations at high loop order, further studies of multi-loop Born-Infeld amplitudes could serve as an accessible laboratory for studying evanescent effects beyond one-loop in double-copy constructible theories. To this end, we have used Hilbert series to count the number of four-photon evanescent operators to higher-order derivative corrections in section 5.3 to aid in future studies.
In addition to these themes woven throughout the text, we have, _en passant_, identified novel double-copy structures at two-loop. In section 4.3 we found that two-loop \(\mathcal{N}=4\) DBIVA amplitudes can be constructed via the double copy of color-dual \(\mathcal{N}=4\) sYM integrands with the generalized unitarity cuts of NLSM. This construction was \(D\)-dimensionally identical to the result obtained via generalized unitarity and maximal supersymmetric state sums. While not a proof, this provides strong evidence for the compatibility of NLSM with color-kinematics duality beyond one-loop. This result also serves as the first non-gravitational double-copy beyond one-loop, further supporting the consistency of color-kinematics duality at loop-level, which as of today remains a conjecture.
Moreover, recent work by the authors has demonstrated that color-kinematics duality can be used as a bootstrap principle to constrain higher derivative operators. This has been shown both for gauged NLSM amplitudes [19] and YM \(+F^{3}\) theory [18], the latter of which is particularly relevant for anomaly cancellation, and possibly UV completion, of both \(\mathcal{N}=4\) supergravity and the \(R^{3}\) modification to Einstein-Hilbert gravity [18]. Indeed, similar structure
has been recently identified in color-dual scalar theories [22; 23; 24]. This observation suggests a new paradigm that elevates color-kinematics duality from a mathematical correspondence capable of encoding IR symmetries, to a principle that probes signatures of UV physics captured by higher-derivative corrections. Guided by this new paradigm, a natural next step is to determine whether the anomaly cancelling counterterms of eq. (110) and eq. (111) source additional higher-loop counterterms constrained by double-copy consistency, in the spirit of [18]. We see this as an exciting future direction in further understanding the loop-level constraints imposed by the duality between color and kinematics.
Acknowledgments: The authors would like to thank Rafael Aoude, Alex Edison, Kezhu Guo, Kays Haddad, Ian Low, James Mangan, Frank Petriello, Paolo Pichini, Nia Robles, Radu Roiban, Aslan Seifi, and Suna Zekioglu for insightful conversations, related collaboration, and encouragement throughout the completion of this work. The authors additionally would like to thank James Mangan for incredibly thoughtful comments on earlier versions of the draft. The completion of this manuscript benefited from the hospitality of NORDITA during the workshop "Amplifying Gravity at All Scales". This work was supported by the DOE under contract DE-SC0015910 and by the Alfred P. Sloan Foundation. N.H.P. acknowledges the Northwestern University Amplitudes and Insight group, the Department of Physics and Astronomy, and Weinberg College for their generous support.
|
2309.10717 | Driven-dissipative four-mode squeezing of multilevel atoms in an optical
cavity | We utilize multilevel atoms trapped in a driven resonant optical cavity to
produce scalable multi-mode squeezed states for quantum sensing and metrology.
While superradiance or collective dissipative emission by itself has been
typically a detrimental effect for entanglement generation in optical cavities,
in the presence of additional drives it can also be used as an entanglement
resource. In a recent work [Phys. Rev. Lett. 132, 033601 (2024)], we described
a protocol for the dissipative generation of two-mode squeezing in the dark
state of a six-level system with only one relevant polarization. There we
showed that up to two quadratures can be squeezed. Here, we develop a
generalized analytic treatment to calculate the squeezing in any multilevel
system where atoms can collectively decay by emitting light into two
polarization modes in a cavity. We show that in this more general system up to
four spin squeezed quadratures can be obtained. We study how finite-size
effects constrain the reachable squeezing, and analytically compute the scaling
with $N$. Our findings are readily testable in current optical cavity
experiments with alkaline-earth-like atoms. | Bhuvanesh Sundar, Diego Barbarena, Ana Maria Rey, Asier Piñeiro Orioli | 2023-09-19T16:02:15Z | http://arxiv.org/abs/2309.10717v3 | # Driven-dissipative four-mode squeezing of multilevel atoms in an optical cavity
###### Abstract
We utilize multilevel atoms trapped in a driven resonant optical cavity to produce scalable multimode squeezed states for quantum sensing and metrology. While superradiance or collective dissipative emission by itself has been typically a detrimental effect for entanglement generation in optical cavities, in the presence of additional drives it can also be used as an entanglement resource. In a recent work [1], we described a protocol for the dissipative generation of two-mode squeezing in the dark state of a six-level system with only one relevant polarization. There we showed that up to two quadratures can be squeezed. Here, we develop a generalized analytic treatment to calculate the squeezing in any multilevel system where atoms can collectively decay by emitting light into two polarization modes in a cavity. We show that in this more general system up to four spin squeezed quadratures can be obtained. We study how finite-size effects constrain the reachable squeezing, and analytically compute the scaling with \(N\). Our findings are readily testable in current optical cavity experiments with alkaline-earth-like atoms.
## I Introduction
Creating many-body states of matter with large useful entanglement that can be harnessed for quantum sensing and metrology is a highly sought-after goal. Optical cavities are natural candidates for creating such types of entangled states since photon-mediated interactions between atoms allow for the generation of _collective_ (i.e., fully symmetric) quantum many-body states with entanglement that grows with the atom number \(N\). One particular type of entangled states that can be created in this way are spin squeezed states [2; 3; 4; 5; 6; 7; 8], i.e. states with a reduced variance along some spin direction.
Most of the effort so far has been focused on the generation of squeezing by restricting the dynamics to two levels per atom [9], using either coherent interactions [10; 11; 12; 13; 14], or dissipation [15; 16; 17; 18; 19]. However, the use of the full multilevel atomic structure can open up new opportunities for creating different types of collective entangled states [20; 21], such as _multimode_ squeezed states, i.e. states with two or more squeezed spin directions. Multimode squeezed states are not easily accessible in collective two-level systems, and they could be useful for multi-parameter estimation [22].
In Ref. [1], we proposed to use coherent driving and superradiance on multilevel systems with one relevant cavity polarization as a resource for generation of scalable two-mode squeezing. We also showed ways to store squeezed states in dark manifolds that are robust to collective dissipation [1; 23]. In this paper, we describe the dissipative squeezing dynamics for a wide range of multilevel structures in the case of two relevant cavity polarizations. We derive the condition for the system to be stable to quantum fluctuations, and show that up to four spin variables are typically squeezed in this more general system. We study how finite-size effects constrain the reachable squeezing, and analytically compute the scaling of the squeezing with \(N\).
The paper is outlined as follows. In Sec. II, we describe the proposed experimental setup and derive the effective master equation. In Sec. III, we describe the mean-field physics and stability to quantum fluctuations. In Sec. IV, we calculate the quantum correlations that develop between the atoms, and the emergent squeezing, during the driven-dissipative dynamics. In Sec. V and VI, we apply the techniques developed in prior sections to an effective two-level and multilevel system, respectively. We conclude in Sec. VII.
We note that Table 1 provides a reference list of all the symbols used in this paper, which the reader might find helpful.
## II System and initial state
We consider an ensemble of \(N\) multilevel atoms pinned in a deep optical lattice within an optical cavity [see Fig. 1(a)]. We consider the atoms to have a degenerate ground manifold with \(2F_{g}+1\) levels, labeled \(\left|g,m\right\rangle(-F_{g}\leq m\leq F_{g})\), and a long-lived degenerate excited manifold with \(2F_{e}+1\) levels labeled \(\left|e,m\right\rangle(-F_{e}\leq m\leq F_{e})\). Here, \(F_{g}\) and \(F_{e}\) are the spin in the ground and excited manifolds, and \(m\) denotes the angular momentum projection along the quantization axis. The ground-excited transition frequency is \(\omega\equiv\omega_{a}\).
The cavity is assumed to be resonant with the atomic transition and to support a pair of photon modes with degenerate angular frequency \(\omega_{c}=\omega_{a}=\omega\) and orthogonal polarizations, both perpendicular to the cavity axis [see Figs. 1(b-c)]. The atoms couple to these two cavity modes with single-photon Rabi frequency \(2g\). The cavity modes are also driven with a resonant laser of frequency \(\omega_{l}=\omega\). Additionally, photons can leak out of the cavity at a rate \(\kappa\).
If \(\kappa\gg g\sqrt{N}\), we can adiabatically eliminate the cavity photons and obtain an effective master equation for the
atoms only, \(\hbar\frac{d\rho}{dt}=-i[\hat{H}_{\rm drive},\rho]+\mathcal{L}[\rho]\). The effective Hamiltonian and dissipation terms for the atoms are
\[\hat{H}_{\rm drive}=\sum_{\alpha}\frac{\hbar\Omega_{\alpha}}{2}( \hat{D}_{\alpha}^{-}+\hat{D}_{\alpha}^{+}), \tag{1}\] \[\mathcal{L}[\rho]=\sum_{\alpha}\hbar\Gamma\left(\hat{D}_{\alpha}^ {-}\rho\hat{D}_{\alpha}^{+}-\frac{1}{2}\hat{D}_{\alpha}^{+}\hat{D}_{\alpha}^ {-}\rho-\frac{1}{2}\rho\hat{D}_{\alpha}^{+}\hat{D}_{\alpha}^{-}\right), \tag{2}\]
where \(\Omega_{\alpha}\) is the intracavity drive strength, and \(\hat{D}_{\alpha}^{+}\) is a collective atomic operator that excites atoms by absorbing an \(\alpha\)-polarized photon. If the \(\alpha\)-polarized photon has angular momentum projection \(l_{\alpha}=\pm 1,0\) along the quantization axis, then \(\hat{D}_{\alpha}^{+}=\sum_{i}\hat{d}_{i,\alpha}^{+}\) with \(i\) running over the atoms, and \(\hat{d}_{i,\alpha}^{+}=\sum_{m}C_{\alpha}^{m}\hat{s}_{m,i\alpha}^{+}\), where the sum runs over the ground state atomic levels. The single-particle spin-raising operator \(\hat{s}_{m,i\alpha}^{+}=\left|e,m+l_{\alpha}\right\rangle_{i}\left\langle g,m \right|_{i}\) drives a transition between the levels \(\left|g,m\right\rangle\) and \(\left|e,m+l_{\alpha}\right\rangle\), and \(C_{\alpha}^{m}=\left\langle F_{g},m;1,l_{\alpha}|F_{e},m+l_{\alpha}\right\rangle\) is the Clebsch-Gordan coefficient for this transition. \(\Gamma=4g^{2}/\kappa\) is the cavity-induced decay of an atom from the excited manifold.
The master equation \(\hbar\frac{d\rho}{dt}=-i[\hat{H}_{\rm drive},\rho]+\mathcal{L}[\rho]\) can also be written as:
\[\hbar\frac{d\rho}{dt}=\mathcal{L}^{\prime}[\rho]\equiv\sum_{\alpha}\hbar \Gamma\left(\hat{\mathscr{D}}_{\alpha}^{-}\rho\hat{\mathscr{D}}_{\alpha}^{+}- \frac{1}{2}\hat{\mathscr{D}}_{\alpha}^{+}\hat{\mathscr{D}}_{\alpha}^{-}\rho- \frac{1}{2}\rho\hat{\mathscr{D}}_{\alpha}^{+}\hat{\mathscr{D}}_{\alpha}^{-}\right) \tag{3}\]
where \(\hat{\mathscr{D}}_{\alpha}^{-}=\hat{D}_{\alpha}^{-}+i\Omega_{\alpha}/\Gamma\). Detailed derivations of Eqs. (1), (2), and (3) are given in Appendix A.
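The equivalence between the drive-plus-dissipator form, Eqs. (1)-(2), and the single displaced-jump-operator form, Eq. (3), can be checked numerically. The snippet below is a minimal sketch for a single two-level transition, with \(\hbar=1\) and illustrative values of \(\Omega\) and \(\Gamma\); it is only a consistency check, not part of the derivation.

```python
# Minimal numerical check (single two-level transition, hbar = 1) that the
# drive + dissipator form, Eqs. (1)-(2), equals the displaced-jump-operator
# form, Eq. (3). Omega and Gamma values are illustrative.
import numpy as np

Omega, Gamma = 0.7, 1.3
sm = np.array([[0., 1.], [0., 0.]])        # |g><e|, lowering operator D^-
sp_ = sm.conj().T                          # raising operator D^+
I2 = np.eye(2)

def anticomm(A, B):
    return A @ B + B @ A

def L_drive_plus_dissipator(rho):
    H = 0.5 * Omega * (sm + sp_)
    return (-1j * (H @ rho - rho @ H)
            + Gamma * (sm @ rho @ sp_ - 0.5 * anticomm(sp_ @ sm, rho)))

def L_displaced(rho):
    Dm = sm + 1j * (Omega / Gamma) * I2    # displaced jump operator
    Dp = Dm.conj().T
    return Gamma * (Dm @ rho @ Dp - 0.5 * anticomm(Dp @ Dm, rho))

# random density matrix
A = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rho = A @ A.conj().T
rho /= np.trace(rho)

print(np.allclose(L_drive_plus_dissipator(rho), L_displaced(rho)))  # True
```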
The basis states \(\left|g,m\right\rangle\) and \(\left|e,m\right\rangle\) are associated with a particular choice of the quantization axis. In this paper, we will either choose the quantization axis to be along the cavity axis, or perpendicular to the cavity axis, and we will explicitly specify this where necessary. Similarly, the cavity supports two polarizations of light, which we will choose to decompose into either linear modes or circular modes. Whenever we choose the atomic quantization axis along the cavity axis, we will choose the polarizations as left-handed (denoted \(\alpha=L\) and having \(l_{\alpha}=-1\)) and right-handed (denoted \(\alpha=R\) and having \(l_{\alpha}=+1\)) [Fig. 1(b)]. Whenever we choose the atomic quantization axis to be perpendicular to the cavity axis, we will define the polarizations as vertical (denoted \(\alpha=\Pi\) and having \(l_{\alpha}=0\)) and horizontal (denoted \(\alpha=\Sigma\), which includes both \(l_{\alpha}=1\) and \(-1\)) [Fig. 1(c)] and define \(\hat{D}_{\Sigma}^{+}=(\hat{D}_{L}^{+}+\hat{D}_{R}^{+})/\sqrt{2}\).
We initialize the atoms in a product of single-particle ground states \(\left|G_{\vec{\beta}}\right\rangle=\sum_{m}\beta_{m}\left|g,m\right\rangle\), and apply a laser pulse of duration \(\tau\) and polarization \(\alpha_{0}\) such that \(\hat{H}_{\rm drive}\tau/\hbar=\theta_{0}\hat{D}_{\alpha_{0}}^{x}\). This leaves the atoms in the coherent state
\[\left|\Psi(\alpha_{0},\theta_{0};\vec{\beta})\right\rangle=\exp(-i\theta_{0} \hat{D}_{\alpha_{0}}^{x})\left|G_{\vec{\beta}}\right\rangle^{\otimes N}, \tag{4}\]
where \(\hat{D}_{\alpha}^{x}=(\hat{D}_{\alpha}^{+}+\hat{D}_{\alpha}^{-})/2\), and \(\hat{D}_{\alpha}^{y}=(\hat{D}_{\alpha}^{+}-\hat{D}_{\alpha}^{-})/2i\). We will denote the polarization that is orthogonal to
\(\alpha_{0}\) as \(\alpha_{1}\).
The goal of this paper is to study the properties of the system at the steady state, i.e. \(\mathcal{L}^{\prime}[\rho_{\text{ss}}]=0\). In general, our multilevel system contains a continuum of steady states, since the steady state realized by the dynamics as \(t\rightarrow\infty\) depends on the choice of initial state and the parameter \(\Omega_{\alpha}/N\Gamma\). To constrain the number of possibilities, we will only consider initial states \(\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}\) as given in Eq. (4) for which the single-particle observables \(\langle\hat{O}\rangle\) are approximately stationary from the beginning, and focus on the behavior of the fluctuations captured by higher-order observables.
## III Mean-field approximation
We discuss first the properties of the steady state in a mean-field (MF) approximation. For collective systems, MF assumes \(\langle\hat{O}_{1}\hat{O}_{2}\rangle\approx\langle\hat{O}_{1}\rangle\langle\hat{O}_{2}\rangle\) for any set of collective single-body spin operators \(\hat{O}_{1}\) and \(\hat{O}_{2}\)[24]. This approximation works well when \(N\) is large and can be seen as the leading order expansion in powers of \(1/N\). Under this approximation, the master equation [Eq. (3)] for any collective single-body spin variable \(\langle\hat{O}\rangle\) reduces to
\[\frac{d}{dt}\langle\hat{O}\rangle_{\text{MF}}\approx\frac{\Gamma}{2}\sum_{\alpha}\left(\langle\hat{\mathscr{D}}^{+}_{\alpha}\rangle_{\text{MF}}\langle[\hat{O},\hat{\mathscr{D}}^{-}_{\alpha}]\rangle_{\text{MF}}+\langle[\hat{\mathscr{D}}^{+}_{\alpha},\hat{O}]\rangle_{\text{MF}}\langle\hat{\mathscr{D}}^{-}_{\alpha}\rangle_{\text{MF}}\right). \tag{5}\]
Here, we used that the commutator \([\hat{\mathscr{D}}^{\pm}_{\alpha},\hat{O}]\) is a collective single-body spin operator.
### Mean-field stationary state
A sufficient condition for making all spin variables stationary at the mean-field level is to choose the drive \(\Omega_{\alpha}\) and the initial state \(\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}\) such that \(\langle\Psi(\alpha_{0},\theta_{0};\vec{\beta})|\hat{\mathscr{D}}^{-}_{\alpha}|\Psi(\alpha_{0},\theta_{0};\vec{\beta})\rangle=0\), see Eq. (5). Satisfying this requires two conditions:
\[\langle\Psi(\alpha_{0},\theta_{0};\vec{\beta})|\hat{D}^{x}_{\alpha}|\Psi(\alpha_{0},\theta_{0};\vec{\beta})\rangle =0, \tag{6}\] \[\langle\Psi(\alpha_{0},\theta_{0};\vec{\beta})|\hat{D}^{y}_{\alpha}|\Psi(\alpha_{0},\theta_{0};\vec{\beta})\rangle =\Omega_{\alpha}/\Gamma. \tag{7}\]
Throughout this work, we will choose \(\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}\) and \(\Omega_{\alpha}\) to satisfy Eqs. (6) and (7), i.e. we only consider initial states that are stationary states at the mean-field level. Moreover, we will later consider only examples where the continuous drive has the same polarization as the preparation pulse, i.e. \(\Omega_{\alpha_{1}}=0\), but the discussion in the following sections does not assume this.
### Stability of the mean-field state
The mean-field stationary state may be stable or unstable to quantum fluctuations. If it is stable, the fluctuations remain small and the mean-field state \(\rho_{\text{MF}}=\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}\bra{\Psi( \alpha_{0},\theta_{0};\vec{\beta})}\) turns out to be a good approximation to the full quantum steady state \(\rho_{\text{ss}}\), which satisfies [25; 26; 27]
\[\hat{\mathscr{D}}^{+}_{\alpha}\hat{\mathscr{D}}^{-}_{\alpha}\rho_{\text{ss}} \approx 0. \tag{8}\]
Note that the approximate sign '\(\approx\)' means that the above expression is zero up to higher-order corrections in \(1/N\) which are qualitatively irrelevant for our purposes. In our analytical approximation, we have \(\langle\hat{\mathscr{D}}^{+}_{\alpha}\hat{\mathscr{D}}^{-}_{\alpha}\rangle=0\) at the steady state, as is shown in App. B.
In the stable phase, the dynamics of \(\langle\hat{\mathscr{D}}^{+}_{\alpha}\hat{\mathscr{D}}^{-}_{\alpha}\rangle\) is well captured by making an approximation where we set the third-order cumulant to zero, \(\langle\hat{O}_{1}\hat{O}_{2}\rangle\langle\hat{O}_{3}\rangle+\langle\hat{O}_{1}\hat{O}_{3}\rangle\langle\hat{O}_{2}\rangle+\langle\hat{O}_{2}\hat{O}_{3}\rangle\langle\hat{O}_{1}\rangle-2\langle\hat{O}_{1}\rangle\langle\hat{O}_{2}\rangle\langle\hat{O}_{3}\rangle-\langle\hat{O}_{1}\hat{O}_{2}\hat{O}_{3}\rangle\approx 0\). Under this approximation and further assuming the mean-field stationary condition [Eqs. (6) and (7)] is also met, the master equations for \(\langle\hat{\mathscr{D}}^{+}_{\alpha_{0}}\hat{\mathscr{D}}^{-}_{\alpha_{0}}\rangle\) and \(\langle\hat{\mathscr{D}}^{+}_{\alpha_{1}}\hat{\mathscr{D}}^{-}_{\alpha_{1}}\rangle\) couple to the equations for \(\langle\hat{\mathscr{D}}^{+}_{\alpha_{0}}\hat{\mathscr{D}}^{-}_{\alpha_{1}}\rangle\) and \(\langle\hat{\mathscr{D}}^{+}_{\alpha_{1}}\hat{\mathscr{D}}^{-}_{\alpha_{0}}\rangle\). The coupled equations are
Figure 1: (a) An ensemble of atoms trapped in a deep lattice in an optical cavity, with the cavity frequency on resonance with the atomic transition from the ground states to the excited states (with spins \(F_{g}\) and \(F_{e}\)), \(\omega_{c}=\omega_{a}\). The cavity is driven by a resonant laser, and the atoms superradiantly decay at rate \(\Gamma\) from the excited to the ground states. (b,c) Transitions driven by the collective atomic excitation operators. (b) illustrates the \(L^{\pm}\) and \(R^{\pm}\) transitions due to coupling to a left or right circularly polarized photon when we choose the quantization axis to be parallel to the cavity axis. (c) illustrates the \(\Pi^{\pm}\) and \(\Sigma^{\pm}\) transitions due to coupling to a vertically or horizontally polarized photon when we choose the quantization axis to be perpendicular to the cavity axis.
\[\partial_{t}\left(\begin{array}{c}\langle\hat{\mathscr{D}}_{\alpha_{0}}^{+}\hat{\mathscr{D}}_{\alpha_{0}}^{-}\rangle\\ \langle\hat{\mathscr{D}}_{\alpha_{0}}^{+}\hat{\mathscr{D}}_{\alpha_{1}}^{-}\rangle\\ \langle\hat{\mathscr{D}}_{\alpha_{1}}^{+}\hat{\mathscr{D}}_{\alpha_{0}}^{-}\rangle\\ \langle\hat{\mathscr{D}}_{\alpha_{1}}^{+}\hat{\mathscr{D}}_{\alpha_{1}}^{-}\rangle\end{array}\right)\approx-\Gamma\left(\begin{array}{cccc}\lambda_{00}&\frac{\lambda_{01}}{2}&\frac{\lambda_{10}}{2}&0\\ \frac{\lambda_{10}}{2}&\frac{\lambda_{00}+\lambda_{11}}{2}&0&\frac{\lambda_{10}}{2}\\ \frac{\lambda_{01}}{2}&0&\frac{\lambda_{00}+\lambda_{11}}{2}&\frac{\lambda_{01}}{2}\\ 0&\frac{\lambda_{01}}{2}&\frac{\lambda_{10}}{2}&\lambda_{11}\end{array}\right)\left(\begin{array}{c}\langle\hat{\mathscr{D}}_{\alpha_{0}}^{+}\hat{\mathscr{D}}_{\alpha_{0}}^{-}\rangle\\ \langle\hat{\mathscr{D}}_{\alpha_{0}}^{+}\hat{\mathscr{D}}_{\alpha_{1}}^{-}\rangle\\ \langle\hat{\mathscr{D}}_{\alpha_{1}}^{+}\hat{\mathscr{D}}_{\alpha_{0}}^{-}\rangle\\ \langle\hat{\mathscr{D}}_{\alpha_{1}}^{+}\hat{\mathscr{D}}_{\alpha_{1}}^{-}\rangle\end{array}\right), \tag{9}\]
where we denoted \(\lambda_{ij}=\langle[\hat{\mathscr{D}}_{\alpha_{i}}^{-},\hat{\mathscr{D}}_{\alpha_{j}}^{+}]\rangle\). The value of \(\lambda_{ij}\) is a constant at leading order in \(N\), \(\lambda_{ij}\approx\langle[\hat{\mathscr{D}}_{\alpha_{i}}^{-},\hat{\mathscr{D}}_{\alpha_{j}}^{+}]\rangle_{\rm MF}\), since it is a single-particle observable and is therefore stationary. Note that Eq. (9) is independent of \(\Omega\), which means that the stability is determined by the light emission properties alone.
Generically, demanding that \(\langle\hat{\mathscr{D}}_{\alpha_{0}}^{+}\hat{\mathscr{D}}_{\alpha_{0}}^{-}\rangle\) and \(\langle\hat{\mathscr{D}}_{\alpha_{1}}^{+}\hat{\mathscr{D}}_{\alpha_{1}}^{-}\rangle\) decay to zero, Eq. (9) requires that \(\langle\hat{\mathscr{D}}_{\alpha_{0}}^{+}\hat{\mathscr{D}}_{\alpha_{1}}^{-}\rangle\) and \(\langle\hat{\mathscr{D}}_{\alpha_{1}}^{+}\hat{\mathscr{D}}_{\alpha_{0}}^{-}\rangle\) also decay to zero, since they are coupled. This means that the matrix in Eq. (9) needs to be positive definite. We show in Appendix B that this condition is equivalent to requiring that the following smaller matrix is positive definite,
\[\mathcal{H}=\left(\begin{array}{cc}\lambda_{00}&\frac{\lambda_{01}+\lambda_ {10}}{2}\\ \frac{\lambda_{01}+\lambda_{10}}{2}&\lambda_{11}\\ \end{array}\right)\succ 0. \tag{10}\]
The time scale of the dynamics due to Eq. (9) is \(O(1/N\Gamma)\). Corrections beyond the cumulant approximation drive dynamics on a time scale of \(O(1/\sqrt{N}\Gamma)\gg 1/N\Gamma\) [see Appendix B.1].
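The reduction from the \(4\times4\) matrix of Eq. (9) to the \(2\times2\) Hessian of Eq. (10) can be illustrated numerically. The sketch below draws random real values of \(\lambda_{ij}\) with \(\lambda_{01}=\lambda_{10}\) (as in the examples considered later), so both matrices are symmetric; it is an illustration of the statement, not a proof.

```python
# Numerical illustration that the 4x4 stability matrix of Eq. (9) has only
# positive eigenvalues exactly when the 2x2 Hessian of Eq. (10) is positive
# definite. The lambda_ij are drawn at random, real and with l01 = l10;
# draws too close to criticality are skipped.
import numpy as np

rng = np.random.default_rng(0)

def stability_matrices(l00, l01, l10, l11):
    M = np.array([[l00,    l01/2,          l10/2,          0    ],
                  [l10/2, (l00 + l11)/2,   0,              l10/2],
                  [l01/2,  0,             (l00 + l11)/2,   l01/2],
                  [0,      l01/2,          l10/2,          l11  ]])
    H = np.array([[l00,            (l01 + l10)/2],
                  [(l01 + l10)/2,   l11         ]])
    return M, H

for _ in range(1000):
    l00, l11, c = rng.uniform(-1, 1, size=3)
    M, H = stability_matrices(l00, c, c, l11)
    eig_H = np.linalg.eigvalsh(H)
    if np.min(np.abs(eig_H)) < 1e-3:          # skip near-critical draws
        continue
    assert np.all(np.linalg.eigvalsh(M) > 0) == np.all(eig_H > 0)

print("4x4 stability agrees with the 2x2 Hessian condition in all samples.")
```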
### Superradiance potential
We now introduce the concept of a superradiance potential as a visual aid to understand Eq. (10). For any state \(\ket{\Psi}\), we define the potential \(V(\zeta_{0},\zeta_{1};\Psi)\) as
\[V(\zeta_{0},\zeta_{1};\Psi)=\frac{1}{N}\bra{\Psi}e^{i(\zeta_{0}\hat{\mathscr{D}}_{\alpha_{0}}^{x}+\zeta_{1}\hat{\mathscr{D}}_{\alpha_{1}}^{x})}\,\hat{n}_{e}\,e^{-i(\zeta_{0}\hat{\mathscr{D}}_{\alpha_{0}}^{x}+\zeta_{1}\hat{\mathscr{D}}_{\alpha_{1}}^{x})}\ket{\Psi}, \tag{11}\]
where \(\zeta_{i}\) have a similar interpretation to \(\theta_{0}\) in Eq. (4), and \(\hat{n}_{e}=\sum_{i}^{N}\sum_{m}\ket{e,m}_{i}\bra{e,m}_{i}\) is the occupation in the excited states.
The matrix \(\mathcal{H}\) in Eq. (10) is proportional to the Hessian matrix of \(V\) evaluated at \(\zeta_{0}=\zeta_{1}=0\) and for \(\ket{\Psi}=\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}\) [see Appendix B]:
\[\mathcal{H}=2N\left(\begin{array}{cc}\frac{\partial^{2}V}{\partial\zeta_{0}^ {2}}&\frac{\partial^{2}V}{\partial\zeta_{0}\partial\zeta_{1}}\\ \frac{\partial^{2}V}{\partial\zeta_{0}\partial\zeta_{1}}&\frac{\partial^{2}V}{ \partial\zeta_{1}^{2}}\\ \end{array}\right)_{\zeta_{0}=\zeta_{1}=0}. \tag{12}\]
The stability condition, i.e. the requirement that both the eigenvalues of \(\mathcal{H}\) are positive, therefore corresponds to the requirement that the potential has positive curvature along all \((\zeta_{0},\zeta_{1})\) directions at \(\zeta_{0}=\zeta_{1}=0\). This condition generalizes the stability condition for single-polarization potential described in Refs. [1; 23] to consider fluctuations along arbitrary polarizations \((\alpha_{0},\alpha_{1})\).
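For a concrete single-polarization check (anticipating the two-level example of Sec. V, whose superradiance potential is quoted there), the potential \(V(\zeta,0;\Psi)=\sin^{2}\frac{\zeta+\theta_{0}}{2}\) gives a single nontrivial Hessian entry
\[\mathcal{H}=2N\,\frac{\partial^{2}V}{\partial\zeta^{2}}\Big{|}_{\zeta=0}=N\cos\theta_{0},\]
which is positive for \(|\theta_{0}|<\pi/2\) and vanishes at \(\theta_{0}=\pm\pi/2\), matching the stable region and critical points of Fig. 2(b).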
As illustrative examples, we plot the eigenvalues of the Hessian matrix for two specific parameter regions of the initial state \(\ket{\Psi}=\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}\) in Fig. 2. We choose an effective two-level system in Fig. 2(a-b), and \(F_{g}=F_{e}=3/2\) in Fig. 2(c-d) for concreteness. We will calculate the spin squeezing in these examples later in Secs. V and VI. Our arguments, however, are general and work for any \(F_{g},F_{e}\) and \(\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}\).
In the first example [Fig. 2(a-b)], we consider an effective two-level system (\(F_{g}=F\), \(F_{e}=F+1\)),
Figure 2: (a) An ensemble of effective two-level atoms is driven by a right-circularly polarized laser with strength \(\Omega\) and superradiantly decays at rate \(\Gamma\) to the ground state. The quantization axis is parallel to the cavity axis. (b) Negative of the curvature of the potential, \(-d^{2}V(\zeta,0;\Psi)/d\zeta^{2}\big{|}_{\zeta=0}\). There are two critical points, at \(\theta_{0}=\pm\pi/2\), indicated by two dots. The thick black line marks the stable region. (c) An ensemble of eight-level atoms is driven by a \(\Sigma\)-polarized laser, and superradiantly emits \(\Sigma\)- and \(\Pi\)-polarized light. The quantization axis is perpendicular to the cavity axis. (d) Regions of stability and instability versus the angle \(\beta\) in the ground state manifold and state preparation pulse area \(\theta_{0}\) [see text]. The system is stable to emission of both polarizations in regions marked 1, unstable to emission of \(\Sigma\)-polarized light in region 2, unstable to emission of \(\Pi\)-polarized light in region 3, and unstable to emission of both in region 4. Green lines and dots mark critical manifolds and points where the system crosses from a stable/unstable region to an unstable/stable region for each polarization, and cyan and red lines show the critical manifolds where emission of only one polarization crosses from stable to unstable.
realized with the levels \(\left|g,F\right\rangle\) and \(\left|e,F+1\right\rangle\) and we choose the quantization axis as the cavity axis [Fig. 1(b)]. We initialize the atoms in \(\left|g,F\right\rangle\) and drive the system with right-circularly polarized light, thus preparing \(\left|\Psi(R,\theta_{0};\vec{\beta})\right\rangle=\exp(-i\theta_{0}\hat{D}_{R}^{x})\left|g,F\right\rangle^{\otimes N}\), with parameters that satisfy Eqs. (6) and (7). In this case, only the right-handed polarization is relevant, and therefore there is only one nontrivial eigenvalue for \(\mathcal{H}\), plotted in Fig. 2(b). The stable region corresponds to \(\left|\theta_{0}\right|\leq\pi/2\) (marked by a thick black line).
In the second example [Fig. 2(c-d)], we consider an eight-level system with \(F_{g}=F_{e}=3/2\) where all levels and both cavity polarizations are relevant. We initialize the atoms in \(\left|G_{\beta}\right\rangle=\cos\frac{\beta}{2}\left|g,-\frac{3}{2}\right\rangle+\sin\frac{\beta}{2}\left|g,\frac{1}{2}\right\rangle\). For simplicity, we choose the quantization axis to be perpendicular to the cavity axis [Fig. 1(c)]. We then drive the system with a \(\Sigma\)-polarized laser: \(\left|\Psi(\Sigma,\theta_{0};\beta)\right\rangle=e^{-i\theta_{0}\hat{D}_{\Sigma}^{x}}\left|G_{\beta}\right\rangle^{\otimes N}\). This choice of quantization axis makes \(\mathcal{H}\) diagonal [28]. Figure 2(d) shows the regions where the two diagonal elements are positive or negative, with the stable phase (in black) being the one where both are positive.
The superradiance potential has further significance beyond the stability criterion. To see this, note that the drive strength required to maintain the system stationary at the mean-field level [Eq. (7)] can be written as
\[\Omega_{\alpha}=N\Gamma\frac{\partial V}{\partial\zeta_{\alpha}}\big{|}_{ \zeta_{0}=\zeta_{1}=0}. \tag{13}\]
Thus, the slope of \(V\) helps determine the location of MF steady states. In the examples above, \(\Omega_{L}\propto dV/d\zeta_{L}=0\) for the two-level example of Figs. 2(a-b), and \(\Omega_{\Pi}\propto dV/d\zeta_{\Pi}=0\) for the multilevel example of Figs. 2(c-d). Furthermore, we showed in previous works [1; 23] that when there is only one relevant polarization, the superradiance potential fully describes the mean-field time evolution [29]. However, we note that in general a two-parameter potential \(V(\zeta_{0},\zeta_{1})\) cannot describe the mean-field dynamics when both polarizations are relevant, because of the noncommutativity of \(\hat{D}_{\alpha_{0}}^{\pm}\) and \(\hat{D}_{\alpha_{1}}^{\pm}\).
### Critical manifolds
Critical manifolds are manifolds where one or both eigenvalues go to zero, i.e. \(\det(\mathcal{H})=0\). Typically, these manifolds separate regions where one or both eigenvalues of \(\mathcal{H}\) have opposite signs, i.e. regions that are stable and unstable to either one or both polarizations. The critical manifolds will be crucial when we study dissipative squeezing generation, because the system acquires scalable squeezing near the critical regions.
The critical points for the two-level system in Fig. 2(a) are at \(\theta_{0}=\pm\frac{\pi}{2}\) (black dots in Fig. 2(b)). As we will show in Sec. V, the system acquires scalable squeezing in _one_ mode near these critical points.
For multilevel systems where two polarizations are relevant, the system can be critical to emission in one polarization, i.e. one of the eigenvalues of \(\mathcal{H}\) is zero, indicated by red or blue dashed lines in Fig. 2(d), or the system can be critical to emission in two polarizations, i.e. \(\mathcal{H}=0\), indicated by green dashed lines or dots. In Fig. 2(d), there are two lines and four points in the \((\beta,\theta_{0})\) plane where \(\mathcal{H}=0\). The system is stable to emission of both polarizations in regions marked 1, unstable to emission of \(\Sigma\)-polarized light in region 2, unstable to emission of \(\Pi\)-polarized light in region 3, and unstable to emission of both in region 4. For the multilevel example, we will show that it is possible to generate scalable squeezing in _four_ different quadratures near critical lines between regions 1 and 4, whereas only _two_ squeezed directions can be created close to critical lines between regions 1 and 2 or 1 and 3.
We emphasize that the only region where the MF state is a good approximation of the full quantum steady state is where the system is stable to both polarizations, i.e. the black region in Fig. 2(d). Outside this region, quantum fluctuations destabilize this MF state and drive it towards a mixture of stable steady states. Calculating the steady state for an initial state in the unstable region is outside the scope of this paper. In the remainder of this work, we will focus on the properties of the steady state in the stable region.
It is also reasonable to ask what the critical lines are separating exactly. In the two-level system it is well-known that the critical points are associated to a normal to superradiant phase transition [25; 26; 27; 30; 31; 32; 33; 34; 35]. The stable region corresponds to the system being in the superradiant phase, where the quantum steady state is close to the MF state. The normal phase corresponds to the case where \(\Omega_{\alpha}/N\Gamma\) is large enough that the system oscillates forever in the MF approximation. In the multilevel system, a similar normal to superradiant phase transition can take place, but other possibilities can emerge as well, such as superradiant to superradiant transitions. This will be investigated in future work.
## IV Quantum correlations
Even though we initialize the system in a mean-field steady state, the quantum fluctuations around this state are not stationary. In particular, we will show next that the dissipative quantum dynamics towards the full quantum steady state leads to the formation of entanglement between the atoms manifested in the form of spin squeezing. We treat these quantum fluctuations as a small perturbation around the mean-field state via bosonic degrees of freedom in the large-\(N\) approximation, which allows us to analytically compute the value of the variances in the bosonic quadratures at the steady state.
This calculation proceeds in four steps, described in detail in Secs. IV.1-IV.4, and exemplified in Secs. V and VI. First, we define an exact map between collective spin operators and Schwinger bosons. Second, we make the master equation quadratic in boson operators by making a large-\(N\) approximation. Third, we diagonalize the master equation by making a Bogoliubov transformation. And fourth, we solve the master equation. In this way, we demonstrate the presence of spin squeezing. We determine the finite-size scaling of the best achievable squeezing by including higher order terms in the large-\(N\) approximation.
### Schwinger bosons
We can define \(\ell\) Schwinger bosons for a collective system of atoms with \(\ell\) relevant internal atomic levels, i.e. those levels that participate in the dynamics. The most straightforward way to set the Schwinger bosons is by defining bosonic operators \(\hat{a}_{g(e),m}\) which annihilate a particle in \(\ket{g(e),m}\). However, this choice is inconvenient because the mean-field state \(\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}=\ket{\psi(\alpha_{0},\theta_{0}; \vec{\beta})}^{\otimes N}\) is in a superposition of states created by \(\hat{a}_{g(e),m}\). A more convenient choice is to define Schwinger boson operators \(\hat{c}_{\mu}\) which annihilate particles in a different orthonormal manifold of states \(\ket{\mu},\ \mu\in[0,\ell-1]\), where \(\ket{\mu=0}\) is defined as \(\ket{0}\equiv\ket{\psi(\alpha_{0},\theta_{0};\vec{\beta})}\). We call these Schwinger c-bosons. The basis states \(\ket{\mu}\) are related to \(\ket{g(e),m}\) by a unitary transformation. Our main results do not depend on this basis choice, but choosing \(\ket{0}\) in this way simplifies the calculation. For brevity, we will hereafter drop the symbols \(\alpha_{0}\), \(\theta_{0}\), and \(\vec{\beta}\) from \(\hat{c}_{\mu}(\alpha_{0},\theta_{0};\vec{\beta})\).
Any collective spin operator can be formally expressed in this basis in a matrix form. For example, we can write the jump operators as \(\hat{\mathscr{D}}_{\alpha}^{-}=\sum_{i\mu\nu}g_{\alpha,\mu\nu}\ket{\mu}_{i} \bra{\nu}_{i}\), where \(g_{\alpha,\mu\nu}\) are their matrix elements. In terms of the Schwinger c-bosons, the jump operators then have a quadratic form,
\[\hat{\mathscr{D}}_{\alpha}^{-}=\sum_{\mu,\nu}g_{\alpha,\mu\nu}\hat{c}_{\mu}^{ \dagger}\hat{c}_{\nu}. \tag{14}\]
Due to the mean-field stationary state conditions, Eqs. (6) and (7), we have that the coefficient \(g_{\alpha,00}=\frac{1}{N}\bra{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}\hat{ \mathscr{D}}_{\alpha}^{-}\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}=0\).
### The Holstein-Primakoff approximation
If the quantum state \(\rho\) is close to the mean-field state \(\rho_{\text{MF}}\), which is a macroscopically occupied state of the \(\hat{c}_{0}\) operator, we can assume that \(\rho\) also has macroscopic occupation for \(\hat{c}_{0}\), i.e., \(\langle\hat{c}_{0}^{\dagger}\hat{c}_{0}\rangle\simeq N\) at all times. Therefore, we make the generalized Holstein-Primakoff (HP) approximation \(\hat{c}_{0}\approx\sqrt{N}\)[36]. Under this approximation, the jump operators simplify to
\[\hat{\mathscr{D}}_{\alpha}^{-}=\sqrt{N}\sum_{\mu>0}(x_{\alpha,\mu}\hat{X}_{ \mu}^{c}+iy_{\alpha,\mu}\hat{Y}_{\mu}^{c})+\sum_{\mu,\nu>0}g_{\alpha,\mu\nu} \hat{c}_{\mu}^{\dagger}\hat{c}_{\nu}, \tag{15}\]
where \(\hat{X}_{\mu}^{c}=\frac{\hat{c}_{\mu}+\hat{c}_{\mu}^{\dagger}}{\sqrt{2}}\) and \(\hat{Y}_{\mu}^{c}=\frac{\hat{c}_{\mu}-\hat{c}_{\mu}^{\dagger}}{i\sqrt{2}}\) are the real and imaginary parts of \(\hat{c}_{\mu}\) and are analogous to position and momentum quadratures. The coefficients \(x_{\alpha,\mu}\) and \(y_{\alpha,\mu}\) are given by
\[x_{\alpha,\mu}+y_{\alpha,\mu} =\sqrt{2}\bra{0}_{j}\,\hat{d}_{j,\alpha}^{-}\ket{\mu}_{j},\] \[x_{\alpha,\mu}-y_{\alpha,\mu} =\sqrt{2}\bra{\mu}_{j}\,\hat{d}_{j,\alpha}^{-}\ket{0}_{j}. \tag{16}\]
As we will show, the \(x_{\alpha,\mu}\) and \(y_{\alpha,\mu}\) terms in \(\hat{\mathscr{D}}_{\alpha}^{-}\) determine the leading-order [\(O(1)\)] behavior of the quantum correlations, while the \(g_{\alpha,\mu\nu>0}\) terms lead to finite-size corrections of order \(O(1/N)\). For brevity, we will collect the components of \(x_{\alpha,\mu}\) and \(y_{\alpha,\mu}\) into the vectors \(\vec{x}_{\alpha}\) and \(\vec{y}_{\alpha}\), and the components \(g_{\alpha,\mu\nu>0}\) into the matrix \(\overleftrightarrow{g}_{\alpha}\).
While the values of \(x_{\alpha,\mu}\) and \(y_{\alpha,\mu}\) are related to matrix elements of \(\hat{\mathscr{D}}_{\alpha}^{-}\), and are therefore basis-dependent, physically relevant quantities such as the curvature of the superradiance potential, critical points, and spin squeezing do not depend on the basis choice. Instead, they only depend on the physical parameters \((\alpha_{0},\theta_{0};\vec{\beta})\). For example, to leading order the Hessian matrix \(\mathcal{H}\) can be written as
\[\mathcal{H}=\] \[N\left(\begin{array}{cc}2\text{Re}(\vec{x}_{\alpha_{0}}^{*} \cdot\vec{y}_{\alpha_{0}})&\text{Re}(\vec{x}_{\alpha_{0}}^{*}\cdot\vec{y}_{ \alpha_{1}}+\vec{x}_{\alpha_{1}}^{*}\cdot\vec{y}_{\alpha_{0}})\\ \text{Re}(\vec{x}_{\alpha_{0}}^{*}\cdot\vec{y}_{\alpha_{1}}+\vec{x}_{\alpha_{1 }}^{*}\cdot\vec{y}_{\alpha_{0}})&2\text{Re}(\vec{x}_{\alpha_{1}}^{*}\cdot \vec{y}_{\alpha_{1}})\end{array}\right). \tag{17}\]
Basis rotations lead to \(\text{SU}(\ell-1)\) rotations of \(\vec{x}_{\alpha}\) and \(\vec{y}_{\alpha}\), and their dot products are invariant under \(\text{SU}(\ell-1)\) rotations. For our examples, \(x_{\alpha,\mu}\) and \(y_{\alpha,\mu}\) are real, and as such we will set them to be real hereafter.
### Bogoliubov transformation
To leading order in \(N\), i.e. ignoring \(g_{\alpha,\mu\nu>0}\) in Eq. (15), the jump operators are linear and the master equation is quadratic in the Schwinger bosonic variables in the HP approximation. Thus, we can analytically solve the system using a Bogoliubov transformation. For this purpose, note first that at this order, the jump operator \(\hat{\mathscr{D}}_{\alpha}^{-}\simeq\sqrt{N}\sum_{\mu>0}(x_{\alpha,\mu}\hat{X}_{\mu}^{c}+iy_{\alpha,\mu}\hat{Y}_{\mu}^{c})\) can be interpreted as being proportional to a single annihilation operator. Specifically, we define two Bogoliubov operators \(\hat{b}_{\alpha_{0}}\) and \(\hat{b}_{\alpha_{1}}\) as
\[\hat{b}_{\alpha}\equiv\frac{1}{\sqrt{2N\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}}}\hat{\mathscr{D}}_{\alpha}^{-},\quad\alpha\in\{\alpha_{0},\alpha_{1}\}. \tag{18}\]
We call them Bogoliubov b-bosons. Importantly, the steady-state condition \(\hat{\mathscr{D}}_{\alpha}^{-}\rho=0\) implies that the steady state is the vacuum of \(\hat{b}_{\alpha_{0}}\) and \(\hat{b}_{\alpha_{1}}\).
The commutators \([\hat{b}_{\alpha},\hat{b}_{\alpha^{\prime}}^{\dagger}]\) are proportional to the Hessian matrix elements in Eq. (10). The normalization factor in Eq. (18) ensures that \([\hat{b}_{\alpha},\hat{b}_{\alpha}^{\dagger}]=1\). If \(\mathcal{H}\) is not diagonal, then \([\hat{b}_{\alpha_{0}},\hat{b}_{\alpha_{1}}^{\dagger}]\) is nonzero. In this case, a convenient
method is to first find the basis of atomic jump operators that diagonalizes \(\mathcal{H}\), i.e. a convenient polarization basis, and then define Bogoliubov operators corresponding to those jump operators. Thus, without loss of generality, we can assume that \([\hat{b}_{\alpha_{0}},\hat{b}_{\alpha_{1}}^{\dagger}]=0\) and the Hessian is diagonal.
Since there are \((\ell-1)\) Schwinger c-bosons \(\hat{c}_{\mu}\) with \(\mu>0\), there have to be \((\ell-3)\) other independent Bogoliubov b-bosons, \(\hat{b}_{\gamma},\gamma\in[1,\ell-3]\), in addition to \(\hat{b}_{\alpha_{0}}\) and \(\hat{b}_{\alpha_{1}}\). These Bogoliubov operators commute with \(\hat{\mathscr{D}}_{\alpha}^{\pm}\), and correspondingly are conserved during the evolution to the steady state in this approximation. Despite their dynamics being trivial, they can still play an important role in shaping the dynamics of the Schwinger c-bosons, as will be explained in Sec. VI.
### Calculating the quantum correlations
Starting from \(\ket{\Psi(\alpha_{0},\theta_{0};\vec{\beta})}\), which is the vacuum of \(\hat{c}_{\mu>0}\), the driven-dissipative dynamics leads to the development of correlations between the bosonic fields \(\hat{c}_{\mu>0}\). We quantify the correlations via the covariance matrix \(\Xi^{c}=\left(\begin{array}{cc}\Xi^{c}_{XX}&\Xi^{c}_{XY}\\ (\Xi^{c}_{XY})^{T}&\Xi^{c}_{YY}\end{array}\right)\), where \(\Xi^{c}_{XX},\Xi^{c}_{XY}\), and \(\Xi^{c}_{YY}\) are the covariance matrices for the Schwinger c-boson variables \(\hat{X}^{c}\) and \(\hat{Y}^{c}\):
\[(\Xi^{c}_{XX})_{\mu\nu} =\langle\hat{X}^{c}_{\mu}\hat{X}^{c}_{\nu}+\hat{X}^{c}_{\nu}\hat{X}^{c}_{\mu}\rangle-2\langle\hat{X}^{c}_{\mu}\rangle\langle\hat{X}^{c}_{\nu}\rangle,\] \[(\Xi^{c}_{XY})_{\mu\nu} =\langle\hat{X}^{c}_{\mu}\hat{Y}^{c}_{\nu}+\hat{Y}^{c}_{\nu}\hat{X}^{c}_{\mu}\rangle-2\langle\hat{X}^{c}_{\mu}\rangle\langle\hat{Y}^{c}_{\nu}\rangle,\] \[(\Xi^{c}_{YY})_{\mu\nu} =\langle\hat{Y}^{c}_{\mu}\hat{Y}^{c}_{\nu}+\hat{Y}^{c}_{\nu}\hat{Y}^{c}_{\mu}\rangle-2\langle\hat{Y}^{c}_{\mu}\rangle\langle\hat{Y}^{c}_{\nu}\rangle, \tag{19}\]
and \(\mu,\nu>0\). At \(t=0\), \(\Xi^{c}\) is the identity matrix.
In the HP approximation, the bosonic operators \(\hat{c}_{\mu}\) approximate the spin operators \(\hat{\Lambda}_{\mu}=\frac{1}{\sqrt{N}}\sum_{i}\ket{0}_{i}\bra{\mu}_{i}\), which are \(2(\ell-1)\) spin variables perpendicular to the collective spin vector. Therefore, the matrix elements of \(\Xi^{c}\) are approximately equal to covariances of the real and imaginary parts of \(\hat{\Lambda}_{\mu}\). These are the only relevant variables as they have \(O(1)\) fluctuations in the initial state \(\ket{0}\); the remaining orthogonal variables \(\hat{\Lambda}_{\mu\nu}=\frac{1}{\sqrt{N}}\sum_{i}\ket{\mu}_{i}\bra{\nu}_{i}\) with \(\mu,\nu>0\) are suppressed by \(1/\sqrt{N}\). Therefore, the covariance matrix \(\Xi^{c}\) describes the quantum noise (normalized spin variances) perpendicular to the collective spin vector. Any eigenvalue of \(\Xi^{c}\) decreasing below \(1\) indicates a reduction in spin projection noise perpendicular to the collective spin vector, as compared to the initial coherent state. Such noise reduction perpendicular to the collective spin vector is spin squeezing in a multilevel system when the spin length is order \(N\)[20; 36], and is analogous to spin-squeezing for spin-\(1/2\) atoms.
The simplest way to calculate \(\Xi^{c}\) is in the Bogoliubov framework. As argued above, the dynamics brings the system to a steady state where the occupation of the Bogoliubov modes associated to \(\hat{b}_{\alpha_{0}}\) and \(\hat{b}_{\alpha_{1}}\) relaxes to the vacuum value, any correlations associated with \(\hat{b}_{\alpha_{0}}\) or \(\hat{b}_{\alpha_{1}}\) decay to \(0\), and correlations of all other Bogoliubov modes are left untouched. Analogous to \(\Xi^{c}\), we define \(\Xi^{b}=\left(\begin{array}{cc}\Xi^{b}_{XX}&\Xi^{b}_{XY}\\ (\Xi^{b}_{XY})^{T}&\Xi^{b}_{YY}\end{array}\right)\) where \(\hat{X}^{b}_{\mu}=\frac{\hat{b}_{\mu}+\hat{b}_{\mu}^{\dagger}}{\sqrt{2}}\) and \(\hat{Y}^{b}_{\mu}=\frac{\hat{b}_{\mu}-\hat{b}_{\mu}^{\dagger}}{i\sqrt{2}}\), and \(\Xi^{b}_{XX}\), \(\Xi^{b}_{YY}\), and \(\Xi^{b}_{XY}\) are defined similar to Eq. (19) but in terms of quadratures of Bogoliubov b-bosonic operators instead of Schwinger c-bosons. We obtain \(\Xi^{c}\) by inverting the Bogoliubov transformation, and the squeezing can be inferred from the eigenvalues of \(\Xi^{c}\). Note that since the Bogoliubov transformation is not unitary, the eigenvalues of \(\Xi^{c}\) are different from those of \(\Xi^{b}\).
In the following sections, we show concrete examples of this procedure using effective two-level [Sec. V] and multilevel [Sec. VI] systems.
## V Two-level system
First, we review the driven-dissipative dynamics of an effective two-level system realized within the \(\ket{g,F}\) and \(\ket{e,F+1}\) manifold of a system with \(F_{g}=F\) and \(F_{e}=F+1\), when driven by right-circularly polarized light as shown in Figs. 2(a) and 3(a). Even though the driven-dissipative dynamics of two-level systems has been studied extensively in the literature [25; 26; 27; 28; 30; 31; 32; 33; 34; 35] we discuss it first to facilitate the understanding of the more complex multilevel systems presented below.
The jump operator for emission of right-circularly polarized light is \(\hat{\mathscr{D}}_{R}^{-}\). The left-handed polarization is irrelevant. The only relevant term in \(\hat{\mathscr{D}}_{R}^{-}\) for the two levels is \(C_{R}^{F}\hat{S}_{F,R}^{-}\) with \(\hat{S}_{F,R}^{-}=\sum_{i}\hat{s}_{F,i,R}^{-}\), and all the relevant dynamics can be visualized on one Bloch sphere whose axes are \(\hat{S}_{F,R}^{\alpha},\alpha\in\{x,y,z\}\). The Clebsch-Gordan coefficient for this transition is \(C_{R}^{F}=1\). For brevity in this section, we will refer to \(\hat{S}_{F,R}^{\alpha}\) as simply \(\hat{S}^{\alpha}\), dropping the subscripts referring to the angular momentum and polarization. The mean spin direction on this Bloch sphere, i.e. the mean Bloch vector, initially points along \(\hat{S}_{\text{Bloch}}=(0,\sin\theta_{0},-\cos\theta_{0})\), where \(\theta_{0}\) is the angle that the Bloch vector makes with the south pole.
From Sec. III, the superradiance potential for the two-level system is \(V(\zeta,0;\Psi)=\sin^{2}\frac{\zeta+\theta_{0}}{2}\). The mean-field state is stationary if \(\Omega_{R}=\frac{N\Gamma}{2}\sin\theta_{0}\). Figure 2(b), which plots \(d^{2}V/d\zeta^{2}\big{|}_{\zeta=0}\) shows that the system is stable in the region \(-\frac{\pi}{2}<\theta_{0}<\frac{\pi}{2}\), as discussed in Sec. III.
### Quantum correlations
The system has quantum fluctuations along the directions \((1,0,0)\) and \((0,\cos\theta_{0},\sin\theta_{0})\), which are the two directions perpendicular to the Bloch vector \(\hat{S}_{\text{Bloch}}\) on the Bloch sphere. The driven-dissipative dynamics squeezes
and antisqueezes the quantum noise in these orthogonal directions, which we calculate with the Bogoliubov framework in the HP approximation.
To do this, we define two Schwinger c-bosons via
\[\hat{c}_{0} =\cos\frac{\theta_{0}}{2}\hat{a}_{g,F}+i\sin\frac{\theta_{0}}{2} \hat{a}_{e,F+1}\] \[\hat{c}_{1} =i\sin\frac{\theta_{0}}{2}\hat{a}_{g,F}+\cos\frac{\theta_{0}}{2} \hat{a}_{e,F+1} \tag{20}\]
This determines \(g_{\alpha,\mu\nu}\):
\[\overleftrightarrow{g}_{R}=\left(\begin{array}{cc}0&\frac{\sqrt{N}}{2}(1+ \cos\theta_{0})\\ \frac{\sqrt{N}}{2}(1-\cos\theta_{0})&-\frac{i}{2}\sin\theta_{0}\end{array} \right), \tag{21}\]
and thus \(\hat{\mathscr{D}}_{R}^{-}=\sqrt{\frac{N}{2}}(\hat{X}_{1}^{c}+i\cos\theta_{0} \hat{Y}_{1}^{c})-\frac{i\sin\theta_{0}}{2}(\hat{X}_{1}^{c}+i\hat{Y}_{1}^{c})( \hat{X}_{1}^{c}-i\hat{Y}_{1}^{c})\). The Schwinger c-bosonic variables \(\hat{X}_{1}^{c}\) and \(\hat{Y}_{1}^{c}\) are proportional to the orthogonal spin variables \(\hat{S}^{x}\) and \(\cos\theta_{0}\hat{S}^{y}+\sin\theta_{0}\hat{S}^{z}\), respectively. Following Sec. IV.3, we define the Bogoliubov b-boson \(\hat{b}_{R}=\frac{\hat{X}_{1}^{c}+i\cos\theta_{0}\hat{Y}_{1}^{c}}{\sqrt{2\cos \theta_{0}}}\). At leading order, \(\hat{\mathscr{D}}_{R}^{-}=\sqrt{N\cos\theta_{0}}\,\hat{b}_{R}\).
The master equations for the elements of \(\Xi^{b}\) are
\[\partial_{t}\Xi^{b}_{XX}= N\Gamma\cos\theta_{0}(1-\Xi^{b}_{XX})\] \[+\underbrace{\Gamma\sin^{2}\theta_{0}\left(\frac{\Xi^{b}_{YY}}{ \cos^{2}\theta_{0}}-\Xi^{b}_{XX}\right)}_{\text{finite size}},\] \[\partial_{t}\Xi^{b}_{YY}= N\Gamma\cos\theta_{0}(1-\Xi^{b}_{YY})\] \[+\underbrace{\Gamma\sin^{2}\theta_{0}\left(\Xi^{b}_{XX}\cos^{2} \theta_{0}-\Xi^{b}_{YY}\right)}_{\text{finite size}},\] \[\partial_{t}\Xi^{b}_{XY}= -(N\Gamma\cos\theta_{0}+4\Gamma\sin^{2}\theta_{0})\Xi^{b}_{XY}. \tag{22}\]
The higher-order terms are highlighted with an under-brace for clarity, and we will use them to calculate the finite-size corrections for the steady-state squeezing. Solving Eqs. (22) and inverting the Bogoliubov transform, we obtain the leading \([O(N\Gamma)]\) and subleading \([O(\Gamma)]\) terms for the time evolution of the covariance matrix for the Schwinger c-bosons. The solution for \(\Xi^{c}\) due to only the leading \([O(N\Gamma)]\) terms is
\[\Xi^{c}_{XX} =\cos\theta_{0}+(1-\cos\theta_{0})e^{-N\Gamma t\cos\theta_{0}},\] \[\Xi^{c}_{YY} =\frac{1}{\cos\theta_{0}}+\left(1-\frac{1}{\cos\theta_{0}}\right) e^{-N\Gamma t\cos\theta_{0}},\] \[\Xi^{c}_{XY} =0. \tag{23}\]
Therefore, \(\Xi^{c}\) is diagonal, and its eigenvalues are \(\Xi^{c}_{XX}\) and \(\Xi^{c}_{YY}\). Of these, \(\Xi^{c}_{XX}<1\) (for \(0<|\theta_{0}|<\frac{\pi}{2}\)), and is therefore squeezed. The squeezing is along \(\hat{X}_{1}^{c}\propto\hat{S}^{x}\), and reaches a steady-state value of \(\cos\theta_{0}\) as \(t\rightarrow\infty\), at the rate \(1/(N\Gamma\cos\theta_{0})\). The antisqueezing is along \(\hat{Y}_{1}^{c}\propto\sin\theta_{0}\hat{S}^{z}+\cos\theta_{0}\hat{S}^{y}\), with a steady-state value \(1/\cos\theta_{0}\). The squeezing \(\Xi^{c}_{XX}\) approaches \(0\) in the steady state at the critical points \(\theta_{c}=\pm\frac{\pi}{2}\). In Figs. 3(b-c) we illustrate the steady state noise distribution in the Bogoliubov basis \((\hat{X}_{R}^{b},\hat{Y}_{R}^{b})\), the Schwinger basis \((\hat{X}_{1}^{c},\hat{Y}_{1}^{c})\), and on the Bloch sphere.
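The relaxation described by Eqs. (22) and (23) can also be illustrated with a direct numerical integration. The sketch below assumes scipy is available; \(N\), \(\theta_{0}\), and the integration time are illustrative, and the mapping back to the Schwinger quadratures uses \(\hat{X}_{1}^{c}=\sqrt{\cos\theta_{0}}\,\hat{X}_{R}^{b}\) and \(\hat{Y}_{1}^{c}=\hat{Y}_{R}^{b}/\sqrt{\cos\theta_{0}}\), which follows from the definition of \(\hat{b}_{R}\) above.

```python
# Integrate the cumulant equations (22) for the Bogoliubov quadrature
# variances, including the finite-size terms, and map back to the Schwinger
# variances via Xi^c_XX = cos(theta0) Xi^b_XX, Xi^c_YY = Xi^b_YY / cos(theta0).
# N, theta0 and the integration time are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

N, Gamma, theta0 = 1000, 1.0, 1.2
c, s2 = np.cos(theta0), np.sin(theta0)**2

def rhs(t, v):
    xx, yy, xy = v
    dxx = N*Gamma*c*(1 - xx) + Gamma*s2*(yy/c**2 - xx)
    dyy = N*Gamma*c*(1 - yy) + Gamma*s2*(xx*c**2 - yy)
    dxy = -(N*Gamma*c + 4*Gamma*s2)*xy
    return [dxx, dyy, dxy]

v0 = [1/c, c, 0.0]                        # Xi^c = identity at t = 0
sol = solve_ivp(rhs, [0, 20/(N*Gamma*c)], v0, rtol=1e-10, atol=1e-12)

xx_b, yy_b = sol.y[0, -1], sol.y[1, -1]
print("Xi^c_XX ->", c*xx_b, " (leading order:", c, ")")
print("Xi^c_YY ->", yy_b/c, " (leading order:", 1/c, ")")
```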
### Finite-size corrections in the steady state
Although to leading order the squeezing at the critical points goes to zero, in reality, higher-order corrections limit the amount of attainable squeezing. Close to the critical points we have \(\cos\theta_{0}\approx 0\). Thus, an approximate solution at next-to-leading order in \(N\) for the steady-state noise in the Bogoliubov b-bosons can be obtained by ignoring \(\Xi^{b}_{XX}\) relative to \(\Xi^{b}_{YY}/\cos^{2}\theta_{0}\) in Eq. (22). This yields \(\Xi^{b}_{XX}\approx 1+\frac{\sin^{2}\theta_{0}}{N\cos^{3}\theta_{0}}\Xi^{b}_{YY}\) in the steady state and a similar expression for \(\Xi^{b}_{YY}\). At leading order in \(1/N\), we set \(\Xi^{b}_{YY}\approx 1\) and \(\sin\theta_{0}\approx 1\), to obtain \(\Xi^{b}_{XX}\approx 1+1/(N\cos^{3}\theta_{0})\). Transforming back to the Schwinger c-bosons, the steady-state squeezing is
\[\Xi^{c}_{XX}\approx\cos\theta_{0}+\underbrace{\frac{1}{N\cos^{2}\theta_{0}}}_{ \text{finite size}}. \tag{24}\]
Figure 3: (a) An ensemble of effective two-level atoms [see also Fig. 2(a)]. (b) Illustration of the steady-state squeezing in bosonic quadratures. The steady state is the coherent vacuum of \(\hat{X}^{b}\) and \(\hat{Y}^{b}\), which makes it squeezed in \(\hat{X}^{c}\) and antisqueezed in \(\hat{Y}^{c}\) [see text for definitions of the quadratures]. (c) Visualizing the collective spin squeezing on a Bloch sphere in the two-level system. The squeezing is along \(\hat{S}^{x}\). The visualization shows a particular example where the steady state is near \(\theta_{c}=\pi/2\). (d) Steady state squeezing versus \(\theta_{0}\) in the two-level case, obtained from an exact numerical calculation with \(N=100\) atoms (solid line). The black dashed line plots the HP prediction, \(\cos\theta_{0}\), and blue dotted line plots \(\Xi^{c}_{XX}\) in the coherent state. (e) The best squeezing achievable versus \(N\), has a scaling close to \(N^{-1/3}\).
This shows that the squeezing reaches an optimum value of \(\frac{3}{(4N)^{1/3}}\) when the Bloch vector's angle with the south pole is \(\theta_{0}\sim\frac{\pi}{2}-\left(\frac{2}{N}\right)^{1/3}\).
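This optimum can be made explicit by minimizing the right-hand side of Eq. (24) over \(\cos\theta_{0}\),
\[\frac{\partial}{\partial\cos\theta_{0}}\left[\cos\theta_{0}+\frac{1}{N\cos^{2}\theta_{0}}\right]=1-\frac{2}{N\cos^{3}\theta_{0}}=0\quad\Rightarrow\quad\cos\theta_{0}=\left(\frac{2}{N}\right)^{1/3},\]
which gives \(\Xi^{c}_{XX}=(2/N)^{1/3}+2^{-2/3}N^{-1/3}=3/(4N)^{1/3}\) and, using \(\cos\theta_{0}\approx\frac{\pi}{2}-\theta_{0}\) near the critical point, \(\theta_{0}\approx\frac{\pi}{2}-(2/N)^{1/3}\).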
Figure 3(d) shows the steady-state squeezing versus \(\theta_{0}\), obtained from an exact numerical calculation with \(N=100\) atoms. The squeezing agrees well with the HP leading order prediction \(\Xi^{c}_{XX}=\cos\theta_{0}\), until finite-size effects kick in and set a limit on squeezing. Figure 3(e) shows that the best squeezing reaches an \(N^{-1/3}\) scaling in agreement with previous literature [26; 27; 35; 37], and close to the scaling predicted by our analysis. More accurate estimations can be made using the full time-dependent solution for Eq. (22) including the leading and sub-leading terms, which is given in Appendix D.
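A quick numerical sanity check of the approximate expression in Eq. (24) can be obtained by scanning \(\cos\theta_{0}\) for a few atom numbers and comparing the minimum with \(3/(4N)^{1/3}\); the short Python sketch below only probes the truncated formula, not the exact master-equation calculation used for Fig. 3.

```python
import numpy as np

# Scan cos(theta_0) and minimize Eq. (24): Xi_XX ~ cos(theta_0) + 1/(N cos^2(theta_0)).
for N in (1e2, 1e3, 1e4):
    c = np.linspace(1e-3, 1.0, 200_000)      # grid of cos(theta_0) values
    best = np.min(c + 1.0 / (N * c**2))      # optimal squeezing from Eq. (24)
    print(f"N={N:.0e}  numeric={best:.4f}  3/(4N)^(1/3)={3/(4*N)**(1/3):.4f}")
```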
## VI Multilevel system
Next, we consider the squeezing generated in multilevel atoms. In principle, there are multiple level structures and initial conditions one may consider. However, our main conclusions will be the same for most other internal structures or initial states, namely:
1. The system generally hosts two squeezed modes for each relevant cavity polarization. If only one polarization is relevant [1], two modes are squeezed; if both polarizations are relevant, four modes are squeezed, as we show below.
2. The best squeezing attainable close to the critical point generally scales as \(N^{-1/4}\).
We note that there are some fringe cases where the system behaves like a two-level system for emission of one of the polarizations, and the number of squeezed modes is reduced to either \(1\) or \(3\). We will explain these fringe cases in Sec. VI.3.
To illustrate these findings, we choose the example of Fig. 2(c) with \(F_{g}=F_{e}=3/2\), where all \(\ell=8\) internal levels and both cavity polarizations are relevant. We choose to decompose the polarizations in the linear basis, such that the system's evolution is governed by \(\hbar\frac{d\rho}{dt}=\mathcal{L}_{\Sigma}[\rho]+\mathcal{L}_{\Pi}[\rho]\), where the respective jump operators for \(\mathcal{L}_{\Sigma}\) and \(\mathcal{L}_{\Pi}\) are \(\hat{\mathscr{D}}_{\Sigma}^{-}\) and \(\hat{\mathscr{D}}_{\Pi}^{-}\).
### Holstein-Primakoff approximation and Bogoliubov transformation
The jump operators \(\hat{\mathscr{D}}_{\Sigma}^{-}\) and \(\hat{\mathscr{D}}_{\Pi}^{-}\) expressed in terms of the Schwinger c-boson operators are
\[\hat{\mathscr{D}}_{\Sigma}^{-} =\sqrt{N}\sum_{\mu>0}\left(x_{\Sigma,\mu}\hat{X}_{\mu}^{c}+iy_{ \Sigma,\mu}\hat{Y}_{\mu}^{c}\right)+\sum_{\mu\nu>0}g_{\Sigma,\mu\nu}\hat{c}_{ \mu}^{\dagger}\hat{c}_{\nu},\] \[\hat{\mathscr{D}}_{\Pi}^{-} =\sqrt{N}\sum_{\mu>0}\left(x_{\Pi,\mu}\hat{X}_{\mu}^{c}+iy_{\Pi, \mu}\hat{Y}_{\mu}^{c}\right)+\sum_{\mu\nu>0}g_{\Pi,\mu\nu}\hat{c}_{\mu}^{ \dagger}\hat{c}_{\nu}. \tag{25}\]
The values of \(x_{\Sigma(\Pi),\mu}\), \(y_{\Sigma(\Pi),\mu}\), and \(g_{\Sigma(\Pi),\mu\nu}\) depend on the basis states \(|\mu\rangle\) used to define the Schwinger c-bosons, and we detail one specific basis in Appendix E.
Following Sec. IV.3, we define _two_ Bogoliubov b-bosons,
\[\hat{b}_{\Sigma} =\sum_{\mu=1}^{7}\frac{x_{\Sigma,\mu}\hat{X}_{\mu}^{c}+iy_{\Sigma,\mu}\hat{Y}_{\mu}^{c}}{\sqrt{2\vec{x}_{\Sigma}\cdot\vec{y}_{\Sigma}}},\] \[\hat{b}_{\Pi} =\sum_{\mu=1}^{7}\frac{x_{\Pi,\mu}\hat{X}_{\mu}^{c}+iy_{\Pi,\mu} \hat{Y}_{\mu}^{c}}{\sqrt{2\vec{x}_{\Pi}\cdot\vec{y}_{\Pi}}}, \tag{26}\]
such that \(\hat{\mathscr{D}}_{\Sigma}^{-}=\sqrt{N\vec{x}_{\Sigma}\cdot\vec{y}_{\Sigma}} \,\hat{b}_{\Sigma}+O(1)\) and \(\hat{\mathscr{D}}_{\Pi}^{-}=\sqrt{N\vec{x}_{\Pi}\cdot\vec{y}_{\Pi}}\,\hat{b} _{\Pi}+O(1)\). Additionally, there are \((\ell-3)=5\) more Bogoliubov b-bosons, which we can write as
\[\hat{b}_{\nu}=\sum_{\mu=1}^{7}\frac{y_{\nu,\mu}\hat{X}_{\mu}^{c}+ix_{\nu,\mu} \hat{Y}_{\mu}^{c}}{\sqrt{2}}. \tag{27}\]
The normalization condition \([\hat{b}_{\nu},\hat{b}_{\nu}^{\dagger}]=1\) is equivalent to setting \(\vec{x}_{\nu}\cdot\vec{y}_{\nu}=1\). Because of the commutation relations and using an appropriate choice of basis, all the \(\vec{x}\) vectors can be made mutually orthogonal to each other, and the \(\vec{y}\) vectors can be made mutually orthogonal to each other (see Appendix C). This is why for convenience we reversed the definition of \(x\) and \(y\) in Eq. (27) compared to Eq. (26).
Since \(\mathcal{H}\) is diagonal for this choice of polarization basis and \(|\Psi(\alpha_{0},\theta_{0};\vec{\beta})\rangle\), the system is critical to emission of \(\alpha\)-polarized light if \(\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}=0\), (\(\alpha=\Sigma,\Pi\)).
### Quantum correlations
Next, we calculate the \(14\times 14\) covariance matrix for the Bogoliubov b-bosons, and invert the Bogoliubov transformation to get the covariance matrix for the Schwinger c-bosons. As in the two-level system [Sec. V], we again have \(\Xi_{XY}^{b}=0\) at all times for our choice of initial conditions, and therefore \(\Xi^{b}\) is block-diagonal. Similar to the two-level system [Eq. (22)], we write and solve the master equations for the elements of \(\Xi_{XX}^{b}\) and \(\Xi_{YY}^{b}\).
The initial values for the elements of \(\Xi_{XX}^{b}\) are
\[\langle\hat{X}_{\alpha}^{b}\hat{X}_{\beta}^{b}\rangle =\frac{\vec{x}_{\alpha}\cdot\vec{x}_{\beta}}{4\sqrt{(\vec{x}_{ \alpha}\cdot\vec{y}_{\alpha})(\vec{x}_{\beta}\cdot\vec{y}_{\beta})}},\] \[\langle\hat{X}_{\alpha}^{b}\hat{X}_{\mu}^{b}\rangle =\frac{\vec{x}_{\alpha}\cdot\vec{y}_{\mu}}{2\sqrt{2\vec{x}_{ \alpha}\cdot\vec{y}_{\alpha}}},\] \[\langle\hat{X}_{\mu}^{b}\hat{X}_{\nu}^{b}\rangle =\frac{1}{2}\vec{y}_{\mu}\cdot\vec{y}_{\nu}=\frac{1}{2}\delta_{ \mu\nu}, \tag{28}\]
where \(\alpha,\beta\in\{\Sigma,\Pi\}\) and \(\mu,\nu\neq\Sigma,\Pi\). Their subsequent evolution at leading order is governed by the master equa
tions,
\[\partial_{t}\left\langle(\hat{X}_{\alpha}^{b})^{2}\right\rangle =N\Gamma\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}(1-2\left\langle(\hat{X }_{\alpha}^{b})^{2}\right\rangle),\] \[\partial_{t}\left\langle\hat{X}_{\Sigma}^{b}\hat{X}_{\Pi}^{b} \right\rangle =-N\Gamma(\vec{x}_{\Sigma}\cdot\vec{y}_{\Sigma}+\vec{x}_{\Pi} \cdot\vec{y}_{\Pi})\left\langle\hat{X}_{\Sigma}^{b}\hat{X}_{\Pi}^{b}\right\rangle,\] \[\partial_{t}\left\langle\hat{X}_{\alpha}^{b}\hat{X}_{\mu}^{b} \right\rangle =-N\Gamma\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}\left\langle\hat{X }_{\alpha}^{b}\hat{X}_{\mu}^{b}\right\rangle,\] \[\partial_{t}\left\langle\hat{X}_{\mu}^{b}\hat{X}_{\nu}^{b}\right\rangle =0. \tag{29}\]
The solution to the first line is that \(\left\langle(\hat{X}_{\alpha}^{b})^{2}\right\rangle\) exponentially decays to its value in the vacuum state, \(\left\langle(\hat{X}_{\alpha}^{b})^{2}\right\rangle_{\rm ss}=1/2\). The second and third lines describe correlations between \(\hat{X}_{\Sigma(\Pi)}^{b}\) and a different quadrature, and they exponentially decay to \(0\). The correlation in the last line stays constant. Similar equations can be obtained for the elements of \(\Xi_{YY}^{b}\). Thus, the steady-state solution for the covariance matrices for the Bogoliubov bosons is
\[\Xi_{XX}^{b} =\mathrm{diag}\left(1,1,\vec{y}_{1}\cdot\vec{y}_{1},\vec{y}_{2} \cdot\vec{y}_{2},\cdots\right)\] \[\Xi_{YY}^{b} =\mathrm{diag}\left(1,1,\vec{x}_{1}\cdot\vec{x}_{1},\vec{x}_{2} \cdot\vec{x}_{2},\cdots\right),\] \[\Xi_{XY}^{b} =0. \tag{30}\]
### Squeezing in the multilevel system
The steady-state covariance matrices of the Schwinger c-bosons, obtained by inverting the Bogoliubov transformation, have a nontrivial form, and host squeezed modes. Inverting the Bogoliubov transformation for the \(14\times 14\) dimensional matrix, and understanding why there is squeezing, is nontrivial. However, an appropriate basis rotation of the \(\left|\mu>0\right\rangle\) states makes the calculations simpler and gives a geometric understanding of the generation of squeezing (the squeezing itself is independent of the basis transformation).
This basis transformation is such that the transformed \(\vec{x}_{\alpha}\) and \(\vec{y}_{\alpha}\) vectors are
\[\vec{x}_{\Sigma}=\|\vec{x}_{\Sigma}\|(1,0,0,\cdots),\] \[\vec{y}_{\Sigma}=\|\vec{y}_{\Sigma}\|(\cos\phi_{\Sigma},\sin\phi _{\Sigma},0,\cdots),\] \[\vec{x}_{\Pi}=\|\vec{x}_{\Pi}\|(0,0,1,0,0,\cdots),\] \[\vec{y}_{\Pi}=\|\vec{y}_{\Pi}\|(0,0,\cos\phi_{\Pi},\sin\phi_{\Pi},0,\cdots),\] \[\vec{x}_{1}\propto(0,1,0,\cdots)\] \[\vec{y}_{1}\propto(-\sin\phi_{\Sigma},\cos\phi_{\Sigma},0,\cdots),\] \[\vec{x}_{2}\propto(0,0,0,1,0,\cdots),\] \[\vec{y}_{2}\propto(0,0,-\sin\phi_{\Pi},\cos\phi_{\Pi},0,\cdots),\] \[x_{\mu,\nu}=y_{\mu,\nu}=\delta_{\mu,\nu+2}\quad(\mu>2). \tag{31}\]
The basis transformation is explicitly given in Appendix F, and it is useful because it block-diagonalizes the covariance matrix of the Schwinger \(c\)-bosons at all times. This is because in this basis \(\hat{b}_{\Sigma}\) and \(\hat{b}_{1}\) only depend on \(\hat{c}_{1,2}\), \(\hat{b}_{\Pi}\) and \(\hat{b}_{2}\) on \(\hat{c}_{3,4}\), and \(\hat{b}_{\mu\geq 3}=\hat{c}_{\mu+2}\). This facilitates the visualization of each pair of squeezed and corresponding antisqueezed modes in a two-dimensional space that is independent of the other squeezed and antisqueezed modes, as we explain below.
Equation (29) shows that the \(\hat{X}\) quadratures evolve independently from the \(\hat{Y}\) quadratures, so we will consider them separately, focusing first on the \(\hat{Y}\) quadratures and applying a similar argument to the \(\hat{X}\) quadratures. Because of the structure of the basis choice in Eq. (31),
Figure 5: Steady-state squeezing for the 8-level system shown in Fig. 2(c). (a-b) Squeezing in a combination of the \(\hat{X}\) and \(\hat{Y}\) quadratures, respectively, when the system collectively emits only \(\Sigma\)-polarized light. (c) Squeezing when the system collectively emits only \(\Pi\)-polarized light. There are two squeezed modes, one in a combination of \(\hat{X}\) quadratures and one in a combination of \(\hat{Y}\) quadratures, and they both have the same value. Red and blue dashed lines are critical lines, and white regions are unstable. (d-f) Squeezing when the system collectively emits light of both polarizations. The system is squeezed only in the regions where it is stable to emission of both polarizations, and the value of the squeezing in this region is the same as in (a-c).
the covariances of \(\hat{Y}_{1}^{c}\) and \(\hat{Y}_{2}^{c}\) have coupled master equations, the covariances of \(\hat{Y}_{3}^{c}\) and \(\hat{Y}_{4}^{c}\) have coupled master equations, and all other covariances evolve independently. Thus, we will focus on the evolution of \(\hat{Y}_{1}^{c}\) and \(\hat{Y}_{2}^{c}\) first.
The two most important elements to understand the evolution of these covariances are: (I) The noise along \(\hat{Y}_{\Sigma}^{b}\propto(\cos\phi_{\Sigma}\hat{Y}_{1}^{c}+\sin\phi_{\Sigma} \hat{Y}_{2}^{c})\) evolves towards its vacuum value, as discussed previously; (II) the noise in \(\hat{Y}_{2}^{c}\) is conserved, since \(\hat{Y}_{2}^{c}\) commutes with \(\hat{\mathscr{D}}_{\Sigma}^{-}\) and \(\hat{\mathscr{D}}_{\Pi}^{-}\), see Eqs. (26), (27) and (31). Note that the initial noise of \(\hat{Y}_{1}^{c}\) and \(\hat{Y}_{2}^{c}\) in the initial coherent state is equal, i.e. the noise has a circular distribution. In the general case where \(\phi_{\Sigma}\neq 0\), the conservation of \(\hat{Y}_{2}^{c}\) sets a constraint on \(\hat{Y}_{\Sigma}^{b}\) resulting in an evolution that shears the circle into an ellipse as shown in Fig. 4(a), which leads to one squeezed and one antisqueezed mode. Similar shearing on the \(\hat{Y}_{3}^{c}\)-\(\hat{Y}_{4}^{c}\) plane due to emission of \(\Pi\) polarization leads again to one squeezed and one antisqueezed mode, and a similar process happens in the \(\hat{X}^{c}\) quadratures. In total, there are four squeezed and four antisqueezed modes. The values of the squeezing and antisqueezing are given in Appendix F.
The special case \(\phi_{\Sigma}=0\) is qualitatively different from the general case of \(\phi_{\Sigma}\neq 0\) [see Fig. 4(b)]. In this case, we have \(\hat{Y}_{\Sigma}^{b}=\sqrt{\frac{\|\vec{y}_{\Sigma}\|}{\|\vec{x}_{\Sigma}\|}}\hat{Y}_{1}^{c}\) and thus its evolution towards the vacuum value is unconstrained by \(\hat{Y}_{2}^{c}\). Therefore, the noise in \(\hat{Y}_{\Sigma}^{b}\) increases to its vacuum state value if \(\|\vec{y}_{\Sigma}\|<\|\vec{x}_{\Sigma}\|\) leading to antisqueezing in \(\hat{Y}_{1}^{c}\), and the noise in \(\hat{Y}_{\Sigma}^{b}\) decreases to its vacuum state value if \(\|\vec{y}_{\Sigma}\|>\|\vec{x}_{\Sigma}\|\) leading to squeezing in \(\hat{Y}_{1}^{c}\). The opposite happens in \(\hat{X}_{\Sigma}^{b}\) and \(\hat{X}_{1}^{c}\). Because of this, the number of squeezed modes is reduced by 1 compared to the general case \(\phi_{\Sigma}\neq 0\). This is essentially what happens in the two-level system of Sec. V, where we had that \(\vec{x}_{R}\) was parallel to \(\vec{y}_{R}\) (instead of \(\vec{x}_{\Sigma}\) and \(\vec{y}_{\Sigma}\)).
Figure 5 plots the steady-state values of the squeezing versus \(\theta_{0}\) and \(\beta\). Figures 5(a-b) show the value of the squeezing in two modes if the system collectively emitted \(\Sigma\)-polarized light only. Figure 5(c) shows the value of the squeezing if the system collectively emitted \(\Pi\)-polarized light only. In the latter case, two modes are squeezed but the amount of squeezing in both modes is the same, so we plot them together. The white regions are unstable to quantum fluctuations, and critical lines (dashed) separate the stable and unstable regions (the small white gaps between the critical lines and the gray regions are due to truncating the squeezing at \(10^{-2}\)). Note that emission of \(\Sigma\)- and \(\Pi\)-polarized light have different regions of stability. For example, the system may be stable to emission of \(\Sigma\)-polarized light, but unstable to emission of \(\Pi\)-polarized light, or vice versa. Figures 5(d-f) plot the squeezing in the same four modes as Figs. 5(a-c), but only in the region where the system is stable to emission of both polarizations, which is the physically relevant case. The squeezing due to emission of \(\Sigma\)-polarized light approaches 0 near the red lines, the squeezing due to emission of \(\Pi\)-polarized light approaches 0 near the blue lines, and squeezing in all four modes approaches 0 near the green lines and points.
### Finite-size corrections in the steady state
As in the two-level system [Sec. V.1], the best squeezing reachable near the critical point is limited by \(N\). Here, we calculate the finite-size corrections to the steady-state squeezing by including the higher-order terms in the HP approximation.
Near any critical point in a multilevel system, \(\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}\) approaches 0, which means generally that the angle \(\phi_{\alpha}\) [Eq. (31)] between them approaches \(\pi/2\) (For the two-level-like case [see Fig. 4(b)], \(\|\vec{y}_{\alpha}\|\) approaches 0 near the critical point, and the arguments below do not apply). From Fig. 4(a), we see that for \(\phi_{\alpha}\approx\pi/2\), the squeezed variable is approximately \(\hat{Y}_{\alpha,\mathrm{sq}}^{c}\equiv\frac{\sum_{\mu_{\alpha}}y_{\alpha,\mu} \hat{Y}_{\mu}^{c}}{\|\vec{y}_{\alpha}\|}\propto\hat{Y}_{\alpha}^{b}\). Similarly, \(\hat{X}_{\alpha,\mathrm{sq}}^{c}\equiv\frac{\sum_{\mu}x_{\alpha,\mu}\hat{X}_{ \mu}^{c}}{\|\vec{x}_{\alpha}\|}\propto\hat{X}_{\alpha}^{b}\), and the antisqueezed quadratures are \(\hat{X}_{\alpha,\mathrm{antisq}}^{c}=\frac{\sum_{\mu}y_{\alpha,\mu}\hat{X}_{ \mu}^{c}}{\|\vec{y}_{\alpha}\|}\) and \(\hat{Y}_{\alpha,\mathrm{antisq}}^{c}=\frac{\sum_{\mu}x_{\alpha,\mu}\hat{Y}_{\mu}^ {c}}{\|\vec{x}_{\alpha}\|}\), \(\alpha=\Sigma,\Pi\) [see Appendix F]. Since \(\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}\) approaches 0, we expand the squeezing and antisqueezing in powers of \(\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}\). Focusing on only one pair of these variables, \(\hat{X}_{\alpha,\mathrm{sq}}^{c}\) and \(\hat{X}_{\alpha,\mathrm{anti-sq}}^{c}\), as an example, their steady-state values are [see Appendix F]
\[\xi_{X,\alpha,\mathrm{sq}}^{2}\approx 2\frac{\vec{x}_{\alpha}\cdot\vec{y}_{ \alpha}}{\vec{x}_{\alpha}\cdot\vec{x}_{\alpha}}+O(1/N)\] \[\xi_{X,\alpha,\mathrm{anti-sq}}^{2}\approx\xi_{Y,\alpha,\mathrm{ anti-sq}}^{2}\approx 2\frac{(\vec{x}_{\alpha}\cdot\vec{x}_{\alpha})(\vec{y}_{\alpha} \cdot\vec{y}_{\alpha})}{(\vec{x}_{\alpha}\cdot\vec{y}_{\alpha})^{2}}+O(1/N). \tag{32}\]
The \(O(1/N)\) terms in both equations arise due to the higher-order terms, i.e. the \(g_{\alpha,\mu\nu>0}\) terms, in the master equation. As the critical point is approached, those \(O(1/N)\) terms increase proportionally to \((\xi_{X,\alpha,\mathrm{anti-sq}}^{2})/(\vec{x}_{\alpha}\cdot\vec{y}_{\alpha})\). This in particular affects the squeezing,
\[\xi_{X,\alpha,\mathrm{sq}}^{2}\approx\frac{\vec{x}_{\alpha}\cdot\vec{y}_{ \alpha}}{\vec{x}_{\alpha}\cdot\vec{x}_{\alpha}}+\underbrace{\frac{A}{N(\vec{x}_{ \alpha}\cdot\vec{y}_{\alpha})^{3}}}_{\mathrm{finite\ size}}, \tag{33}\]
where \(A\) is some constant. Thus, the squeezing does not decrease monotonically, instead it increases as the critical point is approached. The optimal value of \(\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}\) scales as \(\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}\propto N^{-1/4}\), and the optimum squeezing also scales \(\propto N^{-1/4}\) [see also Ref. [1]].
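The same minimization as in the two-level case makes this scaling explicit: writing \(u\equiv\vec{x}_{\alpha}\cdot\vec{y}_{\alpha}\), Eq. (33) is extremal when
\[\frac{\partial}{\partial u}\left[\frac{u}{\vec{x}_{\alpha}\cdot\vec{x}_{\alpha}}+\frac{A}{Nu^{3}}\right]=0\quad\Rightarrow\quad u_{\rm opt}=\left(\frac{3A\,\vec{x}_{\alpha}\cdot\vec{x}_{\alpha}}{N}\right)^{1/4}\propto N^{-1/4},\]
so that both terms of \(\xi^{2}_{X,\alpha,\mathrm{sq}}\) at the optimum, and hence the best squeezing, scale as \(N^{-1/4}\).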
## VII Discussion
We described a method to produce a collective four-mode squeezed state of matter using the interplay of driving and dissipation in a cavity. For the model considered,
there are two main differences in the nature of the squeezing dynamics in the multilevel systems as compared to the well-known case of two-level systems.
First, driven-dissipative dynamics in two-level systems generate only one squeezed mode, whereas dynamics in multilevel systems can generically produce up to _two_ squeezed modes per polarization. In Ref. [1], we studied cases when only one cavity polarization is relevant and explained that squeezing emerges from shearing perpendicular to two conserved spin variables. Here we generalized the analysis for the more general case when two polarizations are in play.
The second difference between two-level and multilevel systems is the finite-size scaling of the best squeezing near the critical points. Near the critical point, the squeezing gets an admixture of the antisqueezing, which limits the best squeezing achievable. The antisqueezing increases faster in the multilevel system than in the two-level system, as the critical point is approached. Therefore, the scaling of the best squeezing in a multilevel system is usually worse (\(\propto N^{-1/4}\)) than a two-level system (\(\propto N^{-1/3}\)).
We have focused on a specific level structure and type of initial conditions. However, there is still a large parameter space to explore the dynamics and squeezing generation of multilevel atoms. While our results hold for cases with a single ground and excited manifold, one might consider more general level structures with multiple hyperfine ground/excited manifolds, which could be relevant to alkali-metal atoms. These more general cases might show richer behaviors. We note that our formalism can be straightforwardly applied to these richer cases as well.
For the sake of simplicity, we only considered cases when the mean field dynamics starts at a stable stationary state. However, extending the analysis to more general situations where the mean-field dynamics is non-trivial could lead to more interesting steady states and phases. For example, quantum fluctuations may drive an initially unstable state towards a mixture of stable macroscopic steady states which could be entangled. Furthermore, the large number of steady-states and unstable regions anticipates a rich phase diagram with superradiant to normal transitions analogous to two-level atoms, as well as potentially other types of transitions such as superradiant to superradiant.
While we considered the generation of squeezing in a system with only coherent driving and collective emission of light, the cavity can also mediate elastic interactions between the atoms via exchange of photons [38; 39; 27]. The interplay between elastic interactions and the dissipation could be an interesting question for the evolution and finite-size scaling of the squeezing. The effects of other decoherence sources such as spontaneous emission or dephasing on the squeezing, as well as the effect of experimental details such as inhomogeneous couplings are also important questions to address in future work.
Finally, in Ref. [1] we showed that it is possible to prepare a squeezed state and rotate it into a state that is dark to emission on one polarization by taking advantage of the conserved quadratures. That analysis can be extended to the case of two polarizations, such that the four squeezed modes discussed here can be preserved in dark states. Furthermore, since atoms with many levels will contain many conserved quadratures (\(\ell-3\)), it is in principle possible to create squeezing, store it in a conserved quadrature, then create squeezing again and store it in the remaining conserved quadratures. This would allow the creation of multilevel spin states with many squeezed directions which might be useful for multi-parameter quantum sensing protocols [40].
|
2309.06175 | AKEM: Aligning Knowledge Base to Queries with Ensemble Model for Entity
Recognition and Linking | This paper presents a novel approach to address the Entity Recognition and
Linking Challenge at NLPCC 2015. The task involves extracting named entity
mentions from short search queries and linking them to entities within a
reference Chinese knowledge base. To tackle this problem, we first expand the
existing knowledge base and utilize external knowledge to identify candidate
entities, thereby improving the recall rate. Next, we extract features from the
candidate entities and utilize Support Vector Regression and Multiple Additive
Regression Tree as scoring functions to filter the results. Additionally, we
apply rules to further refine the results and enhance precision. Our method is
computationally efficient and achieves an F1 score of 0.535. | Di Lu, Zhongping Liang, Caixia Yuan, Xiaojie Wang | 2023-09-12T12:37:37Z | http://arxiv.org/abs/2309.06175v2 | # AKEM: Aligning Knowledge Base to Queries with Ensemble Model for Entity Recognition and Linking
###### Abstract
This paper presents a novel approach to address the Entity Recognition and Linking Challenge at NLPCC 2015. The task involves extracting named entity mentions from short search queries and linking them to entities within a reference Chinese knowledge base. To tackle this problem, we first expand the existing knowledge base and utilize external knowledge to identify candidate entities, thereby improving the recall rate. Next, we extract features from the candidate entities and utilize Support Vector Regression and Multiple Additive Regression Tree as scoring functions to filter the results. Additionally, we apply rules to further refine the results and enhance precision. Our method is computationally efficient and achieves an F1 score of 0.535.
## 1 Introduction
The aim of Entity Recognition [1; 2] and Linking [3; 4] in Chinese Search Queries is to evaluate the current advancements of techniques in aligning named entities in short search queries to entities in a reference Chinese knowledge base [5]. This task presents three main challenges.
The first challenge involves the basic tasks of Chinese natural language processing, such as Chinese word segmentation [6], POS tagging [7], and syntactic analysis [8].
The second challenge arises from the ambiguity and multiplicity of names: the same named entity string can occur in different contexts with different meanings (e.g., "苹果" can refer to a type of fruit or to Apple Inc.). Furthermore, the same named entity may be denoted by various strings, including abbreviations and full names.
The final challenge is that the queried named entity may not exist in the knowledge base at all. This situation necessitates the ability to comprehend semantics and make inferences. This means the system needs to be capable of understanding the context and meaning of the query, and then use that understanding to infer possible matches or related entities, even if the exact entity queried is not present in the knowledge base.
The Entity Recognition and Linking Task can be divided into two sequential subtasks:
1) Tagging mentions, which is a typical word segmentation and named entity recognition (NER) task.
2) Linking mentions to entities. Once mentions are recognized, the remaining difficulty lies in linking these mentions to entities and eliminating noisy candidate entities in cases where the mention can be linked with many candidate entities.
In this paper, we propose AKEM(Aligning Knowledge Base to Queries with Ensemble Model), which is designed to ensure high recall by initially recognizing all possible named entities at a relatively broad level. This is achieved by expanding the existing knowledge base and mining external knowledge to locate candidate entities. Subsequently, to ensure high precision, the method eliminates noisy entities through ensemble ranking and filtering rules.
## 2 Related Work
The concept of Entity Linking was first proposed by Paul McNamee and Hoa Trang Dang [9], marking a relatively new area of study within the field of natural language processing. The task primarily focuses on conducting entity extraction based on large-scale data and performing entity disambiguation.
Generally, a typical entity linking system comprises the following modules:
1. Query Extension Module: This leverages structured information from Wikipedia and reference background documents to extend the query and enrich its information content.
2. Candidate Entity Collection Generation Module: This identifies the set of candidate entities that may be linked to target entities.
3. Candidate Entity Collection Ranking Module: This employs an appropriate sorting algorithm to select the best matching candidate entity from the collection.
4. Non-KB-Entity Detection and Clustering Module: This detects entities that do not exist in the Knowledge Base (KB) and clusters their corresponding queries.
In the Entity Linking task, there isn't a significant difference in the query extension technology used across various systems. For entities that exist in the Knowledge Base (KB), the likelihood of a candidate entity matching the Gold Entity has reached a high level. Therefore, the critical step in Entity Linking is the candidate entity collection ranking module.
Common techniques employed in the candidate entity collection ranking module include:
1. The Unsupervised Ranking Method: this method relies on the similarity between the background document of the query and the document of the candidate entity in the Knowledge Base as the basis for sorting. The target entity is more likely to have greater similarity. The advantage of this method is its simplicity and strong operability. However, this method is mostly based on an unannotated corpus, without utilizing the information in the training corpus. The lack of parameter learning and threshold adjustment in this method leads to inadequate information, making it less satisfactory.
2. The Supervised Ranking Method: this method treats the entity in the query and its corresponding entity to be linked in the KB as a pair for classification, and establishes a classification model. For instance, the Listwise Learning to Rank model employs the supervised ranking method [10]. This method has the advantage of using the information in the training corpus and mining potential rules and information. However, it doesn't leverage some semantic information describing the target entity in the background document, thereby ignoring the role of the semantic information of target entities.
3. The Ranking Method Based on Graph Model: this method [11] constructs a global optimization model using all entities in the document. This graph model method is adopted to improve the candidate collection sorting process. However, this method is rarely used in Entity Linking tasks and is left for further research.
4. The Ranking Method Based on Information Retrieval or Rules: Nemeskey et al. [12] employ an information retrieval engine proposed by Daroczy [13] to sort the candidate entity set. This method retrieves the most relevant entity document of the KB in Wikipedia. However, as there are many other entities unrelated to the querying entity, using the whole document as a retrieval condition adds a lot of noisy information to the retrieval. Therefore, this method has significant limitations and does not perform well. Gao et al. [14] sort the candidate entity set using a rule-based method. In their system, they design specific rules for three different types of entities: Per, Org, and Gpe. However, due to the variability of test corpora, these hand-crafted rules may not apply universally, and it is unpredictable whether they will improve performance on a different test corpus. Therefore, this method has its limitations.
In this paper, we introduce a novel candidate entity collection ranking module that integrates multiple methods. Our module primarily employs a supervised learning method based on Support Vector Regression and Multiple Additive Regression Trees. Concurrently, it also utilizes the ranking method based on information retrieval and rules. Our method is computationally efficient and achieves an F1 score of 0.535.
## 3 Framework
The overall processing flow of our system is shown in Figure 1. As illustrated, AKEM has two phases. The first phase improves the recall rate by extending the existing Chinese knowledge base and using a search engine. The second phase removes noisy candidate entities by using SVR-MART joint scoring functions and a rule-based filter.
Details of these parts are given one by one in the following subsections.
### Improving Recall
In order to improve the recall rate, our model needs to recognize as many candidate entities as possible.
#### 3.1.1 Extending Knowledge Base
NLPCC 2015 [15] provides us with a reference Chinese Knowledge Base (KB). To fully leverage this KB, we need to expand it using the following methods:
1. Processing English Entities: The English entities in the Knowledge Base are in a unique format. For example, "Microsoft_Word" or "As_Long_As_You_Love_Me". Names in the Knowledge Base start with capital letters and are connected by underscores, complicating direct matching. Therefore, we need to remove underscores and convert the names to lowercase.
2. Removing Brackets: To identify more named entities in the KB, we remove brackets in entity names and split English names in the KB. For example, a new entity name obtained by removing the bracketed part is mapped to the original entity in the KB, and a new name obtained by splitting off the English portion of a bilingual entity name is mapped to the corresponding entity.
3. Establishing a Place Directory by Heuristic Inference: All entity names of places in the KB are full names. We can infer whether a place is a city, county, province, or village based on the last word of the entity name.
4. Entity Name Extension by Heuristic Method: Many entity names appearing in queries are not the common forms, but rather nicknames of entities. Some entities in the KB have object descriptions that reveal these nicknames, so regular expressions over the object descriptions can be used to extract the nicknames of entities in the KB.
#### 3.1.2 Searching For Candidate Entities
An entity ID may correspond to multiple mentions. When a query is input, our method first performs Chinese word segmentation. We search for each mention resulting from the word segmentation using the Baidu search engine, selecting the top 10 search results. We then match each of these search results with the entity names in the extended Knowledge Base. Each successful match and its corresponding entity ID are saved. This approach allows us to
Figure 1: Flowchart of AKEM. Initially, in Part I, we expand the Knowledge Base and conduct a search for a broader set of candidate entities. Following this, in Part II, we refine these results using a Scoring Function.
recognize a greater number of candidate entities, thereby improving the recall rate.
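As an illustration only, the candidate-retrieval loop described in this subsection can be sketched as follows. The names `segment_query`, `baidu_top10`, and `alias_to_ids` are placeholders introduced here (they do not come from the system itself) for the Chinese word segmenter, the top-10 search-engine lookup, and the extended-KB alias table:

```python
def find_candidate_entities(query, alias_to_ids, segment_query, baidu_top10):
    """Collect candidate entity IDs for each mention of one search query.

    alias_to_ids : dict mapping an entity name/alias in the extended KB to a
                   set of entity IDs.
    segment_query, baidu_top10 : placeholder callables for the word segmenter
                   and the search-engine lookup (top-10 result strings).
    """
    candidates = {}
    for mention in segment_query(query):
        matched_ids = set(alias_to_ids.get(mention, ()))   # direct KB match
        for result in baidu_top10(mention):                # search-engine expansion
            matched_ids |= set(alias_to_ids.get(result, ()))
        if matched_ids:
            candidates[mention] = matched_ids               # keep every successful match
    return candidates
```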
### Filtering the Results
After identifying as many candidate entities as possible to enhance the recall rate, it becomes necessary to eliminate noisy entities to improve the precision rate. Initially, we extract features using a template method and our defined formulas. Subsequently, we employ scoring functions based on Support Vector Regression and Multiple Additive Regression Trees to filter out noisy entities. Finally, we apply specific rules to select the appropriate entities. In this way, we first utilize a statistical method followed by a rule-based method to effectively filter out noisy entities.
#### 3.2.1 Feature Extraction
We extract features for each mention-candidate entity pair, employing the following selection process for these features.
1. The similarity between the query and the object description of the candidate entity is considered. We define this similarity, denoted as \(Similarity_{1}(query,object)\), as follows: \[Similarity_{1}(query,object)=\frac{2\cdot c_{1}}{l_{1}+l_{2}} \tag{1}\] In the aforementioned function, \(c_{1}\) represents the count of identical characters between the query and the object description of the candidate entity. \(l_{1}\) denotes the character count in the query, while \(l_{2}\) signifies the character count in the object description.
2. Whether the mention of the candidate entity exclusively contains numerical characters (0-9).
3. Whether the mention of the candidate entity contains numerical characters (0-9).
4. Whether the mention of the candidate entity exclusively contains letters, either lower case (a-z) or upper case (A-Z).
5. Whether the mention of the candidate entity contains letters, either lower case (a-z) or upper case (A-Z).
6. Whether the candidate entity name is a substring of the query.
7. Whether the query is a substring of the candidate entity name.
8. The maximum similarity between each word in the object description of the current mention and the other mentions in a query is calculated. Word embeddings are used to represent a mention or a word. The Word2Vec model is employed to train the word embeddings, using the existing Knowledge Base provided by NLPCC 2015 as training data, and cosine similarity is used to compare a word with a mention (a minimal implementation of features 1 and 8 is sketched after this list). \[f_{8}(\mathbf{m}_{i})=\max_{k\neq i,\,j}Similarity_{2}(\mathbf{w}_{ij},\mathbf{m}_{k}),\qquad Similarity_{2}(\mathbf{w}_{ij},\mathbf{m}_{k})=\frac{\mathbf{w}_{ij}\cdot\mathbf{m}_{k}}{||\mathbf{w}_{ij}||\cdot||\mathbf{m}_{k}||}. \tag{2}\] In the function above, \(f_{8}(\mathbf{m}_{i})\) represents the feature value of this dimension for the i-th mention in a query. \(\mathbf{w}_{ij}\) refers to the word embedding of the j-th word in the object description of the i-th mention. \(\mathbf{m}_{k}\) is the word embedding of the k-th mention in the query. If we are unable to obtain the word embedding of a word, the similarity value between this word and any other word is set to a default value of -1.
9. Whether there exists another mention in the query that is a substring of the object description of the candidate entity.
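The two formula-based features above (features 1 and 8) can be illustrated with the short sketch below. It assumes embeddings are plain NumPy vectors and reads \(c_{1}\) in Eq. (1) as a multiset character overlap; both are our own illustrative choices rather than the exact implementation used in AKEM.

```python
import numpy as np

def similarity_1(query, object_desc):
    """Feature 1, Eq. (1): 2*c1 / (l1 + l2), with c1 read as character overlap."""
    c1 = sum(min(query.count(ch), object_desc.count(ch)) for ch in set(query))
    return 2.0 * c1 / (len(query) + len(object_desc))

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def feature_8(i, mention_vecs, desc_word_vecs):
    """Feature 8, Eq. (2): max similarity between the description words of
    mention i and the embeddings of the other mentions in the query.
    Missing embeddings are encoded as None and contribute the default -1."""
    best = -1.0
    for k, m_vec in enumerate(mention_vecs):
        if k == i or m_vec is None:
            continue
        for w_vec in desc_word_vecs[i]:
            if w_vec is not None:
                best = max(best, cosine(w_vec, m_vec))
    return best
```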
#### 3.2.2 Statistical-Based Filtering
After obtaining the candidate entities of a query and the feature vector for each candidate entity, our method utilizes statistical methods to construct a scoring function that filters out noisy entities. We carry out these processes sequentially:
**Training Set.** The training set is derived from the 159 queries and their corresponding entities provided by NLPCC 2015. We employ our method to identify candidate entities for each query and compute the feature vector for each candidate entity. If a candidate entity \(e_{i}\) matches the standard entity list of the query, we tag \(e_{i}\) with 1. Otherwise, we tag \(e_{i}\) with 0.
\[y_{i}=\begin{cases}1,&\text{if candidate entity $e_{i}$ matches the standard entity list of the query}\\ 0,&\text{otherwise}\end{cases}\]
In this manner, we can construct the training set:
\(\{(x_{1},y_{1}),\dots,(x_{n},y_{n})\}\subseteq X\times[0.0,1.0]\). Here,
\(x_{i}\) represents the feature vector of \(e_{i}\), and \(y_{i}\) represents the tag of \(e_{i}\).
**Support Vector Regression(SVR).** Drawing on the principles of Support Vector Machines [16; 17], Support Vector Regression [18; 19; 20] is a well-established method for identification, estimation, and prediction. We utilize the tagged training set to train a SVR model. Subsequently, this model is applied to score each candidate entity, facilitating an effective filtering process. Let \(f_{SVR}(\mathbf{x_{e}})\) represent the score assigned by the SVR model to a candidate entity \(e\), and \(\mathbf{x_{e}}\) be the feature vector of the candidate entity \(e\). Then, \(f_{\text{SVR}}(\mathbf{x_{e}})\in[0.0,1.0]\).
**Multiple Additive Regression Tree(MART).**_Multiple Additive Regression Trees_[21; 22] is an ensemble model of boosted regression trees, known for delivering high prediction accuracy across diverse tasks and widely used in practice. We train a MART model using the tagged training set. This model is then employed to score each candidate entity that is to be filtered. Let \(f_{MART}(\mathbf{x_{e}})\) represent the score assigned by the MART model to a candidate entity \(e\). Then, \(f_{\text{MART}}(\mathbf{x_{e}})\in[0.0,1.0]\).
**Scoring function.** We define the scoring function \(Score(\mathbf{x_{e}})\) as follow:
\[Score(\mathbf{x_{e}})=\frac{f_{SVR}(\mathbf{x_{e}})+f_{MART}(\mathbf{x_{e}})} {2}. \tag{3}\]
In the scoring function mentioned above, \(\mathbf{x_{e}}\) represents the feature vector of the candidate entity \(e\), and \(Score(\mathbf{x_{e}})\in[0.0,1.0]\).
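To make the ensemble concrete, the sketch below trains the two regressors on the tagged training set and averages their scores as in Eq. (3). It uses scikit-learn's `SVR` and `GradientBoostingRegressor` (the latter as a gradient-boosted-tree stand-in for MART) with default hyperparameters, which is an assumption on our part; the clipping step is added only because neither regressor is bounded to \([0,1]\) by construction.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor

def train_scorers(X_train, y_train):
    """Fit both regressors on (feature vector, 0/1 tag) pairs."""
    svr = SVR().fit(X_train, y_train)
    mart = GradientBoostingRegressor().fit(X_train, y_train)
    return svr, mart

def score(svr, mart, X):
    """Eq. (3): average of the two model outputs, clipped into [0, 1]."""
    s = 0.5 * (svr.predict(X) + mart.predict(X))
    return np.clip(s, 0.0, 1.0)
```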
**Filtering.** Each mention identified in a query may correspond to one or more candidate entities. To eliminate noisy entities, we use a threshold \(\alpha\) to filter the candidate entities for each mention. Let \(L\) represent the list of candidate entities for a mention \(m\). We sort the candidate entities in \(L\) in descending order based on their scores. For each mention in a query, we select the top \(k\) candidate entities (if the number of candidate entities is less than \(k\), we select all candidate entities). This results in a candidate list \(L_{k}\) for further filtering. Next, we select the top \(l\) candidates in \(L_{k}\) for each mention. Here, \(l\leq k\). We determine \(l\) using the following equation:
\[l=\min\left\{j\leq k:\sum_{i=1}^{j}Score(e_{i})\geq\alpha\right\} \tag{4}\]
In the equation above, we begin by adding up the scores of candidate entities in \(L_{k}\), starting with the highest score. Once the sum reaches the threshold \(\alpha\), we retain the candidate entities that have been included in the calculation and remove the others. If \(\sum_{i=1}^{k}Score(e_{i})<\alpha\), it indicates that all candidate entities in \(L_{k}\) are unqualified, and we therefore remove all candidate entities in \(L_{k}\).
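The selection rule of Eq. (4) keeps the smallest prefix of the top-\(k\) candidates whose cumulative score reaches \(\alpha\), and drops the mention's candidates entirely when even the full prefix falls short; a direct sketch (default values follow Section 4):

```python
def filter_candidates(scored_candidates, k=3, alpha=0.3):
    """Apply the cumulative-score rule of Eq. (4) to one mention.

    scored_candidates : list of (entity_id, score) pairs for the mention.
    Returns the retained entity IDs (possibly empty).
    """
    top_k = sorted(scored_candidates, key=lambda c: c[1], reverse=True)[:k]
    kept, total = [], 0.0
    for entity_id, s in top_k:
        kept.append(entity_id)
        total += s
        if total >= alpha:
            return kept          # threshold reached: keep the entities summed so far
    return []                    # sum of top-k scores below alpha: remove them all
```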
#### 3.2.3 Rule-Based Filtering
The final stage of our method relies on the results filtered by the scoring function. We implement the following rules to eliminate noisy entities:
1. If an English string is divided into several candidate entities, these candidates are removed.
2. Single Chinese character candidates are filtered out.
3. If multiple mentions in a query link to a single candidate entity, we use formula (1) to select the mention most similar to the candidate entity as the entity mention.
4. If a mention in a query can link to more than one candidate entity, and one of the candidates is identical to the mention, we select that candidate entity and remove the others.
## 4 Experiments
We utilize 159 labeled queries, provided by NLPCC 2015, as training data to train the SVR and MART models, which we use as scoring functions. We then apply our model to a set of 3849 queries in order to evaluate our method. We set the threshold \(\alpha=0.3\) and \(k=3\). The performance of our method on the NLPCC 2015 evaluation dataset is presented in Table 1.
In the evaluation results, AKEM significantly outperforms the average scores in terms of Precision, Recall, Link-F1, and Average-F1. Specifically, our method exceeds the average Recall score
| Team | Precision | Recall | Link-F1 | Average-F1 |
| --- | --- | --- | --- | --- |
| Our team | 0.480 | **0.656** | 0.555 | **0.535** |
| Average | 0.450 | 0.497 | 0.458 | 0.423 |

Table 1: Evaluation results in the NLPCC 2015 NEL task.
by 15.9% and surpasses the average Average-F1 score by 11.2%. In terms of final rankings, our method secures the fourth position among all participating teams in Recall rate, Link-F1 score, and Average-F1 score. This demonstrates the competitive performance of our approach relative to the other teams.
## 5 Conclusion
We aim to construct an experimental framework, AKEM, using a reference Chinese knowledge base. This framework is developed and applied to the Entity Recognition and Linking task of NLPCC 2015. Initially, the framework recognizes as many candidate entities as possible by extending the knowledge base and utilizing a search engine, thereby increasing the recall rate. Subsequently, noisy candidate entities are eliminated through feature extraction, SVR-MART filtering, and rule-based filtering.
Our AKEM method exhibits a relatively strong performance in the Entity Recognition and Linking task during NLPCC 2015. It ranks fourth among all teams in terms of recall rate, Link-F1 score, and Average-F1 score.
## Acknowledgements
This paper was completed during the NLPCC 2015 Shared Task: Entity Recognition and Linking in Search Queries. As a participating team, we competed in the Entity Linking task of NLPCC 2015. The evaluation results were officially assessed and announced by the NLPCC 2015 committee, and the task data copyright belongs to NLPCC 2015.
|
2309.09246 | Image-level supervision and self-training for transformer-based
cross-modality tumor segmentation | Deep neural networks are commonly used for automated medical image
segmentation, but models will frequently struggle to generalize well across
different imaging modalities. This issue is particularly problematic due to the
limited availability of annotated data, making it difficult to deploy these
models on a larger scale. To overcome these challenges, we propose a new
semi-supervised training strategy called MoDATTS. Our approach is designed for
accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets. An
image-to-image translation strategy between imaging modalities is used to
produce annotated pseudo-target volumes and improve generalization to the
unannotated target modality. We also use powerful vision transformer
architectures and introduce an iterative self-training procedure to further
close the domain gap between modalities. MoDATTS additionally allows the
possibility to extend the training to unannotated target data by exploiting
image-level labels with an unsupervised objective that encourages the model to
perform 3D diseased-to-healthy translation by disentangling tumors from the
background. The proposed model achieves superior performance compared to other
methods from participating teams in the CrossMoDA 2022 challenge, as evidenced
by its reported top Dice score of 0.87+/-0.04 for the VS segmentation. MoDATTS
also yields consistent improvements in Dice scores over baselines on a
cross-modality brain tumor segmentation task composed of four different
contrasts from the BraTS 2020 challenge dataset, where 95% of a target
supervised model performance is reached. We report that 99% and 100% of this
maximum performance can be attained if 20% and 50% of the target data is
additionally annotated, which further demonstrates that MoDATTS can be
leveraged to reduce the annotation burden. | Malo de Boisredon, Eugene Vorontsov, William Trung Le, Samuel Kadoury | 2023-09-17T11:50:12Z | http://arxiv.org/abs/2309.09246v1 | # Image-level supervision and self-training for transformer-based cross-modality tumor segmentation
###### Abstract
Deep neural networks are commonly used for automated medical image segmentation, but models will frequently struggle to generalize well across different imaging modalities. This issue is particularly problematic due to the limited availability of annotated data, making it difficult to deploy these models on a larger scale. To overcome these challenges, we propose a new semi-supervised training strategy called MoDATTS. Our approach is designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets. An image-to-image translation strategy between imaging modalities is used to produce annotated pseudo-target volumes and improve generalization to the unannotated target modality. We also use powerful vision transformer architectures and introduce an iterative self-training procedure to further close the domain gap between modalities. MoDATTS additionally allows the possibility to extend the training to unannotated target data by exploiting image-level labels with an unsupervised objective that encourages the model to perform 3D diseased-to-healthy translation by disentangling tumors from the background. The proposed model achieves superior performance compared to other methods from participating teams in the CrossMoDA 2022 challenge, as evidenced by its reported top Dice score of \(0.87\pm 0.04\) for the VS segmentation. MoDATTS also yields consistent improvements in Dice scores over baselines on a cross-modality brain tumor segmentation task composed of four different contrasts from the BraTS 2020 challenge dataset, where 95% of a target supervised model performance is reached. We report that 99% and 100% of this maximum performance can be attained if 20% and 50% of the target data is additionally annotated, which further demonstrates that MoDATTS can be leveraged to reduce the annotation burden.
Msc: 41A05, 41A10, 65D05, 65D17
Tumor Segmentation, Semi-supervised Learning, Domain adaptation, Self-training
## 1 Introduction
Deep learning has shown outstanding performance and potential in various medical image analysis applications (Chen et al., 2022). Notably, it has been successfully leveraged in medical image segmentation, showing equivalent accuracy to manual expert annotations (Minaee et al., 2021). However, these breakthroughs are tempered by the issue of performance degradation when models face data from an unseen domain (Torralba and Efros, 2011). This problem is particularly important in medical imaging, where distribution shifts are common. Annotating data from all domains would be inefficient and intractable, notably in image segmentation where expert pixel-level labels are expensive and difficult to produce (Prevedello et al., 2019). Building models that can generalize well across domains without any additional
annotations is thus a challenge that needs to be addressed. Specifically, cross-modality generalization is a key contribution towards the reduction of the data dependency and the usability of deep neural networks at a larger scale. Applications of such models are manifold, as it is common that one imaging modality lacks annotated training examples. For instance, acquisition of contrast-enhanced T1-weighted (T1ce) MR images is the most commonly used protocol for Vestibular Schwannoma (VS) detection. Accurate diagnosis and delineation of VS is of considerable importance to avoid boundless tumor growth, which can lead to irreversible hearing loss. However, in order to reduce scan time in T1ce imaging and alleviate the risks associated with the use of Gadolinium contrast agent, high resolution T2-weighted (hrT2) has recently gained popularity in clinical workflows (Dang et al., 2020). Existing annotated T1ce databases can thus be leveraged to address the lack of training data for VS segmentation on hrT2 images. Furthermore, such models could be used for anomaly detection across modalities and pathologies, if the associated lesions show similar patterns. An example is to use pixel-level annotations of brain gliomas in MRIs to learn the distribution of intraparenchymal hemorrhages on CT scans (Dong et al., 2022).
Recently, a method to segment images from a wide range of contrasts without any retraining or fine-tuning was proposed by Billot et al. (2023). Using a generative approach conditioned on segmentations they synthetically generate images of random contrasts and resolutions, used at a later stage to train a segmentation network robust to highly heterogeneous data. However, although this domain randomisation strategy demonstrates improved generalization capability for brain parcellation, the model performance when exposed to tumours and pathologies was not quantified. More commonly, the key challenge of cross-modality generalization can be tackled through unsupervised domain adaptative (UDA) methods, which aim at leveraging the information learned from a "source" domain with abundant labeled data to improve the performance of a model on a "target" domain where labeled data is scarce or unavailable (Wang and Deng, 2018). In medical imaging applications, several UDA strategies are based on feature space alignment and have been widely adopted for cross-modality organ segmentation (e.g. delineation of cardiac structures (Dou et al., 2019; Wu and Zhuang, 2020, 2021)). More widespread UDA models are based on generative strategies and tackle the issue by teaching the model to perform image-to-image translations (Hoyez et al., 2022) between modalities. Through adversarial training, annotated synthetic pseudo-target images can be generated from annotated source modality images and used to train a segmentation network. These methods demonstrate satisfactory results but solely rely on pixel-level annotations for source modality images. Furthermore, due to dataset and training resource constraints, these end-to-end models tend to be limited to 2D. While modality translation can be performed in 2D without performance loss, segmentation tasks highly benefit from computations on 3D volumes rather than 2D slices.
Hence we propose in this paper MoDATTS, a new **M**odality **D**omain **A**daptation **T**ransformer-based pipeline for **T**umor Segmentation which aims at bridging the gap between a partially annotated source modality and an unannotated target modality. As illustrated in Fig. 1, our model comprises two stages for training. In the first stage a 2D network is taught to translate between imaging modalities, to eventually generate pseudo-target images from the source brain volumes. The translation generators are bounded to preserve the tumor information during the modality transfer by sharing the latent representations with segmentation decoders (see Fig. 2). The resultant annotated synthesized target images are then used in a second stage to teach a 3D network to perform the segmentation task (see Fig. 3). To alleviate the need for source annotations and extend the training to original target images, we incorporate an unsupervised anomaly detection objective on the target modality. This is done by leveraging a 2D generative strategy (GenSeg) that uses image-level "diseased" or "healthy" labels for semi-supervised segmentation (Vorontsov et al., 2022). Similarly to low-rank atlas based methods (Liu et al., 2015; Lin et al., 2019; Changfa et al., 2021) the model is taught to find and remove the lesions, which acts as a guide for the segmentation. An iterative self-training procedure is also implemented to further close the gap between source and target modalities. Finally, MoDATTS leverages powerful vision transformer architectures to enhance the modality translation and segmentation.
Considering the challenge of cross-modality tumor segmentation, our main contributions are as follow :
1. We propose a tumor-aware modality translation training procedure that can accurately retain the shape of lesions.
2. We develop a 3D segmentation network that can leverage volumes known to be healthy, and explore its potential for unsupervised tumor delineation on cross-modality segmentation tasks.
3. We build our domain adaptation framework with effective vision transformer architectures.
4. The proposed model has the ability to augment the training set using pseudo-labeling and self-training mechanisms.
MoDATTS is evaluated on two distinct cross-modality tumor segmentation tasks: (i) a customized version of the BraTS 2020 dataset (Menze et al., 2015; Bakas et al., 2017, 2018), where each of the four contrast sequences (T1, T2, T1ce, and FLAIR) were treated as separate modalities, and (ii) the CrossMoDA 2022 data challenge (Shapey et al., 2021; Dorent et al., 2023). We demonstrate that our model can better generalize than other state-of-the-art methods to the target modality and yield robust performance even with few source modality annotations.
## 2 Related Works
### Unsupervised domain adaptation
Domain adaptation has emerged as a popular solution to address the common issue of domain shifts and heterogeneity in medical imaging. By minimizing distribution differences between related but different domains, UDA methods facilitate the use of machine learning models across varied medical
image datasets (Guan and Liu, 2022). Latent space alignment strategies have been widely adopted in different applications to deal with heterogeneity between sets of images from different centers (e.g. knee tissue segmentation (Panfilov et al., 2019) or mass detection on mammograms (Shen et al., 2020)) or with different modalities (e.g. cardiac structures segmentation on MRI using CT scans (Wu and Zhuang, 2021)). In these models, an encoder is trained to learn modality invariant representations of the images either through divergence minimization of the feature distributions or adversarial training on the latent spaces. A segmentation decoder trained on annotated source data is then bounded to produce consistent segmentation maps for the target images. For tumor segmentation tasks, generative approaches based on cross-modality translation are more frequent and will be reviewed in next section.
To alleviate the need for source data availability during the adaptation stage, some source-free domain adaptation methods have been developed. Using a source segmentation model, Yang et al. (2022) proposed to transform target images into high-quality source-like images with batch norm constraints. As low-frequency components in the Fourier domain can represent style information, refinement of the generated images is achieved with the mutual Fourier Transform. Feature-level and output-level alignment is then performed based on the generated paired source-like and target images. Liu et al. (2021) used a pre-trained tumor segmentation model on source T2-weighted MRI brain images and fine-tuned its parameters on the target domain (T1, T1-weighted or FLAIR) by explicitly enforcing high order batch statistics consistency and minimizing the self-entropy of predictions on the target distribution. The adaptation phase proposed by Bateson et al. (2022) involved minimizing a loss function that incorporates the Shannon entropy of predictions and a prior based on the class-ratio in the target domain. However, these approaches underperform in comparison to state-of-the-art generative methods and often relies on image-level labels incurring substantial annotation costs (Bateson et al., 2022).
### Style transfer and cross-modality segmentation
Style transfer neural networks, which involve transferring the visual appearance (or style) of one image to another while preserving the content of the latter, were first introduced by Gatys et al. (2016). Their approach enables the generation of novel images with high perceptual quality that combine the content of any given photograph with the visual style of various well-known artworks. Such models can be leveraged for domain adaptation purposes in medical image applications by generating synthetic images in the target domain to supervise a target modality segmentation model. Notably, the CycleGAN model proposed by Zhu et al. (2017) became the standard for transfers between imaging modalities. CycleGAN is an unpaired bidirectional image translation network based on generative adversarial training, and preserves content specific information through cyclic pixel-level reconstruction constraints. Several works proposed a domain adaptation framework based on a CycleGAN-like approach to perform modality translation (Zhang et al., 2018; Huo et al., 2019; Chen et al., 2019; Jiang et al., 2020; Li et al., 2021; Zhou et al., 2021). Segmentation is jointly trained in an end-to-end manner on the labeled synthetic target images translated from the annotated source modality. Alternatively modality translation can be combined with latent space alignment to further regularize the model. Pei et al. (2021) retained the principle of cyclic modality translations but proposed to jointly disentangle the domain specific and domain invariant features between each modality and train a segmenter on top of the domain invariant features. Similarly, Ouyang et al. (2019) proposed a VAE-based feature prior matching mechanism to learn domain invariant features while training for modality translation and segmentation.
Note that in these methods the modality translation networks are able to maintain the structures of interest (e.g. the tumours) by integrating features from the segmentation network. Due to memory constraints, performing segmentation end-to-end with modality translation on full 3D volumes is not tractable. Thus,
Fig. 1: Overview of MoDATTS. Stage 1 (green) consists in generating realistic target images from source data by training cyclic cross-modality translation. In stage 2 (blue), segmentation is trained in a semi-supervised approach on a combination of synthetic and original target modality images. Finally, pseudo-labeling is performed and the segmentation model is refined through several self-training iterations.
performing domain adaptation in a two-stage manner with 2D tumor-aware modality translation followed by 3D segmentation is an adequate setting.
### Self-training
Self-training is a semi-supervised learning technique with pseudo-labeling. A teacher model trained on labeled data is used to produce pseudo-labels on unlabeled data. Only pseudo-labels that the model predicts with a high probability are retained. This process can be iterated on the expanded label set, generating additional pseudo-labels. Augmenting the training set with pseudo-labeled examples increases the model's robustness towards out-of-distribution data (Xie et al., 2020). It was shown to have great potential in leveraging unlabeled data in semantic segmentation applications (Zou et al., 2021; Zhu et al., 2021). It is also a suitable candidate for improvements in domain adaptation tasks (Zou et al., 2018; Kumar et al., 2020; Liu et al., 2021; Yu et al., 2021). Self-training was introduced for cross-modality segmentation in the context of the CrossMoDA 2021 domain adaptation challenge (Shin et al., 2021). In the reiteration of the challenge in 2022, self-training was a core strategy among the top ranked methods (Kang et al., 2023; Salle et al., 2023).
## 3 Methods
Let us consider the scenario where we have a set of images \(X_{T}\) without pixel-level tumor annotations for a "target" modality T. The objective of this work is to learn consistent segmentations on the target data using a second set of images \(X_{S}\) of the "source" modality S, that is partially or totally annotated with labels \(Y_{S}\). Note that the datasets are considered to be unpaired.
### Tumor-aware cross-modality translation
The first phase of our model consists of augmenting the source images into realistic pseudo-target images, so that the pixel-level annotations available in the source modality can be reused to train a segmentation network on the target modality. Based on the CycleGAN model (Zhu et al., 2017), we perform modality translations via two distinct encoder-decoder networks (see Fig. 2). Encoders \(E_{S}\) and \(E_{T}\) are used to encode source and target modality images, respectively. Combined with \(E_{S}\), a decoder \(G_{T}\) enables performing S\(\rightarrow\)T modality translation, while \(E_{T}\) and a second decoder \(G_{S}\) perform the T\(\rightarrow\)S modality translation. To preserve the anatomical contents, the model is forced to reconstruct the original images after mapping back to the original modality. This is referred to as cycle-consistency. We denote by \(X_{S^{*}}=G_{T}\circ E_{S}(X_{S})\) and \(X_{S^{**}}=G_{S}\circ E_{T}(X_{S^{*}})\), respectively, the translation and the reconstruction of \(X_{S}\) in the S\(\rightarrow\)T\(\rightarrow\)S translation loop. Similarly we have \(X_{T^{\prime}}=G_{S}\circ E_{T}(X_{T})\) and \(X_{T^{\prime\prime}}=G_{T}\circ E_{S}(X_{T^{\prime}})\) for the T\(\rightarrow\)S\(\rightarrow\)T cycle. We specify that \(\circ\) is the composition operation.
Because image reconstruction from cycle-consistency is imperfect in practice (Cohen et al., 2018), the model is guided to specifically retain detailed geometrical tumor structures by incorporating two segmentation decoders \(G_{seg}^{S}\) and \(G_{seg}^{T}\). This has shown to be efficient for two-stage domain adaptation methods (Shin et al., 2022). The segmentation decoders share the same latent input representation as the modality decoders, which constrains the encoders to learn features that encompass the tumors information. For each annotated source image, the model outputs a segmentation map of the original image (\(\hat{Y}_{S}=G_{seg}^{S}\circ E_{S}(X_{S})\)) and the image's translation to the target domain (\(\hat{Y}_{S^{*}}=G_{seg}^{T}\circ E_{T}(X_{S^{*}})\)).
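To make the two translation loops concrete, the sketch below wires up the encoders, modality decoders and segmentation decoders named above. The backbone layers are deliberately tiny placeholders (the actual networks are TransUnet-based, see Sec. 4.1.2), and the module internals, channel counts and single-channel inputs are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

# Placeholder backbones; the paper uses TransUnet-style encoders/decoders (Sec. 4.1.2).
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, out_act):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
        self.out_act = out_act
    def forward(self, z):
        return self.out_act(self.net(z))

E_S, E_T = Encoder(), Encoder()                                   # modality encoders
G_S, G_T = Decoder(nn.Tanh()), Decoder(nn.Tanh())                 # translation decoders
G_seg_S, G_seg_T = Decoder(nn.Sigmoid()), Decoder(nn.Sigmoid())   # segmentation decoders

def s_to_t_to_s_cycle(x_s):
    """S -> T -> S loop: translation, reconstruction and the two tumor predictions."""
    x_s_star = G_T(E_S(x_s))              # pseudo-target image X_{S*}
    x_s_rec  = G_S(E_T(x_s_star))         # reconstruction X_{S**} used by the cycle loss
    y_hat_s      = G_seg_S(E_S(x_s))          # segmentation of the original source image
    y_hat_s_star = G_seg_T(E_T(x_s_star))     # segmentation of its translation
    return x_s_star, x_s_rec, y_hat_s, y_hat_s_star

def t_to_s_to_t_cycle(x_t):
    """T -> S -> T loop (no segmentation head: target annotations are unavailable)."""
    x_t_prime = G_S(E_T(x_t))
    x_t_rec   = G_T(E_S(x_t_prime))
    return x_t_prime, x_t_rec
```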
The loss function for this stage is therefore composed of three terms:
1. An **adversarial objective** based on the hinge loss that aims at discriminating between real and generated images of the same modality: \[\begin{split}\mathcal{L}_{adv}^{mod}=&\sum_{m\in \{S,T\}}\min_{G}\max_{D}\mathbb{E}_{X_{m}}\left(\min\left(0,D_{m}(X_{m})-1 \right)\right)\\ &-\mathbb{E}_{\hat{X}_{m}}\left(\min\left(0,-D_{m}(\hat{X}_{m})- 1\right)\right)-\mathbb{E}_{\hat{X}_{m}}D_{m}(\hat{X}_{m})\end{split}\] (1)
2. A **reconstruction loss** enforcing cycle consistency: \[\mathcal{L}_{cyc}=\|X_{S}-X_{S^{**}}\|_{1}+\|X_{T}-X_{T^{\prime\prime}}\|_{1}\] (2)
3. A **segmentation objective**, based on a differentiable soft Dice loss like in Drozdzal et al. (2016): \[\mathcal{L}_{seg}^{mod}=Dice\left(Y_{S},\hat{Y}_{S}\right)+Dice\left(Y_{S}, \hat{Y}_{S^{*}}\right)\] (3)
The overall translation loss \(\mathcal{L}_{trans}\) is a weighted sum of the aforementioned terms:
\[\mathcal{L}_{trans}=\lambda_{seg}^{mod}\mathcal{L}_{seg}^{mod}+\lambda_{adv}^{ mod}\mathcal{L}_{adv}^{mod}+\lambda_{cyc}\mathcal{L}_{cyc} \tag{4}\]
Note that to facilitate the hyper-parameter search, weights are normalized so that their sum always equals 1.
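A compact sketch of how the three terms of Eq. 4 can be assembled is given below. The hinge-loss helpers follow the usual real/fake hinge formulation as a stand-in for Eq. 1, the default weights are the values reported later in Sec. 4.1.2, and the function names and batch reductions are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(y_true, y_prob, eps=1e-6):
    """Differentiable soft Dice loss (Eq. 3 / Eq. 5), computed over the whole batch."""
    inter = (y_true * y_prob).sum()
    return 1.0 - (2.0 * inter + eps) / (y_true.sum() + y_prob.sum() + eps)

def hinge_d_loss(d_real, d_fake):
    """Hinge loss for a discriminator, given its scores on real and generated images."""
    return (F.relu(1.0 - d_real) + F.relu(1.0 + d_fake)).mean()

def hinge_g_loss(d_fake):
    """Hinge loss for the generator side: push the discriminator score of fakes up."""
    return -d_fake.mean()

def translation_loss(losses, lam_seg=1.0, lam_adv=1.0, lam_cyc=10.0):
    """Weighted sum of Eq. 4, with the weights normalized so that they sum to 1."""
    total = lam_seg + lam_adv + lam_cyc
    lam_seg, lam_adv, lam_cyc = lam_seg / total, lam_adv / total, lam_cyc / total
    return lam_seg * losses["seg"] + lam_adv * losses["adv"] + lam_cyc * losses["cyc"]
```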
### Target modality segmentation
**Supervision from pseudo-target data**. Once modality translation is learned, we obtain a dataset of pseudo-target images \(X_{pT}\) and their corresponding pixel-level annotations \(Y_{pT}\). Like in state-of-the-art methods, prior to self-training iterations we train a 3D segmentation network by teaching an encoder \(E\) and a decoder \(G_{seg}\) the segmentation task on this synthetic data. Based on images \(X_{pT}\), we predict segmentation maps \(\hat{Y}_{pT}=G_{seg}\circ E(X_{pT})\) that can be compared to the ground-truths \(Y_{pT}\) for model optimization. The corresponding loss function is defined as:
\[\mathcal{L}_{seg}^{pT}=Dice\left(Y_{pT},\hat{Y}_{pT}\right) \tag{5}\]
**Semi-supervised segmentation**. Unlike prior methods that simply train the segmentation network on the annotated pseudo-target images before performing self-training, we propose a semi-supervised approach. By using the GenSeg training strategy (Vorontsov et al., 2022), we believe the model can better fit the target modality distribution than with only supervision from the pseudo-target data which may still suffer from a small distribution shift. This also allows us to model
relevant tumor representations even when only a few source images have pixel-level annotations.
To localize lesions, we use image-level labels that describe whether an image contains a lesion or not. These "diseased" and "healthy" labels can be efficiently leveraged by a generative model by translating between presence and absence (of tumor lesions) domains, referred to as P and A. In this setup, we seek to separate the information that is shared between the two domains (A and P) from the information that is specific to domain P (that is, separate out the lesions). We therefore divide the latent representation of each image into two distinct codes: \(c\) and \(u\). The common code \(c\) contains information that is inherent to both domains, such as organs and other structures, while the unique code \(u\) stores features specific to domain P, such as tumor shapes and localization.
_Presence to absence translation_. Given an original image of the target modality in the presence domain \(X_{P}\), we use the encoder \(E\) to compute its latent representation \([c_{P},u_{P}]\). A common decoder \(G_{com}\) interprets the common code \(c_{P}\) and generates a healthy version \(X_{PA}\) of that image by removing the apparent tumor region. At the same time, a residual decoder \(G_{res}\) employs both common and unique codes to produce a residual image \(\Delta_{PP}\), which represents the additive modification required to shift the generated healthy image back to the presence domain. In other words, the residual is the disentangled tumor that can be added to the generated healthy image to create a reconstruction \(X_{PP}\) of the initial diseased image:
\[X_{PA}=G_{com}(c_{P}), \tag{6}\]
\[\Delta_{PP}=G_{res}(c_{P},u_{P}), \tag{7}\]
\[X_{PP}=X_{PA}+\Delta_{PP}. \tag{8}\]
_Absence to presence translation_. In parallel, a similar process is implemented for images in the healthy domain. Given an original target image \(X_{A}\) in the absence domain A, a translated version in domain P is generated. Hence, a synthetic tumor \(\Delta_{AP}\) is created by sampling a code from a prior distribution \(\mathcal{N}(0,I)\) and substituting the encoded unique code for that image. The reconstruction \(X_{AA}\) of the original image in domain A and the synthetic diseased image \(X_{AP}\) in domain P are calculated from the encoded features \([c_{A},u_{A}]\) in the following manner:
\[X_{AA}=G_{com}(c_{A}), \tag{9}\]
\[X_{AP}=X_{AA}+G_{res}(c_{A},u\sim\mathcal{N}(0,I)). \tag{10}\]
A tumor can have various appearances, which means that translating from the absence to the presence domain requires a one-to-many mapping. For this reason, the unique code is replaced by a code sampled from a normal distribution \(\mathcal{N}(0,I)\). Each different sampled unique code can then be interpreted by the residual decoder as a different tumor. Additionally, we reconstruct the latent representations of the generated images in both translation directions to ensure that the information from the original image is preserved. Note that in the absence-to-presence direction this enforces the distribution of unique codes to match the prior \(\mathcal{N}(0,I)\). Indeed, we make \(u_{AP}\) match \(u\), where \(u_{AP}\) is obtained by encoding the fake diseased sample \(X_{AP}\) produced with random sample u. It is worth noting that translating from the absence to the presence domain indirectly augments the target modality data, which in turn improves the domain adaptation.
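The presence/absence translations of Eq. 6-10 can be sketched as below. Everything here is a toy stand-in: the encoder is a single convolution, both codes are kept at full resolution, and the channel split used to separate \(c\) from \(u\) is an assumption for illustration, not the Medformer-based architecture actually used (Sec. 4.1.3).

```python
import torch
import torch.nn as nn

class DiseaseTranslator(nn.Module):
    """Sketch of the presence/absence translation with common code c and unique code u."""
    def __init__(self, ch=32, u_dim=8):
        super().__init__()
        self.enc = nn.Conv3d(1, ch + u_dim, 3, padding=1)       # E : image -> [c, u]
        self.g_com = nn.Sequential(nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
                                   nn.Conv3d(ch, 1, 3, padding=1), nn.Tanh())   # G_com
        self.g_res = nn.Sequential(nn.Conv3d(ch + u_dim, ch, 3, padding=1), nn.ReLU(),
                                   nn.Conv3d(ch, 1, 3, padding=1), nn.Tanh())   # G_res
        self.ch = ch

    def encode(self, x):
        z = self.enc(x)
        return z[:, :self.ch], z[:, self.ch:]        # common code c, unique code u

    def p_to_a(self, x_p):
        """Diseased image -> healthy version, tumor residual and reconstruction (Eq. 6-8)."""
        c_p, u_p = self.encode(x_p)
        x_pa = self.g_com(c_p)
        delta_pp = self.g_res(torch.cat([c_p, u_p], dim=1))
        return x_pa, delta_pp, x_pa + delta_pp

    def a_to_p(self, x_a):
        """Healthy image -> reconstruction and synthetic diseased image (Eq. 9-10)."""
        c_a, u_a = self.encode(x_a)
        x_aa = self.g_com(c_a)
        u = torch.randn_like(u_a)                    # sample a unique code from N(0, I)
        x_ap = x_aa + self.g_res(torch.cat([c_a, u], dim=1))
        return x_aa, x_ap
```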
_Weight sharing_. We use a configuration where the segmentation decoder \(G_{seg}\) shares most of its weights with the residual decoder \(G_{res}\) and only differs from the latter by a distinct set of normalization parameters and the addition of a classifying layer. Therefore, through the Absence and Presence translations the segmentation decoder is implicitly learning how to disentangle the tumors from the background on original target modality samples. Additional supervision from the pseudo-target data is still required to teach the segmentation decoder how to transform the resulting residual representation into appropriate segmentation maps. However, the requirement
Figure 2: Overview of the proposed tumor-aware cross-modality translation. The model is trained to translate between modalities in a CycleGAN approach. \(G_{T}\circ E_{S}\) and \(G_{S}\circ E_{T}\) encoder-decoders respectively perform \(\mathrm{S}\rightarrow\mathrm{T}\) and \(\mathrm{T}\rightarrow\mathrm{S}\) modality translations. The same latent representations are shared with co-trained segmentation decoders \(G_{seg}^{S}\) and \(G_{seg}^{T}\) to preserve the semantic information related to the tumors. Note that the \(\mathrm{T}\rightarrow\mathrm{S}\rightarrow\mathrm{T}\) translation loop is not represented for readability. We specify that the latter does not yield segmentation predictions since we assume no annotations are provided for the target modality.
for source pixel-level annotations is reduced in comparison to usual domain adaptation methods.
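A minimal illustration of the weight-sharing scheme described above: one stack of convolutions serves both roles, with a separate set of normalization parameters and a separate output layer per role. Layer types, channel counts and the use of instance normalization are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class SharedResSegDecoder(nn.Module):
    """Shared residual/segmentation decoder: common convolution weights, two sets of
    normalization parameters and two output layers (tanh residual, sigmoid segmentation)."""
    def __init__(self, ch=32):
        super().__init__()
        self.conv1 = nn.Conv3d(ch, ch, 3, padding=1)     # weights shared by both roles
        self.conv2 = nn.Conv3d(ch, ch, 3, padding=1)
        self.norm = nn.ModuleDict({                       # distinct normalization per role
            "residual": nn.InstanceNorm3d(ch, affine=True),
            "segmentation": nn.InstanceNorm3d(ch, affine=True),
        })
        self.out_res = nn.Conv3d(ch, 1, 1)                # distinct output layers
        self.out_seg = nn.Conv3d(ch, 1, 1)

    def forward(self, z, mode):
        h = torch.relu(self.norm[mode](self.conv1(z)))
        h = torch.relu(self.norm[mode](self.conv2(h)))
        if mode == "residual":
            return torch.tanh(self.out_res(h))            # additive tumor residual
        return torch.sigmoid(self.out_seg(h))             # segmentation probability map
```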
_Loss function_. To enforce the diseased-healthy translation we rely on the three components described below. Note that the synthetic images \(X_{pT}\) are excluded from these terms:
1. A healthy-diseased translation **adversarial loss**. We build a hinge loss \(\mathcal{L}_{adv}^{gen}\) like in Eq. 1 aiming at discriminating between pairs of real/synthetic images of the same output domain i.e. \(X_{A}\) vs \(X_{PA}\) and \(X_{P}\) vs \(X_{AP}\).
2. A pixel-level **image reconstruction loss**\(\mathcal{L}_{rec}\) to regularize the translation between A and P domains : \[\mathcal{L}_{rec}=\|X_{AA}-X_{A}\|_{1}+\|X_{PP}-X_{P}\|_{1}\] (11)
3. A **latent code reconstruction loss**\(\mathcal{L}_{lat}\) that forces the model to preserve information, enforcing a one-to-one correspondence between latent codes and their corresponding images.
The image and latent code reconstruction losses together prevent mode collapse and make sure that when a tumor is added or removed, the background tissue remains the same. To train the 3D segmentation model, we define a global weighted sum \(\mathcal{L}_{Seg}^{init}\) that encompasses the diseased-healthy translation losses along with the synthetic supervision term \(\mathcal{L}_{seg}^{pT}\) (Eq. 5):
\[\mathcal{L}_{Seg}^{init}=\lambda_{adv}^{gen}\mathcal{L}_{adv}^{gen}+\lambda_{lat}\mathcal{L}_{lat}+\lambda_{rec}\mathcal{L}_{rec}+\lambda_{seg}^{pT}\mathcal{L}_{seg}^{pT} \tag{12}\]
In the same way as for modality translation in the first phase (Sec. 3.1), weights are normalized so that their sum is equal to 1 in order to ease hyper-parameter tuning.
_Self-training_. At this stage the model has already been trained on real target modality images through the diseased-healthy translation objective. However, the segmentation decoder would specifically benefit from tuning on the original data as it was essentially trained on the synthetic pseudo-target images. We thus further include non-annotated original data in the segmentation objective with a self-training procedure as in Shin et al. (2022). To do so, the segmentation model is used to output probability maps for each target domain image. These are then thresholded with a value \(\alpha\) to keep only the predictions in which the model has a high confidence. The resulting pseudo-labels \(Y_{pl}\) are considered as new ground-truth annotations for the unannotated target images for finetuning the segmentation model. This procedure can be repeated \(k\) times to iteratively refine the pseudo-labels and improve the model segmentation
Figure 3: Overview of the segmentation stage. A segmentation decoder \(G_{seg}\) is trained for tumor segmentation on annotated pseudo-target images resulting from stage 1. Simultaneously, a common decoder \(G_{com}\) and a residual decoder \(G_{res}\) are jointly trained for unsupervised tumor delineation on real target images by performing diseased \(\rightarrow\) healthy and healthy \(\rightarrow\) diseased translations. The segmentation decoder shares parameters with the residual decoder to benefit from this unsupervised objective. Finally, the segmentation encoder-decoder \(G_{seg}\circ E\) is used to generate pseudo-labels for unannotated target images. The model is then further refined through several self-training iterations.
performance on the unannotated modality. During the \(i^{th}\) self-training iteration we thus compute predictions \(\hat{Y}_{pl}\) that can be compared to the pseudo-labels \(Y_{pl}^{i-1}\) resulting from the \((i-1)^{th}\) training stage. This is done with an additional self-training segmentation term \(\mathcal{L}_{seg}^{st}=Dice\left(Y_{pl}^{i-1},\hat{Y}_{pl}\right)\). Hence we obtain the following global loss for self-training iterations:
\[\mathcal{L}_{Seg}^{ST}=\mathcal{L}_{Seg}^{init}+\lambda_{seg}^{st}\mathcal{L}_ {seg}^{st} \tag{13}\]
Here, we set \(\lambda_{seg}^{st}=\lambda_{seg}^{pT}\) for a balanced segmentation objective between the annotated pseudo-target images and the pseudo-labeled original target images.
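The pseudo-labeling loop can be summarized as follows; `train_one_stage` is a hypothetical stand-in for one fine-tuning round with the full objective of Eq. 13, and the model is assumed to output tumor probabilities in [0, 1].

```python
import torch

@torch.no_grad()
def pseudo_label(model, x, alpha=0.6):
    """Threshold the predicted probability map, keeping only voxels above confidence alpha."""
    prob = model(x)                      # assumed to return tumor probabilities in [0, 1]
    return (prob > alpha).float()

def self_training(model, unlabeled_target, train_one_stage, k=3, alpha=0.6):
    """k rounds of pseudo-labeling followed by fine-tuning (the L_seg^st term of Eq. 13).

    `train_one_stage(model, pairs)` is a hypothetical stand-in for one fine-tuning
    stage on (image, pseudo-label) pairs combined with the other terms of Eq. 12.
    """
    for _ in range(k):
        pairs = [(x, pseudo_label(model, x, alpha)) for x in unlabeled_target]
        train_one_stage(model, pairs)
    return model
```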
## 4 Experiments and results
### Experimental settings
#### 4.1.1 Datasets
_BraTS_. We first evaluate MoDATTS on the BraTS 2020 challenge dataset (Menze et al., 2015; Bakas et al., 2017, 2018), adapted for the cross-modality brain tumor segmentation problem where images are known to be diseased (presence of tumors) or healthy (absence of tumors). Amongst the 369 brain volumes available in BraTS, 37 volumes were allocated to each of the validation (10%) and test (10%) sets. The 295 volumes left were used for training (80%). Based only on brain tissue, each volume was mean-centered, divided by five times the standard deviation and clipped to the [-1,1] interval. Datasets were then assembled from each distinct pair of the four MRI contrasts available (T1, T2, T1ce and FLAIR) for the modality adaptation task. To constitute unpaired training data, we used only one specific contrast (source or target) per training volume. We could therefore experiment with twelve different combinations of unpaired source/target modalities. Even though it is not clinically useful to learn cross-sequence segmentation if multi-parametric acquisitions are performed as is the case in BraTS, this modified version of the dataset provides an excellent study case to assess the actual performance of any modality adaptation method for tumor segmentation. Although the dataset offers several segmentation classes (enhancing tumor, peritumoral edema, necrotic and non-enhancing tumor core), note that we only consider the whole tumors as our segmentation objective.
_CrossMoDA_. We also used the dataset from the 2022 CrossMoDA domain adaptation challenge (Shapey et al., 2021; Dorent et al., 2023) for a cross-modality vestibular schwannoma segmentation task. The training dataset is composed of 210 contrast-enhanced T1-weighted MR volumes with pixel-level annotations and 210 unannotated unpaired high-resolution T2-weighted MR volumes. An additional test set of 64 unannotated hrT2 images was available for performance evaluation of the models on the data challenge platform. The images were equally acquired in 2 distinct centers, _London_ and _Tilburg_, and showed different resolutions and sizes. To mitigate these disparities, all the 3D images were first resampled to a spacing of \(0.41\times 0.41\times 1\). Then, to align the volumes and later facilitate the split into known healthy and diseased samples, we selected a hrT2 image as an atlas to perform inter and intra modality affine registrations. We used the Advanced Normalization Tools module (Avants et al., 2009), and employed the mutual information loss for T1ce images and the normalized cross-correlation loss for hrT2 volumes. Images were then cropped to an ROI of \(256\times 256\times 60\). Each volume was finally mean-centered, divided by five times the standard deviation and clipped to the [-1,1] interval.
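The intensity normalization applied to both datasets is simple enough to state directly; the optional brain-tissue mask reflects the BraTS setting, where the statistics are computed on brain voxels only. The function name and NumPy usage are assumptions of the sketch.

```python
import numpy as np

def normalize_volume(vol, mask=None):
    """Mean-center, divide by five times the standard deviation and clip to [-1, 1].
    For BraTS the statistics are computed on brain tissue only (mask), as in Sec. 4.1.1."""
    voxels = vol[mask] if mask is not None else vol
    vol = (vol - voxels.mean()) / (5.0 * voxels.std())
    return np.clip(vol, -1.0, 1.0)
```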
#### 4.1.2 Tumor-aware cross-modality translation
_2D slicing_. Due to resource constraints, prior to 3D segmentation, MoDATTS achieves 2D cross-modality translation to generate pseudo-target samples from the source modality. Therefore, CrossMoDA and BraTS volumes were respectively split into full \(256\times 256\) and \(240\times 240\) 2D slices before being fed to the modality translation network.
_Architecture_. For the architecture, we leverage the recent works on vision transformers (Alexey et al., 2021), which were shown to be suitable for translation tasks (Dubey and Singh, 2023) when combined with fully-convolutional discriminators. We exploit the TransUnet model (Chen et al., 2021), a powerful 2D U-shaped network for medical images that has shown great performance on several segmentation tasks. The architecture of our generators is based on the hybrid "R50-ViT" TransUnet configuration that combines a ResNet-50 and a ViT model with 12 transformer layers. The encoder backbones were pre-trained on ImageNet (Deng et al., 2009). For each of the two modality generators, the TransUnet decoder is duplicated, producing a segmentation decoder with a sigmoid output activation and a translation decoder with a _tanh_ output activation. As for discriminators, we use a convolutional multi-scale architecture as in Wang et al. (2018) that averages output values across several scales. Further details on the different layers are shown in Table 1. We use leaky ReLU with a slope of 0.2 as the non-linear activation function.
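A sketch of a discriminator following the layer listing of Table 1, combined with the multi-scale averaging of Wang et al. (2018), is given below. GroupNorm over all channels is used here as a stand-in for the layer normalization of Table 1, and the padding values, input channels and number of scales are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Single-scale discriminator following Table 1 (LN approximated by GroupNorm(1, C))."""
    def __init__(self, in_ch=1):
        super().__init__()
        layers = [nn.Conv2d(in_ch, 60, 4, stride=1, padding=1)]
        ch = 60
        for out_ch in (60, 120, 240, 480):
            layers += [nn.GroupNorm(1, ch), nn.LeakyReLU(0.2),
                       nn.Conv2d(ch, out_ch, 4, stride=2, padding=1)]
            ch = out_ch
        layers += [nn.Conv2d(ch, 1, 1, stride=1)]
        self.net = nn.Sequential(*layers)
    def forward(self, x):
        return self.net(x)

class MultiScaleDiscriminator(nn.Module):
    """Runs the same patch discriminator at several image scales and averages the scores."""
    def __init__(self, in_ch=1, n_scales=3):
        super().__init__()
        self.discs = nn.ModuleList(PatchDiscriminator(in_ch) for _ in range(n_scales))
    def forward(self, x):
        scores = []
        for i, d in enumerate(self.discs):
            xi = F.avg_pool2d(x, 2 ** i) if i > 0 else x   # progressively downsampled views
            scores.append(d(xi).mean())
        return torch.stack(scores).mean()
```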
_Training_. Our modality translation model was trained for 200 epochs. We used a batch size of 15 with the AMSGrad optimizer with \(\beta_{1}=0.5\), \(\beta_{2}=0.999\), and a learning rate of 0.0001. Pixel-level ground-truth annotations were provided only for the source modality slices. The same on-the-fly 2D data augmentation as in Vorontsov et al. (2022) was
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{**Discriminators**} \\ \hline \hline Layer & Channels & Kernel size & Stride \\ \hline C & 60 & \(4\times 4\) & 1 \\ LN+LR+C & 60 & \(4\times 4\) & 2 \\ LN+LR+C & 120 & \(4\times 4\) & 2 \\ LN+LR+C & 240 & \(4\times 4\) & 2 \\ LN+LR+C & 480 & \(4\times 4\) & 2 \\ C & 1 & \(1\times 1\) & 1 \\ \hline \end{tabular}
\end{table}
Table 1: Discriminator architecture used for the modality translation stage. **LN** = Layer Normalization, **LR** = Leaky ReLU activation, **C** = Convolution. The same architecture is used to train the diseased-healthy translation in the second stage, but kernels are expanded to 3D.
applied. The following loss parameters, defined in section 3.1, yielded great translation and preserved tumor appearance across modalities for both datasets : \(\lambda_{adv}^{mod}=1\), \(\lambda_{seg}^{mod}=1\) and \(\lambda_{cyc}=10\).
_Synthetic target data generation_. Once the translation model was trained, 2D source slices were augmented into pseudo-target images using the last state of the translation model. The latter were then assembled back to constitute the synthetic target 3D volumes required for the segmentation stage.
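In code, this slice-translate-restack step amounts to the following; the axial slicing axis and the `translate_slice` callable (the trained S\(\rightarrow\)T generator applied to one 2D slice) are assumptions of the sketch.

```python
import numpy as np

def translate_volume(volume, translate_slice):
    """Split a 3D volume into 2D slices, translate each slice to the target modality,
    then stack the results back into a synthetic 3D volume (Sec. 4.1.2)."""
    pseudo_slices = [translate_slice(volume[..., k]) for k in range(volume.shape[-1])]
    return np.stack(pseudo_slices, axis=-1)
```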
#### 4.1.3 Semi-supervised target modality segmentation
_Diseased/Healthy labeling_
For the segmentation task, we created the sets of known healthy and diseased volumes. Each pseudo and real target 3D brain volume was split into two hemispheres. For BraTS we attributed to each hemisphere the label P (presence of tumor) if any of its pixels was indicated to be part of a tumor by the ground truth segmentation, or the label A (absence of tumor) otherwise. Since Vestibular Schwannoma (VS) segmentations were available for T1ce in the CrossMoDA dataset, the same process was applied for the synthesized hrT2 images. However, as the ground truth VS segmentations were not provided for original hrT2 data, manual labels were added to left or right hemispheres containing the VS for each of the 210 hrT2 volumes. We specify that in all the experiments the images are provided with absence/presence weak labels, distinct from the pixel-level annotations that we provided only to a subset of the data.
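A sketch of the hemisphere labeling used to build the absence/presence sets is given below; splitting along the first array axis is an assumption, as the actual left/right axis depends on the volume orientation.

```python
import numpy as np

def hemisphere_labels(volume, tumor_mask):
    """Split a brain volume into two hemispheres and label each one 'P' (presence) if any
    voxel of the ground-truth tumor mask falls inside it, 'A' (absence) otherwise."""
    mid = volume.shape[0] // 2                 # assumed left/right split along axis 0
    halves = [(volume[:mid], tumor_mask[:mid]), (volume[mid:], tumor_mask[mid:])]
    return [("P" if m.any() else "A", v) for v, m in halves]
```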
_Architecture_. Integrating the diseased/healthy translation into the domain adaptation framework requires a common and a residual decoder in addition to the standard segmentation encoder-decoder. As stated before, weights between the segmentation decoder and the residual decoder are shared so that segmentation is implicitly learned from the unsupervised objective. Therefore our model actually involves one unique residual/segmentation decoder but with two sets of normalization parameters. The latter also contains two distinct output layers, with tanh activation to generate residuals and sigmoid activation to yield segmentation maps. Our encoder and decoder architectures are based on a 3D Medformer (Gao et al., 2022), a recent data-scalable transformer architecture that outperformed the nnU-Net (Isensee et al., 2021) and other vision transformers (Hatamizadeh et al., 2022; Zhou et al., 2022) on several medical image segmentation tasks. Note that we propose a self-supervised setup which only includes the supervision over pseudo-target samples and self-training (no common/residual decoders), and a semi-supervised setup which additionally performs the diseased/healthy translation on original target images. The number of weights per encoder/decoder for the semi-supervised variant had to be decreased in comparison to the self-supervised variant due to memory constraints. Further details on the different encoder and decoder layers are provided in Table 2. In the semi-supervised variant, we also introduce two discriminators that are responsible for discriminating between real and generated diseased/healthy samples. Their architecture is the same as in the modality translation stage (cf Table 1).
_Training_. All segmentation models were trained for 300 epochs. We then performed three self-training iterations of 150 epochs each. For all runs, we applied 3D nnU-Net
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multicolumn{7}{c}{**Encoder**} \\ \hline \hline & Ch. sf. & Ch. sm. & Conv. & Trans. & Heads & Kernel \\ \hline Conv. & 16 & 32 & 1 & 0 & 0 & \(3\times 3\times 3\) \\ Stem & 32 & 64 & 2 & 0 & 0 & \(\dagger\)\(3\times 3\times 3\) \\ B-MDH & 64 & 128 & 0 & 2 & 2 & \(\dagger\)\(3\times 3\times 3\) \\ blocks & 128 & 256 & 0 & 4 & 8 & \(\dagger\)\(3\times 3\times 3\) \\ & 256 & 320 & 0 & 6 & 10 & \(\dagger\)\(3\times 3\times 3\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Architectures used for training at the segmentation stage. Except for the common decoder for which we add an extra convolution layer at the bottleneck with kernel size \(1\times 1\times 1\) to map the common code back to the encoder output channel number, we use identical architectures for all decoders. Ch. sm. and Ch. sf. respectively refers to the number of channels for the semi-supervised and self-supervised variants. At each layer we show the number of convolution blocks (Conv.), and the number of B-MDH - Bidirectional Multi-Head Attention - blocks (Trans.) along with the number of heads for each of these blocks (Heads). \(\dagger\) = 2\(\times\) down-scaling with tri-linear interpolation and passing semantic maps to multi-scale fusion module. \(\ddagger\) = 2\(\times\) up-sampling with tri-linear interpolation and long skip connection from multi-scale fusion module. \(\star\) indicates which layer is duplicated in the shared residual/segmentation decoder.
Figure 4: Cross-modality translation examples for the CrossMoDA dataset. We display several T1ce \(\rightarrow\) hrT2 translations along with the VS segmentation ground truth. Tumor structures are preserved in the pseudo hrT2 images.
on-the-fly data augmentation and weights with the highest validation Dice score were saved for the next step. We used a batch-size of 2, and the same optimizer parameters as in the modality translation phase. The threshold \(\alpha\) that defines the level of confidence required to keep the pseudo-labels was set to 0.6 as in Dong et al. (2021). For BraTS, each training experiment was repeated three times, with a different random seed for weight initialization. We therefore report the mean of all test Dice scores with standard deviation across the three runs. Specifically in the segmentation stage, we performed 5-fold cross-validation on the training set for each CrossMoDA experiment. Ensembling was achieved with the resulting models for performance evaluation on the test dataset. As we were limited for quantitative evaluation on the online CrossMoDA data challenge platform, each experiment was evaluated only once. Note that for VS segmentation we applied up-sampling on large heterogeneous and small-sized tumors along with tumor intensity augmentation, as encouraged by Salle et al. (2023).
_Hyper-parameter search_. Our approach involved the following strategy: (1) increasing the weights of the reconstruction terms \(\lambda_{rec}\) and \(\lambda_{lat}\) relative to the adversarial term \(\lambda_{adv}^{gen}\) until mode dropping stops occurring; and (2) subsequently determining the optimal weight \(\lambda_{seg}^{pT}\) for supervision from the synthetic data. We found that the following parameters (normalized to equal 1) yielded great diseased/healthy translations for BraTS: \(\lambda_{adv}^{gen}=5\), \(\lambda_{rec}=50\), and \(\lambda_{lat}=5\). For CrossMoDA, stronger reconstruction constraints were required as \(\lambda_{rec}=75\) and \(\lambda_{lat}=10\) yielded visually better results. In a standard domain adaptation scenario where 100% of source data is provided with pixel-level annotations and all the target images are unannotated, we found that \(\lambda_{seg}^{pT}=100\) was optimal. Note that, during the first training of the segmentation model (prior to self-training), increasing \(\lambda_{seg}^{pT}\) involves higher dependence on the synthetic target data. When lowering the number of samples provided with pixel-level annotations in the source modality, (1) the tumor appearances in the generated pseudo-target images are likely to be less accurate and (2) the segmentation model is fed with fewer annotated synthetic samples. This requires adjusting \(\lambda_{seg}^{pT}\) down, so that the segmentation model relies more on the unsupervised tumor delineation objective than on the supervision from the synthetic data. When using 70%, 40%, 10% and 1% of the source modality annotations, we set \(\lambda_{seg}^{pT}\) to respectively 50, 25, 1 and 0.1. These \(\lambda_{seg}^{pT}\) values were tuned on CrossMoDA and reused for BraTS.
#### 4.1.4 Model comparison
When evaluating our model on BraTS, we compare the performance of the proposed approach against state-of-the-art domain-adaptive medical image segmentation models AccSegNet Zhou et al. (2021) and AttEnt Li et al. (2021). We used available GitHub code for the two baselines and performed fine-tuning on our data. Because these two methods are 2D, for fair comparison, we also evaluate a 2D version of MoDATTS. Note that unless "2D" is specified, MoDATTS refers to the 3D version of our model. For CrossMoDA, we compare the performance of MoDATTS against the top 4 teams in the validation phase of the data challenge. The VS Dice scores and Average Symmetric Surface Distances (ASSD) for these methods were provided in the leader-board. For further experiments, the team Super-Poly Han et al. (2022) was the only one to make its code available. Their MSF-nnU-Net model ranked \(4^{th}\) in the data challenge but we believe their approach
Figure 5: Cross-modality translation examples for the BraTS dataset. Each group of images represents all possible translations towards a specific modality (T1, T1ce, FLAIR or T2) for one case. For visual evaluation of the method we also show the corresponding ground truth target images and the whole tumor segmentations. The resulting visual appearances differ depending on the source modality, but tumor information is retained across all translations.
constitutes a reasonable baseline as it used a nnU-Net in the segmentation stage, similarly to the other top methods.
### Domain adaptation
The first set of experiments consists in evaluating MoDATTS in a standard domain adaptation scenario where all of the source data is provided with pixel-level annotations and all the target images are unannotated. This is the standard scenario for CrossMoDA as hrT2 segmentations are not available. As for BraTS we drew all possible source and target modality pairs from T1, T2, FLAIR and T1ce, and pixel-level annotations were only retained for the source modality.
_Qualitative results_. We show in Fig. 5 several generated samples of pseudo-target brain images after training of the modality translation model, when all the source samples were provided with pixel-level annotations. Interestingly, each source modality leaves its own style footprint in the generated images as the tumor appearances in the resulting target translations differ accordingly. An interesting feature of our model is that the tumor structures seem to be visually preserved across the modality translation. For instance, note that the different substructures of the tumor can still be differentiated in the FLAIR \(\rightarrow\) T1 modality translation. Also notice that the translation model can successfully augment hypo-intense tumors that are hardly distinguishable from the background (e.g. T1ce \(\rightarrow\) FLAIR or T1 \(\rightarrow\) FLAIR). Similarly we show in Fig. 4 several T1ce \(\rightarrow\) hrT2 VS translations. This further proves successful maintenance of tumor layouts during the pseudo-target sample generations, even for small lesions.
An illustration of several translations from the diseased to the healthy domain for the brain tumor and VS segmentation tasks are displayed in Fig. 6. As depicted in the figure, even without any pixel-level annotations for the target modality, the tumors were effectively separated from the brain, leading to a successful translation from the presence to the absence domain, as well as accurate segmentation. It is worth noting that even for lesions appearing hypo-intense, as in brain T1 and T1ce sequences, MoDATTS can effectively handle complex residuals and alternatively convert them into reliable segmentation results.
_Quantitative results_. We present the resulting absolute Dice scores for MoDATTS and each evaluated baseline on the brain tumor dataset in Fig. 7. Note that the results for MoDATTS correspond to the self-supervised variant as it was more effective than the semi-supervised variant when 100% of the source data was annotated (see section 4.4). Additionally we
Figure 6: Examples of translations from Presence to Absence domains and resulting segmentation in a domain adaptation application where target modality had no pixel-level annotations provided. For BraTS, we show in each column a different source \(\rightarrow\) target scenario. We do not display ground truth VS segmentations for CrossMoDA as hrT2 segmentation maps were not provided in the data challenge.
display the results obtained by a supervised Medformer model with the same backbone architecture as the self-supervised variant of MoDATTS, trained on the one hand with source data without any domain adaptation strategy and on the other hand with fully annotated target data, which respectively act as lower and upper bounds of the domain adaptation task. As expected, models without domain adaptation approaches trained on the source data fail to properly segment tumors on the target modality, particularly for modality pairs that show high domain shifts (e.g. T1/FLAIR or T1ce/T2). Note that MoDATTS shows great performance as it outperforms AttENT and AccSegNet by a considerable margin on the target modality. On average, over the 12 different domain adaptation experiments we report that 3D MoDATTS reaches 95% of the target supervised model performance. These results demonstrate that our transformer-based modality translation approach is effective and is able to produce reliable pseudo-target images to train a segmentation model to delineate tumors in the target modality. In comparison, AttENT and AccSegNet reached 79% and 82%, respectively. The 2D version of MoDATTS reached 87% of the target supervised model performance. This demonstrates that the superior performance of our model is not solely attributable to working in 3D.
We also report in Table 3 the VS Dice scores and ASSD on the CrossMoDA dataset. MoDATTS shows similar VS segmentation performance as team LaTIM who ranked first in the data challenge, and outperforms runner-up entries. This further proves that our method is effectively able to reduce the performance gap due to domain shifts in cross-modality tumor segmentation. Although the performance gains are limited, note that these approaches were specifically designed to perform on the CrossMoDA challenge and may not generalize well to other datasets. In contrast, our results on BraTS and CrossMoDA indicate that MoDATTS is competitive in several domain adaptation tumor segmentation tasks. We further note that the only competing approach (LaTIM) requires training a SinGAN (Shaham et al., 2019) purposely adapted for CrossMoDA to augment and diversify the target VS appearances, in addition to the conventional modality translation and segmentation models. MoDATTS also achieves such data augmentation through the healthy \(\rightarrow\) diseased translation objective. However it is encompassed in the segmentation model, therefore mitigating the need for an additional step.
### Reaching supervised performance
As mentionned in the previous section, MoDATTS performs well when the target modality is completely unannotated, as on average 95% of the target supervised model performance is reached on BraTS. With the aim of determining the fraction of target modality annotations required to match the performance of a target supervised model, we trained
\begin{table}
\begin{tabular}{c c c} \hline Model & VS Dice \(\uparrow\) & VS ASSD \(\downarrow\) \\ \hline (Dong et al., 2021) ne2e & 0.847 \(\pm\) 0.063 & 0.551 \(\pm\) 0.303 \\ (Han et al., 2022) Super-Poly & 0.849 \(\pm\) 0.068 & 0.520 \(\pm\) 0.229 \\ (Kang et al., 2023) MAI & 0.852 \(\pm\) 0.089 & 0.475 \(\pm\) 0.207 \\ (Salle et al., 2023) LaTIM & 0.868 \(\pm\) 0.060 & **0.430 \(\pm\) 0.178** \\ MoDATTS (Ours) & **0.870 \(\pm\) 0.048** & 0.432 \(\pm\) 0.175 \\ \hline \end{tabular}
\end{table}
Table 3: Dice score and ASSD for the VS segmentation on the target hrT2 modality in the CrossMoDA challenge. We compare our performance with the top 4 teams in the validation phase. The standard deviations reported correspond to the performance variation across the 64 test cases.
Figure 7: Dice performance on the target modality for each possible modality pair. Pixel-level annotations were only provided in the source modality indicated on the x axis. We compare results for MoDATTS (2D and 3D) with AccSegNet and AttENT domain adaptation baselines. We also show Dice scores for supervised segmentation models respectively trained with all annotations on source data (No adaptation) and on target data (Target supervised) as for lower and upper bounds of the cross-modality segmentation task.
MoDATTS (self-supervised) with a fully annotated source modality and increasing fractions of target annotations (0%, 10%, 20%, 30% and 50%) on the BraTS T2 \(\rightarrow\) T1ce domain adaptation task. Results are provided in Table 4. We show that with a fully annotated source modality, it is sufficient to annotate 20% of the target modality to reach 99% (T1ce : \(0.839\pm 0.005\)) of the target supervised model performance (T1ce : \(0.848\pm 0.006\)). This emphasizes that the annotation burden could be reduced with our approach.
### Semi-supervision and annotation deficit
MoDATTS introduces the ability to train with limited pixel-level annotations available in the source modality, a distinct feature over previous baselines. We show in Fig. 8 the Dice scores for models trained when 1%, 10%, 40%, 70% or 100% of the source modality's annotations were available combined with 0% for the target modality. Note that the modality translation networks were retrained accordingly. While the performance of the baselines and the self-supervised variant of MoDATTS shows a significant drop with fewer source annotations, the semi-supervised variant exhibits consistent performance with only slight degradation. Notably, the semi-supervised variant outperforms the self-supervised variant when less than 40% of the source samples are annotated. For instance, for BraTS T1\(\rightarrow\)T2 domain adaptation, semi-supervised MoDATTS with 1% source annotations still reaches 88% of the performance of a target (T2) supervised model, while the self-supervised variant only achieves 75%. These findings validate that MoDATTS has the potential to achieve robust performance even with a small fraction of annotated source images.
Note that when most of the source data is annotated, the performance gap between the self-supervised and semi-supervised variants remains small. When 100% of source data is annotated, we report (self-supervised vs semi-supervised) target modality Dice scores of 0.863 vs 0.857 for T1 \(\rightarrow\) T2 (BraTS), 0.758 vs 0.748 for T2 \(\rightarrow\) T1 (BraTS), and 0.870 vs 0.851 for T1ce \(\rightarrow\) hrT2 (CrossMoDA). This indicates that supervision from highly reliable synthetic data combined with self-training provide enough information to the segmentation model to close the domain gap. Finally, this small gap may be closed or reduced with access to better hardware since we had to reduce the size of the segmentation encoder-decoder in the semi-supervised variant from 38 million parameters (self-supervised) to 8 million parameters (semi-supervised).
### Attention in MoDATTS
Attention-based networks like transformers allow us to interrogate the model by analyzing the learned attention mechanisms. As suggested by Voita et al. (2019), we computed the "confidence" of each attention head in the model as the average of its maximum attention weight. We show in Fig. 9 the attention maps generated by the most confident heads in MoDATTS on CrossMoDA. A confident head can be interpreted as one that assigns high attention values to specific regions of the image. We note that the transformer component (encoder) in the modality translation network tends to focus on global anatomical details of the image. Interestingly, it is also highlighting the VS, which
\begin{table}
\begin{tabular}{l|c|c|c|c|c} T1ce annotations & 0\% & 10\% & 20\% & 30\% & 50\% \\ \hline T1ce Dice Score & \(0.801\pm 0.006\) & \(0.826\pm 0.006\) & \(0.839\pm 0.005\) & \(0.844\pm 0.007\) & \(0.847\pm 0.005\) \\ - \% of TSMP & 94.5\% & 97.5\% & 99\% & 99.5\% & 100\% \\ \hline \end{tabular}
\end{table}
Table 4: Brain tumor segmentation Dice scores when using reference annotations for 100% of source T2 data and various fractions (0%, 10%, 20%, 30% and 50%) of target T1ce data during training. We also show the % of the target supervised model performance (TSMP) that is reached by MoDATTS for the corresponding fractions of T1ce annotations.
Figure 8: Dice scores when using reference annotations for 0% of target data and various fractions (1%, 10%, 40%, 70% and 100%) of source data during training. For BraTS, we picked the T1/T2 modality pair and ran the experiments in both T1 \(\rightarrow\) T2 and T2 \(\rightarrow\) T1 directions. For CrossMoDA, hrT2 annotations are not available so the experiment is run only in the T1ce \(\rightarrow\) hrT2 direction. While performance is dropping at low % of annotations for the baselines, semi-supervised MoDATTS shows in comparison only a slight decrease. For readability, standard deviations across the runs are not shown.
emphasizes its ability to preserve tumor structures during the pseudo-target image synthesis. In the segmentation phase, the heads of the common and residual/segmentation decoders have different behaviors. Interestingly, the common decoder seems to avoid the tumor location, as a way to focus on the anatomical and healthy content of the image. As expected, the joint residual/segmentation decoder focuses on tumor areas in order to generate accurate residuals and segmentation maps. It also looks beyond the tumor. This is not surprising because the network has to compare the tumor to background tissue; also, a tumor can impact surrounding structures and the way the tumor appears in the image depends on the rest of the tissue.
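The head-confidence measure used to select the maps of Fig. 9 can be written in a few lines; the attention tensor layout is an assumption of the sketch.

```python
import torch

def head_confidence(attention_weights):
    """Confidence of each attention head, following Voita et al. (2019): the average,
    over queries and batch items, of the head's maximum attention weight.

    attention_weights: tensor of shape (batch, heads, queries, keys), rows summing to 1.
    """
    max_per_query = attention_weights.max(dim=-1).values   # (batch, heads, queries)
    return max_per_query.mean(dim=(0, 2))                  # one confidence value per head
```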
### Ablation studies
In the scenario where all the source samples have pixel-level annotations and none are available in the target modality, we conduct the following ablation experiments and report the results in Table 5. Specifically, we focus here on the self-supervised variant of MoDATTS, as it exhibited the highest segmentation performance in this particular setup. Note that we chose T1 and T2 to be respectively the source and target modalities for the ablations on BraTS.
_Self-training._ We evaluate the performance of MoDATTS before performing iterative self-training. We notice an improvement of around +0.02 Dice score in the segmentation performance after the process for VS and brain tumor
\begin{table}
\begin{tabular}{c l l l} \hline \hline Source annotations & Ablations & BraTS : T1 \(\rightarrow\) T2 & CrossMoDA : T1ce \(\rightarrow\) hrT2 \\ \hline \multirow{3}{*}{100\% (Self-supervised Variant)} & w/o TAMT, w/o ST & \(0.817\pm 0.005\) (93\%) & \(0.826\pm 0.100\) \\ & w/o ST & \(0.844\pm 0.001\) (96\%) & \(0.851\pm 0.051\) \\ & **Proposed (Self-sup.)** & \(\mathbf{0.863\pm 0.002}\) (98\%) & \(\mathbf{0.870\pm 0.048}\) \\ \hline \multirow{4}{*}{1\% (Semi-supervised Variant)} & w/o P\(\rightarrow\)A and A\(\rightarrow\)P & \(0.658\pm 0.009\) (75\%) & \(0.621\pm 0.269\) \\ & w/o A\(\rightarrow\)P & \(0.715\pm 0.012\) (82\%) & \(0.660\pm 0.222\) \\ & w/o dual-use Res./Seg. & \(0.739\pm 0.008\) (84\%) & \(0.707\pm 0.177\) \\ & **Proposed (Semi-sup.)** & \(\mathbf{0.770\pm 0.005}\) (88\%) & \(\mathbf{0.727\pm 0.173}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation studies : absolute Dice scores on the target modality. For BraTS, we selected T1 and T2 to be respectively the source and target modalities for these experiments. Ablations of the tumor-aware modality translation (TAMT) and self-training (ST) were performed when 100% of the source data was annotated with the self-supervised variant of MoDATTS, as it yielded the best performance. Alternatively, the ablations related to the diseased-healthy translation were performed on the semi-supervised variant when 1% of the source data was annotated. We also report for BraTS the % of a target supervised model performance that is reached by the model (values indicated in parenthesis). Note that for CrossMoDA the standard deviations reported correspond to the performance variation across the 64 test cases.
Figure 9: Attention maps yielded by the most confident transformer heads in MoDATTS. Red color indicates areas of focus while dark blue corresponds to locations ignored by the network. Note that the maps presented in the modality translation stage are produced by the encoder of the network as the decoder is fully convolutional and does not contain transformer blocks.
segmentation. Qualitative evaluation of the impact of self-training in the domain adaptation task for BraTS and CrossMoDA datasets is also provided in Fig. 10. The latter shows that the tumors on the test set are either filled or refined after self-training.
_Tumor-aware modality translation_. To assess the value of additional tumor supervision in the modality translation stage, we retrained the translation model with \(\lambda^{\text{mod}}_{seg}=0\). Iterative self-training was not applied in the segmentation stage. As expected, the target modality segmentation performance dropped as compared to the previous ablation (BraTS : \(-0.027\) and CrossMoDA : \(-0.025\) in Dice). This implies that the joint tumor supervision in the modality translation stage actually helps to retain detailed lesion structures and provide more accurate pseudo-target images.
The next ablations focus on the semi-supervised variant of MoDATTS and the contribution of the diseased-healthy translation when few annotated source samples are provided. We experiment with 1% of annotated source data, as it is where the semi-supervised variant is the most relevant. Values are reported in Table 5, and interpretations are provided below.
_Image-level supervision_. We notice that only training the translation from diseased to healthy domains (P \(\rightarrow\) A) suffices, as 82% of a target supervised model performance is reached on BraTS. But teaching the model to perform healthy to diseased (A \(\rightarrow\) P) translation yields better performance (BraTS : +0.055 and CrossMoDA : +0.061 in Dice) by making more efficient use of the data. Note that when the whole diseased-healthy unsupervised objective is removed, the segmentation performance on the target modality is on par with the one achieved by the self-supervised variant.
_Separate residual and segmentation decoders_. Finally, we explored the effect of the decoders by employing a separate segmentation decoder instead of sharing the residual and segmentation weights. This separate version shows lower performance on the brain (\(-0.031\) Dice) and VS (\(-0.020\) Dice) datasets. This observation emphasizes that disentangling tumors from the background to perform diseased to healthy translations is similar to a segmentation objective, and therefore is beneficial for accurate cross-modality tumor segmentation when few source samples are annotated.
## 5 Applications and extensions
We have introduced a domain adaptation method to segment tumors on unannotated target modality datasets from annotated or partially annotated source modality images. We have demonstrated the competitiveness and robustness of MoDATTS on cross-modality brain tumor and vestibular schwannoma MR sequences.
_Self-supervised vs semi-supervised variant_. The proposed model offers (1) a self-supervised variant that achieves supervision over pseudo-target samples and self-training; and (2) a semi-supervised variant that further includes real target modality images provided with diseased or healthy labels through unsupervised tumor disentanglement. Training the semi-supervised model requires more memory and expensive computation. Due to our limited resources, the semi-supervised model is equipped with fewer parameters. As a consequence, when enough pixel-level annotations are available in the source modality, the self-supervised model outperforms the semi-supervised model. This implies that a larger and more optimal segmentation network, relying solely on supervision from annotated synthetic pseudo-target data generated by our modality translation network, yields better performance than a smaller model provided with additional weak labels. This observation suggests that there is a trade-off between training a bigger model and training a semi-supervised model. However, when annotated source data is scarce (less than 50% of annotated samples), the semi-supervised variant makes it possible to
Figure 10: Qualitative evaluation of the impact of self-training on the test set for brain tumor and VS segmentation tasks. Last two columns show, respectively, the segmentation map for the same model before and after the self-training iterations. For BraTS, each row illustrates a different scenario where the target modality - indicated in the first column - was not provided with any annotations. We also show ground truth tumor segmentations to visually assess the improvements of the model when self-training is performed, along with the dice score obtained on the whole volume. For CrossMoDA the ground truth VS segmentations on target hrT2 MRIs were not provided. Self-training helps to better leverage unannotated data in the target modality and acts as a refinement of the segmentation maps, which helps to reach better performance.
preserve consistent segmentations and outperforms the self-supervised variant, even with a smaller segmentation network. Indeed, in this scenario training the model on actual target images becomes crucial as it has access to fewer synthetic samples and the generated pseudo-target images may be less reliable. Therefore we claim that MoDATTS has the potential to alleviate the annotation burden in cross-modality segmentation tasks.
Although producing the diseased and healthy image-level labels for 3D images does not add substantial annotation cost, it still represents a limitation when compared to unsupervised domain adaptation methods that rely only on pixel-level annotations available in the source modality. We thus specify that the self-supervised variant of MoDATTS does not require these weak labels to be trained, which is an advantage over the semi-supervised variant. In the end, the choice of either variant depends on the proportion of unannotated samples that the concerned dataset holds and the computational resources available.
_Limitations_. As the pathologies we studied in this article (brain tumor and VS) mostly showed lesions on one side of the volumetric images, we were able to yield healthy and diseased samples by splitting the data into hemispheres. In real scenarios, full volumes showing healthy conditions should be collected to train the model. However, even though we experimented outside this setting we believe our results are representative of the actual behavior of the different models.
MoDATTS has demonstrated encouraging performance in the challenging task of cross-modality domain adaptation. However, it remains a heavy 3D method that requires training two distinct models (modality translation and segmentation). This entails long training times and considerable computational resources, specifically for the semi-supervised variant that contains several decoders and discriminators. Furthermore, unlike source-free models that solely rely on a segmentation model trained on the source modality for target adaptation, MoDATTS requires the presence of source images during training. However, it is worth noting that source-free methods under-perform in comparison to generative methods like MoDATTS and sometimes rely on image-level labels incurring substantial annotation costs (Bateson et al., 2022).
_Extensions_. We have tested MoDATTS on CrossMoDA and a modified version of BraTS, which both offer an ideal environment to test any domain adaption strategy for cross-modality segmentation. However they remain limited to segmentation between different MR contrasts. Further work will explore MR to CT adaptation. We even consider evaluating MoDATTS on a cross-pathology and cross-modality task. Specifically, we consider that leveraging annotated gliomas in FLAIR sequences from BraTS to segment intraparenchymal hemorrhages on CT scans (Dong et al., 2022) is in the range of applications of our model. We believe that the semi-supervised variant of MoDATTS might provide benefits to mitigate the shift between these two conditions.
## 6 Conclusion
MoDATTS is a new 3D transformer-based domain adaptation framework to handle unpaired cross-modality medical image segmentation when target modality lacks annotated samples. We propose a self-supervised variant relying on the supervision from generated pseudo-target images and self-training, bridging the performance gap related to domain shifts in cross-modality tumor segmentation and outperforming other baselines in such scenarios. We offer as well a semi-supervised variant that additionally leverages diseased and healthy weak labels to extend the training to unannotated target images. We show that this annotation-efficient setup helps to maintain consistent performance on the target modality even when source pixel-level annotations are scarce. MoDATTS's ability to achieve 99% and 100% of a target supervised model performance when respectively 20% and 50% of the target data is annotated further emphasizes that our approach can help to mitigate the lack of annotations. The evaluation of MR to CT adaptation tasks will provide further insights into the potential applications of our approach.
## Acknowledgments
This research has been funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Canada Research Chair. We thank Compute Canada for providing the essential computational resources to complete this study.
|
2309.14571 | Software Citation in HEP: Current State and Recommendations for the
Future | In November 2022, the HEP Software Foundation and the Institute for Research
and Innovation for Software in High-Energy Physics organized a workshop on the
topic of Software Citation and Recognition in HEP. The goal of the workshop was
to bring together different types of stakeholders whose roles relate to
software citation, and the associated credit it provides, in order to engage
the community in a discussion on: the ways HEP experiments handle citation of
software, recognition for software efforts that enable physics results
disseminated to the public, and how the scholarly publishing ecosystem supports
these activities. Reports were given from the publication board leadership of
the ATLAS, CMS, and LHCb experiments and HEP open source software community
organizations (ROOT, Scikit-HEP, MCnet), and perspectives were given from
publishers (Elsevier, JOSS) and related tool providers (INSPIRE, Zenodo). This
paper summarizes key findings and recommendations from the workshop as
presented at the 26th International Conference on Computing in High Energy and
Nuclear Physics (CHEP 2023). | Matthew Feickert, Daniel S. Katz, Mark S. Neubauer, Elizabeth Sexton-Kennedy, Graeme A. Stewart | 2023-09-25T22:53:02Z | http://arxiv.org/abs/2309.14571v2 | # Software Citation in HEP: Current State and Recommendations for the Future
###### Abstract
In November 2022, the HEP Software Foundation (HSF) and the Institute for Research and Innovation for Software in High-Energy Physics (IRIS-HEP) organized a workshop on the topic of Software Citation and Recognition in HEP. The goal of the workshop was to bring together different types of stakeholders whose roles relate to software citation and the associated credit it provides in order to engage the community in a discussion on: the ways HEP experiments handle citation of software, recognition for software efforts that enable physics results disseminated to the public, and how the scholarly publishing ecosystem supports these activities. Reports were given from the publication board leadership of the ATLAS, CMS, and LHCb experiments and HEP open source software community organizations (ROOT, Scikit-HEP, MCnet), and perspectives were given from publishers (Elsevier, JOSS) and related tool providers (INSPIRE, Zenodo). This paper summarizes key findings and recommendations from the workshop as presented at the 26th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2023).
## 1 Introduction
Software is a research product -- an asset created as a byproduct of scientific research -- that is ubiquitously used in and necessary to physics research, though it is not always given the same levels of importance and scholarly weight as other research products like publications and data products [1]. In November 2022, the HEP Software Foundation (HSF) and the Institute for Research and Innovation for Software in High-Energy Physics (IRIS-HEP) [2, 3] organized a topical workshop on software citation and recognition in the field of high energy physics (HEP) [4, 5]. The goal of the workshop was to provide a community discussion around ways in which HEP experiments handle citation of software and recognition for software efforts that enable physics results disseminated to the public. The workshop participants and primary presentations were from the LHC experiments that are primary stakeholders in IRIS-HEP operations: ATLAS, CMS, and LHCb; the particle physics open source software development communities: ROOT Team, Scikit-HEP [6], MCnet, and IRIS-HEP; as well as
the scientific publishing community and ecosystem most involved with HEP: Elsevier, the Journal of Open Source Software (JOSS) [7], and INSPIRE [8].
The principles of software citation that the HEP community is interested in engaging with are those established by the FORCE11 Software Citation working group [9]. These principles are defined as:
1. **Importance**: Software should be considered a legitimate and citable product of research. Software citations should be accorded the same importance in the scholarly record as citations of other research products, such as publications and data; they should be included in the metadata of the citing work, for example in the reference list of a journal article, and should not be omitted or separated. Software should be cited on the same basis as any other research product such as a paper or a book, that is, authors should cite the appropriate set of software products just as they cite the appropriate set of papers.
2. **Credit and Attribution**: Software citations should facilitate giving scholarly credit and normative, legal attribution to all contributors to the software, recognizing that a single style or mechanism of attribution may not be applicable to all software.
3. **Unique Identification**: A software citation should include a method for identification that is machine actionable, globally unique, interoperable, and recognized by at least a community of the corresponding domain experts, and preferably by general public researchers.
4. **Persistence**: Unique identifiers and metadata describing the software and its disposition should persist -- even beyond the lifespan of the software they describe.
5. **Accessibility**: Software citations should facilitate access to the software itself and to its associated metadata, documentation, data, and other materials necessary for both humans and machines to make informed use of the referenced software.
6. **Specificity**: Software citations should facilitate identification of, and access to, the specific version of software that was used. Software identification should be as specific as necessary, such as using version numbers, revision numbers, or variants such as platforms.
Today the global research community has these principles, citation policies from journal publishers, and modern open source tooling to facilitate the generation of software citations. There has also been a growing movement among research software developers, research paper authors, and journal reviewers and editors [7] towards an increase in software citation. For the HEP community it is important to understand the current state (as of 2023) of software citation norms and culture in the field and how its importance can be conveyed and supported through community tooling, standards, and practices.
## 2 Current State of Software Citation in HEP
### LHC Experiments
To understand the current state of software citation in the field, reports from the ATLAS, CMS, and LHCb experiments were given that summarized each experiment's current standards, practices, and future plans. ATLAS takes the approach of using a "catch-all" citation of all ATLAS software and firmware through the citation of an ATLAS public note that "briefly
describes the software and provides links to dynamic and persistent repositories wherein the code resides". [10] This public note is then cited in many ATLAS papers. ATLAS additionally cites the paper for the ATLAS detector simulation software [11] as well as GEANT4 [12], and the Monte Carlo simulation generators [13; 14; 15; 16]. In terms of statistical analysis ATLAS cites the methodology papers that describe the techniques used in analyses, but in general does not cite the actual software that implements the techniques, with the notable exception of machine learning libraries [17; 18]. Citation practices are not uniformly consistent in the experiment though, with some physics groups beginning to regularly cite statistical libraries that provide clear citation guidelines [19; 20] (Principles 1 and 2).
CMS similarly has an established culture of regularly and consistently citing the Monte Carlo generators, GEANT4, and machine learning tools. However, they note there could be improvement in the citation of the software that CMS itself produces, both in experimental internal notes and documentation as well as scientific publications. CMS also expressed positive views towards starting practices of publishing papers -- either as CMS Collaboration publications or as limited authorship papers from the CMS Software and Computing Group -- on CMS software, bringing with it increased visibility of scientific software development, documentation standards, and references of software version information (Principles 1 and 2).
LHCb has taken a more proactive stance on software citation following recommendations presented at the CHEP 2018 Conference [21] by providing an internal LHCb software citation starting template for software commonly used in analysis. Analysis teams are then encouraged to revise the template with the citations of the software used in their analysis with the goal that all high-level software used is properly cited (Principles 1, 2, and 6). These practices are encouraged in the collaboration, but not explicitly required, and so analysis teams may require citation guidelines to be provided. LHCb also noted that the citation practices of the HEP community are largely due to cultural norms rather than technical challenges, and that while LHCb strives to be citing more software in the future having LHC community recommendations on software citation would be useful for motivating better practices.
### Software Projects
Views from prominent open source software projects and software communities inside of HEP were also discussed, with a broad range of community cultural views and practices. The ROOT team noted they are explicitly not interested in software citation for ROOT: the team does not view it as adding value to their work, updating citation information would require additional effort, and in their view the current HEP culture of citing journal publications for larger software projects is working well. The ROOT team was careful to note, though, that these views are specifically limited to software citation for ROOT [22] and should not be viewed as universal. In contrast, the Scikit-HEP community project has prioritized adopting software citation recommendations and tooling from the broader scientific open source community (e.g. Zenodo [23], CITATION.cff files [24]) to provide credit to the developers producing community tools (Principle 2) as well as to recognize project contributions of multiple types [25]. Scikit-HEP views software citation as important to their community and would welcome HEP community guidelines to help users of the community tools easily and correctly cite the software. The MCnet community noted that as a community of Monte Carlo generator software projects they have benefited from consistent citation by the LHC experiments. Several community factors led to this culture, including the MCnet community becoming organized in the leadup to the start of the LHC, providing clear citation guidelines, and often making programmatic citation information available from the software itself. MCnet raised potential problems with the current citation model of citing
papers for large releases of the software, as this does not equally value or reward the development and maintenance labor that occurs during the long intervals between publications. As a result, MCnet is interested in both technical solutions and community guidelines and policy regarding software citation.
### Publishing Community
Following the state of software citation in the HEP community, views and recommendations from INSPIRE, Elsevier, and JOSS were shared given their different roles related to scientific publishing and citation. INSPIRE is an integral part of how HEP interacts with publications, related metadata, and acquires updated citation information as tracked submissions move from preprints through publication. Having these capabilities for the citation information for software in HEP would be a technical boon. While INSPIRE currently only handles software papers, there are plans to add support for data products and software in the future, initially by harvesting metadata from relevant trusted repositories (e.g. INSPIRE HEP Zenodo community, HEPData, CERN OpenData). This information would be gathered by software digital object identifier (DOI), and could be aggregated across multiple releases of the same software. It is therefore important that software projects that seek citations in the future provide DOIs now (Principles 3, 5, and 6). Elsevier noted that it is the responsibility of the scientific community to reach a consensus on how to cite software and to share these guidelines with publishers, which can then better instruct journal editors and referees what the expectations for citation are and how to support them. JOSS noted that in addition to incentivizing high quality research software with the journal guidelines and review standards, JOSS can also help bridge the cultural and technical gaps between traditional publication citation and the citation of software directly.
## 3 Recommendations
In addition to establishing guidelines for the HEP community, providing recommendations of software citation best practices and supported tooling aids in community adoption of new guidelines. A behavioral step that can be implemented is for software projects to clearly document a recommended citation and have this information be easily findable anywhere the software source code or distributions are hosted or documented (e.g. version control repositories, public documentation websites, package indexes, archives). There has been historical precedent in HEP for tools to provide recommendations for how to cite the software being used by printing it as a runtime banner to standard output, as seen in Listing 1. This method was developed before citation conventions were established more broadly in the scientific computing community, and modern practices would generally avoid interrupting user logs with this information. It is instead preferable, in addition to having a clearly documented and advertised recommended citation, to provide citation APIs in the software -- both at the language level and at the command line interface if the software supports one.
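As an illustration, a citation API of this kind could be exposed as shown in the following sketch. This is not taken from any specific HEP package; the package name, `__citation__` attribute, placeholder DOI, and `--citation` flag are all illustrative assumptions.

```python
# Hypothetical sketch of a language-level citation API and a CLI flag,
# instead of a runtime banner. All names (mypackage, __citation__,
# --citation) and the DOI are illustrative placeholders.
import argparse

__citation__ = """@software{mypackage,
  author  = {Example, Alice and Example, Bob},
  title   = {mypackage},
  version = {1.2.3},
  doi     = {10.5281/zenodo.0000000},
  url     = {https://github.com/example/mypackage}
}"""


def citation() -> str:
    """Return the recommended citation as a BibTeX string."""
    return __citation__


def main() -> None:
    parser = argparse.ArgumentParser(prog="mypackage")
    parser.add_argument("--citation", action="store_true",
                        help="print the recommended citation and exit")
    args = parser.parse_args()
    if args.citation:
        print(citation())


if __name__ == "__main__":
    main()
```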
In addition to having clear citation recommendations, it is beneficial to adopt a standardized citation file format. A strong choice is the recent Citation File Format [24], which is serialized as YAML in a CITATION.cff file, as seen in Listing 2. CITATION.cff files have the benefit of being both human- and machine-readable with a well defined, versioned schema. Through related tooling, CITATION.cff can also be programmatically validated against schemas and converted to other citation formats (e.g., BibTeX, CodeMeta, EndNote, RIS, schema.org, Zenodo, APA). CITATION.cff also benefits through supported integration
with GitHub1, Zenodo, and Zotero, allowing for the citation information to be reliably exported to multiple services from a single file (Principle 2). The integration with Zenodo is significant, as the HEP community is already frequent users of Zenodo for long term archival of source code (Principle 4) and DOI generation for the source code of software releases (Principle 3). Software projects that adopt the use of CITATION.cff and archive the source code with Zenodo have a clearly defined toolchain provenance for citation information dissemination (Principle 5). Given this, it is recommended that there is a single source of truth for citation information, such as a CITATION.cff file, that is under version control with the software source code and is used to generate all other metadata or forms of citation information by other services.
Footnote 1: Providing a “Cite this repository” button on a repository with a CITATION.cff file.
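The sketch below shows how a minimal CITATION.cff could be produced programmatically with PyYAML. The field names follow the Citation File Format schema (cff-version 1.2.0); the project metadata, DOI, and ORCID are hypothetical placeholders, not the content of Listing 2.

```python
# Minimal sketch: writing a CITATION.cff file with PyYAML (pip install pyyaml).
# The metadata values below are placeholders; only the field names follow the
# Citation File Format schema.
import yaml

citation = {
    "cff-version": "1.2.0",
    "message": "If you use this software, please cite it as below.",
    "title": "mypackage",
    "version": "1.2.3",
    "doi": "10.5281/zenodo.0000000",       # placeholder DOI
    "date-released": "2023-01-01",         # placeholder date
    "authors": [
        {"family-names": "Example", "given-names": "Alice",
         "orcid": "https://orcid.org/0000-0000-0000-0000"},
    ],
    "repository-code": "https://github.com/example/mypackage",
    "license": "Apache-2.0",
}

with open("CITATION.cff", "w") as fh:
    yaml.safe_dump(citation, fh, sort_keys=False)
```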
## 4 Conclusions
Revisiting the software citation principles in the view of current approaches and technologies in HEP provides a structure for starting community guidelines:
1. **Importance**: As a field HEP understands software is important, but improvements could be made on views towards software as a research product.
2. **Credit and Attribution**: The giving of credit is improving in HEP, but the community can leverage software friendly journals (i.e., JOSS) to help accelerate this.
3. **Unique Identification**: Use of Zenodo archives is common in HEP, which provides well integrated tooling for DOI generation. The use of CITATION.cff files in software repositories can help as well.
4. **Persistence**: Zenodo provides long term archival of source code and project metadata.
5. **Accessibility**: HEP is becoming more FAIR [26; 27] focused, bringing with it an increased focus on accessibility. As CITATION.cff provides a common framework for metadata, adopting it as a community standard for software citation information allows for greater accommodation and discovery by citation discovery tools.
6. **Specificity**: Version numbers of software should be included in CITATION.cff files and the version used for analysis should be reported in publications.
There are both social and technical tooling challenges to be addressed to reach HEP community guidelines and recommendations for software citation. While multiple practices towards software citation exist in the HEP community today, this should not be viewed as a large obstacle to adopting global community standards, as variations in practice are common even in journal publication. The community-wide agreement that software citation is important, should be practiced more often, and provides both social and technical benefits gives sufficient motivation to develop HEP community-wide recommendations in the near future.
## Acknowledgments
Matthew Feickert and Daniel S. Katz are supported by the U.S. National Science Foundation (NSF) under Cooperative Agreement OAC-1836650 (IRIS-HEP). Mark S. Neubauer is supported by the U.S. Department of Energy, Office of Science, High Energy Physics, under contract number DE-SC0023365, and by the National Science Foundation under Cooperative Agreement OAC-1836650 (IRIS-HEP).
|
2309.12114 | AutoPET Challenge 2023: Sliding Window-based Optimization of U-Net | Tumor segmentation in medical imaging is crucial and relies on precise
delineation. Fluorodeoxyglucose Positron-Emission Tomography (FDG-PET) is
widely used in clinical practice to detect metabolically active tumors.
However, FDG-PET scans may misinterpret irregular glucose consumption in
healthy or benign tissues as cancer. Combining PET with Computed Tomography
(CT) can enhance tumor segmentation by integrating metabolic and anatomic
information. FDG-PET/CT scans are pivotal for cancer staging and reassessment,
utilizing radiolabeled fluorodeoxyglucose to highlight metabolically active
regions. Accurately distinguishing tumor-specific uptake from physiological
uptake in normal tissues is a challenging aspect of precise tumor segmentation.
The AutoPET challenge addresses this by providing a dataset of 1014 FDG-PET/CT
studies, encouraging advancements in accurate tumor segmentation and analysis
within the FDG-PET/CT domain. Code:
https://github.com/matt3o/AutoPET2-Submission/ | Matthias Hadlich, Zdravko Marinov, Rainer Stiefelhagen | 2023-09-21T14:34:17Z | http://arxiv.org/abs/2309.12114v2 | # AutoPET Challenge 2023: Sliding Window-based Optimization of U-Net
###### Abstract
Tumor segmentation in medical imaging is crucial and relies on precise delineation. Fluorodeoxyglucose Positron-Emission Tomography (FDG-PET) is widely used in clinical practice to detect metabolically active tumors. However, FDG-PET scans may misinterpret irregular glucose consumption in healthy or benign tissues as cancer. Combining PET with Computed Tomography (CT) can enhance tumor segmentation by integrating metabolic and anatomic information. FDG-PET/CT scans are pivotal for cancer staging and reassessment, utilizing radiolabeled fluorodeoxyglucose to highlight metabolically active regions. Accurately distinguishing tumor-specific uptake from physiological uptake in normal tissues is a challenging aspect of precise tumor segmentation. The AutoPET challenge addresses this by providing a dataset of 1014 FDG-PET/CT studies, encouraging advancements in accurate tumor segmentation and analysis within the FDG-PET/CT domain.
Code: [https://github.com/matt3o/AutoPET2-Submission/](https://github.com/matt3o/AutoPET2-Submission/)
Keywords:Semantic Segmentation Sliding Window U-Net
## 1 Introduction
In the domain of oncological diagnostics, the integration of Fluorodeoxyglucose Positron-Emission Tomography (FDG-PET) and Computed Tomography (CT) has assumed a pivotal role, facilitating comprehensive insights into the metabolic dynamics of various malignant solid tumor entities [1]. FDG-PET, acknowledged for its capacity to delineate glucose consumption within tissues, holds significant promise in therapy control and monitoring, owing to the characteristic escalated glucose uptake by tumor lesions [3]. However, the non-specificity of FDG-PET often introduces interpretational ambiguities, as it may also manifest in benign or healthy tissue [6], potentially leading to erroneous diagnoses.
To mitigate this diagnostic challenge, the fusion of PET with CT has emerged as an integrated approach, combining metabolic data with precise anatomical
information. This combination enhances tumor detection accuracy [1], [13], offering a cohesive synergy particularly valuable in clinical practice [6].
Within this evolving landscape of medical diagnostics, the Automatic Lesion Segmentation in Whole-Body FDG-PET/CT Challenge (AutoPET)1 embodies a critical juncture. It motivates researchers and practitioners to develop automated, bi-modal methodologies for the three-dimensional segmentation of tumor lesions embedded within FDG-PET and CT scans [6]. The challenge accelerates advancements in deep learning-based automated tumor lesion segmentation through the provision of a large densely annotated dataset of 1014 volumes.
Footnote 1: [https://autopet-ii.grand-challenge.org/](https://autopet-ii.grand-challenge.org/)
In this work, we propose using the well-known U-Net architecture [15] to tackle the AutoPET challenge. Despite the ubiquity of U-Net models in medical segmentation tasks [9], [4], achieving high performance in the domain of whole-body PET/CT lesion segmentation has remained elusive [12], [17], [8], [16], [7] largely due to the scarcity of training data in preceding studies [3]. Drawing upon the insights provided by the AutoPET Challenge U-Net-based winner from 2022 [16], we undertake a practical investigation to understand the important training parameters of the U-Net model for segmenting lesions. We believe that it is possible to achieve a better and more robust model by focusing on the intricacies of data pre-processing, data augmentation, learning rate scheduling, and crop-size selection during model training. Our work and model are based on prior experiments in interactive segmentation [14]. Thus, for our hyperparameter tuning experiments, we present results using our interactive model. Nonetheless, for our final submission, we exclude the integration of interactive clicks into the model and employ its optimal hyperparameter configuration.
## 2 Methodology
### Model Architecture
The model used for the challenge is called DynUNet, which is an adaptation of the UNet for the MONAI library [2]. Unlike the default UNet, DynUNet does not use max-pooling for downsampling but instead uses strided convolutions. Additionally, the residual is passed through a convolutional layer such that the input size from the downsampling layer matches the output size of this layer. All of these changes can be traced back to three prior works: [10], [11], and [5].
Our default configuration of the network consists of six layers with filter sizes [32, 64, 128, 256, 320, 320]. As discussed above, the convolutions use strides of [1, 2, 2, 2, 2, [2, 2, 1]], and the upsampling is done in the inverse order. An architectural diagram can be found in Figure 1.
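A minimal MONAI sketch matching this configuration is given below. The filters and strides follow the text; the kernel sizes, input/output channel counts, and the residual-block flag are assumptions not stated explicitly above.

```python
# Minimal sketch of the described DynUNet configuration using MONAI.
# Kernel sizes, channel counts, and res_block are assumptions; filters and
# strides follow the text.
from monai.networks.nets import DynUNet

strides = [1, 2, 2, 2, 2, (2, 2, 1)]
net = DynUNet(
    spatial_dims=3,
    in_channels=1,                      # PET volume only (assumption)
    out_channels=2,                     # background / lesion (assumption)
    kernel_size=[3, 3, 3, 3, 3, 3],
    strides=strides,
    upsample_kernel_size=strides[1:],   # upsampling mirrors the strides
    filters=[32, 64, 128, 256, 320, 320],
    res_block=True,                     # residual passed through a conv layer
)
```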
### Data Pre-processing and Augmentation
**Pre-processing.** We restrict ourselves to using only the PET volumes from the paired PET/CT scans. We apply multiple pre-processing transformations to each batch of data. Apart from changing the channel order, the orientation is set to a RAS (Right-Anterior-Superior) coordinate system. As the AutoPET spacing is \(\approx\)[2, 2, 3]mm\({}^{3}\), the data is resampled accordingly with this fixed voxel size. The intensity of each PET image is scaled, based on its voxel intensity statistics, with MONAI's ScaleIntensityRangePercentiled to the 0.05 and 99.95 percentiles. During training, a random crop of size 224x224x224 is sampled, with a probability of 0.6 of being centered around a tumor lesion and 0.4 of being centered around the background. To achieve this, we utilize the RandCropByPosNegLabeld MONAI transform. This crop is balanced by the class label of the voxel in the crop's center - in 60% of the cases the voxel is positive, and in the other 40% it is negative. This ensures that the network learns about positive and negative samples in a more balanced training regime.
**Data Augmentation.** We apply two types of data augmentation - random flipping and random rotation. We apply a random flip on each spatial axis with a probability of 0.1. We also apply a random 90-degree rotation with a probability of 0.1 for each axis.
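The pre-processing and augmentation chain described above could be expressed with MONAI dictionary transforms as in the following sketch. Interpolation modes, clipping behaviour, the number of crops per volume, and the exact rotation-axis pairs are assumptions that are not stated explicitly in the text.

```python
# Sketch of the described pre-processing and augmentation chain with MONAI
# dictionary transforms. Interpolation modes, clipping, num_samples, and the
# rotation axis pairs are assumptions.
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Orientationd, Spacingd,
    ScaleIntensityRangePercentilesd, RandCropByPosNegLabeld,
    RandFlipd, RandRotate90d,
)

keys = ["image", "label"]
train_transforms = Compose([
    LoadImaged(keys=keys),
    EnsureChannelFirstd(keys=keys),
    Orientationd(keys=keys, axcodes="RAS"),
    Spacingd(keys=keys, pixdim=(2.0, 2.0, 3.0), mode=("bilinear", "nearest")),
    ScaleIntensityRangePercentilesd(keys="image", lower=0.05, upper=99.95,
                                    b_min=0.0, b_max=1.0, clip=True),
    RandCropByPosNegLabeld(keys=keys, label_key="label",
                           spatial_size=(224, 224, 224),
                           pos=0.6, neg=0.4, num_samples=1),
    # random flip with probability 0.1 per spatial axis
    RandFlipd(keys=keys, prob=0.1, spatial_axis=0),
    RandFlipd(keys=keys, prob=0.1, spatial_axis=1),
    RandFlipd(keys=keys, prob=0.1, spatial_axis=2),
    # random 90-degree rotation with probability 0.1 per axis pair
    RandRotate90d(keys=keys, prob=0.1, spatial_axes=(0, 1)),
    RandRotate90d(keys=keys, prob=0.1, spatial_axes=(1, 2)),
    RandRotate90d(keys=keys, prob=0.1, spatial_axes=(0, 2)),
])
```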
### Data Post-processing
Since we are using a sliding window approach, the final prediction volume is stitched together from the various output patches. This process is done with a user-defined overlap, which in our case was set to 75%.
After the prediction, a softmax is applied to the result.
Figure 1: An overview of the used DynUNet architecture.
For the ensemble-based solution, the two steps mentioned above are applied to each of the five networks' predictions separately. After the softmax on each prediction, a voting mechanism combines the different predictions into a single one.
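A sketch of this post-processing is given below: Gaussian-weighted sliding-window inference with 75% overlap, a softmax per model, and an equally weighted combination across the ensemble. Averaging the softmax outputs is our reading of the "voting mechanism" and is an assumption, as is the sliding-window batch size.

```python
# Sketch of the described post-processing: Gaussian-blended sliding-window
# inference (75% overlap), softmax per model, equal-weight ensemble combination.
import torch
from monai.inferers import SlidingWindowInferer

inferer = SlidingWindowInferer(
    roi_size=(128, 128, 128),
    sw_batch_size=4,          # assumption
    overlap=0.75,
    mode="gaussian",          # regions near the window centre weigh more
)


@torch.no_grad()
def ensemble_predict(models, volume):
    """volume: (1, C, D, H, W) tensor; returns a label map of shape (1, D, H, W)."""
    probs = []
    for model in models:
        model.eval()
        logits = inferer(volume, model)          # stitched full-volume logits
        probs.append(torch.softmax(logits, dim=1))
    mean_prob = torch.stack(probs).mean(dim=0)   # equal-weight combination
    return mean_prob.argmax(dim=1)
```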
### Hyperparameter Tuning
As explained above, most of the experiments were run with the interactive code. Nevertheless, they should be representative of the general performance of the network. Variations of +/- 0.5% Dice are to be expected since the guidance signal was non-deterministic.
#### 2.4.1 Sliding window versus normal inferer
First of all, we compare the sliding window inference to the normal one (Table 1). As can be seen in the table, on the interactive code the sliding window inferer wins with a lead of 2.81% Dice.
Next, different region-of-interest sizes were tried out (Table 2). The best performing one here was the 128x128x128 crop. Note that the sliding window was active during training. In the thesis it is shown that training with overlap active gains about 1% Dice.
With this overlap, for the 128x128x128 ROI a window of size 320x320x320 is effectively computed, with a computational cost equal to a normal inferer of size 384x384x384. As we can see, a lot of overhead computation is done by the sliding window inferer. However, in the next subsection we will show that this overhead from the overlap actually leads to a better Dice score.
#### 2.4.2 Sliding window overlap
Now we will look at the overlap of the sliding window inferer. Table 3 shows that increasing the overlap also increases the Dice score of the network.
| | Sliding Window | Simple Inferer |
|---|---|---|
| Dice | **83.83%** | 81.02% |

Table 1: Interactive run of Sliding Window versus Simple Inferer
| | 64x64x64 | 128x128x128 | 192x192x192 | 256x256x256 |
|---|---|---|---|---|
| Dice (validation) | 84.74% | **85.22%** | 83.66% | 84.75% |
| Dice (training) | 87.99% | 88.46% | **88.98%** | 88.79% |

Table 2: Different region of interest sizes compared. Trained on a crop of size 256x256x256.
In our experiments, the higher the overlap, the better the results. This can be seen as a way of creating a mini-ensemble with equal weights. The overlap uses a Gaussian fade-away so that regions closer to the window center are weighted more heavily when stitching together the final output.
Additionally, experiments have been run to verify the impact of training with the overlap enabled. Table 4, which shows a network trained with 0% overlap, overall shows slightly worse results; especially for the higher overlaps the difference becomes significant. As expected, running it with 0 overlap returns slightly better results than the network trained with overlap being forced to use none. We can thus conclude that activating overlap during training enhances the final score.
#### 3.2.2 Convergence behaviour with different losses
Figure 2 shows the convergence behaviour of the Dice loss versus the DiceCELoss. As can be seen, the DiceCELoss starts with a higher initial validation Dice in epoch 10, 73.62% against 70.09%. The final Dice metric was also a little higher, 85.47% for the DiceCELoss and 84.62% for the Dice loss. However, a plateau appears to be reached for both losses. In other experiments with more iterations it was shown that this method can reach a validation Dice of up to 87.60%.
We can thus fully recommend the DiceCELoss as a standard choice for training. It converges faster and also yields higher final scores, especially in terms of Dice.
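A sketch of the recommended loss, optimizer, and scheduler combination is shown below; the hyperparameter values follow Table 7, while `T_max` and the stand-in network are assumptions.

```python
# Sketch of the recommended training configuration (values per Table 7;
# T_max is an assumption equal to the planned number of epochs).
import torch
from monai.losses import DiceCELoss

net = torch.nn.Conv3d(1, 2, kernel_size=1)   # stand-in for the DynUNet above

loss_fn = DiceCELoss(include_background=True, to_onehot_y=True,
                     softmax=True, squared_pred=True)
optimizer = torch.optim.Adam(net.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=400, eta_min=1e-8)
```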
#### 3.2.3 Intensity scaling options
Finally, a quick comparison of different intensity scaling options. The base run was a pre-calculated batch-statistics normalization
| Experiment | Overlap | Dice |
|---|---|---|
| v_208_0.0 | 0 | 66.57% |
| v_208_0.25 | 0.25 | 71.35% |
| v_208_0.5 | 0.5 | 71.99% |
| v_208_0.75 | 0.75 | **72.86%** |

Table 4: Non-interactive validation runs with different settings for the overlap. The network has been trained on 0% overlap.
| Experiment | Overlap | Dice |
|---|---|---|
| 201 | 0 | 66.33% |
| 202 | 0.25 | 73.04% |
| 203 | 0.5 | 73.54% |
| 207 | 0.75 | **74.07%** |

Table 3: Non-interactive validation runs with different settings for the overlap. The network has been trained on 25% overlap.
to the 0.005 and 99.95 percentiles of the intensity. The first ScaleIntensityRangePercentiled applied the same percentiles, but this time based on the statistics of each item. The last ScaleIntensityRangePercentiled is a base run with no clipping of the intensities; it only normalizes the intensity from 0 to 1.
As we can see, the item-wise statistics outperformed the batch-wise statistics and the clipless method.
| Model | # Epochs | Best Train Dice | Notes |
|---|---|---|---|
| 1 | 414 | 83.92% | Network did not get included due to NaN errors |
| 2 | 564 | 87.33% | |
| 3 | 447 | 85.92% | |
| 4 | 411 | 85.44% | |
| 5 | 391 | 86.76% | |

Table 6: Number of training epochs for each cross-validation model in our final submission of an ensemble and its corresponding best Train Dice.
Figure 2: Comparing the Dice Loss, in MONAI called MeanDice to the DiceCELoss.
| | Base run CosineAnnealingLR (104) | ScaleIntensityRangePercentiled (148) | ScaleIntensityRangePercentiled 2 (149) |
|---|---|---|---|
| Dice | 85.63% | **86.69%** | 85.44% |

Table 5: Different ScaleIntensity settings compared.
## 3 Proposed solutions to the AutoPET2 Challenge
We propose two different approaches for the challenge as final submissions:
* A single network with six layers as stated above, trained for 400 epochs.
* An ensemble of five networks, each with the same six layers. Four of the networks were trained with cross-validation on five splits of the data; the network trained on the first split did not get included, since it ran into NaN errors very quickly. They were trained without using the validation split, for 800 epochs with no validation runs. However, none of the five networks finished in time for the challenge, so the most recent checkpoint was picked instead; Table 6 summarizes how many epochs each model was trained for. Additionally, the best-performing single network was integrated as a teacher, bringing the total to five networks working collaboratively. The results of the different networks were combined with an equally weighted voting mechanism.
| Parameter name | Setting |
|---|---|
| Network | DynUNet with [32, 64, 128, 256, 320, 320] filters and a depth of six layers |
| Loss | DiceCELoss with squared_pred=True and include_background=True |
| Optimizer | Adam |
| Learning rate scheduler | CosineAnnealingLR (initial lr=2e-4, eta_min=1e-8) |
| Inferer | Sliding window inferer with ROI size 128x128x128, sliding window overlap 0.75 |
| Intensity scaling | Custom scaling to the 0.05% and 99.95% intensity percentiles using ScaleIntensityRanged |
| Automatic Mixed Precision | Active |

Table 7: Best settings
## 4 Results
The results of our two final submissions can be seen in Table 8. The Dice score is similar in both approaches, but the false positive volume (FPV) is significantly reduced in the ensemble, perhaps due to the smoothing effect on the predictions, which filters outliers outside of the object. However, the single network has a much lower false negative volume (FNV), signifying a higher sensitivity in tumor detection.
## 5 Post mortem: NaN errors during training if AMP is active
In the preparation for the challenge we ran into NaN errors when training on A100 GPUs, but only when automatic mixed precision (AMP) was on. During debugging we found out that our input already contained NaNs.
The reason in our case was the training crop to positive/negative areas of size 224x224x224. At the borders of the volume this resulted in crops which contained almost only zeros, or even only zeros. Our current hypothesis is that the normalization on the crop produces division-by-zero errors. This would make particular sense for the intensity scaling, which might degrade if the input tensor contains only zeros. However, more debugging is necessary to find out the exact transform that produces the NaN errors.
The solution is to add a filter after the pre-transform to remap all NaN values to 0. In our case this fixed the problem and we could resume training with AMP on the A100 GPUs.
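A minimal sketch of such a filter, appended to the end of the training transform chain, is shown below. Using MONAI's `Lambdad` with `torch.nan_to_num` is one possible realization; the exact transform used in our pipeline may differ.

```python
# Sketch of the NaN fix described above: a filter appended after the
# pre-transforms that remaps any NaN values in the cropped patch to 0.
import torch
from monai.transforms import Compose, Lambdad

nan_filter = Lambdad(keys="image", func=lambda x: torch.nan_to_num(x, nan=0.0))

# Appended to the end of the training transform chain, e.g.:
# train_transforms = Compose([...existing pre-transforms..., nan_filter])
```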
## 6 Acknowledgment
The present contribution is supported by the Helmholtz Association under the joint research school "HIDSS4Health - Helmholtz Information and Data Science School for Health". This work was performed on the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Wurttemberg and by the Federal Ministry of Education and Research.
| Method | Train Dice | Dice score (preliminary test set) | False negative volume | False positive volume |
|---|---|---|---|---|
| Single network | 86.76% | 56.52% | 0.0249 | 1.8015 |
| Ensemble | 86.16% | 56.60% | 0.0572 | 1.0475 |

Table 8: The results of our method in the AutoPET2 challenge. |
2310.10660 | Analysis and Detection against Network Attacks in the Overlapping
Phenomenon of Behavior Attribute | The proliferation of network attacks poses a significant threat. Researchers
propose datasets for network attacks to support research in related fields.
Then, many attack detection methods based on these datasets are proposed. These
detection methods, whether two-classification or multi-classification, belong
to single-label learning, i.e., only one label is given to each sample.
However, we discover that there is a noteworthy phenomenon of behavior
attribute overlap between attacks, The presentation of this phenomenon in a
dataset is that there are multiple samples with the same features but different
labels. In this paper, we verify the phenomenon in well-known
datasets(UNSW-NB15, CCCS-CIC-AndMal-2020) and re-label these data. In addition,
detecting network attacks in a multi-label manner can obtain more information,
providing support for tracing the attack source and building IDS. Therefore, we
propose a multi-label detection model based on deep learning, MLD-Model, in
which Wasserstein-Generative-Adversarial- Network-with-Gradient-Penalty
(WGAN-GP) with improved loss performs data enhancement to alleviate the class
imbalance problem, and Auto-Encoder (AE) performs classifier parameter
pre-training. Experimental results demonstrate that MLD-Model can achieve
excellent classification performance. It can achieve F1=80.06% in UNSW-NB15 and
F1=83.63% in CCCS-CIC-AndMal-2020. Especially, MLD-Model is 5.99%-7.97% higher
in F1 compared with the related single-label methods. | Jiang Xie, Shuhao Li, Yongzheng Zhanga, Peishuai Sun, Hongbo Xu | 2023-09-13T01:59:26Z | http://arxiv.org/abs/2310.10660v1 | # Analysis and Detection against Network Attacks in the Overlapping Phenomenon of Behavior Attribute
###### Abstract
The proliferation of network attacks poses a great threat. Traditional detection methods, whether two-classification or multi-classification, belong to single-label learning and classify a sample into a separate category. However, we discover that there is a noteworthy phenomenon of behavior attribute overlap between attacks in the real world, i.e., a network behavior may be multi-labeled and can be classified into multiple attacks. We verify the phenomenon in well-known datasets(UNSW-NB15, CCCS-CIC-AndMal-2020). In addition, detecting network attacks in a multi-label manner can obtain more information behind them, providing support for tracing the attack source and building IDS. Therefore, we propose a multi-label detection model based on deep learning, MLD-Model, in which WGAN-GP with improved loss performs data enhancement to alleviate the class imbalance problem, and Auto-Encoder(AE) performs classifier parameter pre-training. Experimental results demonstrate that MLD-Model can achieve excellent classification performance. It can achieve \(F1\)=\(80.06\%\) in UNSW-NB15 and \(F1\)=\(83.63\%\) in CCCS-CIC-AndMal-2020. Especially, MLD-Model is \(5.99\%\)\(\sim\)\(7.97\%\) higher in \(F1\) compared with the related single-label methods.
keywords: Overlapping attribute, Multi-label, Network attack detection, Data enhancement, Pre-training +
Footnote †: journal: Nuclear Physics B
## 1 Introduction
The development of the Internet makes us pay more attention to cyber security. It is important to construct corresponding detection schemes for different network attacks and to obtain more information from samples[18]. Traditional detection methods just perform
two-classification or multi-classification, which belongs to single-label learning, i.e., a sample has only one label. And there is little related work exploring the correlations of intrinsic features between different attacks.
In this paper, we study the characteristics of various network attacks in the real world, and find that network attacks exhibit an overlapping phenomenon of behavior attributes. Samples belonging to different network attacks can show the same behavior features and are thus multi-labeled. For instance, DoS and Fuzzers attacks show the same features in certain circumstances[29]. The fundamental reason is that malicious behaviors can naturally be defined as different categories from different perspectives. We further describe the overlapping phenomenon and analyze its causes in Section 3 and Section 6.1.
The overlapping phenomenon means that a network behavior may be multi-labeled. If we find that a sample belongs to multiple attacks, we can obtain more information from it. Taking the DoS and Fuzzers attacks in the UNSW-NB15 dataset as an example, DoS is a malicious attempt to make a server unavailable to users, and Fuzzers means that the attacker attempts to cause a network to be suspended by feeding it randomly generated data[29]. Therefore, if a sample is detected as belonging to both DoS and Fuzzers, we can infer that the attacker attempts to make a server unavailable to users by continuously feeding it randomly generated data. However, traditional methods can detect this sample as at most one of DoS or Fuzzers, so we cannot obtain more information (such as the specific method and purpose behind it) to support tracing the attack source.
We call network attacks that exhibit the overlapping phenomenon of behavior attributes multi-label network attacks. It is meaningful to find a multi-label detection method for those attacks, which means that we can obtain more information from the detection process to trace the source of network attacks and formulate stronger defense schemes. However, such detection methods belong to multi-label learning (MLL). Currently, MLL methods are mainly aimed at natural language processing (NLP) and the image field. NLP mainly includes sentiment classification([16, 43, 42]) and text classification([47, 6, 34, 4]), _etc._, and the image field includes image annotation([27, 21, 38]) and image classification([44, 49]), _etc._. These detection methods from the NLP and image fields cannot be directly applied to network attack data. Currently, there is no multi-label detection technology for network attacks in cyber security. Therefore, it is necessary to find an effective method for the detection of multi-label network attacks.
The contributions of this paper are as follows.
* We discover that there is an overlapping phenomenon of behavior attributes between network attacks in the real world. A network behavior may be multi-labeled, i.e., belong to multiple attacks. We formally describe this phenomenon and analyze its causes.
* We perform statistical analysis in well-known network attack datasets (UNSW-NB15[29], CCCS-CIC-AndMal-2020[22, 31]). The results validate our findings about the overlapping phenomenon. In UNSW-NB15, a sample has 1.689 labels on average. In CCCS-CIC-AndMal-2020, a sample has 1.413 labels on average. In addition, we process these data and make them publicly available to support related research1. Footnote 1: The dataset and code can be found at _[https://github.com/BitBrave-Xie/processed-multi-label-dataset_](https://github.com/BitBrave-Xie/processed-multi-label-dataset_).
* We propose a Multi-Label Detection method, MLD-Model, based on WGAN-GP[15] (with improved loss) and an unbalanced Auto-Encoder (AE)[3], for the detection of multi-label network attacks. WGAN-GP is used for data enhancement to alleviate the class imbalance problem. The unbalanced AE is used to extract data features, and pre-training adjusts classifier parameters based on the augmented data. Finally, the raw labeled data is used for fine-tuning of the classifier.
* We design a prototype system based on MLD-Model, and conduct experiments on two network attack datasets. In UNSW-NB15, there are 10 categories (benign and 9 types of network attacks), and MLD-Model can reach \(Acc\)=79.87%, \(F1\)=80.06%. In CCCS-CIC-AndMal-2020, there are 15 categories (benign and 14 types of network attacks), and MLD-Model can reach \(Acc\)=83.17%, \(F1\)=83.63%. In particular, MLD-Model is 5.99%\(\sim\)7.97% higher in \(F1\) compared with the related single-label network attack detection methods and 1.65%\(\sim\)58.25% higher in \(F1\) compared with other multi-label baseline detection methods.
The remainder of this paper is organized as follows. Section 2 introduces the related work. In Section 3, we analyze the overlapping phenomenon of behavior attribute between network attacks. Preliminaries is in Section 4. Subsequently, Section 5 is the methodology and we introduce the composition of MLD-Model. In Section 6, we evaluate our method and show the relevant experimental results. Finally, we discuss and summarize in Section 7 and Section 8, respectively.
## 2 Related work
We analyze and detect network attacks that exhibit the overlapping phenomenon of behavior attributes, which belongs to the field of intrusion detection. The detection method belongs to the field of multi-label learning (MLL). This section introduces related work from these two perspectives.
### Intrusion detection
Intrusion detection against network attacks is one of the important research fields of cyber security. An Intrusion Detection System (IDS)[26] is built based on various technical means to detect, resist, and warn against network attacks. Existing methods for intrusion detection can generally be divided into feature detection and anomaly detection. Feature detection, also called misuse detection, fits the behavior patterns of known attacks and judges network behaviors with similar behavior patterns as malicious. Feature detection can maintain a high detection rate for known network attacks, but it cannot effectively detect unknown attacks, i.e., Zeroday attacks. Anomaly detection, also called behavior detection, is one of the
mainstream methods of intrusion detection. It mainly fits the patterns of normal network behavior. When a network behavior does not conform to the pattern of the feature library, it is judged as malicious. Therefore, anomaly detection can detect Zeroday attack more effectively, which is important for the current cyber security situation where new network attacks continue to emerge.
Zhiqiang _et al._[48] propose an IDS based on deep learning, and experiments show that the proposed classifier is better than other models. Kumar _et al._[23] propose an IDS based on feature detection, which can detect 5 types of intrusions in the network: Exploits, DoS, Probe, Generic, and Normal. Yang _et al._[41] propose a novel intrusion detection model called ICVAE-DNN. The NSL-KDD[36] and UNSW-NB15 datasets are used for evaluation. Experiments show that it outperforms 6 well-known models. Then, Yang _et al._[40] also propose a network intrusion detection model called SVAER-DNN, which uses WGAN-GP to learn the latent data distribution. Experiments show that SVAER-DNN outperforms 8 well-known classification models. Jing _et al._[20] propose an SVM for two-classification and multi-classification. Experiments on UNSW-NB15 show that the proposed method can achieve an accuracy of 75.77%.
Durmucs _et al._[9] apply statistical calculation methods to the analysis output model. Then, an understandable scenario is created and a model for cyber security intervention is provided. Fiky _et al._[11] propose two machine learning methods for dynamic analysis of Android malware: one is used to detect and identify the Android malware category, and the other is used to detect and identify the Android malware family. In general, a method for high-precision dynamic analysis of Android malware is provided, and an accuracy rate of more than 96% can be obtained on the CCCS-CIC-AndMal2020 dataset. Liu _et al._[28] apply an unsupervised malware detection method in order to detect Zeroday attacks, and propose an unsupervised feature learning algorithm called Restricted Boltzmann Machine based on Subspace (SRBM) [24] to reduce the data dimension. Experimental results show that the features learned by SRBM perform better than those learned by other feature reduction methods. Abusitta _et al._[1] propose a new framework for detecting malware in a non-stationary environment. It uses deep learning technology to extract useful features and is robust to changing environments. Experimental results on the actual dataset show that the framework improves the detection accuracy compared with the existing methods.
The above-mentioned detection methods achieve excellent performance on single-label classification of network attacks. However, they do not consider the overlapping phenomenon of behavior attributes between network attacks. As a result, their theoretical upper bound on accuracy is below 100%.
### Multi-label learning
Multi-label learning (MLL) classification tasks are more difficult than single-label classification tasks. Multi-label means that a sample has multiple labels. In some cases, these labels have priorities. The difficulty in constructing a multi-label learning algorithm is mainly due to the exponential growth of the output space. For instance, in multi-label learning with \(M\) basic categories, the theoretical output space is
\(2^{M}\). According to the strength of label correlation mining, methods for multi-label detection mainly have three strategies: first-order, second-order, and high-order[46].
**First-order:** It ignores the correlations between labels and only builds a binary classifier for each single label over the samples[5; 45; 8]. For instance, a multi-label classification problem with \(M\) basic categories is decomposed into \(M\) independent two-class problems. First-order methods are simple and can be quickly constructed using basic classifiers. However, the correlations between labels cannot be effectively exploited in this way.
**Second-order:** It explores the correlations between pairs of labels, such as dividing the labels into related and unrelated sets[13; 30; 37], or dividing the labels into relevant and irrelevant sets[10; 12]. For instance, in a multi-label problem of \(M\) basic categories, it constructs \(\frac{M(M-1)}{2}\) two-classifiers of label pairs. Compared with the first-order strategy, the second-order strategy considers the correlation between label pairs.
**High-order:** It considers the associations among multiple labels[7; 14; 19; 39]. For instance, each label subset is directly converted into a specific natural number, which converts the multi-label problem into a single-label multi-classification problem. Generally, high-order methods can achieve better detection results, but the structure is also relatively more complicated.
There are two mainstream approaches based on the above three strategies: problem transformation and algorithm adaptation. Problem transformation converts the multi-label problem into a combination of single-label two-classification or multi-classification problems, and then uses the corresponding mature algorithms. Algorithm adaptation directly adapts existing algorithms; for example, ML-KNN[45] determines the label subset of a sample based on its neighbors' features.
Many researchers conduct multi-label detection in cyber security. Li _et al._[25] conduct APT-related potential threat detection. Han _et al._[17] propose a weakly supervised multi-label learning method based on collaborative embedding to solve the problem of incomplete data collection. However, there is no relevant research on multi-label learning algorithms for network attack detection under the overlapping phenomenon of behavior attributes.
## 3 Analysis of the overlapping behavior attribute
We discover that there is an overlapping phenomenon of behavior attributes between network attacks in the real world. Various network attacks are not clear-cut, but overlap and contain each other. We show a demo in Fig.1, assuming that network attacks are distributed in a two-dimensional space. As shown in Fig.1, each point represents a sample; it can be seen that network attacks A and B overlap in behavior attributes, and some samples of network attacks C and D also partially overlap. This phenomenon means that a sample may be multi-labeled. Here we cite some typical cases in practice and give an analysis. The specific data analysis is in Section 6.1.
### Definition of the overlapping behavior attribute
The formal description of the overlapping phenomenon is as follows. In a network attack dataset \(D\), a sample \(x=[x^{(1)},x^{(2)},...,x^{(d)}]\). \(D\) consists of multiple attack sub-datasets
\((D_{1},D_{2},...,D_{M})\), where the attack \(i\) has \(|D_{i}|\) samples. We define two samples \(x\) and \(x^{\prime}\) with overlapping behavior attribute as \(x=x^{\prime}\), which are exactly the same in all features, as follows:
\[x=x^{\prime}\Leftrightarrow\left\{x^{(i)}=x^{\prime(i)};i=1,2,...,d\right\}\]
Therefore, we define that if there is the overlapping phenomenon of behavior attribute between network attacks in \(D\), then \(\exists\left(x_{1}\in D_{1},x_{2}\in D_{2},...,x_{k}\in D_{k};k\leqslant M\right)\) such that \(x_{1}=x_{2}=...=x_{k}\).
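To illustrate how this definition can be checked on a tabular attack dataset, the sketch below groups samples whose feature vectors are identical and collects the distinct labels they carry. The column names ("label", feature columns) and the toy data are assumptions for illustration only.

```python
# Illustrative sketch: finding overlapping samples (identical feature rows
# with different labels) in a tabular attack dataset using pandas.
import pandas as pd


def find_overlapping_samples(df: pd.DataFrame, label_col: str = "label"):
    """Return groups of identical feature rows that carry more than one label."""
    feature_cols = [c for c in df.columns if c != label_col]
    label_sets = df.groupby(feature_cols)[label_col].apply(set)
    return label_sets[label_sets.apply(len) > 1]


# Toy example: two identical rows labelled DoS and Fuzzers are overlapping.
toy = pd.DataFrame({"f1": [1, 1, 2], "f2": [0, 0, 5],
                    "label": ["DoS", "Fuzzers", "Exploits"]})
print(find_overlapping_samples(toy))
```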
### Case analysis of the overlapping behavior attribute
**Analysis, Backdoor, DoS, Exploits and Fuzzers in UNSW-NB15:** There are a total of 9 attacks in UNSW-NB15, and there is an overlapping phenomenon of behavior attributes among Analysis, Backdoor, DoS, Exploits and Fuzzers, i.e., they share identical records. Take DoS and Fuzzers for example: DoS is a malicious attempt to make a server unavailable to users, and Fuzzers is a technique that attempts to cause a program or network to be suspended by feeding it randomly generated data [29]. Therefore, if an attacker uses the Fuzzers technique during a DoS attack, this behavior can be considered to belong to both DoS and Fuzzers.
**Trojan and Zeroday in CCCS-CIC-AndMal-2020:** A Trojan is software or a script that accepts instructions to perform malicious actions on the victim's host[31]. The attacker usually communicates with the Trojan via C&C channels. Zeroday is relatively broader: generally, any network attack that uses unknown vulnerabilities or backdoors can be considered Zeroday[22; 31]. Therefore, if an attacker uses unknown vulnerabilities to deliver a Trojan or directly uses them as a C&C channel, the behavior can be considered to belong to both Trojan and Zeroday.
There are other overlapping cases, such as Reconnaissance and Exploits in UNSW-NB15. In general, the overlapping phenomenon between network attacks is a universal problem that cannot be ignored.
Figure 1: The overlapping phenomenon of behavior attribute between network attacks.
### Cause analysis of the overlapping behavior attribute
We analyze why the overlapping of behavior attributes exists between network attacks. Based on experience and survey results, we believe there are the following reasons.
* The conceptual definitions of different attacks contain each other, so that a network behavior is inherently multi-labeled. In some circumstances, the definition of two attacks can be considered the same because the attack methods used are the same (such as the Trojan and Zeroday).
* The actions taken by the attacker in the process of implementing the attack behavior are complex, and the malicious features exposed at different stages can cause the same behavior to be classified into different attack types.
* The existing feature extraction methods are incomplete, so the unique, mutually exclusive features of each attack are not extracted. For instance, although the external features are very similar for two different attacks based on the same C&C encrypted channel, the content that mainly shows the characteristics of the attack cannot be directly represented by those statistical features.
In this paper, we investigate the overlapping phenomenon of behavior attribute between network attacks in UNSW-NB15 and CCCS-CIC-AndMal-2020. And we quantitatively verify the phenomenon in Section 6.1.
## 4 Preliminaries
The overlapping phenomenon enables the network attack samples to be multi-labeled. Therefore, we construct a multi-label detection method based on WGAN-GP and AE. In this section, we first introduce the multi-label classification problem. Then the WGAN-GP and AE are introduced.
### Definition of multi-label attack classification problem
We consider the network attack detection problem as multi-label learning based on the overlapping phenomenon of behavior attributes. There is \(\mathbf{X}\in\mathbb{R}^{d}\), where \(d\) is the size of the feature space, and \(\mathbf{Y}=\{y_{1},y_{2},...,y_{M}\}\) denotes the label space. There is a network attack dataset \(\mathbf{D}=\{(x_{i},Y_{i})\left|i=1,2,...,N;x_{i}\in\mathbf{X};Y_{i}\subseteq \mathbf{Y}\right\}\). Our task is to find a multi-label classification function \(h\) that maps \(x\) from the feature space to the label space. For a sample \(x\), \(h(x)\subseteq\mathbf{Y}\) is given as the label set of the sample.
### WGAN-GP
WGAN-GP[15] is a neural network model. It provides a powerful algorithm framework for unsupervised learning. As shown in Fig.2, WGAN-GP includes generator \(G\) and discriminator \(D\). \(G\) is used to imitate the real data distribution \(\mathbb{P}_{r}\) to generate a fake data distribution \(\mathbb{P}_{g}\). \(D\) is used to determine whether the sample is generated by \(G\). \(G\) and \(D\) play against each other and finally strike a balance.
WGAN-GP is more stable and easier to converge than the vanilla GAN, and can generate more diverse data. The vanilla GAN adopts the JS divergence to measure the difference between \(\mathbb{P}_{r}\) and \(\mathbb{P}_{g}\). However, the JS divergence is a constant when the two distributions have no intersection, so the gradients easily vanish. WGAN instead adopts the Earth-Mover distance, referred to as the EM distance or Wasserstein distance, to calculate the difference between two distributions. However, the 1-Lipschitz constraint needs to be satisfied for the EM distance. Therefore, a gradient penalty (GP) is added to the loss of WGAN-GP, as shown in Eq(1), where \(\mathbb{P}_{\hat{x}}\) represents the distribution of all data and \(\lambda\) is the weight.
\[\mathbf{L}=\underset{\tilde{x}\sim\mathbb{P}_{g}}{\mathbb{E}}[D(\tilde{x})]-\underset{x\sim\mathbb{P}_{r}}{\mathbb{E}}[D(x)]+\underbrace{\lambda\underset{\hat{x}\sim\mathbb{P}_{\hat{x}}}{\mathbb{E}}\left[\left(\left\|\nabla_{\hat{x}}D(\hat{x})\right\|_{2}-1\right)^{2}\right]}_{\text{Gradient penalty}} \tag{1}\]
### Auto-Encoder
Auto-Encoder (AE)[3] is a feed-forward neural network, usually used for data dimension reduction and feature extraction. Different from traditional neural networks, which focus on reducing the final loss and improving classification accuracy, AE focuses on extracting effective features from the data and reconstructing them.
AE is mainly composed of two parts, an encoder and a decoder, as shown in Fig.3. The encoder maps the features of the raw data to another feature space. The decoder reconstructs the converted features back to the raw space to ensure that the learned representation better inherits the feature information of the raw data.
Figure 3: The infrastructure of traditional AE.
Figure 2: The infrastructure of traditional WGAN-GP.
## 5 Methodology
### Overview
In this paper, we build MLD-Model to detect multi-label network attacks in the overlapping phenomenon. First, WGAN-GP with improved loss is used for data enhancement to alleviate the class imbalance problem. Then, an unbalanced AE performs unsupervised pre-training. Finally, the pre-trained encoder and a softmax classification layer are combined into a neural network classifier, and the raw labeled data is used for parameter fine-tuning. We adopt the Label PowerSet scheme[35] for multi-label learning. Each label subset is mapped to a natural number, and the multi-label problem is converted to a single-label multi-classification problem.
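To make the Label PowerSet conversion concrete, the sketch below (our illustration, not code from the paper) maps each distinct label set to an integer class and back, so that a standard multi-class classifier can be trained on the converted labels:

```python
def labelsets_to_classes(label_sets):
    """Map each distinct label set (e.g. {'DoS', 'Exploits'}) to an integer class."""
    # frozenset makes a label set hashable and order-independent.
    keys = sorted({frozenset(s) for s in label_sets}, key=sorted)
    to_class = {k: i for i, k in enumerate(keys)}
    from_class = {i: set(k) for k, i in to_class.items()}
    y = [to_class[frozenset(s)] for s in label_sets]
    return y, to_class, from_class

# Example: three samples, two of which share the same label set.
y, to_class, from_class = labelsets_to_classes(
    [{"DoS", "Exploits"}, {"Generic"}, {"DoS", "Exploits"}])
print(y)                 # [0, 1, 0]
print(from_class[y[0]])  # {'DoS', 'Exploits'}
```

With this mapping, the 57 label sets of UNSW-NB15 and the 145 label sets of CCCS-CIC-AndMal-2020 (Section 6.1.2) become 57- and 145-class single-label problems, respectively.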
### Data enhancement
The overall process of data enhancement based on the WGAN-GP with improved loss is shown in Fig.4. In the training phase(Fig.4(a)), when a network attack needs to be generated, the corresponding samples are marked as \(real\), and other attack data is marked as \(fake\). In the generation phase(Fig.4(b)), noise is input to \(G\) to obtain generated samples. Finally, an augmented dataset \(S^{aug}\) consisting of the raw dataset \(S\) and the generated dataset \(S^{gen}\) is constructed.
In this process, we adopt WGAN-GP with improved loss to generate fake attack data. Traditional WGAN-GP only considers that the generated data should be in the same space as
Figure 4: The process of data enhancement based on the WGAN-GP with improved loss in MLD-Model. (a) Training WGAN-GP\({}_{i}\) with improved loss in Eq(2). (b) Generating network attack data. (\(i\): category of network attack (\(i=1,2,...,M\)); \(S\): the raw dataset; \(S^{gen}\): the generated dataset; \(S^{aug}\): the augmented dataset).
the corresponding real data. In the overlapping phenomenon, however, we need to consider that the generated data should also be different from other attacks. Otherwise, the generated data will increase the clutter of feature space.
\[\begin{split}\mathbf{L}&=\underset{\tilde{x}\sim\mathbb{P}_{g}}{\mathbb{E}}[D(\tilde{x})]-\underset{x\sim\mathbb{P}_{r_{i}}}{\mathbb{E}}[D(x)]\\ &+\ \lambda^{\prime}\underset{x^{\prime}\sim\mathbb{P}_{r_{j}}\,(i\neq j;\,j=1,2,\ldots,M)}{\mathbb{E}}[D(x^{\prime})]\end{split} \tag{2}\]
Therefore, we update the traditional loss (Eq(1)) to the improved loss (Eq(2)). As shown in Eq(2), \(\mathbb{P}_{r_{i}}\) is the data distribution of attack \(i\), \(\mathbb{P}_{r_{j}}\ (i\neq j;j=1,2,\ldots,M)\) is the data distribution of the other attacks except \(i\), and \(\lambda^{\prime}\) is the weight. A penalty term is added when the neural network generates fake data of attack \(i\). WGAN-GP with improved loss will generate the corresponding attack data while keeping it as different from the other attacks as possible.
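As an illustrative sketch of Eq(2) (our reading of the loss, not the authors' released code), the critic loss for attack \(i\) adds a term on real samples of the other attacks, so the generator is discouraged from imitating them; in practice the gradient penalty of Eq(1) would typically be added on top:

```python
import torch

def improved_critic_loss(D, gen_batch, real_i_batch, other_batch, lam_prime=1.0):
    """Eq(2): E[D(x_gen)] - E[D(x_real_i)] + lam' * E[D(x_other_attacks)]."""
    loss = D(gen_batch).mean() - D(real_i_batch).mean()
    # Penalty term: pushing down the critic score of other attacks' real data
    # keeps the generated data for attack i away from those attacks.
    return loss + lam_prime * D(other_batch).mean()
```

The weight \(\lambda^{\prime}=1\) corresponds to the setting reported in Section 6.2.2.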
In addition, the generated data is only used in the pre-training phase. Because the feature spaces of network attacks in the overlapping phenomenon are nested within each other, WGAN-GP can only partially simulate the distribution of the attacks. In our experiments, the generated data interferes with the decision of the classifier if it is used for model fine-tuning.
### Pre-training
We call the AE used for pre-training the unbalanced AE, because the network structures of the encoder and decoder are asymmetrical. The encoder has more neurons so that more feature extraction operations can be retained in it. In addition, the data in \(S^{aug}\) is used for unsupervised pre-training of the unbalanced AE. Experiments show that the model can converge faster and
Figure 5: The pre-training phase and classification phase in MLD-Model. (a) Pre-training phase. (b) Classification phase. (\(X\): the data in the augmented dataset \(S^{aug}\); \(X_{i}\): the data in raw dataset \(D\) with multi-label; \(Y_{i}\): the label set of \(X_{i}\) detected by the classifier).
achieve better results after the parameters of the neural network are pre-adjusted. The details are shown in the Pre-training phase(Fig.5(a)).
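A minimal PyTorch sketch of the unbalanced AE and one pre-training step is given below. The layer widths follow the UNSW-NB15 configuration in Tab.6 (encoder 42-512-256-128-64, decoder 64-42); the final Sigmoid is our own assumption so that the binary cross-entropy reconstruction loss is well defined, and the rest of the training loop is likewise illustrative:

```python
import torch
import torch.nn as nn

class UnbalancedAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Deep encoder: 42 -> 512 -> 256 -> 128 -> 64 (most of the capacity lives here).
        self.encoder = nn.Sequential(
            nn.Linear(42, 512), nn.LeakyReLU(0.01),
            nn.Linear(512, 256), nn.LeakyReLU(0.01),
            nn.Linear(256, 128), nn.LeakyReLU(0.01),
            nn.Linear(128, 64), nn.LeakyReLU(0.01),
        )
        # Shallow decoder: 64 -> 42, deliberately asymmetric to the encoder.
        self.decoder = nn.Sequential(nn.Linear(64, 42), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One unsupervised pre-training step on MinMax-normalized samples from S^aug.
model = UnbalancedAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCELoss()        # binary cross-entropy reconstruction loss
batch = torch.rand(32, 42)      # stand-in for a mini-batch drawn from S^aug
loss = criterion(model(batch), batch)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```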
### Classification
The classifier is a neural network formed by stacking the encoder of the pre-trained unbalanced AE and a softmax layer. Since the main parameters of the classifier have been pre-trained, we only need to use the raw data for fine-tuning. The specific process is shown in the classification phase in Fig.5(b).
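A sketch of this stacking and fine-tuning step, under the same assumptions as the AE example above (Label PowerSet classes for UNSW-NB15, i.e. 57 outputs as in Tab.6), might look as follows:

```python
import torch
import torch.nn as nn

def build_classifier(pretrained_encoder, n_classes=57):
    """Stack the pre-trained encoder with a softmax output layer (Encoder-57)."""
    # CrossEntropyLoss applies log-softmax internally, so a Linear head suffices.
    return nn.Sequential(pretrained_encoder, nn.Linear(64, n_classes))

# Fine-tuning on the raw multi-labeled data D (labels PowerSet-encoded as integers).
encoder = nn.Sequential(nn.Linear(42, 512), nn.LeakyReLU(0.01),
                        nn.Linear(512, 256), nn.LeakyReLU(0.01),
                        nn.Linear(256, 128), nn.LeakyReLU(0.01),
                        nn.Linear(128, 64), nn.LeakyReLU(0.01))  # pre-trained in practice
clf = build_classifier(encoder)
optimizer = torch.optim.Adam(clf.parameters(), lr=1e-3)
x, y = torch.rand(32, 42), torch.randint(0, 57, (32,))           # stand-in batch
loss = nn.CrossEntropyLoss()(clf(x), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```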
### Algorithm
The overall training and detection process of MLD-Model is shown in Algorithm 1. The input is network attack data and hyper-parameters, and the output is the corresponding unknown network attack multi-label set. In the data preprocessing phase, we analyze the network attack dataset \(S\) to obtain the multi-label attack dataset \(D\). In the data enhancement phase, WGAN-GP with improved loss is used to construct the network attack generated dataset \(S^{gen}\), and then combined with \(S\) to form the augmented dataset \(S^{aug}\). In the pre-training phase, unsupervised pre-training based on the unbalanced AE is performed using the data in \(S^{aug}\).
In the classification phase, the multi-label attack data in \(D\) is used to fine-tune the model parameters. In the final detection phase, the classifier judges the attack multi-label set of each sample in the unknown network attack dataset \(D^{\prime}\) and gives the multi-label detection results of the unknown network attacks.
### Summary
MLD-Model focuses on the detection of multi-label network attacks. The data enhancement and pre-training techniques are used in the training process to improve the detection performance.
In the data enhancement phase, we adopt WGAN-GP as the basic architecture. Being more stable than the vanilla GAN, WGAN-GP better resists vanishing/exploding gradients and generates more diverse and realistic network attack data. In addition, we propose the improved loss (Eq(2)) to replace the traditional loss.
In the pre-training phase, we set the structure of the encoder and decoder in AE to be asymmetric. There are more neurons in the encoder to obtain stronger feature extraction capabilities. Finally, the classifier combined with the pre-trained encoder can be fine-tuned based on the processed raw multi-labeled dataset to obtain excellent performance.
## 6 Experiment and evaluation
We verify the overlapping phenomenon of behavior attribute between network attacks in UNSW-NB15 and CCCS-CIC-AndMal-2020. Then, we sample data from those two datasets for multi-label detection of MLD-Model.
```
0: network attack dataset \(\mathbf{S}=\{(x_{i},y_{i})\,|i=1,2,...,N;x_{i}\in\mathbf{X};y_{i}\in\mathbf{Y}\}\); unknown network attack dataset \(\mathbf{D^{\prime}}=\{({x^{\prime}}_{i})\,|i=1,2,...,N^{\prime};x^{\prime}_{i} \in\mathbf{X}\}\); Hyper-parameters \(P\): Various hyper-parameters (number of neurons, etc.) needed to construct MLD-Model.
0:\(\mathbf{Y^{\prime}}\): Multi-label results of unknown network attacks.
1: **Step 0:** Data preprocessing
2: network attack multi-label dataset \(D=\{\}\)
3:for\((x_{i},y_{i})\) in \(S\)do
4:if\(x_{i}\) in D then
5:\(D=(D-\{(x_{i},Y_{i})\})\cup\{(x_{i},Y_{i}\cup\{y_{i}\})\}\)
6:else
7:\(D=D\cup\{(x_{i},\{y_{i}\})\}\)
8:endif
9:endfor
10: **Step 1:** Data enhancement
11: build the generated dataset \(S^{gen}=\{\}\)
12:for network attack \(q\) in \(S\)do
13:\(S_{q}=\{(x_{i},real)|i=1,2,...,N;y_{i}=q\}\)
14:\(\bar{S}_{q}=\{(x_{i},fake)|i=1,2,...,N;y_{i}\neq q\}\)
15: WGAN-GP\({}_{q}\)\(\leftarrow\) trained WGAN-GP with improved loss (Eq(2)) based on datasets \(S_{q}\) and \(\bar{S}_{q}\)
16:\(S^{gen}=S^{gen}\cup\) { data generated by WGAN-GP\({}_{q}\)}
17:endfor
18: build the augmented dataset \(S^{aug}=S^{gen}\cup S\)
19: **Step 2:** Pre-training
20: unbalanced AE = encoder + decoder
21: unbalanced AE \(\leftarrow\) trained unbalanced AE based on \(S^{aug}\)
22: **Step 3:** Classification
23: classifier = encoder(in unbalanced AE) + softmax
24: classifier \(\leftarrow\) trained classifier based on \(D\)
25: **Step 4:** Detection
26:\(Y^{\prime}=\{\}\)
27:for\(x^{\prime}_{i}\) in \(D^{\prime}\)do
28:\(Y^{\prime}_{i}\) = classifier(\(x^{\prime}_{i}\)) \(\subseteq\)\(\mathbf{Y}\)
29:\(Y^{\prime}=Y^{\prime}\cup\{(x^{\prime}_{i},Y^{\prime}_{i})\}\)
30:endfor
31:return\(Y^{\prime}\)
```
**Algorithm 1** The training and detection process of MLD-Model
### Data analysis of the overlapping behavior attribute
#### 6.1.1 Data collection and sampling
**UNSW-NB15[29]** is a dataset that combines real modern normal network traffic with contemporary synthesized attack activities. The major categories of the records are normal and attack. The attack records are further classified into 9 families. The raw network packets of the UNSW-NB15 dataset were created by the IXIA PerfectStorm tool in the Cyber Range Lab of the Australian Centre for Cyber Security (ACCS). The number of records in the training set is 175,341 and in the testing set is 82,332. **CCCS-CIC-AndMal-2020[22; 31]** is a comprehensive, large-scale malware dataset from the Canadian Institute for Cybersecurity (CIC) project in collaboration with the Canadian Centre for Cyber Security (CCCS). It includes 200K benign and 200K malware samples, totaling 400K Android apps with 14 prominent malware categories and 191 eminent malware families. These malware samples mainly exhibit malicious characteristics through their network behavior. We show the data sampling results in Tab.1 and Tab.2.
For UNSW-NB15, we select the official training set and test set, as shown in Tab.1. A sample has 42 features. A detailed description of these attacks can be found here[29]. For CCCS-CIC-AndMal-2020, the data is randomly sampled. Since there is too much benign data, we only select part of the benign data (Ben0.csv) and all the malicious data. The
\begin{table}
\begin{tabular}{|c|c||c|c|c|} \hline
**Category** & \begin{tabular}{c} **Size** \\ (Training set + Test set) \\ \end{tabular} & \begin{tabular}{c} **Category** \\ \end{tabular} &
\begin{tabular}{c} **Size** \\ (Training set + Test set) \\ \end{tabular} \\ \hline \hline Normal & 93,000 (56,000+37,000) & Reconnaissance & 13,987 (10,491+3,496) \\ Backdoor & 2,329 (1,746+583) & Exploits & 44,525 (33,393+11,132) \\ Analysis & 2,677 (2,000+677) & DoS & 16,353 (12,264+4,089) \\ Fuzzers & 24,246 (18,184+6,062) & Worms & 174 (130+44) \\ Shellcode & 1,511 (1,133+378) & Generic & 58,871 (40,000+18,871) \\ \hline \hline
**Total** & 257,673 (175,341+82,332) & & & \\ \hline \end{tabular}
\end{table}
Table 1: The data sampling result of UNSW-NB15
\begin{table}
\begin{tabular}{|c|c||c|c||c|c|} \hline
**Category** & **Size** & **Category** & **Size** & **Category** & **Size** \\ \hline \hline Riskware & 97,349 & Adware & 47,198 & FileInferector & 669 \\ Benign & 32,084 & Trojan & 13,542 & Backdoor & 1,538 \\ Zeroday & 13,327 & Ransomware & 6,202 & Banker & 887 \\ Spy & 3,540 & SMS & 3,125 & PUA & 2,051 \\ Dropper & 2,302 & NoCategory & 2,296 & Scareware & 1,556 \\ \hline \hline
**Total** & 227,666 & & & & \\ \hline \end{tabular}
\end{table}
Table 2: The data sampling result of CCCS-CIC-AndMal-2020
sampling results are shown in Tab.2. A sample has 9,503 features. A detailed description of these attacks can be found here[22; 31].
#### 6.1.2 Analysis results
In the overlapping phenomenon of behavior attribute between network attacks, some samples are multi-labeled. We show the specific overlap of attribute based on the data sampled from UNSW-NB15 and CCCS-CIC-AndMal-2020.
The results in UNSW-NB15 are shown in Tab.3. For instance, the number Total=188 in the \(sample\) 1 column means that there are 188 samples that are the same as \(sample\) 1, 66 of which are labeled as DoS and 76 are labeled as Exploits.
The results in CCCS-CIC-AndMal-2020 are shown in Tab.4. For instance, the number Total=5,975 in the \(sample\) 1 column means that there are 5,975 samples that are the same as \(sample\) 1, 3,612 of which are labeled as Trojan and 2,363 are labeled as Zeroday.
In addition, multi-label metrics are also used to show the overlapping phenomenon, so as to quantitatively measure the overlapping distribution. The label diversity, \(LDiv\), is the number of different label sets in the dataset, as shown in the Eq(3). The label cardinality, \(LCard\), is the average label number of one sample, as shown in the Eq(4). Two metrics are calculated to measure the multi-label distribution of network attacks.
\[\text{LDiv}=|\{Y\mid\exists\mathbf{x}:(\mathbf{x},Y)\in\mathbf{D}\}| \tag{3}\]
\[\text{LCard}=\frac{1}{N}\sum_{i=1}^{N}\mid Y_{i}\mid \tag{4}\]
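As a quick illustration of Eq(3) and Eq(4) (our own toy example, not from the paper), \(LDiv\) counts the distinct label sets and \(LCard\) averages the label-set sizes:

```python
def ldiv_lcard(label_sets):
    """LDiv (Eq 3): number of distinct label sets; LCard (Eq 4): mean labels per sample."""
    ldiv = len({frozenset(s) for s in label_sets})
    lcard = sum(len(s) for s in label_sets) / len(label_sets)
    return ldiv, lcard

# Toy example with three samples: two distinct label sets, 5 labels in total.
print(ldiv_lcard([{"DoS", "Exploits"}, {"Generic"}, {"DoS", "Exploits"}]))  # (2, ~1.67)
```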
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Category**} & \multicolumn{5}{c|}{**Network attack records**} \\ \cline{2-7} & sample 1 & sample 2 & sample 3 & sample 4 & sample 5 &... \\ \hline \hline Analysis & 10 & 7 & 6 & 6 & 5 \\ Backdoor & 10 & 7 & 4 & 6 & 4 \\ DoS & 66 & 47 & 38 & 42 & 31 \\ Exploits & 76 & 62 & 60 & 54 & 50 \\ Fuzzers & 10 & 7 & 6 & 6 & 5 \\ Generic & 6 & 1 & 0 & 0 & 0 \\ Normal & 0 & 0 & 0 & 0 & 0 \\ Reconnaissance & 10 & 7 & 6 & 6 & 5 \\ Shellcode & 0 & 0 & 0 & 0 & 0 \\ Worms & 0 & 0 & 0 & 0 & 0 \\ \hline \hline
**Total** & 188 & 138 & 120 & 120 & 100 &... \\ \hline \end{tabular}
\end{table}
Table 3: The top 5 multi-label network attack samples in the overlapping phenomenon of behavior attribute in UNSW-NB15
The data analysis results about multi-label are shown in Tab.5. In UNSW-NB15, there are a total of 10 basic categories and a total of 57 label sets. On average, each sample belongs to 1.689 categories. In CCCS-CIC-AndMal-2020, there are a total of 15 basic categories, and a total of 145 label sets. On average, each sample belongs to 1.413 categories. All processed data can be obtained from here2.
Footnote 2: [https://github.com/BitBrave-Xie/processed-multi-label-dataset](https://github.com/BitBrave-Xie/processed-multi-label-dataset)
### Experimental evaluation of MLD-Model
#### 6.2.1 Evaluation metrics
For the problem of multi-label learning, there are currently two types of evaluation metrics: example-based metrics (first evaluate the performance on a single sample, then average over samples) and label-based metrics (first consider the performance of a single label on all samples, then average over labels). In this paper, we
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Category**} & \multicolumn{5}{c|}{**Network attack records**} \\ \cline{2-7} & sample 1 & sample 2 & sample 3 & sample 4 & sample 5 &... \\ \hline \hline Adware & 0 & 206 & 3 & 134 & 1,547 \\ Backdoor & 0 & 0 & 0 & 58 & 0 \\ Banker & 0 & 0 & 0 & 28 & 0 \\ Benign & 0 & 0 & 0 & 12 & 0 \\ Dropper & 0 & 0 & 0 & 94 & 0 \\ FileInfector & 0 & 0 & 0 & 5 & 0 \\ NoCategory & 0 & 0 & 0 & 59 & 0 \\ PUA & 0 & 0 & 0 & 7 & 0 &... \\ Ransomware & 0 & 0 & 0 & 1,057 & 0 \\ Riskware & 0 & 2,602 & 2,544 & 288 & 186 \\ SMS & 0 & 0 & 0 & 14 & 0 \\ Scareware & 0 & 0 & 0 & 8 & 0 \\ Spy & 0 & 0 & 0 & 199 & 0 \\ Trojan & 3,612 & 0 & 0 & 93 & 0 \\ Zeroday & 2,363 & 0 & 0 & 301 & 0 \\ \hline \hline
**Total** & 5,975 & 2,808 & 2,547 & 2,357 & 1,733 &... \\ \hline \end{tabular}
\end{table}
Table 4: The top 5 multi-label network attack samples in the overlapping phenomenon of behavior attribute in CCCS-CIC-AndMal-2020
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **Basic Category** & **LDiv** & **LCard** \\ \hline \hline UNSW-NB15 & 10 & 57 & 1.689 \\ CCCS-CIC-AndMal-2020 & 15 & 145 & 1.413 \\ \hline \end{tabular}
\end{table}
Table 5: The data analysis results of network attack overlap
do not consider the ranking of multi-labels, so example-based metrics are used to evaluate the model. In these metrics, \(h\) is the model, \(N\) is the sample size, \(x_{i}\) is a sample, \(Y_{i}\) is the corresponding multi-label set, \(\Delta\) denotes the symmetric difference between two sets, and \(h(x_{i})\) is the label set of \(x_{i}\) given by the model \(h\).
\(Subsetacc\) evaluates the absolute accuracy of the model, i.e., the proportion of samples for which all basic labels are successfully identified, as shown in Eq(5).
\[\text{Subsetacc}=\frac{1}{N}\sum_{i=1}^{N}|h(x_{i})=Y_{i}| \tag{5}\]
\(Hloss\) evaluates the error of the model in detecting the multi-label sets of the samples, as shown in Eq(6). The smaller the value, the better the model's ability to separate the labels, and a value of 0 indicates that all samples are perfectly divided.
\[\text{Hloss}=\frac{1}{N}\sum_{i=1}^{N}|h(x_{i})\Delta Y_{i}| \tag{6}\]
The \(Precision(P)\), \(Recall(R)\), \(Accuracy(Acc)\), and \(F1\) imitate the evaluation metrics of single-label classification and are also used to calculate the comprehensive detection performance of the model, as shown in Eq(7).
\[\text{Acc} =\frac{1}{N}\sum_{i=1}^{N}\frac{|Y_{i}\cap h(x_{i})|}{|Y_{i} \cup h\left(x_{i}\right)|} \tag{7}\] \[\text{P} =\frac{1}{N}\sum_{i=1}^{N}\frac{|Y_{i}\cap h(x_{i})|}{|h(x_{i})|}\] \[\text{R} =\frac{1}{N}\sum_{i=1}^{N}\frac{|Y_{i}\cap h(x_{i})|}{|Y_{i}|}\] \[F1 =\frac{2\times P\times R}{P+R}\]
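These example-based metrics can be computed directly from the predicted and true label sets. The helper below is a minimal sketch (our own, not the paper's code) following Eq(5)-(7), with \(Hloss\) taken as the size of the symmetric difference averaged over samples, as in Eq(6):

```python
def example_based_metrics(pred_sets, true_sets):
    """Subsetacc, Hloss, Acc, P, R and F1 over paired predicted/true label sets."""
    n = len(true_sets)
    subset = sum(p == t for p, t in zip(pred_sets, true_sets)) / n
    hloss = sum(len(p ^ t) for p, t in zip(pred_sets, true_sets)) / n   # symmetric difference
    acc = sum(len(p & t) / len(p | t) for p, t in zip(pred_sets, true_sets)) / n
    prec = sum(len(p & t) / len(p) for p, t in zip(pred_sets, true_sets) if p) / n
    rec = sum(len(p & t) / len(t) for p, t in zip(pred_sets, true_sets)) / n
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return subset, hloss, acc, prec, rec, f1

# Toy example: one exact match and one partially correct prediction.
print(example_based_metrics([{"DoS"}, {"DoS", "Exploits"}],
                            [{"DoS"}, {"Exploits"}]))
# -> (0.5, 0.5, 0.75, 0.75, 1.0, ~0.857)
```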
#### 6.2.2 Environmental configuration
The system environment is Ubuntu16.04 LTS. The hardware facilities are 16-core CPU and 128G memory. Pytorch and Scikit-learn in Python3.7 are used to implement MLD-Model. In addition, 3 NVIDIA TITAN XPs are deployed on the server.
As a rule of thumb, we set some hyper-parameters. Samples are normalized with MinMax. The gradient penalty weight is \(\lambda=10\) and our penalty term weight is \(\lambda^{\prime}=1\) in WGAN-GP. The unbalanced AE uses binary cross entropy to calculate the reconstruction loss. The classifier uses cross entropy as the loss. The activation function is \(LeakyReLU\)(negative_slope=0.01). The optimizer is \(Adam(lr\)=1e-3). The WGAN-GP, unbalanced AE, and classifier are constructed with dense layers. The structures of MLD-Model are shown in Tab.6, where the numbers indicate the number of neurons in each dense layer.
#### 6.2.3 Training set and test set
We perform de-duplication after pre-processing the multi-labeled network attack data. In UNSW-NB15, the official training set (after de-duplication) and test set (after de-duplication) are selected. In addition, the generated data for pre-training has 300,000 samples, with an average of 3,000 samples per single-type attack. In CCCS-CIC-AndMal-2020, since there is no official training set and test set, we divide the dataset into training and test sets at a ratio of 8:2. The generated data for pre-training has 300,000 samples, with an average of 2,000 samples per single-type attack. In addition, each sample in CCCS-CIC-AndMal-2020 has 9,503-dimensional features, so the PCA algorithm is used to reduce the dimensionality of each sample to 64 after sampling.
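The preprocessing described here and in Section 6.2.2 (MinMax normalization, plus PCA down to 64 dimensions for CCCS-CIC-AndMal-2020) can be sketched with Scikit-learn as follows; the random stand-in data and the exact call sequence are our own assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((200, 9503))                  # stand-in for CCCS-CIC-AndMal-2020 feature vectors

X_scaled = MinMaxScaler().fit_transform(X)            # MinMax normalization of every feature
X_64 = PCA(n_components=64).fit_transform(X_scaled)   # reduce 9,503 dims to 64
print(X_64.shape)                            # (200, 64)
```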
The data composition is shown in Tab.7. 5-fold cross-validation is adopted in the subsequent experiments.
### Ablation study on key factors in MLD-Model
#### 6.3.1 Data enhancement and pre-training
We believe that data enhancement and pre-training can effectively enable MLD-Model to obtain better detection performance. Fig.6 shows the convergence of the training \(Subsetacc\) of the classifier in MLD-Model under different conditions. In UNSW-NB15 (Fig.6(a)), the classifier reaches convergence under all three conditions, and the classifier based on pre-training with \(S^{aug}\) clearly converges faster and better. In CCCS-CIC-AndMal-2020, the classifier without pre-training fluctuates strongly during convergence, while the classifier of MLD-Model based on pre-training with \(S^{aug}\) achieves the best convergence. In general, the classifier obtains better convergence results after pre-training. In particular, the classifier converges even faster and to a higher value after the generated data is added.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline & **WGAN-GP** & **AE** & **Classifier** \\ \hline \hline UNSW-NB15 & \begin{tabular}{l} \(G\): 100-64-128-256-42 \\ \(D\): 42-64-32-24-1 \\ \end{tabular} & \begin{tabular}{l} Encoder: 42-512-256-128-64 \\ Decoder: 64-42 \\ \end{tabular} & \begin{tabular}{l} Encoder-57 \\ \end{tabular} \\ \hline CCCS-CIC-AndMal-2020 & \begin{tabular}{l} \(G\): 100-128-256-512-64 \\ \(D\): 64-128-64-24-1 \\ \end{tabular} & \begin{tabular}{l} Encoder: 64-1024-512-256-128 \\ Decoder: 128-64 \\ \end{tabular} &
\begin{tabular}{l} Encoder-145 \\ \end{tabular} \\ \hline \end{tabular}
\end{table}
Table 6: Main hyper-parameters structure of MLD-Model.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & **Training set** & **Test set** & **Generated Data** \\ \hline \hline UNSW-NB15 & 101,040 & 53,946 & 300,000 \\ CCCS-CIC-AndMal-2020 & 47,550 & 11,888 & 300,000 \\ \hline \end{tabular}
\end{table}
Table 7: The multi-labeled data composition of training set, test set, generate data for pre-training.
In actual detection, MLD-Model with pre-training, especially with the generated data added, also achieves better detection results. Tab.8 shows the detection results of MLD-Model under different conditions in UNSW-NB15 and CCCS-CIC-AndMal-2020. The classifier of MLD-Model based on pre-training with \(S^{aug}\) obtains the best overall results. For instance, in UNSW-NB15, the \(Hloss\) drops from 0.446 to 0.425, and the \(F1\) increases from 78.72% to 80.06%. In CCCS-CIC-AndMal-2020, the \(Hloss\) drops from 0.365 to 0.342, and the \(F1\) increases from 82.49% to 83.63%.
In addition, we also try to generate data using WGAN-GP with the traditional loss (Eq(1)), which is then used for pre-training. However, the results fluctuate and sometimes make the model even worse than without pre-training. Therefore, we finally select WGAN-GP with improved loss (Eq(2)) to generate data.
Figure 6: The \(Subsetacc\) convergence results of MLD-Model in the classifier phase in training set. (a) In UNSW-NB15, (b) In CCCS-CIC-AndMal-2020.
#### 6.3.2 Time complexity
The performance of MLD-Model in UNSW-NB15 and CCCS-CIC-AndMal-2020 is similar. We compare the time required for MLD-Model to achieve \(Subsetacc\)=84% on the training set under the three conditions in UNSW-NB15 and to achieve \(Subsetacc\)=88% on the training set under the three conditions in CCCS-CIC-AndMal-2020. The results are shown in Tab.9. Since data generation is part of the preliminary preparation, we ignore this part of the time. Tab.9 shows that pre-training can reduce the model convergence time in the fine-tuning phase. Although pre-training with \(S^{aug}\) increases the pre-training time, it further reduces the fine-tuning time.
### Comparison with other methods
There are no related multi-label methods for detecting network attacks in the overlapping phenomenon. Therefore, we select several baseline methods for comparison.
The methods selected are shown in Tab.10. Based on the 3 strategies(first-order, second-order, high-order), algorithms can be divided into problem transformation and algorithm
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline & **Subsetacc** & **Hloss** & **Acc** & **P** & **R** & **F1** \\ \hline \hline
**In UNSW-NB15** & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline No pre-training & 78.01\% & 0.446 & 78.57\% & 78.71\% & 78.74\% & 78.72\% \\ Pre-training with \(S\) & 78.43\% & 0.447 & 78.90\% & 79.12\% & 79.06\% & 79.09\% \\ Pre-training with \(S^{aug}\) & **79.27\%** & **0.425** & **79.87\%** & **80.03\%** & **80.09\%** & **80.06\%** \\ \hline
**In CCCS-CIC-AndMal-2020** & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline No pre-training & 81.31\% & 0.365 & 82.08\% & 82.73\% & 82.25\% & 82.49\% \\ Pre-training with \(S\) & 81.53\% & 0.359 & 82.40\% & 83.08\% & 82.67\% & 82.87\% \\ Pre-training with \(S^{aug}\) & **82.28\%** & **0.342** & **83.17\%** & **83.83\%** & **83.43\%** & **83.63\%** \\ \hline \end{tabular}
\end{table}
Table 8: The detection results of MLD-Model in UNSW-NB15 and CCCS-CIC-AndMal-2020
\begin{table}
\begin{tabular}{|l|c|c|c||c|} \hline & **Pre-training** & **Fine-tuning** & **Detection** & **Total** \\ \hline \hline
**In UNSW-NB15** & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline No pre-training & **0s** & 97.40s & **3.22s** & 100.62s \\ Pre-training with \(S\) & 9s & 31.28s & 3.41s & **43.69s** \\ Pre-training with \(S^{aug}\) & 18.9s & **22.97s** & 3.34s & 45.21s \\ \hline
**In CCCS-CIC-AndMal-2020** & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline No pre-training & **0s** & 119.01s & 0.93s & 119.94s \\ Pre-training with \(S\) & 12.6s & 92.97s & 0.93s & **106.50s** \\ Pre-training with \(S^{aug}\) & 30.80s & **76.86s** & 0.93s & 108.59s \\ \hline \end{tabular}
\end{table}
Table 9: The training and detection time of MLD-Model in UNSW-NB15 and CCCS-CIC-AndMal-2020
adaptation. For problem transformation, we select 3 transformation strategies: Binary Relevance(BR)[5], Calibrated Label Ranking(CLR)[12], and Classifier Chains(CC)[32; 33]. For algorithm adaptation, we select the ML-KNN algorithm[45], which belongs to lazy learning[2].
#### 6.4.1 UNSW-NB15
In UNSW-NB15, as shown in Tab.11, MLD-Model is superior to other methods in 4 of the 5 metrics. The Bayes-BR can get the best results on recall \(R\)=99.74%, but it fails in other metrics, such as the \(F1\) of 43.14% and the \(Hloss\) of 5.227. This means that Bayes-BR considers that a sample belongs to almost all categories when it detects a sample, which is unreasonable. In general, MLD-Model can get the better overall performance compared with other methods.
#### 6.4.2 CCCS-CIC-AndMal-2020
In CCCS-CIC-AndMal-2020, as shown in Tab.12, MLD-Model is also superior to other methods in 4 of the 5 metrics. Similarly, although the Bayes-BR can get the best results on recall \(R\)=91.46%, it fails in other metrics, such as the \(F1\) of 54.93% and the \(Hloss\) of 3.715. This means that Bayes-BR considers that a sample belongs to almost all categories when it detects a sample, which is unreasonable. Our method can get the \(F1\) of 83.63% and the \(Hloss\) of 0.342 in CCCS-CIC-AndMal-2020. In general, MLD-Model can get the better overall performance compared with other methods.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Method** & **Type** & **Strategy** & **Re-name** \\ \hline \hline Bayes & BR & first-order & Bayes-BR \\ Logistic Regression & BR & first-order & LR-BR \\ Decision Tree & BR & first-order & DT-BR \\ Random Forest & BR & first-order & RF-BR \\ SVM & BR & first-order & SVM-BR \\ \hline Bayes & CLR & second-order & Bayes-CLR \\ Logistic Regression & CLR & second-order & LR-CLR \\ Decision Tree & CLR & second-order & DT-CLR \\ Random Forest & CLR & second-order & RF-CLR \\ SVM & CLR & second-order & SVM-CLR \\ \hline Bayes & CC & high-order & Bayes-CC \\ Logistic Regression & CC & high-order & LR-CC \\ Decision Tree & CC & high-order & DT-CC \\ Random Forest & CC & high-order & RF-CC \\ SVM & CC & high-order & SVM-CC \\ \hline ML-KNN[45] & Lazy Learning & first-order & ML-KNN \\ \hline \end{tabular}
\end{table}
Table 10: Multi-label learning methods selected for comparison.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Method** & **Subsetacc** & **Hloss** & **Acc** & **P** & **R** & **F1** \\ \hline \hline Bayes-BR & 22.8\% & 3.715 & 39.19\% & 39.25\% & **91.46\%** & 54.93\% \\ LR-BR & 61.8\% & 0.458 & 62.08\% & 62.37\% & 62.11\% & 62.24\% \\ DT-BR & 67.07\% & 0.546 & 71.65\% & 72.2\% & 76.39\% & 74.24\% \\ RF-BR & 68.76\% & 0.357 & 69.76\% & 70.35\% & 70.2\% & 70.28\% \\ SVM-BR & 59.62\% & 0.47 & 59.76\% & 59.92\% & 59.76\% & 59.84\% \\ \hline Bayes-CLR & 52.83\% & 0.920 & 58.97\% & 59.2\% & 64.98\% & 61.96\% \\ LR-CLR & 73.81\% & 0.521 & 74.54\% & 75.31\% & 74.54\% & 74.92\% \\ DT-CLR & 77.37\% & 0.439 & 78.45\% & 79.22\% & 78.8\% & 79.01\% \\ RF-CLR & 80.47\% & 0.378 & 81.46\% & 82.26\% & 81.69\% & 81.98\% \\ SVM-CLR & 74.3\% & 0.510 & 75.06\% & 75.86\% & 75.06\% & 75.46\% \\ \hline Bayes-CC & 54.76\% & 1.331 & 58.13\% & 58.28\% & 65.21\% & 61.55\% \\ LR-CC & 70.35\% & 0.588 & 71.06\% & 71.83\% & 71.06\% & 71.44\% \\ DT-CC & 72.37\% & 0.555 & 74.27\% & 74.94\% & 75.75\% & 75.34\% \\ RF-CC & 73.42\% & 0.334 & 74.45\% & 75.1\% & 74.87\% & 74.99\% \\ SVM-CC & 68.95\% & 0.615 & 69.69\% & 70.5\% & 69.69\% & 70.09\% \\ \hline ML-KNN & 72.59\% & 0.382 & 73.48\% & 73.94\% & 73.92\% & 73.93\% \\ \hline MLD-Model & **82.28\%** & **0.342** & **83.17\%** & **83.83\%** & 83.43\% & **83.63\%** \\ \hline \end{tabular}
\end{table}
Table 12: Detection results of MLD-Model and other methods in CCCS-CIC-AndMal-2020
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Method** & **Subsetacc** & **Hloss** & **Acc** & **P** & **R** & **F1** \\ \hline \hline Bayes-BR & 10.60\% & 5.227 & 27.48\% & 27.52\% & **99.74\%** & 43.14\% \\ LR-BR & 50.72\% & 0.635 & 51.73\% & 51.88\% & 52.55\% & 52.21\% \\ DT-BR & 65.15\% & 0.543 & 70.01\% & 70.08\% & 74.8\% & 72.36\% \\ RF-BR & 72.36\% & 0.450 & 73.41\% & 73.47\% & 74.09\% & 73.78\% \\ SVM-BR & 48.81\% & 0.603 & 49.13\% & 49.17\% & 49.24\% & 49.2\% \\ \hline Bayes-CLR & 23.66\% & 1.765 & 33.02\% & 33.28\% & 42.22\% & 37.22\% \\ LR-CLR & 63.83\% & 0.736 & 64.24\% & 64.64\% & 64.33\% & 64.49\% \\ DT-CLR & 73.65\% & 0.539 & 74.25\% & 74.35\% & 74.52\% & 74.43\% \\ RF-CLR & 76.61\% & 0.478 & 77.19\% & 77.24\% & 77.41\% & 77.33\% \\ SVM-CLR & 63.85\% & 0.734 & 64.3\% & 64.7\% & 64.42\% & 64.56\% \\ \hline Bayes-CC & 8.82\% & 3.662 & 15.85\% & 15.99\% & 34.26\% & 21.81\% \\ LR-CC & 68.85\% & 0.644 & 69.21\% & 69.47\% & 69.35\% & 69.41\% \\ DT-CC & 73.09\% & 0.541 & 74.22\% & 74.31\% & 75.08\% & 74.69\% \\ RF-CC & 76.94\% & 0.451 & 77.54\% & 77.59\% & 77.77\% & 77.68\% \\ SVM-CC & 68.75\% & 0.642 & 69.24\% & 69.40\% & 69.48\% & 69.44\% \\ \hline ML-KNN & 64.97\% & 0.590 & 65.78\% & 65.94\% & 66.24\% & 66.09\% \\ \hline MLD-Model & **79.27\%** & **0.425** & **79.87\%** & **80.03\%** & 80.09\% & **80.06\%** \\ \hline \end{tabular}
\end{table}
Table 11: Detection results of MLD-Model and other methods in UNSW-NB15
## 7 Discussion
### Hyper-parameter selection
MLD-Model has a large number of hyper-parameters, such as the number of layers in the neural network, the learning rate, the number of epochs, and the batch size. In the experiments, we make selections based on experience and previous experimental results. For multiple optional hyper-parameters, grid search is used to select the optimal hyper-parameter combination.
### Comparison with related single-label methods
A sample is multi-labeled in the overlapping phenomenon of behavior attribute, and one sample can belong to multiple attacks. However, traditional single-label methods have difficulty detecting multi-label network attacks. Single-label network attack detection methods can only give a single category for a sample, which means that the theoretical upper limit of their accuracy is less than 100%. They perform poorly when applied to multi-label network attack detection.
Taking the multi-label network attack detection in UNSW-NB15 as an example, we implement ICVAE-DNN[41] and SVAER-DNN[40]. The results are shown in Tab.13. The performance of these two methods is lower than that of MLD-Model because only one category can be given for each sample. Therefore, MLD-Model is more suitable for network attack detection in the overlapping phenomenon than single-label detection methods.
### Performance of MLD-Model
Although the overall performance of MLD-Model is better than that of other methods, its various metrics (\(Acc\), \(F1\), _etc._) are only around 80+%. There are two main reasons: the increased output space and the complexity of the network attacks themselves.
First, a single-label network attack becomes multi-labeled in the overlapping phenomenon of behavior attribute. In UNSW-NB15, the output space increases from 10 categories to 57 categories after data processing and analysis. The increase in output space makes detection more difficult.
Second, there is not only an overlapping phenomenon of behavior attribute between network attacks, but also a similar phenomenon. The similar phenomenon means that attacks are not completely identical, but very similar: the difference between two samples is less than a threshold \(\epsilon\). This makes them difficult to distinguish, resulting in false positives and false negatives of the model. For instance, as shown in Fig.1 in Section 3, in addition to the overlapping
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Method** & **Subsetacc** & **Hloss** & **Acc** & **P** & **R** & **F1** \\ \hline \hline ICVAE-DNN & 71.55\% & 0.589 & 71.76\% & 72.42\% & 71.76\% & 72.09\% \\ SVAER-DNN & 73.43\% & 0.548 & 73.68\% & 74.47\% & 73.68\% & 74.07\% \\ \hline MLD-Model & **79.27\%** & **0.425** & **79.87\%** & **80.03\%** & **80.09\%** & **80.06\%** \\ \hline \end{tabular}
\end{table}
Table 13: Detection results of MLD-Model and related single-label methods in UNSW-NB15
phenomenon, there is also a similar phenomenon of behavior attribute between network attacks A and B. Research on the similar phenomenon of behavior attribute is also part of our future work.
### Application scenario of MLD-Model
The overlapping phenomenon causes a sample to be potentially multi-labeled. If a sample is multi-labeled, traditional single-label detection methods can only give it one label at most, which leads to false negatives and makes a theoretical accuracy of 100% impossible. This fundamentally limits the effectiveness of traditional methods. MLD-Model uses a multi-label approach to detect network attacks in the overlapping phenomenon, which preserves a theoretical upper bound of 100% accuracy. More importantly, MLD-Model can help network administrators in two aspects.
**Tracing the source of network attack:** A sample detected as multi-label can reflect more information about the attacker behind it, such as more detailed attack methods, so that we can find the attacker more easily.
**Building a better IDS:** In a specific scenario, the distribution of and similarity between different attacks can be obtained by analyzing and detecting the network attacks in the overlapping phenomenon in a multi-label manner, thereby helping us build a more comprehensive defense scheme.
## 8 Conclusion
In this paper, we discover the overlapping phenomenon of behavior attribute between network attacks in the real world and analyze its causes. Experiments also verify our conclusions. The overlapping phenomenon means that a network attack sample may be multi-labeled. Identifying these attacks with a multi-label method can help researchers better trace the source of network attacks and build a better IDS. Therefore, we also propose a multi-label detection method, MLD-Model, in which WGAN-GP with improved loss is used for data enhancement and an unbalanced AE is used for pre-training. Finally, MLD-Model achieves \(F1\)=80.06% in UNSW-NB15 and \(F1\)=83.63% in CCCS-CIC-AndMal-2020.
In the future, we will further explore the correlation between network attacks, and explore more suitable detection methods.
## Acknowledgements
This work is supported by the National Key Research and Development Program of China (Grant No.2018YFB0804704), and the National Key Research and Development Program of China (Grant No.2019YFB1005201).
|
2309.14565 | Low-temperature giant coercivity in Co$_{6.2}$Ga$_{3.8-x}$Ge$_{x}$
($x$=2.4 to 3.2) | The observation of giant coercivity exceeding 20 kOe at low temperatures in
several transition-metal-based compounds has attracted significant attention
from a fundamental perspective. This research is also relevant to developing
rare-earth-free permanent magnets, wherein cobalt is one of the primary
elements used. To facilitate easy fabrication, rare-earth-free and Co-based
inorganic bulk magnets that exhibit giant coercivity are highly demanded but
rarely reported. Herein, we report the observation of low-temperature giant
coercivity in polycrystalline metallic Co$_{6.2}$Ga$_{3.8-x}$Ge$_{x}$ ($x$=2.4
to 3.2) with the hexagonal Fe$_{13}$Ge$_{8}$-type structure composed of Kagome
and triangular lattices. As the Ge content $x$ decreases from 3.2, the magnetic
ground state changes from ferrimagnetism to ferromagnetism at $x$=2.6. In the
ferrimagnetic state, we observed a signature of spin frustration arising from
the Kagome and/or triangular lattices of Co atoms. The ferromagnetic ordering
temperatures for the $x$=2.6 and 2.4 samples are 46 K and 60 K, respectively.
The coercive fields rapidly increase upon cooling and reach values of 26 kOe
and 44 kOe in the $x$=2.6 and 2.4 samples, respectively, at 2 K. | Jiro Kitagawa, Himawari Nomura, Terukazu Nishizaki | 2023-09-25T22:27:49Z | http://arxiv.org/abs/2309.14565v1 | # Low-temperature giant coercivity in Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2)
###### Abstract
The observation of giant coercivity exceeding 20 kOe at low temperatures in several transition-metal-based compounds has attracted significant attention from a fundamental perspective. This research is also relevant to developing rare-earth-free permanent magnets, wherein cobalt is one of the primary elements used. To facilitate easy fabrication, rare-earth-free and Co-based inorganic bulk magnets that exhibit giant coercivity are highly demanded but rarely reported. Herein, we report the observation of low-temperature giant coercivity in polycrystalline metallic Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2) with the hexagonal Fe\({}_{13}\)Ge\({}_{8}\)-type structure composed of Kagome and triangular lattices. As the Ge content \(x\) decreases from 3.2, the magnetic ground state changes from ferrimagnetism to ferromagnetism at \(x\)=2.6. In the ferrimagnetic state, we observed a signature of spin frustration arising from the Kagome and/or triangular lattices of Co atoms. The ferromagnetic ordering temperatures for the \(x\)=2.6 and 2.4 samples are 46 K and 60 K, respectively. The coercive fields rapidly increase upon cooling and reach values of 26 kOe and 44 kOe in the \(x\)=2.6 and 2.4 samples, respectively, at 2 K.
_keywords_: giant coercivity, geometrical frustration, magnetization
## 1 Introduction
NdFeB and Sm-Co permanent magnets are commonly used in modern electric society. However, due to the high supply risk associated with rare earth elements such as Nd and Sm, developing rare-earth-free permanent magnets has become a major research focus[1]. To be commercially viable, permanent magnets require a large coercive field \(H_{\rm c}\) and high saturation magnetization. However, commercial rare-earth-free magnets such as Alnico and ferrite exhibit low \(H_{\rm c}\) values of approximately 3 kOe, much lower than NdFeB and Sm-Co magnets with \(H_{\rm c}\) values of 15-20 kOe[1]. Recently, some transition-metal-based magnets have been reported to exhibit huge coercivity values that exceed 20 kOe, known as giant coercivity[2]. Typically, giant coercivity is observed at low temperatures but often surpasses the \(H_{\rm c}\) values of NdFeB and Sm-Co magnets. Rare-earth-free transition-metal oxides like Mn\({}_{2}\)LiReO\({}_{6}\) and Sr\({}_{5}\)Ru\({}_{4.1}\)O\({}_{15}\) or some Fe-based
compounds have been reported as giant coercive compounds[3, 4, 5, 6, 7, 8, 9]. In most cases, \(H_{\rm c}\) increases rapidly on cooling and reaches values of 40-120 kOe at 2-4 K. The standard features among these giant coercive compounds are an anisotropic crystal structure, canted ferromagnetism, and low saturation magnetization. Despite the relatively high saturation magnetization exhibited by Alnico and ferrite magnets, understanding the underlying mechanism of giant coercivity would prove advantageous as it can inform the exploration of new rare-earth-free hard magnets.
One of the fundamental constituents of commercially-viable rare-earth-free magnets is cobalt. For facile fabrication, a bulk sample is highly desirable; however, reports of rare-earth-free and Co-based non-molecular inorganic bulk magnets exhibiting giant coercivity values over 20 kOe are few and far between. While it is true that room-temperature hard magnetic CoPt thin films, cobalt monolayers, and Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) wires have been shown to exhibit giant coercivity values[10, 11, 12], the only known examples of bulk magnets would be CaBaCo\({}_{4}\)O\({}_{7}\) with the Curie temperature \(T_{\rm C}\) of 70 K and \(H_{\rm c}\) of 20 kOe at 5 K, and K\({}_{2}\)Co\({}_{3}\)(OH)\({}_{2}\)(SO\({}_{4}\))\({}_{3}\)(H\({}_{2}\)O)\({}_{2}\) with \(T_{\rm C}\) = 30 K and \(H_{\rm c}\) = 50 kOe at 1.8 K[13, 14]. Thus, exploring new Co-based non-molecular inorganic bulk ferromagnets with giant coercivity is meaningful.
Our focus is on transition-metal-based compounds with the hexagonal Fe\({}_{13}\)Ge\({}_{8}\)-type structure, which have not been extensively investigated. Only the magnetic properties of Fe\({}_{3}\)Ga\({}_{2-x}\)As\({}_{x}\) (0.21 \(\leq x\leq\) 0.85) and Fe\({}_{3}\)Ga\({}_{0.35}\)Ge\({}_{1.65}\) have been reported[15, 16], and they exhibit soft ferromagnetism at room temperature. While Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) also crystallizes into the Fe\({}_{13}\)Ge\({}_{8}\)-type structure[17], its magnetic properties have not yet been studied. Figure 1 illustrates the crystal structure, and Table 1 provides details of the atomic positions. The space group is \(P6_{3}/mmc\) (No. 194), and there are three Wyckoff sites \(2a\), \(6g\), and \(6h\) for Co atoms. The occupancy at the \(6h\) site is less than 1.0, which suggests the presence of vacancies. The Co2 and Co3 atoms form Kagome networks, while the Co1 atoms form a triangular lattice with a relatively long interatomic distance. We observe that the Co2 atoms form the ideal Kagome lattice, and the Co3 atoms form a slightly distorted Kagome lattice with vacancies. Ga and Ge atoms randomly occupy the \(2c\) and another \(6h\) sites.
In this article, we have examined the magnetic and transport properties of polycrystalline Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) with \(x\) ranging from 2.4 to 3.2. Our investigation has revealed that the material exhibits a metallic nature with itinerant \(d\)-electrons of Co and displays a change from the ferromagnetic (FM) to the ferrimagnetic ground state with an increase in \(x\). This crossover is accompanied by a shift towards a spin frustration regime dominated by antiferromagnetic (AFM) interaction. Remarkably, we have found that the giant coercivity values of 26 kOe and 44 kOe were achieved in Co\({}_{6.2}\)Ga\({}_{1.2}\)Ge\({}_{2.6}\) and Co\({}_{6.2}\)Ga\({}_{1.4}\)Ge\({}_{2.4}\), respectively, at a temperature of 2 K. Our study has further shown that the geometrical frustration with the itinerant \(d\)-electrons and canted spin structure, together with the anisotropic crystal structure, contribute to the emergence of the giant coercivity.
## 2 Materials and Methods
Polycrystalline specimens were synthesized via a homemade arc furnace employing constituent elements Co (99.99 %), Ga (99.99 %), and Ge (99.999 %). The elements were co-melted through an arc-melting process to yield button-shaped samples weighing 1.5 g on a water-cooled Cu hearth. To ensure homogeneity, each sample was subjected to multiple flips and remelting. Subsequently, each as-cast sample was sealed in an evacuated quartz tube and annealed at 700 \({}^{\circ}\)C for 4 days.
The X-ray diffraction (XRD) patterns of the powdered samples were collected at room temperature using a Bragg-Brentano geometry X-ray diffractometer (XRD-7000L, Shimadzu, Kyoto, Japan) with Cu-K\(\alpha\) radiation. A field-emission scanning electron microscope (FE-SEM; JSM-7100F, JEOL, Akishima, Japan) was employed for the metallographic examination, and an energy-dispersive X-ray (EDX) spectrometer, which was equipped with the FE-SEM, was used for the atomic composition analysis.
The temperature dependence of the dc magnetic susceptibility, \(\chi_{\rm dc}\) (\(T\)), ranging from 2 K to 300 K, and the isothermal magnetization curve were measured using MPMS3 (Quantum Design, San Diego, CA, USA). In the magnetization curve measurements,
\begin{table}
\begin{tabular}{c c c c c c} \hline Atom & Wyckoff & \(x\) & \(y\) & \(z\) & Occupancy \\ \hline Co1 & \(2a\) & 0 & 0 & 0 & 1 \\ Co2 & \(6g\) & 0.5 & 0 & 0 & 1 \\ Co3 & \(6h\) & 0.1614 & 0.3228 & 1/4 & 0.83 \\ Ga+Ge & \(2c\) & 1/3 & 2/3 & 1/4 & 1 \\ Ga+Ge & \(6h\) & 0.8088 & 0.6176 & 1/4 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: Atomic positions of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\).
Figure 1: Crystal structure of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\). The solid black lines represent unit cell.
except for the samples with \(x\)= 2.4, 2.6, and 2.8 at 2 K, we employed demagnetization using the field oscillation mode in MPMS3 prior to each measurement. Despite this demagnetization procedure, a residual magnetization was observed when a high coercive field was obtained. Consequently, we excluded the initial magnetization process from the data display. To obtain precise initial magnetization curves, magnetization curves at 2 K were measured for samples with \(x\) values of 2.4, 2.6, and 2.8 after subjecting them to zero-field cooling. The temperature dependence of electrical resistivity, \(\rho\) (\(T\)), ranging from 3 K to room temperature, was measured using a dc four-probe method with a homemade system in a GM refrigerator (UW404, Ulvac cryogenics, Kyoto, Japan).
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \(x\) & \(a\) (Å) & \(c\) (Å) & \(T_{\rm C}\) (K) & \(\mu_{\rm eff}\) (\(\mu_{\rm B}\)/Co) & \(\theta_{\rm W}\) (K) & \(H_{\rm c}\) at 2 K (kOe) & \(\rho\) (RT) (\(\mu\Omega\)cm) \\ \hline
2.4 & 7.906(3) & 4.970(1) & 60.3 & 1.78 & 50 & 44 & 185 \\
2.6 & 7.900(3) & 4.977(1) & 46.4 & 1.77 & 41 & 26 & 186 \\
2.8 & 7.894(4) & 4.984(2) & 13.2 & 1.97 & -83 & 9.3 & 209 \\
3.0 & 7.875(2) & 4.985(1) & 5.8 & 1.95 & -116 & 1.4 & 126 \\
3.2 & 7.857(4) & 4.992(2) & \(<2\) & 2.15 & -309 & 0.24 & 227 \\ \hline \end{tabular}
\end{table}
Table 2: Lattice parameters \(a\) and \(c\), \(T_{\rm C}\), \(\mu_{\rm eff}\), \(\theta_{\rm W}\), \(H_{\rm c}\) at 2 K, and room temperature \(\rho\) value \(\rho\) (RT) of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2).
Figure 2: (a) XRD patterns of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2). In each dataset, the observed (\(\circ\)) and calculated (solid line) XRD patterns are shown at the top. The difference between the observed and calculated XRD patterns is shown at the bottom. The tick marks indicate the positions of Bragg reflections for Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\). (b) Ge content dependence of lattice parameters. (c) \(c/a\) ratio vs. Ge content plot.
## 3 Results and Discussion
Figure 2(a) depicts the X-ray diffraction (XRD) patterns of all specimens, which are well-explained by the Fe\({}_{13}\)Ge\({}_{8}\)-type structure. The hexagonal lattice parameters \(a\) and \(c\) are obtained with the help of a Rietveld refinement program[18, 19] and are summarized in Table 2. For this purpose, we employed the RIETAN-FP program package[18]. To ensure a full parameter fitting process of utmost precision, it is imperative for the peak intensity of XRD to exceed 10000 counts. However, our acquired data yielded a peak intensity of at most 1000 counts. Consequently, we adopted a strategy of fixed atomic positions, as detailed in Table 1, while the fitting parameters encompassed lattice parameters, background function, profile function, and scaling factor. The lattice parameters are plotted as a function of the Ge content \(x\) in Fig.2(b) and reveal that the \(a\)-axis length reduces while the \(c\)-axis expands with increasing \(x\). The \(c/a\) ratio is also calculated for each sample, as displayed in Fig.2(c). The result illustrates a systematic increase of \(c/a\) as the Ge atom replaces the Ga atom gradually. Figures 3(a) to (c) present scanning electron microscopy (SEM) images for several samples (\(x\)=2.4, 2.8, and 3.2). Each non-contrast image indicates a homogeneous chemical composition with no conspicuous impurity phases. The chemical compositions, ascertained through EDX analysis, are as follows: Co\({}_{60.7(3)}\)Ga\({}_{14.6(5)}\)Ge\({}_{24.8(5)}\) for \(x\)=2.4, Co\({}_{60.9(6)}\)Ga\({}_{10.4(7)}\)Ge\({}_{28.7(1)}\) for \(x\)=2.8, and Co\({}_{60.3(5)}\)Ga\({}_{6.8(5)}\)Ge\({}_{32.9(2)}\) for \(x\)=3.2, respectively. Each composition is almost identical to the starting composition. Furthermore, elemental mappings of Co, Ga, and Ge are also presented in Figs.3(a) to (c), revealing the homogeneous distribution of constituent elements. The SEM images indicate the smoothness of the surface, which may be advantageous for various application perspectives. For example, the smoothness of the surface of magnetic material improves the sensitivity in tunnel magnetoresistance sensors[20, 21].
We conducted measurements of \(\chi_{\rm dc}\) (\(T\)) under both zero-field-cooled (ZFC) and field-cooled (FC) conditions, employing an external magnetic field of 500 Oe. The temperature-dependent dc magnetic susceptibility \(\chi_{\rm dc}\) of each sample exhibits an enhancement of \(\chi_{\rm dc}\) in the low-temperature range, which suggests a ferromagnetic ordering, as shown in Fig.4(a). Except for the \(x\)=3.2 sample with magnetic ordering temperature below 2 K, the observed irreversibility between the ZFC and FC datasets is a characteristic of ferromagnetic behavior. This phenomenon can be understood by recognizing the significant magnetic domain pinning during ZFC conditions. The \(\chi_{\rm dc}\) value at 2 K is heavily reduced by increasing \(x\), and the \(T_{\rm C}\) systematically decreases. The temperature at which the minimum temperature derivative of \(\chi_{\rm dc}\) under FC occurs defines \(T_{\rm C}\), shown in Table 2 for each sample and represented in Figs.4(b) and (c). The \(x\)=3.2 sample would possess \(T_{\rm C}\) below 2 K. The definition of \(T_{\rm C}\) employed in this study aligns with the methodology commonly applied in numerous ferromagnetic investigations[22, 23, 24]. This approach is considered reliable, as it has been validated in some ferromagnetic materials where the \(T_{\rm C}\) obtained through this method is consistent with \(T_{\rm C}\) values determined via other physical quantities, such as specific heat[25, 26].
Figure 3: SEM images of (a) \(x\)=2.4, (b) \(x\)=2.8, and (c) \(x\)=3.2 for Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\), respectively. The elemental mappings of Co, Ga, and Ge are also shown.
Figure 4: (a)Temperature dependences of \(\chi_{\rm dc}\) of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2) under ZFC and FC conditions. The external field is 500 Oe. Both axes are on a logarithmic scale. (b) and (c) Temperature derivative of \(\chi_{\rm dc}\) under FC of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2). (d) Temperature dependences of \(1/\chi_{\rm dc}\) of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2). The solid lines represent the fitting results using the Curie-Weiss law.
Figure 5: \(M\)-\(H\) curves measured at temperatures denoted in figure for (a) \(x\)=2.4, (b) \(x\)=2.6, (c) \(x\)=2.8, (d) \(x\)=3.0, and (e) \(x\)=3.2 Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) samples. \(M\)-\(H\) loops at 2 K around (f) negative \(H_{\rm c}\) and (g) positive \(H_{\rm c}\) for \(x\)=2.4, 2.6, and 2.8 samples.
The temperature dependences of inverse \(\chi_{\rm dc}\) are demonstrated in Fig.4(d). In each sample, 1/\(\chi_{\rm dc}\) follows the Curie-Weiss law expressed by \(\chi_{\rm dc}=C/(T-\theta_{\rm W})\) at high temperatures, as indicated by the solid line. The effective magnetic moment \(\mu_{\rm eff}\) obtained from the \(C\) value and the Weiss temperature \(\theta_{\rm W}\) are presented in Table 2. All \(\mu_{\rm eff}\) values are smaller than that of an isolated Co\({}^{2+}\) or Co\({}^{3+}\) ion (3.87 or 4.90 \(\mu_{\rm B}\)/Co), which suggests an itinerant character of \(d\)-electrons. For the \(x\)=2.6 or 2.4 sample, the positive \(\theta_{\rm W}\) is nearly identical to \(T_{\rm C}\), indicating a ferromagnetic ordering. However, in the \(x\)=2.8 \(\sim\) 3.2 samples, negative \(\theta_{\rm W}\) values are present, indicating the dominance of AFM interaction. The magnetization curves exhibit hysteresis loops, which are characteristic of FM compound. Therefore, the samples with \(x\)\(\geq\)2.8 undergo ferrimagnetic ground states. Furthermore, it is notable that the frustration index \(f\), defined as \(|\theta_{\rm W}|/T_{\rm C}\), significantly increases from 6.3 to over 155 as \(x\) increases from 2.8 to 3.2 in the ferrimagnetic state. This index is grounded in the concept that \(\theta_{\rm W}\) reflects the strength of magnetic interactions[27, 28]. Thus, an unfrustrated compound would attain a magnetically ordered state at \(\theta_{\rm W}\), resulting in \(f\)=1. Geometrical frustration, such as triangular and Kagome lattices, often induces magnetic spin frustration, significantly suppressing the ordering temperature. Compounds of this nature frequently exhibit \(f\) values exceeding 5, often surpassing 100[27, 28]. The observed increase in \(f\) within the Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) series, particularly when \(x\) exceeds 2.8, strongly supports the presence of spin frustration. This finding aligns with the geometrically frustrated Kagome and triangular lattices, as depicted in Fig.1. Hence, the notable characteristic of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) is its capacity for chemical manipulation of spin frustration.
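As an illustrative sketch of this analysis (not the authors' analysis code), the Curie-Weiss parameters follow from a linear fit of \(1/\chi_{\rm dc}\) versus \(T\) in the high-temperature range; assuming \(\chi\) is expressed in emu mol\({}^{-1}\) Oe\({}^{-1}\) per Co atom (CGS units), the effective moment is \(\mu_{\rm eff}\simeq\sqrt{8C}\,\mu_{\rm B}\):

```python
import numpy as np

def curie_weiss_fit(T, chi, t_min=150.0):
    """Fit chi = C/(T - theta_W) via a linear fit of 1/chi = T/C - theta_W/C."""
    mask = T >= t_min                 # restrict to the high-temperature range
    slope, intercept = np.polyfit(T[mask], 1.0 / chi[mask], 1)
    C = 1.0 / slope                   # Curie constant (emu K mol^-1 Oe^-1)
    theta_w = -intercept * C          # Weiss temperature (K)
    mu_eff = np.sqrt(8.0 * C)         # effective moment in Bohr magnetons (CGS)
    return C, theta_w, mu_eff

# Synthetic check with the x = 2.4 values of Tab.2 (theta_W = 50 K, mu_eff = 1.78 mu_B).
T = np.linspace(100.0, 300.0, 200)
chi = (1.78**2 / 8.0) / (T - 50.0)
print(curie_weiss_fit(T, chi))        # ~ (0.396, 50.0, 1.78)
```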
The isothermal magnetization curves (where \(M\) is the magnetization and \(H\) is the external field) for all samples, spanning -70 kOe to 70 kOe, are presented in Figs.5 (a) through (e). The high field \(M\) at 2 K displays a sudden drop as \(x\) increases from 2.6 to 2.8, which is indicative of spin compensation in the ferrimagnetic samples with \(x\)=2.8 \(\sim\) 3.2. However, even in the ferromagnetic samples with \(x\)=2.4 or 2.6, the highest \(M\) value is relatively smaller than \(\mu_{\rm eff}\), implying the existence of a canted ferromagnetic structure. Notably, the \(x\)=2.4 and 2.6 samples exhibit a giant coercivity of 44 kOe and 26 kOe, respectively, at 2 K. In the \(x\)=2.4 or 2.6 sample, \(H_{\rm c}\) increases significantly as the temperature drops below \(T_{\rm C}\), and the initial magnetization curve at 2 K, exhibiting a small slope, undergoes an abrupt jump at approximately \(H_{\rm c}\). This magnetization process is a typical characteristic of domain wall pinning[29]. The ferrimagnetic samples also demonstrate hysteresis loops at low temperatures, and the corresponding \(H_{\rm c}\) values at 2 K are provided in Table 2.
Figure 6(a) displays the temperature dependences of \(\rho\) in all the samples. The values of \(\rho\) are normalized by the corresponding room temperature values listed in Table 2. The metallic nature of the samples is evident from the order of magnitude of \(\rho\) in each one. The ferrimagnetic samples exhibit a negative temperature coefficient of resistivity below around 150 K, which could be indicative of carrier localization and/or a partial opening of the gap at the Fermi surface. For a more comprehensive understanding of carrier localization in the \(x\)=2.8-3.2 samples, we present the temperature-dependent
electrical conductivity (\(\sigma\)), normalized by the room temperature \(\sigma\) (designated as \(\sigma\) (RT)) (=1/\(\rho\) (RT)), as a function of \(T^{1/2}\) in Fig.6(b). In the localization regime for each sample, \(\sigma\) diminishes with decreasing temperature, adhering closely to a \(T^{1/2}\) dependence. This temperature response is explicable through a weak localization model for three-dimensional systems[30], expressed as \(\sigma(T)=\sigma_{0}+aT^{1/2}\). In this expression, the initial term represents residual conductivity, while the second term accounts for the influence of weak localization due to electron-electron interactions, with the proportional coefficient \(a\). The solid lines in Fig.6(b) represent fits to this model, effectively capturing the localization characteristics of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.8-3.2). At lower temperatures, typically below 50 K, the experimental \(\sigma\) for each sample surpasses the predicted value of the solid line. This observation could potentially be attributed to impurity conduction, although further investigation is warranted.
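A minimal sketch of this fit (our own illustration, not the authors' code) is a least-squares fit of \(\sigma\) against \(T^{1/2}\) over the localization regime; the synthetic check below uses the \(x\)=2.8 parameters quoted in the caption of Fig.6(b):

```python
import numpy as np

def weak_localization_fit(T, sigma):
    """Fit sigma(T) = sigma_0 + a*sqrt(T) by linear least squares."""
    a, sigma0 = np.polyfit(np.sqrt(T), sigma, 1)   # slope a, intercept sigma_0
    return sigma0, a

# Synthetic check with the x = 2.8 parameters (sigma_0 = 0.00416, a = 6.28e-5).
T = np.linspace(50.0, 150.0, 100)
sigma = 0.00416 + 6.28e-5 * np.sqrt(T)
print(weak_localization_fit(T, sigma))   # ~ (0.00416, 6.28e-05)
```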
We aim to investigate the correlation between Co bond length and magnetism. The Bethe-Slater curve, widely used for analysing magnetism in magnetic metals, suggests that longer and shorter interatomic distances between magnetic atoms favour FM and AFM interactions, respectively[31]. This tendency has been observed in various intermetallic compounds[32, 33]. The selected Co interatomic distances are listed in Table 3, and, taking into account the multiplicity (2 for Co1-Co1 and Co2-Co2, 6 for Co1-Co3, and 4 for Co2-Co3), Co1-Co3 and Co2-Co3 bonding likely determine the magnetic ordering type. The \(x\)-dependence of \(\theta_{\rm W}\) in Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) strongly
Figure 6: (a) Temperature dependences of \(\rho\) for Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2). Each \(\rho\) is normalized by the room temperature \(\rho\) value \(\rho\) (RT) listed in Table 2. (b) Temperature dependences of \(\sigma\) for \(x\)=2.8, 3.0, and 3.2 samples plotted as a function of \(T^{1/2}\). Each \(\sigma\) is normalized by the room temperature \(\sigma\) denoted as \(\sigma\) (RT). The solid lines represent fits to the equation \(\sigma_{0}+aT^{1/2}\) with the associated parameters as follows: (\(\sigma_{0}\) ((\(\mu\Omega\)cm)\({}^{-1}\)), \(a\) ((\(\mu\Omega\)cm)\({}^{-1}\) K\({}^{-1/2}\)))=(0.00416, 6.28\(\times 10^{-5}\)) for \(x\)=2.8, (0.00663, 1.11\(\times 10^{-4}\)) for \(x\)=3.0, and (0.00378, 4.83\(\times 10^{-5}\)) for \(x\)=3.2, respectively.
suggests the coexistence of FM and AFM interactions. Co1-Co3 and Co2-Co3 bonds could lead to AFM and FM interactions, respectively, although further investigation is necessary. When \(x\) exceeds 2.8, the Co1-Co3 and Co2-Co3 bond lengths considerably decrease, consistent with the rapid predominance of AFM interaction as reflected by \(\theta_{\rm W}\). The coexistence of FM and AFM interactions in the Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) system bears similarity to artificial layered AFM-FM structures and compounds characterized by the simultaneous presence of AFM and FM phases[34, 35]. In these systems, an intriguing phenomenon known as the exchange bias effect often manifests, characterized by a shift in the \(M\)-\(H\) hysteresis loop along the magnetic field direction. This effect's signature can be discerned by examining the disparities between the positive and negative \(H_{\rm c}\) values in the \(M\)-\(H\) loop. Positive and negative \(H_{\rm c}\) are the points where the \(M\)-\(H\) loop intersects the positive and negative \(x\)-axis. In Figs.5(f) and (g), we have depicted the positions of the negative and positive \(H_{\rm c}\) for the \(x\)=2.4, 2.6, and 2.8 samples, which exhibit larger \(H_{\rm c}\) values at 2 K, using open circles to denote these positions. For the \(x\)=2.4 and 2.6 samples, showcasing FM ordering with weak spin frustration, the positive \(H_{\rm c}\) values (43 kOe for \(x\)=2.4 and 23 kOe for \(x\)=2.6) are marginally lower than their corresponding negative \(H_{\rm c}\) values (44 kOe for \(x\)=2.4 and 26 kOe for \(x\)=2.6). This observation supports the plausible occurrence of the exchange bias effect in the \(x\)=2.4 and 2.6 samples, aligning with the coexistence of FM and AFM interactions, as mentioned earlier. Conversely, for the \(x\)=2.8 sample, which accompanies the spin-frustration state, the positive \(H_{\rm c}\) value of 9.5 kOe is almost identical to the negative \(H_{\rm c}\) value of 9.3 kOe. As \(x\) surpasses 2.8, the rapid predominance of AFM interactions is implied by the \(x\) dependence of \(\theta_{\rm W}\). Consequently, the emergence of spin frustration due to the overwhelming AFM interactions may attenuate the exchange bias effect.
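In terms of the conventional exchange-bias field, defined from the signed coercive fields as \(H_{\rm EB}=(H_{\rm c}^{+}+H_{\rm c}^{-})/2\), the values quoted above correspond to loop shifts of roughly
\[H_{\rm EB}\approx-0.5\ {\rm kOe}\ (x{=}2.4),\qquad-1.5\ {\rm kOe}\ (x{=}2.6),\qquad+0.1\ {\rm kOe}\ (x{=}2.8),\]
i.e. a small but finite shift for the FM samples and an essentially symmetric loop for the spin-frustrated \(x\)=2.8 sample.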
It should be noted that the layered hexagonal perovskite Sr\({}_{5}\)Ru\({}_{4.1}\)O\({}_{15}\), known for its giant coercive properties[4], shares the same space group \(P6_{3}/mmc\) (No. 194) as Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\). This metallic compound exhibits a highly anisotropic crystal structure with \(c/a\)=4.106, wherein Ru acts as the magnetic atom and partially forms a triangular lattice. The analysis through Curie-Weiss fitting suggests the presence of itinerant \(d\)-electrons of Ru. The saturation moment of 0.05 \(\mu_{\rm B}\)/Ru is much smaller than the \(\mu_{\rm eff}\) value of the Ru ion, implying weak ferromagnetism. The giant coercivity of Sr\({}_{5}\)Ru\({}_{4.1}\)O\({}_{15}\) arises from the large magnetocrystalline anisotropy with the geometrical frustration[4]. Co\({}_{6.2}\)Ga\({}_{1.4}\)Ge\({}_{2.4}\) and Co\({}_{6.2}\)Ga\({}_{1.2}\)Ge\({}_{2.6}\), both of which are also metallic, have a \(c/a\) ratio much smaller than 1.0 (as seen in Fig.2(c)), indicating an anisotropic crystal structure. The Co atoms form Kagome and triangular lattices, and the Co magnetic moment is itinerant. Co\({}_{6.2}\)Ga\({}_{1.4}\)Ge\({}_{2.4}\) and Co\({}_{6.2}\)Ga\({}_{1.2}\)Ge\({}_{2.6}\) exhibit relatively low saturation moments, likely originating from a canted ferromagnetic structure. Thus, the overall behaviour of Co\({}_{6.2}\)Ga\({}_{1.4}\)Ge\({}_{2.4}\) and Co\({}_{6.2}\)Ga\({}_{1.2}\)Ge\({}_{2.6}\) is quite similar to the magnetic and transport properties of Sr\({}_{5}\)Ru\({}_{4.1}\)O\({}_{15}\). Therefore, we speculate that the giant coercivity in rare-earth-free magnets with the space group \(P6_{3}/mmc\) can be attributed to the itinerant \(d\)-electrons in geometrically frustrated metals and canted spin structure, in addition to the anisotropic crystal structure.
As mentioned in the introduction, CaBaCo\({}_{4}\)O\({}_{7}\) is the rare example of a rare-earth-free Co-based bulk inorganic compound exhibiting a giant \(H_{\rm c}\)[13]. This cobaltite also features Co-Kagome layers despite its distorted orthorhombic crystal structure. While its low saturation moment (\(\sim\) 0.7 \(\mu_{\rm B}\)/f.u.) is akin to that of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\), the unique dielectric behaviour is different from the metallic transport of Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 and 2.6). It is thus appropriate to classify Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 and 2.6) as a new category of rare-earth-free Co-based inorganic compounds that exhibit giant coercivity.
A comparison of magnetic properties between Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) and isostructural Fe\({}_{3}\)Ga\({}_{0.35}\)Ge\({}_{1.65}\) would be of great significance. Fe\({}_{3}\)Ga\({}_{0.35}\)Ge\({}_{1.65}\) exhibits FM ordering below 341 K, and even at 50 K, no noticeable hysteresis loop is detected[16]. At 50 K, the saturation magnetization is 80 emu/g, equivalent to approximately 1.47 \(\mu_{\rm B}\)/Fe. Thus, it is likely that the itinerant nature of \(d\)-electrons is weakened in Fe\({}_{3}\)Ga\({}_{0.35}\)Ge\({}_{1.65}\), and the itinerant magnetic moment is essential for the manifestation of giant coercivity in transition-metal magnets with the \(P6_{3}/mmc\) space group.
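For reference, the quoted saturation magnetization of Fe\({}_{3}\)Ga\({}_{0.35}\)Ge\({}_{1.65}\) can be converted using standard atomic masses (molar mass of approximately 312 g/mol and \(N_{\rm A}\mu_{\rm B}\approx 5585\) emu/mol):
\[80\ {\rm emu/g}\times 312\ {\rm g/mol}\approx 2.5\times 10^{4}\ {\rm emu/mol}\approx 4.5\ \mu_{\rm B}/{\rm f.u.}\approx 1.5\ \mu_{\rm B}/{\rm Fe},\]
consistent with the value of approximately 1.47 \(\mu_{\rm B}\)/Fe quoted above.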
## 4 Summary
This study presents the transport and magnetic properties of polycrystalline Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2) with the hexagonal Fe\({}_{13}\)Ge\({}_{8}\)-type structure. The crystal structure of these materials is characterized by the presence of Kagome and triangular lattices formed by Co atoms, as well as three distinct crystallographic sites for Co atoms. The former feature is likely responsible for spin frustration, while the latter is significant for the competition between AFM and FM interactions. All samples are found to be metallic, with Co \(d\)-electrons exhibiting itinerancy across all \(x\) values. As \(x\) increases, the magnetic ground state changes from FM to ferrimagnetic ordering, resulting in the emergence of spin frustration, as evidenced by the large frustration index. Conversely, spin frustration appears to be suppressed in FM samples, although the competition of FM and AFM interactions contributes to the canted spin structure. Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) represents a rare system that allows for the chemical tuning of spin frustration. We note that the chemical tuning of spin frustration has not been reported
\begin{table}
\begin{tabular}{c c c c c} \hline \(x\) & Co1-Co1 & Co1-Co3 & Co2-Co2 & Co2-Co3 \\ \hline
2.4 & 2.485 & 2.536 & 2.485 & 2.632 \\
2.6 & 2.489 & 2.535 & 2.489 & 2.630 \\
2.8 & 2.492 & 2.534 & 2.492 & 2.630 \\
3.0 & 2.493 & 2.530 & 2.493 & 2.625 \\
3.2 & 2.496 & 2.526 & 2.496 & 2.621 \\ \hline \end{tabular}
\end{table}
Table 3: Selected Co interatomic distances in Co\({}_{6.2}\)Ga\({}_{3.8-x}\)Ge\({}_{x}\) (\(x\)=2.4 to 3.2). The unit is Å. The multiplicities are 2 for Co1-Co1, 6 for Co1-Co3, 2 for Co2-Co2, and 4 for Co2-Co3, respectively.
in the other giant coercive materials. Notably, Co\({}_{6.2}\)Ga\({}_{1.4}\)Ge\({}_{2.4}\) and Co\({}_{6.2}\)Ga\({}_{1.2}\)Ge\({}_{2.6}\), with FM ground states, exhibit low-temperature giant coercivities comparable to the \(H_{\rm c}\) value (30 kOe) observed in CoPt thin film studied as potential room-temperature hard magnets. This discovery of a new class of rare-earth-free Co-based compounds exhibiting giant coercivity marks a promising step toward developing rare-earth-free permanent magnets.
J.K. is grateful for the support provided by the Comprehensive Research Organization of Fukuoka Institute of Technology. T.N. is grateful for the support from the Advanced Instruments Center of Kyushu Sangyo University
## Data availability statement
All data that support the findings of this study are included within the article.
## Author contributions
Jiro Kitagawa: Conceptualization, Supervision, Formal analysis, Writing - original draft, Writing - reviewing & editing. Himawari Nomura: Investigation. Terukazu Nishizaki: Investigation, Formal analysis, Writing - reviewing & editing.
## Conflicts of interest
The authors declare no conflict of interest.
|
2309.10703 | Gukov-Pei-Putrov-Vafa conjecture for $SU(N)/\mathbb{Z}_m$ | In our earlier work, we studied the $\hat{Z}$-invariant(or homological
blocks) for $SO(3)$ gauge group and we found it to be same as
$\hat{Z}^{SU(2)}$. This motivated us to study the $\hat{Z}$-invariant for
quotient groups $SU(N)/\mathbb{Z}_m$, where $m$ is some divisor of $N$.
Interestingly, we find that $\hat{Z}$-invariant is independent of $m$. | Sachin Chauhan, Pichai Ramadevi | 2023-09-19T15:41:18Z | http://arxiv.org/abs/2309.10703v2 | # **Gukov-Pei-Putrov-Vafa conjecture for \(\boldsymbol{SU(N)/\mathbb{Z}_{m}}\)**
###### Abstract
In our earlier work, we studied the \(\boldsymbol{\hat{Z}}\)-invariant (or homological blocks) for the \(\boldsymbol{SO(3)}\) gauge group and found it to be the same as \(\boldsymbol{\hat{Z}}^{\boldsymbol{SU(2)}}\). This motivated us to study the \(\boldsymbol{\hat{Z}}\)-invariant for the quotient groups \(\boldsymbol{SU(N)/\mathbb{Z}_{m}}\), where \(\boldsymbol{m}\) is some divisor of \(\boldsymbol{N}\). Interestingly, we find that the \(\boldsymbol{\hat{Z}}\)-invariant is independent of \(\boldsymbol{m}\).
**Keywords:** 3-manifold invariant, Quantum invariant, WRT invariant, Categorification of WRT invariant,
\(\boldsymbol{\hat{Z}}\)-invariant, Homological blocks
## 1 Introduction
Over the past few decades, the notion of string dualities has emerged as a unifying thread connecting diverse branches of mathematics. These dualities, when expressed mathematically, have given rise to profound conjectures, fostering a deeper understanding of the connections between various mathematical domains. One notable duality in this context is the 3d-3d correspondence as documented in works by Dimofte and others[1, 2, 3]. Such a correspondence establishes a link between complex Chern-Simons theory on a 3-manifold \(M\) based on gauge group \(G_{\mathbb{C}}\) and 3d \(\mathcal{N}=2\ T[M;G]\) theory.
This correspondence was a result of compactification of 6d (2,0) superconformal field theory(SCFT) of type ADE Lie algebra via topological twisting along 3-manifold \(M\) so that \(\mathcal{N}=2\) supersymmetry remains preserved in the remaining three flat directions:
\[\text{6d }(2,0)\text{ SCFT of type ADE Lie algebra }\xrightarrow{\text{ compactification on M}}\ \ \text{3d }\mathcal{N}=2\ T[M;G]. \tag{1}\]
The data of 3d \(\mathcal{N}=2\ T[M;G]\) theory is given by manifold \(M\). In other words, for every 3-manifold \(M\) there would be a corresponding 3d \(\mathcal{N}=2\ T[M;G]\) theory encoding the geometry and topology of \(M\). Many numerical and homological invariants of \(M\) have been predicted by studying \(T[M;G]\) on various backgrounds[4, 5]. Further, the topology and geometry of 3-manifolds is fairly well understood now1, but still there is no known way to explicitly identify 3d \(\mathcal{N}=2\ T[M;G]\) for a general \(M\).
Footnote 1: complete topological classification of 3-manifolds is still an open problem
In the seminal paper[6], Witten showed that Chern-Simons theory is a topological gauge theory whose partition function \(Z_{F}^{G}[M;q]\) contains the toplogical information of the underlying manifold \(M\). Inspired by the work of Witten, Reshetikhin and Turaev constructed invariants of 3-manifolds using surgery prescription on links in \(S^{3}\), commonly referred as Witten-Reshetikhin-Turaev(WRT) invariants[7].
A concrete manifestation of the 3d-3d correspondence is the Gukov-Pei-Putrov-Vafa (GPPV) conjecture. This conjecture connects the partition function of Chern-Simons theory (or the Witten-Reshetikhin-Turaev (WRT) invariant) when evaluated on a 3-manifold \(M\) to a specific type of invariant expressed in terms of \(q\)-series. In
mathematical literature, these \(q\)-series valued invariants are commonly referred to as \(\hat{Z}\) or homological blocks, and they are represented as vectors in the realm of \(q\)-series.
Initially, the exploration of \(\hat{Z}\) was focused on \(SU(2)\) gauge group through the analytic continuation of the WRT invariant defined for plumbed 3-manifold \(M(\Gamma)\). This investigation led to the definition of \(\hat{Z}\) for negative semidefinite plumbed 3-manifolds. Subsequent research efforts, as outlined in works[8, 9] extended the study of \(\hat{Z}\) by examining the GPPV conjecture in the contexts of \(SU(N)\), \(SO(3)\), and \(OSp(1|2)\) gauge groups.
Furthermore, it is essential to recognize that the WRT invariant for gauge group \(G\), \(\tau^{G}_{k^{\prime}}[M(\Gamma);\mathfrak{q}]\), is defined at a root of unity denoted as \(\mathfrak{q}\) (\(=\exp\left(\frac{2\pi i}{k^{\prime}}\right)\)), where \(k^{\prime}\) signifies the renormalized Chern-Simons level. In contrast, the variable \(q\) within \(\hat{Z}^{G}_{b}[M(\Gamma);q]\) can be any complex number. Interestingly, as \(q\) approaches \(\mathfrak{q}\), the relationship between these two invariants is expressed through an \(S\)-transformation matrix[4, 5]:
\[\tau^{G}_{k^{\prime}}[M(\Gamma);\mathfrak{q}]\cong\sum_{a,b}S_{ab}\hat{Z}^{G}_ {b}[M(\Gamma);q]\Big{|}_{q\to\mathfrak{q}}, \tag{2}\]
where \(\hat{Z}^{G}_{b}[M(\Gamma);q]\) admits the following physical categorification:
\[\hat{Z}^{G}_{b}[M(\Gamma);q]=\sum_{\begin{subarray}{c}i\in\mathbb{Z}+\Delta_{ b}\\ j\in\mathbb{Z}\end{subarray}}q^{i}(-1)^{j}\text{dim }\mathcal{H}^{i,j}_{b,G}. \tag{3}\]
In this equation, \(\mathcal{H}^{i,j}_{b,G}\) corresponds to the BPS sector of the Hilbert space of \(T[M;G]\) and \(\Delta_{b}\) is a rational number specific to a 3-manifold \(M\). These insights provide a promising avenue for addressing the long-standing categorification problem associated with the WRT invariant.
In our previous study[9], the \(\hat{Z}\)-invariant for the \(SO(3)\) gauge group was investigated by performing an analytical continuation of \(\tau^{SO(3)}_{k}[M(\Gamma);\mathfrak{q}]\) inside the unit circle. Remarkably, it was discovered that \(\hat{Z}^{SO(3)}_{b}[M(\Gamma);q]\) is equivalent to \(\hat{Z}^{SU(2)}_{b}[M(\Gamma);q]\). This finding implies that the \(\hat{Z}\)-invariant depends on the Lie algebra rather than the Lie group, as \(SU(2)\) and \(SO(3)\) share the same Lie algebra. Furthermore, it is worth noting that \(SO(3)\cong SU(2)/\mathbb{Z}_{2}\) is the Langlands dual group of \(SU(2)\). This raises the question of whether the equality \(\hat{Z}^{SU(2)}_{b}[M(\Gamma);q]=\hat{Z}^{SO(3)}_{b}[M(\Gamma);q]\) can be attributed to this Langlands dual group correspondence. Additionally, for a higher rank gauge group \(SU(N)\) with \(N>2\) and \(N\) not being a prime number (\(N\notin\mathbb{P}\)), there exist gauge groups between \(SU(N)\) and \(SU(N)/\mathbb{Z}_{N}\). These groups are formed by taking a quotient of \(SU(N)\) with a subgroup \(\mathbb{Z}_{m}\) of \(\mathbb{Z}_{N}\). All these quotient groups share the same \(\mathfrak{su}(N)\) Lie algebra. Exploring the \(\hat{Z}\)-invariant for the \(SU(N)/\mathbb{Z}_{m}\) quotient groups will eventually answer whether the \(\hat{Z}\)-invariant depends on \(m\).
The physics perspective suggests that the \(\hat{Z}\)-invariant should depend only on the Lie algebra, since the 3d \(\mathcal{N}=2\) theory \(T[M;G]\) is obtained by compactifying the 6d \(\mathcal{N}=(2,0)\) SCFT of type ADE Lie algebra on the 3-manifold \(M\). However, in certain cases the compactified theories do depend on the Lie group rather than only on the Lie algebra[10]. In this paper, our primary objective is to address the question of whether the \(\hat{Z}\)-invariant exhibits dependence on the Lie group or only on the Lie algebra. To achieve this, we explicitly study the Gukov-Pei-Putrov-Vafa conjecture for gauge groups of the form \(SU(N)/\mathbb{Z}_{m}\). For this, we must first define the WRT invariant for the \(SU(N)/\mathbb{Z}_{m}\) gauge group and then proceed with an analysis similar to that conducted in Ref[8].
The organization of this paper is as follows: In section (2), we provide an overview of the GPPV conjecture and \(\hat{Z}\)-invariant. Section (3) introduces the appropriate formula for the WRT invariant for the quotient group \(SU(N)/\mathbb{Z}_{m}\). In section (4), we demonstrate how to decompose the WRT invariant into \(\hat{Z}\)-invariant. Finally, we conclude in section (5) by discussing open problems and offering concluding remarks.
#### Notations and conventions:
We use the following notation throughout this paper.
\[\begin{array}{rl}M\colon&3\text{-manifold}\\ G\colon&\text{Gauge group}\\ G_{\mathbb{C}}\colon&\text{Complex gauge group}\\ \mathfrak{g}\colon&\text{Lie algebra}\\ P\colon&\text{Weight lattice}\\ P_{+}\colon&\text{Cone of dominant integer weights}\\ \Lambda_{i}\colon&\text{Fundamental weight vector where }i=1,2,\ldots,r\\ Q\colon&\text{Root lattice}\\ P^{\prime}\colon&\text{Intermediate lattice between root and weight lattice}\\ (P^{\prime})^{\bullet}\colon&\text{Dual of lattice }P^{\prime}\\ (\lambda,\mu)\colon&\text{Denotes the inner product between any two weight vectors, }\lambda\text{ and }\mu\\ \Gamma\colon&\text{Plumbing graph of tree type}\\ L\colon&\text{Number of vertices in a plumbing graph }\Gamma\\ B\colon&\text{Linking matrix associated to plumbing graph }\Gamma\\ b_{\pm}\colon&\text{Number of positive and negative eigenvalues of }B\\ \sigma\colon&\text{Signature of linking matrix }B\text{ {i.e. }}\sigma=b_{+}-b_{-}\\ W\colon&\text{Weyl group}\\ |W|\colon&\text{Order of the Weyl group}\\ \omega_{i}\colon&i^{\text{th}}\text{ element of the Weyl group}\\ \ell(\omega)\colon&\text{Length of Weyl group element }\omega\\ \rho\colon&\text{Weyl vector}\\ M(\Gamma)\colon&\text{Plumbed 3-manifold}\\ k^{\prime}\colon&\text{Renormalized Chern-Simons level}\\ k\colon&\text{Bare Chern-Simons level}\\ \mathfrak{q}\colon&\text{Root of unity, }\exp\bigl{(}\frac{2\pi i}{k^{2}}\bigr{)}\\ q\colon&\text{An arbitrary complex number inside the unit circle}\\ \deg v\colon&\text{Denotes the degree of vertex }v\text{ in a plumbing graph }\Gamma\\ \tau_{k^{\prime}}^{G}[M(\Gamma);\mathfrak{q}]\colon&\text{WRT invariant for gauge group }G\\ Z_{k^{\prime}}^{G}[M;\mathfrak{q}]\colon&\text{Chern-Simons partition function which is related to WRT invariant as }Z_{k^{\prime}}^{G}[M;\mathfrak{q}]=\frac{\tau_{k^{\prime}}^{G}[M;\mathfrak{q}]} {\tau_{k^{\prime}}^{G}[S^{2}\times S^{1};\mathfrak{q}]}\\ \hat{Z}_{b}^{G}[M(\Gamma);q]\colon&\hat{Z}\text{-invariant labelled by index }b\text{ for gauge group }G\end{array}\]
## 2 Review of GPPV conjecture and \(\hat{Z}\)-invariant
In this section, we will give a brief survey of GPPV conjecture and \(\hat{Z}\)-invariant. Moreover, in this paper, we would limit ourselves to the case of rational homology sphere or plumbed 3-manifold coming from tree type diagrams only. Therefore, we first introduce the plumbed 3-manifolds:
### Plumbed 3-manifold
In general, any connected, closed, orientable 3-manifold can be obtained by surgery on a framed link in \(S^{3}\)[11, 12]. In this context, we focus our attention on the \(L\)-component link \(\mathcal{L}\), composed of unknots with framing \(f_{i}\). The resulting manifold, which emerges through a surgical operation on \(\mathcal{L}\), is referred to as a plumbed 3-manifold. For example, we represent a six component link \(\mathcal{L}\) through a graph \(\Gamma\) with six vertices and call it a plumbing graph as shown in Figure 1.
The degree of any vertex \(v\) (\(\deg v\)) is equal to the total number of edges intersecting \(v\). In Figure 1, we can see that vertices 2 and 5 have a degree of 3, while vertices 1, 3, 4, and 6 each have a degree of 1. Further, we associate a linking matrix \(B\) with graph \(\Gamma\). It is defined as follows:
\[B_{v_{1},v_{2}}=\begin{cases}1,&v_{1},v_{2}\text{ connected},\\ f_{v},&v_{1}=v_{2}=v,\\ 0,&\text{otherwise}\end{cases} \tag{4}\]
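For instance, for a linear plumbing graph with three vertices carrying framings \(f_{1},f_{2},f_{3}\), with edges joining vertex 1 to vertex 2 and vertex 2 to vertex 3, definition (4) gives
\[B=\begin{pmatrix}f_{1}&1&0\\ 1&f_{2}&1\\ 0&1&f_{3}\end{pmatrix}.\]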
Moreover, we define the signature of the linking matrix \(B\) as \(\sigma=b_{+}-b_{-}\), where \(b_{\pm}\) denote the numbers of positive and negative eigenvalues respectively. So, depending on the sign of \(\sigma\), we refer to the corresponding 3-manifold as a positive or negative semidefinite plumbed 3-manifold \(M\). Three-manifolds obtained from plumbing graphs \(\Gamma\) or \(\Gamma^{\prime}\) using the surgery prescription are homeomorphic if \(\Gamma\) can be converted into \(\Gamma^{\prime}\) using the following set of transformations[13, 14]:
### GPPV conjecture
In Ref[4], Gukov et al. proposed a relation between the WRT and \(\hat{Z}\)-invariants for a negative-definite plumbed 3-manifold \(M\). This relation was obtained using analytic continuation of \(\tau_{k}^{SU(2)}[M(\Gamma),\mathfrak{q}]\) and employing the Gauss sum reciprocity formula. The relation is as follows:
\[\tau_{k}^{SU(2)}[M(\Gamma),\mathfrak{q}]=\frac{1}{2\left(\mathfrak{q}^{1/2}-\mathfrak{q}^{-1/2}\right)|\det B|^{1/2}}\times\\ \sum_{a\in\text{Coker}\,B}e^{-2\pi i(k+2)(a,B^{-1}a)}\sum_{b\in 2\text{Coker}\,B+\delta}e^{-2\pi i(a,B^{-1}b)}\lim_{q\to\mathfrak{q}\left(=\exp\left(\frac{2\pi i}{k+2}\right)\right)}\hat{Z}_{b}^{SU(2)}[M(\Gamma);q], \tag{5}\]
where \(\delta=\{\delta_{1},\delta_{2},\ldots,\delta_{L}\}\) and \(\delta_{v}=\text{deg}\;v\;(\text{mod}\;2)\). In literature, this relation is referred to as Gukov-Pei-Putrov-Vafa conjecture. Subsequently, this conjecture was studied for \(SU(N)\),\(SO(3)\) and \(OSp(1|2)\) groups in Ref[8, 9, 15]. For the \(SO(3)\) group, we deduced the conjecture in the following form:
\[\tau_{k}^{SO(3)}[M(\Gamma);\mathfrak{q}]=\frac{1}{2\left(\mathfrak{q}^{1/2}-\mathfrak{q}^{-1/2}\right)|\det B|^{1/2}}\sum_{a\in\text{Coker}\,B}e^{-\pi i(2k+1)(a,B^{-1}a)}\\ \sum_{b\in 2\text{Coker}\,B+\delta}e^{-\pi i\left(a,B^{-1}(b+BI)\right)}\lim_{q\to\mathfrak{q}\left(=\exp\left(\frac{2\pi i}{k+2}\right)\right)}\hat{Z}_{b}^{SU(2)}[M(\Gamma);q]\;. \tag{6}\]
From this expression, we see that the \(\hat{Z}\)-invariant is the same for both the \(SU(2)\) and \(SO(3)\) groups. However, from equation (6), we see that the overall factor multiplying the \(\hat{Z}\)-invariant undergoes modification. Further, a proof of
Figure 1: An example of a plumbing graph \(\Gamma\) and the corresponding link \(\mathcal{L}\)
Figure 2: Kirby-Neumann moves which results in homeomorphic three-manifolds
this conjecture for \(SU(2)\) group appeared in Ref[16]. We will now present a brief overview of the physical and mathematical definitions of the \(\hat{Z}\)-invariant.
### \(\hat{Z}\)-invariant
\(\hat{Z}\)-invariants are \(q\)-series valued topological invariants of \(3\)-manifold \(M\). They admit the physical definition as a partition function of \(T[M;SU(2)]\) evaluated on a cigar geometry \(D^{2}\times_{q}S^{1}\)_i.e._
\[\hat{Z}_{b}^{SU(2)}[M;q]=Z_{T[M;SU(2)]}(D^{2}\times_{q}S^{1};b) \tag{7}\]
where \(b\) is the index for these invariants and it belongs to \(\mathrm{Spin}^{c}(M)\)[17]. This relation can also be written in the following way:
\[\hat{Z}_{b}^{SU(2)}[M;q]=\sum_{\begin{subarray}{c}i\in\mathbb{Z}+\Delta_{b}\\ j\in\mathbb{Z}\end{subarray}}q^{i}(-1)^{j}\mathrm{dim}\ \mathcal{H}_{b,SU(2)}^{i,j} \tag{8}\]
where \(\Delta_{b}\) is a rational number2 labeled by \(\mathrm{Spin}^{c}(M)\) and \(\mathcal{H}_{b,SU(2)}^{i,j}\) corresponds to the BPS sector of the Hilbert space of \(T[M;SU(2)]\). \(\mathcal{H}_{b,SU(2)}^{i,j}\) are doubly-graded homological invariants of \(M\). Hence the \(\hat{Z}_{b}^{SU(2)}[M;q]\) are often referred to as homological blocks. Moreover one can take direct sum of all these homological invariants labelled by \(b\) as follows:
Footnote 2: usually referred to as delta invariant of \(M\), see[18, 19] for more
\[\mathcal{H}_{D^{2},SU(2)}[M]=\bigoplus_{b\in\mathrm{Spin}^{c}(M)/\mathbb{Z}_{ 2}}\ \ \bigoplus_{\begin{subarray}{c}i\in\mathbb{Z}+\Delta_{b},\\ j\in\mathbb{Z}\end{subarray}}\mathcal{H}_{b,SU(2)}^{i,j} \tag{9}\]
then this vector space \(\mathcal{H}_{D^{2},SU(2)}[M]\) is widely seen as the closed \(3\)-manifold analog of Khovanov-Rozansky knot homology. From these definitions, we can interpret coefficients of powers of \(q\) arising in \(\hat{Z}\)-invariant as counting the dimension of the Hilbert space of \(T[M;SU(2)]\).
For negative semidefinite \(3\)-manifold \(M(\Gamma)\), \(\hat{Z}\)-invariant admits the following integral form for \(\mathfrak{su}(2)\) Lie algebra3:
Footnote 3: we are using the Lie algebraic notation because we know \(\hat{Z}\) is same for \(SU(2)\) and \(SO(3)\) group
\[\hat{Z}_{b}^{\mathfrak{su}(2)}[M(\Gamma);q]=(-1)^{b_{+}}q^{\frac{3\sigma-\sum_{v}f_{v}}{4}}\cdot\mathrm{v.p.}\int\limits_{|z_{v}|=1}\ \prod_{v\in\text{Vertices}}\frac{dz_{v}}{2\pi iz_{v}}\ (z_{v}-1/z_{v})^{2-\deg(v)}\cdot\left(\sum_{s\in 2B\mathbb{Z}^{L}+b}q^{-\frac{(s,B^{-1}s)}{4}}\prod_{i=1}^{L}z_{i}^{s_{i}}\right), \tag{10}\]
and "v.p." means that we take principle value integral. Similar expression of \(\hat{Z}\) for \(SU(N)\), \(OSp(1|2)\), \(OSp(2|2)\) and \(SU(2|1)\) for the plumbed \(3\)-manifolds was found in Ref[8, 9, 20, 21]. Moreover, these invariants exhibits the quantum modularity[22, 23]. For \(SU(N)\) group, \(\hat{Z}\)-invariant can be described as[8, 24]:
\[\hat{Z}_{b}^{\mathrm{SU}(N)}[M(\Gamma);q]=(-1)^{\frac{N(N-1)}{2}b_{+}}q^{\frac{3\sigma-\mathrm{Tr}B}{2}\cdot\frac{N^{3}-N}{12}} \tag{11}\] \[\times\mathrm{v.p.}\oint_{|z_{vj}|=1}\prod_{v\in V}\prod_{1\leq j\leq N-1}\frac{\mathrm{d}z_{vj}}{2\pi\mathrm{i}z_{vj}}F_{3d}(z)\Theta_{2d}^{b}(z,q) \tag{12}\]
with
\[F_{3d}(z):=\prod_{v\in V}\left(\sum_{\omega\in W}(-1)^{\ell( \omega)}\prod_{1\leq j\leq N-1}z_{vj}^{(\Lambda_{j},\omega(\rho))}\right)^{2- \deg v}\] \[=\prod_{v\in V}\left(\prod_{1\leq j<k\leq N}\left(y_{vj}^{1/2}y_{ vk}^{-1/2}-y_{vj}^{-1/2}y_{vk}^{1/2}\right)\right)^{2-\deg v},\] \[\Theta_{2d}^{b}(z,q):=\sum_{s\in\mathrm{BQ}^{L}+b}q^{-\frac{1}{2} (s,\mathrm{B}^{-1}s)}\prod_{v\in V}\prod_{1\leq j\leq N-1}z_{vj}^{-(\Lambda_ {j},s_{v})},\]
where \(z_{j}=\frac{y_{j}}{y_{j+1}}\). Here, the principal value integral "v.p." implies the average over \(|W|\) number of deformed contours, each associated with a Weyl chamber. In the following section, we will focus on the WRT invariant for quotient group \(SU(N)/\mathbb{Z}_{m}\) which is necessary to study the corresponding \(\hat{Z}\)-invariant.
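Before moving on, we note a quick consistency check of these conventions: for \(N=2\), where \(W=\{1,s\}\), \(\rho=\Lambda_{1}\) and \((\Lambda_{1},\Lambda_{1})=\frac{1}{2}\), the vertex factor collapses to
\[F_{3d}(z)=\prod_{v\in V}\left(z_{v}^{1/2}-z_{v}^{-1/2}\right)^{2-\deg v},\]
which matches the \(\mathfrak{su}(2)\) integrand in eq. (10) after the change of variables \(z_{v}\to z_{v}^{2}\) (with the corresponding rescaling inside the theta function).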
## 3 WRT invariant for \(\boldsymbol{SU(N)/\mathbb{Z}_{m}}\)
For a 3-manifold \(M(\Gamma)\), we define the WRT invariant4 for quotient group \(SU(N)/\mathbb{Z}_{m}\) as follows[25, 26, 27]:
Footnote 4: we use the normalization \(\tau_{k^{\prime}}^{G}[S^{3};\mathfrak{q}]=1\)
\[\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathfrak{q}]=\widetilde{\mathcal{S}}_{\rho\rho}^{L-1}\frac{\sum_{C^{L}}\prod_{v\in V}\mathcal{V}_{v}\prod_{e\in E}\mathcal{E}_{e}}{\left(\sum_{C}\mathcal{V}(+1\bullet)\right)^{b_{+}}\left(\sum_{C}\mathcal{V}(-1\bullet)\right)^{b_{-}}}, \tag{13}\]
where \(\mathcal{V}\), \(\mathcal{E}\) denotes the vertex and edge factors of \(L\)-component plumbing graph \(\Gamma\), \(b_{\pm}\) represents the number of positive and negative eigenvalues of the linking matrix \(B\) and \(\pm 1\bullet\) denotes the single vertex with \(\pm 1\) framing. The summation is performed over the set of allowed representations of the \(SU(N)/\mathbb{Z}_{m}\) group, which are:
\[C=\{\lambda\in(P_{+}\cap P^{\prime})+\rho\mid(\lambda,\theta^{\vee})<k^{\prime }\}. \tag{14}\]
Here, \(P_{+}\) represents the set of dominant weights, \(\theta^{\vee}\) refers to the maximal root, \(\rho\) denotes the Weyl vector and \(P^{\prime}\) is a sublattice of \(P\) such that there is an isomorphism between abelian group \(P/P^{\prime}\) and cyclic group \(\mathbb{Z}_{m}(Q\subseteq P^{\prime}\subseteq P)\). Furthermore, when \(m=N\), then \(P^{\prime}\) is simply the root lattice \(Q\), and when \(m=1\), then \(P^{\prime}=P\). The vertex and edge factor can be expressed in terms of \(\widetilde{\mathcal{S}}\) and \(\widetilde{\mathcal{T}}\) matrices:
\[\mathcal{V}_{v}=\widetilde{\mathcal{T}}_{\lambda\lambda}^{f_{v}}\widetilde{\mathcal{S}}_{\rho\lambda}^{2-\deg v}\;,\;\;\mathcal{E}=\widetilde{\mathcal{S}}_{\lambda\mu}. \tag{15}\]
The \(\widetilde{\mathcal{S}}\) and \(\widetilde{\mathcal{T}}\) matrices are:5
Footnote 5: these matrices are exactly the usual modular transformation matrices when \(m=1\)
\[\widetilde{\mathcal{S}}_{\lambda\mu}=\frac{i^{|\Delta_{+}|}}{|P^{\prime}/k^{ \prime}Q|^{1/2}}\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathfrak{q}^{\left( \omega(\lambda),\mu\right)},\;\;\;\;\;\widetilde{\mathcal{T}}_{\lambda\mu}= \delta_{\lambda\mu}\mathfrak{q}^{\frac{1}{2}(\lambda,\lambda)}\mathfrak{q}^{- \frac{1}{2}(\rho,\rho)} \tag{16}\]
with
\[\mathfrak{q}=\exp\left(\frac{2\pi i}{k^{\prime}}\right)\;\;\;\;\text{and}\;\; \;\;k^{\prime}\in\mathbb{Z}^{+}, \tag{17}\]
\(W\), \(Q\) and \(k^{\prime}\) denotes the Weyl group, root lattice and renormalized Chern-Simons level respectively.
_Chern-Simons level for \(SU(N)/\mathbb{Z}_{m}\) WRT invariant_
In Ref[28], it was shown that the three-dimensional Chern-Simons gauge theories with compact gauge group \(G\) are classified by fourth cohomology group of the classifying space of the gauge group: \(H^{4}(BG;\mathbb{Z})\). The classification parameter is the Chern-Simons level of the theory which is most commonly denoted by \(k\). The level \(k^{\prime}\) is the renormalised Chern-Simons level which is related to the bare Chern-Simons level \(k\) for \(SU(N)\) gauge group as follows:
\[k^{\prime}=k+N. \tag{18}\]
However for \(SU(N)/\mathbb{Z}_{m}\) group which is non-simply connected, the relation between \(k\) and \(k^{\prime}\) is as follows:
\[k^{\prime}=\gamma k+N, \tag{19}\]
where \(\gamma\) is some integer which can be calculated by considering the following short exact sequence:
\[1\longrightarrow\mathbb{Z}_{m}\longrightarrow SU(N)\stackrel{{\pi }}{{\longrightarrow}}SU(N)/\mathbb{Z}_{m}\longrightarrow 1, \tag{20}\]
where \(\mathbb{Z}_{m}\) is the subgroup of \(\mathbb{Z}_{N}\). Let \(\alpha\) and \(\tilde{\alpha}\) be the generators of \(H^{4}(B(SU(N));\mathbb{Z})\) and \(H^{4}(B(SU(N)/\mathbb{Z}_{m});\mathbb{Z})\) respectively. Then we have the following relation:
\[B\pi^{*}(\tilde{\alpha})=\gamma\alpha, \tag{21}\]
where \(\pi^{*}\) is the pullback map of \(\pi\) in equation (20). The factor \(\gamma\) is simply determined by comparing the images of \(\tilde{\alpha}\) and \(\alpha\) in the cohomology group \(H^{*}(BT;\mathbb{Z})\) where \(T\) is the maximal torus of rank \(N-1\). So, the factor \(\gamma\) is found to be the smallest integer for which the following equation is satisfied[28]:
\[\frac{\gamma}{2}(\Lambda_{a},\Lambda_{a})\in\mathbb{Z},\ \ \forall\ a, \tag{22}\]
where \(\Lambda_{a}\)'s are the fundamental weight vectors corresponding to the subgroup \(\mathbb{Z}_{m}\) of \(\mathbb{Z}_{N}\). For \(SU(N)/\mathbb{Z}_{N}\) group, \(\gamma\) is determined to be \(2N\) when \(N\) is even and \(N\) when \(N\) is odd:
\[k^{\prime}=\begin{cases}Nk+N,&\text{ when $N$ is odd}\\ 2Nk+N,&\text{ when $N$ is even}\end{cases}. \tag{23}\]
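As a quick consistency check of (22) against (23), consider \(SU(3)/\mathbb{Z}_{3}\): using \((\Lambda_{1},\Lambda_{1})=(\Lambda_{2},\Lambda_{2})=\frac{2}{3}\) (cf. Appendix A), the smallest integer satisfying (22) is
\[\frac{\gamma}{2}\cdot\frac{2}{3}=\frac{\gamma}{3}\in\mathbb{Z}\ \Longrightarrow\ \gamma=3,\qquad k^{\prime}=3k+3,\]
in agreement with (23) for odd \(N\).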
For clarity, we have included the computation of sublattice \(P^{\prime}\) and Chern-Simons level \(k^{\prime}\) for certain non-simply connected groups in Appendix (B). With this prescription of WRT for \(SU(N)/\mathbb{Z}_{m}\), we will now focus on the corresponding \(\hat{Z}\) by studying the GPPV conjecture.
## 4 GPPV conjecture for \(\boldsymbol{SU(N)/\mathbb{Z}_{m}}\)
As discussed in the previous section, the WRT invariant \(\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathrm{q}]\) associated with \(M(\Gamma)\) is given by
\[\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathrm{q}]=\widetilde{ \mathcal{S}}_{\rho\rho}^{L-1}\frac{\sum_{C^{L}}\prod_{v\in V}\mathcal{V}_{v} \prod_{e\in E}\mathcal{E}_{e}}{\left(\sum_{C}\mathcal{V}(+1\bullet)\right)^{b _{+}}\left(\sum_{C}\mathcal{V}(-1\bullet)\right)^{b_{-}}}. \tag{24}\]
For the sake of convenience, let's express the above equation in the following manner:
\[\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathrm{q}]=\frac{\mathcal{F}[M(\Gamma);\mathrm{q}]}{\left(\mathcal{F}[M(+1\bullet);\mathrm{q}]\right)^{b_{+}}\left(\mathcal{F}[M(-1\bullet);\mathrm{q}]\right)^{b_{-}}}, \tag{25}\]
where
\[\mathcal{F}[M(\Gamma);\mathrm{q}]=\frac{1}{(\widetilde{\mathcal{S}}_{\rho \rho})^{L+1}}\sum_{C^{L}}\prod_{v\in V}\mathcal{V}_{v}\prod_{e\in E}\mathcal{E }_{e}. \tag{26}\]
Similar to the \(SU(2)\) group, we will have to perform Gauss decomposition of eqn.(24) to extract the homological blocks from it. Hence we rewrite the above equation (26) in a form so that we can use Gauss sum reciprocity formula[29]. We achieve this by extending the summation range \(C^{L}\) over all Weyl chambers \(W(C)^{L}\). Note that the matrices are invariant under the action of Weyl group elements.6 Hence we can sum over all the Weyl chambers and divide by the number of Weyl chambers to rewrite (26) as
Footnote 6: upto a sign but that will not affect our final answer for \(\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathrm{q}]\)
\[\mathcal{F}[M(\Gamma);\mathrm{q}] =\frac{1}{(\widetilde{\mathcal{S}}_{\rho\rho})^{L+1}|W|^{L}}\sum_{W(C)^{L}}\prod_{v\in V}\mathcal{V}_{v}\prod_{e\in E}\mathcal{E}_{e}, \tag{27}\] \[=\frac{1}{(\widetilde{\mathcal{S}}_{\rho\rho})^{L+1}|W|^{L}}\mathrm{q}^{-\frac{\sum_{j=1}^{L}f_{j}}{2}(\rho,\rho)}\left(\frac{i^{|\Delta_{+}|}}{|P^{\prime}/k^{\prime}Q|^{\frac{1}{2}}}\right)^{L+1}\sum_{\lambda\in W(C)^{L}}\underbrace{\prod_{v\in V}\left(\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{(\lambda_{v},\omega(\rho))}\right)^{2-\deg v}}_{\text{linear in $\lambda_{v}$}}\times\] \[\underbrace{\prod_{v\in V}\mathrm{q}^{\frac{f_{v}}{2}(\lambda_{v},\lambda_{v})}\prod_{(e_{1},e_{2})\in E}\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{(\omega(\lambda_{e_{1}}),\lambda_{e_{2}})}}_{\mathrm{q}^{\frac{1}{2}(\lambda,B\lambda)}} \tag{28}\]
where we have used the fact that the sum and product can be interchanged. The set over which the summation is being performed in equation (28) has now become \(W(C)\). We further extend it to the whole lattice \(((P\cap P^{\prime})+\rho)/k^{\prime}Q\), which is just \((P^{\prime}+\rho)/k^{\prime}Q\). In doing so, we observe that for some representations \(\lambda\), the term linear in \(\lambda_{v}\) in (28) will be zero:
\[\prod_{v\in V}\left(\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{(\lambda_{v}, \omega(\rho))}\right)^{2-\deg v}=0. \tag{29}\]
Using Weyl denominator formula, the expression can be rewritten as
\[\prod_{v\in V}\left(\prod_{\alpha\in\Delta_{+}}\left(\mathrm{q}^{\frac{(\lambda_{v},\alpha)}{2}}-\mathrm{q}^{-\frac{(\lambda_{v},\alpha)}{2}}\right)\right)^{2-\deg v}=0.\]
Further, expressing \(\lambda_{v}\) in terms of fundamental weight vectors \(\Lambda_{i}\)_i.e.\(\lambda_{v}=\sum_{j=1}^{r}n_{v_{j}}\Lambda_{j}\)_, the above equation becomes
\[\prod_{v\in V}\left(\prod_{\alpha\in\Delta_{+}}\left(\prod_{1\leq j\leq N-1}x_{v_{j}}^{\frac{(\Lambda_{j},\alpha)}{2}}-\prod_{1\leq j\leq N-1}x_{v_{j}}^{-\frac{(\Lambda_{j},\alpha)}{2}}\right)\right)^{2-\deg v}\bigg{|}_{x_{v_{j}}=\mathrm{q}^{n_{v_{j}}}}=0. \tag{30}\]
Hence, the points for which linear term in \(\lambda_{v}\) becomes zero satisfy the following equation
\[\sum_{1\leq j\leq N-1}n_{v_{j}}(\Lambda_{j},\alpha)=0. \tag{31}\]
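For instance, for \(\mathfrak{su}(2)\) the only positive root is \(\alpha_{1}\) with \((\Lambda_{1},\alpha_{1})=1\), so condition (31) simply becomes \(n_{v}=0\); for \(\mathfrak{su}(3)\), with positive roots \(\alpha_{1},\alpha_{2},\alpha_{1}+\alpha_{2}\), it reads
\[n_{1}^{v}=0,\qquad n_{2}^{v}=0,\qquad\text{or}\qquad n_{1}^{v}+n_{2}^{v}=0,\]
which are exactly the points where the \(\mathfrak{su}(3)\) vertex factor worked out in Appendix A vanishes.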
These points cause singularities when \(\deg v>2\). Hence we first need to regularise the sum over these points. We introduce a parameter \(\beta\) such that
\[\beta\in\mathbb{C}\quad\text{and}\quad 0<|\beta|<1.\]
Using this parameter, we can rewrite the linear term in \(\lambda_{v}\) as:
\[\prod_{v\in V}\left(\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{( \lambda_{v},\omega(\rho))}\right)^{2-\deg v}=\frac{1}{|W|^{L}}\prod_{v\in V} \left[\left(\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{(\lambda_{v}, \omega(\rho))}\beta^{f(\omega,\omega_{1})}\right)^{2-\deg v}+\right.\]
\[\left.\left(\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{(\lambda_{v}, \omega(\rho))}\beta^{f(\omega,\omega_{2})}\right)^{2-\deg v}+\ldots+\left. \left(\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{(\lambda_{v},\omega( \rho))}\beta^{f(\omega,\omega_{|W|})}\right)^{2-\deg v}\right]\right|_{\beta \to 1}, \tag{32}\]
in which the function \(f(\omega,\omega_{i})\) is defined as follows:
\[f(\omega,\omega_{i}):=\begin{cases}1,&\text{ when }\omega=\omega_{i}\\ 0,&\text{ otherwise.}\end{cases} \tag{33}\]
The RHS of equation (32) can be expanded as \(|\beta|<1\):
\[\frac{1}{|W|^{L}}\sum_{m\geq 0}\left(\sum_{s}\chi_{s}^{m}\mathrm{q}^{(\lambda,s)} \right)\beta^{m}, \tag{34}\]
where \(s=\{s_{1},s_{2},\ldots,s_{L}\}\) is some subset of \(P^{L}\). Further, we interchange the summation to rewrite equation (34) as
\[\frac{1}{|W|^{L}}\sum_{s\in Q^{L}+\delta}\underbrace{\left(\sum_{m\geq 0}\chi_{s}^{m}\beta^{m}\right)}_{\xi_{s}^{\beta}}\mathrm{q}^{(\lambda,s)}=\frac{1}{|W|^{L}}\sum_{s\in Q^{L}+\delta}\xi_{s}^{\beta}\mathrm{q}^{(\lambda,s)}, \tag{35}\]
where \(\chi_{s}^{m}\in\mathbb{Z}\). Hence, we can rewrite the linear term as series in \(\mathrm{q}\), and its coefficients \(\xi_{s}^{1}\) can be determined by the following equation:
\[\prod_{v\in V}\left(\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{(\lambda_ {v},\omega(\rho))}\right)^{2-\deg v}=\frac{1}{|W|^{L}}\sum_{s\in Q^{L}+\delta} \xi_{s}^{\beta}\mathrm{q}^{(\lambda,s)}\Big{|}_{\beta\longrightarrow 1}, \tag{36}\]
where \(\delta_{v}=(2-\text{deg }v)\rho\text{ mod }Q\) and \(\xi_{s}^{1}\in\mathbb{Z}.\) In summary, we have rewritten the linear term as some series in \(\mathfrak{q}.\) This series is obtained by taking an average of individual geometric series in \(\mathfrak{q},\) each determined by a specific selection of Weyl chamber. This completes our regularization of linear term. For clarity we have provided a detailed example in the appendix (A). This led us to the following equation (37):
\[\mathcal{F}[M(\Gamma);\mathfrak{q}]=\frac{1}{(\widetilde{\mathcal{S}}_{\rho \rho})^{L+1}|W|}q^{-\frac{\sum f_{j}}{2}(\rho,\rho)}\left(\frac{i^{|\Delta_{+} |}}{|P^{\prime}/k^{\prime}Q|^{\frac{1}{2}}}\right)^{L+1}\times\]
\[\sum_{\lambda\in(P^{\prime}+\rho)^{L}/k^{\prime}Q^{L}}\left(\frac{1}{|W|^{L}} \sum_{s\in Q^{L}+\delta}\xi_{s}^{\beta}\mathfrak{q}^{(\lambda,s)}\right) \mathfrak{q}^{\frac{1}{2}(\lambda,B\lambda)}\Big{|}_{\beta\longrightarrow 1}. \tag{37}\]
Now, in order to use the reciprocity formula we replace \(Q\) with \(\eta P^{\prime}\) for some positive integer \(\eta\) as \(\eta P^{\prime}\subseteq Q\subseteq P^{\prime}\subseteq P\) and subsequently multiply it by the suitable factor given by the order of quotient of these two lattices. Hence, the above equation becomes:
\[\mathcal{F}[M(\Gamma);\mathfrak{q}]=\frac{1}{(\widetilde{\mathcal{S}}_{\rho \rho})^{L+1}|W|}q^{-\frac{\sum f_{j}}{2}(\rho,\rho)}\left(\frac{i^{|\Delta_{+} |}}{|P^{\prime}/k^{\prime}Q|^{\frac{1}{2}}}\right)^{L+1}\underbrace{\left( \frac{1}{|Q/\eta P^{\prime}|^{L}}\right)}_{\text{factor which compensates for replacing }Q\text{ with }\eta P^{\prime}}\times\]
\[\sum_{\lambda\in(P^{\prime}+\rho)^{L}/\eta k^{\prime}(P^{\prime})^{L}}\left( \frac{1}{|W|^{L}}\sum_{s\in Q^{L}+\delta}\xi_{s}^{\beta}\mathfrak{q}^{( \lambda,s)}\right)\mathfrak{q}^{\frac{1}{2}(\lambda,B\lambda)}\Big{|}_{\beta \longrightarrow 1}. \tag{38}\]
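As an illustration of a valid choice of \(\eta\): in the simply connected case \(P^{\prime}=P\) of \(\mathfrak{su}(N)\), the quotient \(P/Q\cong\mathbb{Z}_{N}\) implies
\[N\Lambda_{i}\in Q\qquad\text{for all }i=1,\ldots,N-1,\]
so \(\eta=N\) satisfies \(\eta P^{\prime}\subseteq Q\); more generally, the exponent of the finite abelian group \(P^{\prime}/Q\) can always be taken as \(\eta\).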
Since we are interested in non-simply connected group \(SU(N)/\mathbb{Z}_{m}\) for which \(\rho\notin P^{\prime},\) we have to do the following shift in \(\lambda\): \(\lambda\longrightarrow\lambda+\mathbf{\rho},\) where \(\mathbf{\rho}=(\underbrace{\rho,\rho,\ldots,\rho}_{L\text{-times}}).\) Subsequently, we get the following:
\[\mathcal{F}[M(\Gamma);\mathfrak{q}]=\frac{1}{(\widetilde{\mathcal{S}}_{\rho \rho})^{L+1}|W|^{L+1}}\mathfrak{q}^{-\frac{\sum f_{j}}{2}(\rho,\rho)}\left( \frac{i^{|\Delta_{+}|}}{|P^{\prime}/k^{\prime}Q|^{\frac{1}{2}}}\right)^{L+1} \left(\frac{1}{|Q/\eta P^{\prime}|^{L}}\right)\mathfrak{q}^{\frac{1}{2}(\mathbf{ \rho},B\mathbf{\rho})}\times\]
\[\sum_{s\in Q^{L}+\delta}\xi_{s}^{\beta}\mathfrak{q}^{(\rho,s)}\sum_{\lambda \in(P^{\prime})^{L}/\eta k^{\prime}(P^{\prime})^{L}}\mathfrak{q}^{\frac{1}{2} (\lambda,B\lambda)}\mathfrak{q}^{(\lambda,s+B\mathbf{\rho})}\Big{|}_{\beta \longrightarrow 1}. \tag{39}\]
Now using Gauss sum reciprocity formula[29] and with the assumption that the quadratic form, \(B:\mathbb{Z}^{L}\times\mathbb{Z}^{L}\longrightarrow\mathbb{Z}\) is negative definite7,8, \(\mathcal{F}[M(\Gamma);\mathfrak{q}]\) equals:
Footnote 7: that is \(\sigma=-L\)
Footnote 8: in following equation \(l\) denotes the rank of the lattice \((P^{\prime})^{L}\) and \(\ell\) represents the length of the Weyl group element
\[=\left(\frac{1}{|W|\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathfrak{q}^{(\rho, \omega(\rho))}}\right)^{L+1}\mathfrak{q}^{-\frac{\sum f_{j}}{2}(\rho,\rho)} \left(\frac{1}{|Q/\eta P^{\prime}|^{L}}\right)\left(\frac{\exp\!\left(\frac{ \pi i\sigma}{4}\right)(\eta k^{\prime})^{l/2}}{|\text{det}(\eta B)|^{\frac{1}{2} -1}\text{ Vol}[((P^{\prime})^{\bullet})^{L}]}\right)\times\]
\[\sum_{a\in((P^{\prime})^{\bullet})^{L}/\eta B((P^{\prime})^{\bullet})^{L}}\exp \!\left[-\pi ik^{\prime}(a,B^{-1}a)\right]\sum_{b\in(Q^{L}+\delta)/BQ^{L}} \exp\!\left[-2\pi i(a,B^{-1}(b+B\mathbf{\rho}))\right]\sum_{s\in BQ^{L}+b}\xi_{s}^ {\beta}\mathfrak{q}^{-\frac{(s,B^{-1}s)}{2}}\Big{|}_{\beta\longrightarrow 1}, \tag{40}\]
where \((P^{\prime})^{\bullet}\) denotes the dual lattice of \(P^{\prime}\). The WRT invariant \(\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathrm{q}]\) including the framing factor reduces to
\[=\frac{1}{|W|^{L+1}}\mathrm{q}^{-\frac{(\rho,\rho)}{2}(3L+\mathrm{Tr}B)}\underbrace{\left(\sum_{a\in(P^{\prime})^{\bullet}/\eta(P^{\prime})^{\bullet}}\exp(\pi ik^{\prime}(a,a)-2\pi i(a,\rho))\right)^{-L}}_{|(P^{\prime})^{\bullet}/\eta(P^{\prime})^{\bullet}|^{-L}}\frac{1}{|\mathrm{det}B|^{\frac{N-1}{2}}\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{(\rho,\omega(\rho))}}\times\]
\[\underbrace{\sum_{a\in((P^{\prime})^{\bullet})^{L}/\eta B((P^{\prime})^{\bullet})^{L}}}_{|(P^{\prime})^{\bullet}/\eta(P^{\prime})^{\bullet}|^{L}\sum_{a\in((P^{\prime})^{\bullet})^{L}/B((P^{\prime})^{\bullet})^{L}}}\exp\bigl{(}-\pi ik^{\prime}(a,B^{-1}a)\bigr{)}\sum_{b\in(Q^{L}+\delta)/BQ^{L}}\exp\bigl{(}-2\pi i(a,B^{-1}(b+B\boldsymbol{\rho}))\bigr{)}\times\]
\[\sum_{s\in BQ^{L}+b}\xi_{s}^{\beta}\mathrm{q}^{-\frac{(s,B^{-1}s)}{2}}\Big{|}_{\beta\to 1}, \tag{41}\]
which simplifies to the following:
\[\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathrm{q}]=\frac{1}{|W|^{L+ 1}}\ \frac{q^{-\frac{(\rho,\rho)}{2}(3L+\mathrm{Tr}B)}}{|\mathrm{det}B|^{\frac{N-1} {2}}\sum_{\omega\in W}(-1)^{\ell(\omega)}q^{(\rho,\omega(\rho))}}\sum_{a\in(( P^{\prime})^{\bullet})^{L}/B((P^{\prime})^{\bullet})^{L}}\exp\bigl{(}-\pi ik^{ \prime}(a,B^{-1}a)\bigr{)}\times\]
\[\sum_{b\in(Q^{L}+\delta)/BQ^{L}}\exp\bigl{(}-2\pi i(a,B^{-1}(b+B\boldsymbol{ \rho}))\bigr{)}\sum_{s\in BQ^{L}+b}\xi_{s}^{\beta}\mathrm{q}^{-\frac{(s,B^{-1 }s)}{2}}\Big{|}_{\beta\to 1}. \tag{42}\]
Now, assuming that the following holds:
\[\lim_{\beta\to 1}\sum_{s\in BQ^{L}+b}\xi_{s}^{\beta}\mathrm{q}^{-\frac{(s,B^{-1}s)}{2}}=\lim_{q\to\mathfrak{q}}\sum_{s\in BQ^{L}+b}\xi_{s}^{1}q^{-\frac{(s,B^{-1}s)}{2}}, \tag{43}\]
we finally obtain,
\[\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathrm{q}]=\frac{1}{|W|^{L+1}}\ \frac{q^{-\frac{(\rho,\rho)}{2}(3L+\mathrm{Tr}B)}}{|\mathrm{det}B|^{\frac{N-1}{2}}\sum_{\omega\in W}(-1)^{\ell(\omega)}q^{(\rho,\omega(\rho))}}\sum_{a\in((P^{\prime})^{\bullet})^{L}/B((P^{\prime})^{\bullet})^{L}}\exp\bigl{(}-\pi ik^{\prime}(a,B^{-1}a)\bigr{)}\times\]
\[\sum_{b\in(Q^{L}+\delta)/BQ^{L}}\exp\bigl{(}-2\pi i(a,B^{-1}(b+B\boldsymbol{\rho}))\bigr{)}\sum_{s\in BQ^{L}+b}\xi_{s}^{1}q^{-\frac{(s,B^{-1}s)}{2}}\Big{|}_{q\to\mathfrak{q}}, \tag{44}\]
\[=\frac{1}{|W||\mathrm{det}B|^{\frac{N-1}{2}}\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathrm{q}^{(\rho,\omega(\rho))}}\sum_{a\in((P^{\prime})^{\bullet})^{L}/B((P^{\prime})^{\bullet})^{L}}\exp\bigl{(}-\pi ik^{\prime}(a,B^{-1}a)\bigr{)}\times\]
\[\sum_{b\in(Q^{L}+\delta)/BQ^{L}}\exp\bigl{(}-2\pi i(a,B^{-1}(b+B\boldsymbol{\rho}))\bigr{)}\lim_{q\to\mathfrak{q}}\underbrace{\hat{Z}_{b}^{\mathfrak{su}(N)}[M(\Gamma);q]}_{\text{independent of $m$}}. \tag{45}\]
From equation (44), the explicit expression of the \(\hat{Z}\)-invariant can be read off as:
\[\hat{Z}_{b}^{\mathfrak{su}(N)}[M(\Gamma);q]=|W|^{-L}q^{-\frac{(3L+\mathrm{Tr}B)}{2}(\rho,\rho)}\sum_{s\in BQ^{L}+b}\xi_{s}^{1}q^{-\frac{(s,B^{-1}s)}{2}}\ \in\ |W|^{-L}q^{\Delta_{b}}\mathbb{Z}[[q]], \tag{46}\]
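As a sanity check, for \(N=2\) the prefactor in (46) reduces to that of the \(\mathfrak{su}(2)\) expression (10): with \((\rho,\rho)=\frac{1}{2}\) and \(\sigma=-L\) for negative definite \(B\),
\[q^{-\frac{(3L+\mathrm{Tr}B)}{2}(\rho,\rho)}=q^{\frac{3\sigma-\mathrm{Tr}B}{4}}=q^{\frac{3\sigma-\sum_{v}f_{v}}{4}}.\]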
Thus we have shown that the \(\hat{Z}\)-invariant does not depend on \(m\). The overall factor which relates \(\hat{Z}\) to \(\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathfrak{q}]\) carries the \(m\) dependence. This leads us to the following proposition:
**Proposition**.: _Let \(M(\Gamma)\) be a negative definite plumbed 3-manifold. For the non-simply connected group \(SU(N)/\mathbb{Z}_{m}\), the WRT invariant can be decomposed in the following form:_
\[\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M(\Gamma);\mathfrak{q}]=\frac{1}{|W||\mathrm{det}B|^{\frac{N-1}{2}}\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathfrak{q}^{(\rho,\omega(\rho))}}\sum_{a\in((P^{\prime})^{\bullet})^{L}/B((P^{\prime})^{\bullet})^{L}}\exp\bigl{(}-\pi ik^{\prime}(a,B^{-1}a)\bigr{)}\times\]
\[\sum_{b\in(Q^{L}+\delta)/BQ^{L}}\exp\bigl{(}-2\pi i(a,B^{-1}(b+B\boldsymbol{\rho}))\bigr{)}\lim_{q\to\mathfrak{q}}\hat{Z}_{b}^{\mathfrak{su}(N)}[M(\Gamma);q], \tag{47}\]
_where \(\bullet\) denotes the dual operation on the lattice \(P^{\prime}\), \(k^{\prime}=\gamma k+N\), and \(\mathfrak{q}=\exp\big{(}\frac{2\pi i}{k^{\prime}}\big{)}\)._
Moreover, we can express the terms appearing as coefficients to \(\hat{Z}\)-invariant as linking pairing and homology group. The linking pairing is defined as follows:
**Definition** (Linking pairing).: _For a closed and connected 3-manifold \(M\), with \(\partial M=\emptyset\), we have the linking pairing(\(\ell k\)) on the torsion part of \(H_{1}(M;\mathbb{Z})\),_
\[\ell k:\text{Tor }H_{1}(M;\mathbb{Z})\otimes\text{Tor }H_{1}(M;\mathbb{Z}) \longrightarrow\mathbb{Q}/\mathbb{Z}. \tag{48}\]
_For \(a,b\in\text{Tor }H_{1}(M;\mathbb{Z})\), \(\ell k\) is given as:_
\[\ell k(a,b)=\frac{\#(a^{\prime}\cdot b)}{n}\text{ mod }\mathbb{Z}, \tag{49}\]
_where \(n\in\mathbb{Z}_{\neq 0}\) such that \(n\)\(a=0\in H_{1}(M;\mathbb{Z})\) and \(a^{\prime}\) is a 2-chain which is bounded as \(\partial a^{\prime}=na\). For plumbed 3-manifold \(M(\Gamma)\), \(\ell k\) is simply,_
\[\ell k(a,b)=(a,B^{-1}b)\text{ mod }\mathbb{Z},\ \ \ \ a,b\in\mathbb{Z}^{L}/B \mathbb{Z}^{L}. \tag{50}\]
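For instance, for a plumbing graph consisting of a single vertex with framing \(-p\) (\(p>0\)), the linking matrix is \(B=(-p)\), so \(\text{Tor }H_{1}(M(\Gamma);\mathbb{Z})\cong\mathbb{Z}/p\mathbb{Z}\) and (50) gives
\[\ell k(a,b)=-\frac{ab}{p}\ \text{ mod }\mathbb{Z},\qquad a,b\in\mathbb{Z}/p\mathbb{Z}.\]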
Using this we write the Gukov-Pei-Putrov-Vafa conjecture for \(SU(N)/\mathbb{Z}_{m}\) as follows:
**Conjecture**.: _Let \(M\) be a closed 3-manifold with \(b_{1}(M)=0\) and \(\text{Spin}^{c}(M)\) be the set of \(\text{Spin}^{c}\) structures on \(M\). Then WRT invariant \(\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M;\mathfrak{q}]\) can be decomposed as follows:_
\[\tau_{k^{\prime}}^{SU(N)/\mathbb{Z}_{m}}[M;\mathfrak{q}]=\frac{1}{|H_{1}(M;\mathbb{Z})|^{\frac{N-1}{2}}\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathfrak{q}^{(\rho,\omega(\rho))}}\sum_{a,b\in(\text{Spin}^{c}(M))^{(N-1)}/S_{N}}\exp\Biggl{(}-2\pi ik^{\prime}\sum_{i=1}^{N-1}\ell k(a_{i},a_{i})\Biggr{)}\times\]
\[\exp\Biggl{(}-2\pi i\sum_{i=1}^{N-1}a_{i}\Biggr{)}\exp\Biggl{(}-4\pi i\sum_{i=1}^{N-1}\ell k(a_{i},b_{i})\Biggr{)}\lim_{q\to\mathfrak{q}}\hat{Z}_{b}^{\mathfrak{su}(N)}[M;q], \tag{51}\]
_where \(\hat{Z}_{b}^{\text{su}(N)}[M;q]\in|W|^{-c}q^{\Delta_{b}}\mathbb{Z}[[q]]\) and \(S_{N}\) is the symmetric group of degree \(N\)._
Note that for simply connected \(SU(N)\) group, appendix(C), there is no shift in \(b\). The shift in eqn.(47), \(b\longrightarrow b+B\boldsymbol{\rho}\), is attributed to the non-simply connected nature of \(SU(N)/\mathbb{Z}_{m}\) group. This introduces the term \(\exp\Bigl{(}-2\pi i\sum_{i=1}^{N-1}a_{i}\Bigr{)}\) in the above conjecture.
## 5 Conclusions and future directions
In this paper, we have worked out the explicit form of the GPPV conjecture for the case of the \(SU(N)/\mathbb{Z}_{m}\) gauge group. We have found that the \(\hat{Z}\)-invariant is independent of the \(\mathbb{Z}_{m}\) factor. In fact, it turns out that the dependence on \(\mathbb{Z}_{m}\) arises only as an overall factor multiplying the \(\hat{Z}\)-invariants in the WRT invariant:
\[\tau^{SU(N)/\mathbb{Z}_{m}}_{k^{\prime}}[M;\mathfrak{q}]=\sum_{b}c_{b}(\mathbb{Z}_{m})\hat{Z}_{b}^{\mathfrak{su}(N)}[M;q]\Big{|}_{q\to e^{\frac{2\pi i}{k^{\prime}}}}. \tag{52}\]
We list some of the issues which we encountered while doing this exercise. We hope to resolve these in our future works:
* In the process of the Gauss decomposition of the WRT invariant, there exist singularities corresponding to the walls of the Weyl chambers. For certain quotient groups \(SU(N)/\mathbb{Z}_{m}\), these singularities do not arise by definition (e.g. \(SU(2)/\mathbb{Z}_{2}\)). We are interested in observing how the proof of the GPPV conjecture proceeds in these particular instances. Although a proof of this conjecture recently appeared for simply laced Lie algebras[30], a proof is not yet available for non-simply connected (quotient) groups.
* We have conjectured the relation between WRT invariant for \(SU(N)/\mathbb{Z}_{m}\) group and \(\hat{Z}^{\mathfrak{su}(N)}\). It would be interesting to study the \(w\)-refined version of \(SU(N)\) WRT invariant and its relation with \(\hat{Z}^{\mathfrak{su}(N)}\) invariant. For \(SU(2)\) WRT invariant, a \(w\)-refined WRT invariant was introduced in Ref.[31], and its relation with \(\hat{Z}\) was studied in Ref.[15].
* It would be nice to explore the extension of our work to more general 3-manifolds and knot complements.
Acknowledgments. SC expresses gratitude to Pavel Putrov and Sunghyuk Park for numerous fruitful discussions. SC is also appreciative of the MoU involving CAS-IITB and ICTP, which provided the opportunity for a two-month visit to ICTP, where significant progress related to this research was achieved. SC extends thanks to the organizers of String-Math 2023, where a portion of this work was presented. Additionally, SC would like to acknowledge the IoE cell at IIT Bombay for providing financial support during the visit to String-Math 2023. SC and PR would like to thank all the speakers as well as the organisers of the Learning workshop on BPS states and 3-manifolds for discussions and interactions. PR would like to acknowledge the ICTP's Associate programme, where some progress was made during her visit as a senior associate.
Appendix A Explicit calculation of \(\hat{Z}^{\mathfrak{su}(3)}\)-invariant for a particular plumbing graph
In this appendix, we work out explicitly the \(\hat{Z}\) for \(\mathfrak{su}(3)\) Lie algebra for the following plumbed 3-manifold:
For \(\mathfrak{su}(3)\) case, \(W=\{1,s_{1},s_{2},s_{1}s_{2},s_{2}s_{1},s_{1}s_{2}s_{1}\}\), \(\rho=\Lambda_{1}+\Lambda_{2}\) and \((\rho,\rho)=2\).
So, the \(\hat{Z}\)-invariant is:
\[\hat{Z}_{b}^{\mathfrak{g}}[M(\Gamma),q]=|W|^{-L}q^{-\frac{(3L+\mathrm{Tr}B)}{2}(\rho,\rho)}\sum_{s\in BQ^{L}+b}\xi_{s}^{1}q^{-\frac{(s,B^{-1}s)}{2}}\] (A1)
which in this case:
\[\hat{Z}_{b}^{\mathfrak{su}(3)}[M(\Gamma),q]=\frac{q}{6^{4}}\sum_{s\in BQ^{L}+b}\xi_{s}^{1}q^{-\frac{(s,B^{-1}s)}{2}} \tag{10}\]
where \(\xi_{s}^{1}\) is determined from the following equation:
\[\prod_{v\in V}\left(\sum_{\omega\in W}(-1)^{\ell(\omega)}\mathfrak{q}^{(\lambda _{v},\omega(\rho))}\right)^{2-\deg v}=\frac{1}{6^{4}}\sum_{s\in Q^{L}+\delta} \xi_{s}^{\beta}\mathfrak{q}^{(\lambda,s)}\Big{|}_{\beta\longrightarrow 1} \tag{11}\]
We write the LHS of above equation as follows:
\[\prod_{v\in V}\Big{(}\mathfrak{q}^{(n_{1}^{v}\Lambda_{1}+n_{2}^{v}\Lambda_{2})(\Lambda_{1}+\Lambda_{2})}-\mathfrak{q}^{(n_{1}^{v}\Lambda_{1}+n_{2}^{v}\Lambda_{2})(-\Lambda_{1}+2\Lambda_{2})}-\mathfrak{q}^{(n_{1}^{v}\Lambda_{1}+n_{2}^{v}\Lambda_{2})(2\Lambda_{1}-\Lambda_{2})}\] \[+\mathfrak{q}^{(n_{1}^{v}\Lambda_{1}+n_{2}^{v}\Lambda_{2})(-2\Lambda_{1}+\Lambda_{2})}+\mathfrak{q}^{(n_{1}^{v}\Lambda_{1}+n_{2}^{v}\Lambda_{2})(\Lambda_{1}-2\Lambda_{2})}-\mathfrak{q}^{(n_{1}^{v}\Lambda_{1}+n_{2}^{v}\Lambda_{2})(-\Lambda_{1}-\Lambda_{2})}\Big{)}^{2-\deg v} \tag{12}\]
where \(\lambda_{v}=n_{1}^{v}\Lambda_{1}+n_{2}^{v}\Lambda_{2}\). The above equation simplifies to the following using \(\Lambda_{1}^{2}=\Lambda_{2}^{2}=\frac{2}{3}\) and \(\Lambda_{1}\Lambda_{2}=\frac{1}{3}\):
\[\prod_{v\in V}\Big{(}\mathfrak{q}^{(n_{1}^{v}+n_{2}^{v})}-\mathfrak{q}^{n_{2} ^{v}}-\mathfrak{q}^{n_{1}^{v}}+\mathfrak{q}^{-n_{1}^{v}}+\mathfrak{q}^{-n_{2} ^{v}}-\mathfrak{q}^{-(n_{1}^{v}+n_{2}^{v})}\Big{)}^{2-\deg v} \tag{13}\]
We write this equation using regularising parameter \(\beta\) as follows
\[=\lim_{\beta\to 1}\frac{1}{6^{4}}\prod_{v\in V}\Bigg{(}\Big{(}\beta\mathfrak{q}^{(n_{1}^{v}+n_{2}^{v})}-\mathfrak{q}^{n_{2}^{v}}-\mathfrak{q}^{n_{1}^{v}}+\mathfrak{q}^{-n_{1}^{v}}+\mathfrak{q}^{-n_{2}^{v}}-\mathfrak{q}^{-(n_{1}^{v}+n_{2}^{v})}\Big{)}^{2-\deg v}\times\] \[\Big{(}\mathfrak{q}^{(n_{1}^{v}+n_{2}^{v})}-\beta\mathfrak{q}^{n_{2}^{v}}-\mathfrak{q}^{n_{1}^{v}}+\mathfrak{q}^{-n_{1}^{v}}+\mathfrak{q}^{-n_{2}^{v}}-\mathfrak{q}^{-(n_{1}^{v}+n_{2}^{v})}\Big{)}^{2-\deg v}\times\] \[\Big{(}\mathfrak{q}^{(n_{1}^{v}+n_{2}^{v})}-\mathfrak{q}^{n_{2}^{v}}-\beta\mathfrak{q}^{n_{1}^{v}}+\mathfrak{q}^{-n_{1}^{v}}+\mathfrak{q}^{-n_{2}^{v}}-\mathfrak{q}^{-(n_{1}^{v}+n_{2}^{v})}\Big{)}^{2-\deg v}\times\] \[\Big{(}\mathfrak{q}^{(n_{1}^{v}+n_{2}^{v})}-\mathfrak{q}^{n_{2}^{v}}-\mathfrak{q}^{n_{1}^{v}}+\beta\mathfrak{q}^{-n_{1}^{v}}+\mathfrak{q}^{-n_{2}^{v}}-\mathfrak{q}^{-(n_{1}^{v}+n_{2}^{v})}\Big{)}^{2-\deg v}\times\] \[\Big{(}\mathfrak{q}^{(n_{1}^{v}+n_{2}^{v})}-\mathfrak{q}^{n_{2}^{v}}-\mathfrak{q}^{n_{1}^{v}}+\mathfrak{q}^{-n_{1}^{v}}+\beta\mathfrak{q}^{-n_{2}^{v}}-\mathfrak{q}^{-(n_{1}^{v}+n_{2}^{v})}\Big{)}^{2-\deg v}\times\] \[\Big{(}\mathfrak{q}^{(n_{1}^{v}+n_{2}^{v})}-\mathfrak{q}^{n_{2}^{v}}-\mathfrak{q}^{n_{1}^{v}}+\mathfrak{q}^{-n_{1}^{v}}+\mathfrak{q}^{-n_{2}^{v}}-\beta\mathfrak{q}^{-(n_{1}^{v}+n_{2}^{v})}\Big{)}^{2-\deg v}\Bigg{)}, \tag{14}\]
now we expand the term inside the parenthesis in the above equation as \(|\beta|<1\), to get the following:
\[=\lim_{\beta\to 1}\left(\frac{1}{6^{4}}\sum_{s\in Q^{4}+\delta}\xi_{s}^{\beta} \mathfrak{q}^{(\lambda,s)}\right) \tag{15}\]
Above equation fixes the \(\xi_{s}^{1}\). Moreover, \(\delta_{v}=0\)\(\forall\)\(v\), for \(\mathfrak{su}(3)\) Lie algebra as \(\rho(=\Lambda_{1}+\Lambda_{2}=\alpha_{1}+\alpha_{2})\in Q\). Further for this plumbing graph there is only one homological block which corresponds to \(b=0\). Using all this we obtain the \(\hat{Z}\)-invariant (10) as follows:
\[\hat{Z}_{0}^{\mathfrak{su}(3)}[M(\Gamma);q]=q^{3/2}\Big{(}1-2q+2q^{3}+q^{4}-2q^ {5}-2q^{8}+4q^{9}+2q^{10}-4q^{11}+2q^{13}-6q^{14}+2q^{15}-2q^{16}+4q^{18}-q^{20}\] \[+4q^{21}-2q^{22}-4q^{23}+2q^{24}+2q^{25}-4q^{26}+6q^{30}-2q^{31}+6 q^{33}-2q^{34}-2q^{35}-2q^{38}+q^{40}+\mathcal{O}(q^{41})\Big{)}. \tag{16}\]
Note that the variable \(q\) here is the analytically continued variable inside the unit circle.
Appendix B **Sublattice \(P^{\prime}\) and Chern-Simons level \(k^{\prime}\) for \(SU(4)/\mathbb{Z}_{2}\),\(SU(6)/\mathbb{Z}_{2}\) and \(SU(6)/\mathbb{Z}_{3}\)**
In this appendix, we will present the sublattice \(P^{\prime}\) and Chern-Simons level \(k^{\prime}\) for some non-simply connected groups. The \(i^{\rm th}\) fundamental weight vector for \(\mathfrak{su}(N)\) Lie algebra is given by:
\[\Lambda_{i}=\frac{1}{N}(\underbrace{N-i,N-i,\ldots,N-i}_{i-{\rm times}}, \underbrace{-i,-i,\ldots,-i}_{(N-i)-{\rm times}})\ \ \ {\rm where}\ i\in\{1,2,\ldots,N-1\}.\] (B9)
\(SU(4)/\mathbb{Z}_{2}:\)
The center of \(SU(4)\) is \(\mathbb{Z}_{4}\) which is isomorphic to \(\{e,\Lambda_{1},\Lambda_{2},\Lambda_{3}\}\). Further, the center of \(SU(4)/\mathbb{Z}_{2}\) is \(\mathbb{Z}_{2}\) which is isomorphic to \(\{e,\Lambda_{2}\}\). The root lattice \(Q\) of \(\mathfrak{su}(4)\) Lie algebra, which also corresponds to the equivalence class for identity group element, is given by adding the following vectors:
\[\{(2n_{1}-n_{2})\Lambda_{1},(-n_{1}+2n_{2}-n_{3})\Lambda_{2},(-n_{2}+2n_{3}) \Lambda_{3}|n_{1},n_{2},n_{3}\in\mathbb{Z}\},\] (B10)
where \(\Lambda_{1},\Lambda_{2}\) and \(\Lambda_{3}\) are weight vectors. The equivalence class for the group element \(\Lambda_{2}\) corresponds to
\[\{(2n_{1}-n_{2})\Lambda_{1},(-n_{1}+2n_{2}-n_{3}+1)\Lambda_{2},(-n_{2}+2n_{3}) \Lambda_{3}|n_{1},n_{2},n_{3}\in\mathbb{Z}\}.\] (B11)
Hence the lattice \(P^{\prime}\) is given by taking the union of two equivalence classes (B10) and (B11).
The Chern-Simons level is \(k^{\prime}=\gamma k+4\), where the factor \(\gamma\) is the smallest integer satisfying the following equation:
\[\frac{\gamma}{2}(\Lambda_{2},\Lambda_{2})\in\mathbb{Z}\ \Longrightarrow\ \gamma=2,\] (B12)
therefore, \(k^{\prime}=2k+4\).
\(SU(6)/\mathbb{Z}_{2}:\)
The center of \(SU(6)/\mathbb{Z}_{2}\) is \(\mathbb{Z}_{3}\cong\{e,\Lambda_{2},\Lambda_{4}\}\). Therefore the sublattice \(P^{\prime}\) corresponding to the allowed representations of \(SU(6)/\mathbb{Z}_{2}\) is given by:
\[\{(2n_{1}-n_{2})\Lambda_{1},(-n_{1}+2n_{2}-n_{3})\Lambda_{2},(-n_{2}+2n_{3}-n _{4})\Lambda_{3},(-n_{3}+2n_{4}-n_{5})\Lambda_{4},(-n_{4}+2n_{5})\Lambda_{5}\}\cup\] \[\{(2n_{1}-n_{2})\Lambda_{1},(-n_{1}+2n_{2}-n_{3}+1)\Lambda_{2},(- n_{2}+2n_{3}-n_{4})\Lambda_{3},(-n_{3}+2n_{4}-n_{5})\Lambda_{4},(-n_{4}+2n_{5}) \Lambda_{5}\}\cup\] \[\{(2n_{1}-n_{2})\Lambda_{1},(-n_{1}+2n_{2}-n_{3})\Lambda_{2},(-n_ {2}+2n_{3}-n_{4})\Lambda_{3},(-n_{3}+2n_{4}-n_{5}+1)\Lambda_{4},(-n_{4}+2n_{5} )\Lambda_{5}\}.\] (B13)
Chern-Simons level \(k^{\prime}=\gamma k+6\) where \(\gamma\) is fixed by requiring:
\[\frac{\gamma}{2}(\Lambda_{3},\Lambda_{3})\in\mathbb{Z}\ \Longrightarrow\ \gamma=4,\] (B14)
therefore, \(k^{\prime}=4k+6\).
\(SU(6)/\mathbb{Z}_{3}:\)
The center of \(SU(6)/\mathbb{Z}_{3}\) is \(\mathbb{Z}_{2}\cong\{e,\Lambda_{3}\}\), and the corresponding sublattice \(P^{\prime}\) is:
\[\{(2n_{1}-n_{2})\Lambda_{1},(-n_{1}+2n_{2}-n_{3})\Lambda_{2},(-n_{2}+2n_{3}-n _{4})\Lambda_{3},(-n_{3}+2n_{4}-n_{5})\Lambda_{4},(-n_{4}+2n_{5})\Lambda_{5}\}\cup\] \[\{(2n_{1}-n_{2})\Lambda_{1},(-n_{1}+2n_{2}-n_{3})\Lambda_{2},(-n_ {2}+2n_{3}-n_{4}+1)\Lambda_{3},(-n_{3}+2n_{4}-n_{5})\Lambda_{4},(-n_{4}+2n_{5} )\Lambda_{5}\}.\] (B15)
and Chern-Simons level \(k^{\prime}=\gamma k+6\) is fixed as follows:
\[\frac{\gamma}{2}(\Lambda_{2},\Lambda_{2})\in\mathbb{Z}\ \ {\rm and}\ \ \frac{\gamma}{2}(\Lambda_{4},\Lambda_{4})\in\mathbb{Z}\ \Longrightarrow\ \gamma=3,\] (B16)
hence, \(k^{\prime}=3k+6\).
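The three levels \(k^{\prime}\) listed above can be cross-checked numerically. The sketch below is an illustrative aid (not from the original text): it builds the fundamental weight vectors from Eq. (B9), uses the ordinary Euclidean dot product of those vectors for \((\Lambda_{i},\Lambda_{i})\) (which reproduces the standard value \(i(N-i)/N\)), and searches for the smallest integer \(\gamma\) with \(\frac{\gamma}{2}(\Lambda,\Lambda)\in\mathbb{Z}\) for the relevant center elements.

```python
from fractions import Fraction
from itertools import count

def weight(N, i):
    """i-th fundamental weight of su(N), written as in Eq. (B9)."""
    return [Fraction(N - i, N)] * i + [Fraction(-i, N)] * (N - i)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def smallest_gamma(N, indices):
    """Smallest gamma with gamma/2 * (Lambda_i, Lambda_i) integer for all listed i."""
    norms = [dot(weight(N, i), weight(N, i)) for i in indices]
    for gamma in count(1):
        if all((Fraction(gamma, 2) * n).denominator == 1 for n in norms):
            return gamma

print(smallest_gamma(4, [2]))     # SU(4)/Z2, condition on Lambda_2           -> 2, so k' = 2k + 4
print(smallest_gamma(6, [3]))     # SU(6)/Z2, condition on Lambda_3           -> 4, so k' = 4k + 6
print(smallest_gamma(6, [2, 4]))  # SU(6)/Z3, condition on Lambda_2, Lambda_4 -> 3, so k' = 3k + 6
```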
## Appendix C GPPV conjecture for the simply connected case: \(SU(N)\)
In order to compare the GPPV conjecture with the simply connected \(SU(N)\) group, we present the GPPV conjecture for the \(SU(N)\) group. For the \(SU(N)\) group, with the weight lattice being \(P\) and the root lattice being \(Q\), the WRT invariant for plumbed 3-manifold \(M(\Gamma)\) can be decomposed as follows:
\[\tau^{SU(N)}_{k^{\prime}}[M(\Gamma);\mathrm{q}]=\frac{1}{|W||\det B|^{1/2}\sum_{w\in W}(-1)^{\ell(w)}\mathrm{q}^{\,(\rho,w(\rho))}}\sum_{a\in(Q)^{L}/B(Q)^{L}}\exp\bigl{(}-\pi ik^{\prime}(a,B^{-1}a)\bigr{)}\times\\ \sum_{b\in(Q^{L}+\delta)/BQ^{L}}\exp\bigl{(}-2\pi i(a,B^{-1}b)\bigr{)}\lim_{q\to\mathrm{q}}\hat{Z}^{\mathfrak{su}(N)}_{b}[M(\Gamma),q].\] (C17)
where \(B\) is the linking matrix for \(\Gamma\). For a general closed 3-manifold with \(b_{1}(M)=0\), we conjecture the following:
\[\tau^{SU(N)}_{k^{\prime}}[M;\mathrm{q}]=\frac{1}{|H_{1}(M;\mathbb{ Z})|^{\frac{N-1}{2}}\sum_{w\in W}(-1)^{\ell(w)}\mathrm{q}^{\,(\rho,\omega(\rho))}} \sum_{a,b\in(\mathrm{Spin}^{\kappa}(M))^{(N-1)}/S_{N}}\exp\Biggl{(}-2\pi ik^{ \prime}\sum_{i=1}^{N-1}\ell k(a_{i},a_{i})\Biggr{)}\times\\ \exp\Biggl{(}-4\pi i\sum_{i=1}^{N-1}\ell k(a_{i},b_{i})\Biggr{)} \lim_{q\to\mathrm{q}}\hat{Z}^{\mathrm{su}(N)}_{b}[M;q].\] (C18)
|
2309.06161 | Towards an Understanding of Developers' Perceptions of Transparency in
Software Development: A Preliminary Study | Software applications play an increasingly critical role in various aspects
of our lives, from communication and entertainment to business and healthcare.
As these applications become more pervasive, the importance of considering
human values in software development has gained significant attention. In this
preliminary study, we investigate developers' perceptions and experiences
related to human values, with a focus on the human value of transparency. We
interviewed five experienced developers and conducted thematic analysis to
explore how developers perceive transparency, violations of transparency, and
the process of fixing reported violations of transparency. Our findings reveal
the significance of transparency as a fundamental value in software
development, with developers recognising its importance for building trust,
promoting accountability, and fostering ethical practices. Developers recognise
the negative consequences of the violation of the human value of transparency
and follow a systematic process to fix reported violations. This includes
investigation, root cause analysis, corrective action planning, collaborative
problem-solving, and testing and verification. These preliminary findings
contribute to the understanding of transparency in software development and
provide insights for promoting ethical practices. | Humphrey O. Obie, Juliet Ukwella, Kashumi Madampe, John Grundy, Mojtaba Shahin | 2023-09-12T12:08:40Z | http://arxiv.org/abs/2309.06161v1 | Towards an Understanding of Developers' Perceptions of Transparency in Software Development: A Preliminary Study
###### Abstract
Software applications play an increasingly critical role in various aspects of our lives, from communication and entertainment to business and healthcare. As these applications become more pervasive, the importance of considering human values in software development has gained significant attention. In this preliminary study, we investigate developers' perceptions and experiences related to human values, with a focus on the human value of _transparency_. We interviewed five experienced developers and conducted thematic analysis to explore how developers perceive transparency, violations of transparency, and the process of fixing reported violations of transparency. Our findings reveal the significance of transparency as a fundamental value in software development, with developers recognising its importance for _building trust_, _promoting accountability_, and _fostering ethical practices_. Developers recognise the negative consequences of the violation of the human value of transparency and follow a systematic process to fix reported violations. This includes investigation, root cause analysis, corrective action planning, collaborative problem-solving, and testing and verification. These preliminary findings contribute to the understanding of transparency in software development and provide insights for promoting ethical practices.
Human values, transparency, software engineering
## I Introduction
As software applications become ever more pervasive, the importance of considering human values in software development has gained significant attention [1, 2]. Human values are the guiding principles of what people consider important in life [3]. Human values encompass a broad range of principles, ethics, and moral considerations that guide our interactions, decisions, and behaviours [4]. Incorporating human values into software development ensures that the resulting applications align with ethical standards, promote user trust, and contribute to the well-being of individuals and society [5, 6].
One important aspect of human values in software development that has not been researched very much to date is _transparency_. Transparency is an attribute of communication in software development that enables stakeholders to answer their questions about the software system during its software life cycle [7, 8]. Transparency also encompasses openness, clarity, and visibility of the inner workings, processes, and actions of software applications [9]. Transparent software applications provide users with insight into how their data is collected, used, and protected. They enable users to understand the algorithms and decision-making processes behind automated systems. Transparency empowers users, promotes accountability, and fosters trust between developers, users, and other stakeholders [7, 8, 9].
Understanding how developers perceive, address, and prioritise human values, particularly the value of transparency, is crucial for promoting ethical and responsible software development practices [8]. By exploring developers' perspectives and experiences, we can gain insights into the challenges they face, and the strategies they employ to address human values violations, specifically focusing on transparency [5, 10]. Such insights can inform the development of guidelines, best practices, and educational initiatives that foster a culture of transparency in software development [9].
In this preliminary study, we aim to investigate developers' perceptions and experiences related to the human value of transparency, in the context of software application development. We explore how developers perceive transparency and its violation during the development process, and how they address and prioritise this value in their work. We also aim to contribute to the growing body of knowledge on the importance of human values in software development. By understanding the perspectives of developers and their strategies for the value of transparency, we can strive towards the creation of software applications that align with ethical standards, promote user trust, and enhance the overall societal impact of technology.
## II Background and Related Work
**Human Values in Software Engineering (SE):** The topic of human values in software engineering (SE) has begun to gain attention in the literature, with a focus on the ethical, social, and professional aspects of software development [1, 2, 11]. Whittle et al. make a case for considering human values such as integrity as "first-class entities" in software engineering, and call for systematic software-engineering methods for incorporating values in the software development
lifecycle [6]. However, another study reveals that software companies do consider human values in their practices, but the maturity of this consideration varies widely, depending on practitioners' awareness and organisational culture, and suggests that embedding values in technology can be achieved through an evolution of existing practices [12].
Other studies have proposed methods for measuring human values in SE. Winter et al. proposed the Values Q-sort, a systematic approach to capturing values in SE [13], while Shams et al. employed the Portrait Values Questionnaire (PVQ) to capture the values of female farmers from Bangladesh in a mobile app development project [14]. Obie et al. [15], however, argue that when designing and applying instruments for eliciting human values requirements, the specific context of the domain should be taken into account [15].
Some recent works have adopted the use of user reviews as supplementary data sources for identifying requirements related to values and their violations. Shams et al. analysed 1,522 reviews from 29 Bangladeshi agriculture apps, identifying 21 desired user values, of which 11 were reflected in the apps and 10 were missing, highlighting the importance of considering user values in app development to avoid dissatisfaction and negative socio-economic impacts [16]. Similarly, Obie et al. analysed 22,119 app reviews from the Google Play Store using natural language processing techniques, finding that 26.5% of the reviews indicated perceived violations of human values, with benevolence and self-direction being the most violated categories [17]. While [16] and [17] have focused on more general human value categories, [5] and [10] zoomed in on the specific value item of honesty - automatically detecting the violations of honesty and providing a taxonomy of the different types of honesty violations. Similar to [5] and [10], this work focuses on the single value of transparency - to understand developers' perceptions and experiences with the value of transparency, and how they approach fixing the violations of the value of transparency.
**Transparency in Software Engineering**: Transparency is an important area in software engineering (SE) and there has been some exploration of the concept of transparency in SE. Hochstetter et al. [18] introduced a transparency maturity model for government software tenders. Spagnuelo et al. argue that the transparency of a system must be considered a critical quality that must be appropriately addressed, and not simply as a high-level concept [19]. The authors proposed quality metrics for measuring transparency as a non-functional requirement for software systems. Ofem et al. [8] carried out a systematic literature review on the concept of transparency in software development. Their review found that transparency remains a much under-researched non-functional quality requirement concept, especially how it might impact software development. Only three of the reviewed studies conceptualised transparency in software development and explored the issue of transparency as it impacts software artefacts.
Focusing on the betterment of socio-technical systems, Hosseini et al. stressed the importance of realising transparency as a first-class requirement, as the failure to adequately implement transparency may affect other social requirements such as privacy, trust, collaboration, and non-bias [20]. The authors further proposed a baseline model for capturing transparency requirements as an early step in this direction. Isong et al. [9] propose a framework for improving the concept of transparency during software engineering. They propose a transparency improvement programme during the early phases of software development along with measures of transparency in software development processes and artifacts.
Tu et al. discussed transparency within the context of SE as an attribute of communication in the development of software systems, enabling stakeholders to answer their questions about a software system during its lifecycle, and proposed accessibility, relevance, and understandability as the three key attributes for measuring transparency in SE projects [21]. The result of a survey showed that while software developers are familiar with the general concept of transparency, they are not accustomed to its practical application in software projects [21].
Other studies have focused on the social advantages of the value of transparency in SE. The results of a study with GitHub users showed that transparency in social applications in SE aids innovation, knowledge sharing, and community building [22]. Dabbish et al. argue that transparency strengthens collaboration and coordination between developers in software projects [23]. In the study of GitHub users, the authors surmise that transparency aids developers in managing their projects and dealing effectively with dependencies, among other benefits [23].
Another work by Tu et al. posits transparency as the visibility of information to stakeholders [7]. The results of an experiment conducted by Tu et al. show that there is a positive relationship between increased transparency of requirements and documents and more effective communication amongst various stakeholders [7].
We build on this prior body of work on transparency in SE. Our work aims to provide an overarching understanding of how developers perceive the value and violation of transparency in the software development lifecycle, and possible ways in supporting transparency in software artefacts throughout the software development lifecycle.
## III Study Design
### _Aim and Research Questions_
In this preliminary study, we aim to investigate developers' perceptions and experiences related to human values, with a specific focus on the value of transparency, in the context of software application development. We explore how developers perceive the value of transparency and its violations in the development process, and how they address and prioritise these values in their work. Following this aim, we guided our study with the following three research questions:
**RQ1**: How do developers perceive the value of transparency in the development of software applications?
**RQ2**: How do developers perceive the violation of the value of transparency in the development of software applications?
**RQ3**: How do developers address reported violations of transparency?
### _Methodology_
We followed a qualitative research methodology and conducted in-depth semi-structured interviews with 5 software practitioners to better understand their opinions on the value of transparency in software development. We present the study procedures in the following subsections. We first obtained Institutional Review Board approval for our human study (details redacted for anonymous peer review).
#### III-B1 Participant selection
We recruited software practitioners for our study by emailing the authors' personal contacts in the industry. Participants were not compensated and participated voluntarily. In total 5 practitioners agreed to be interviewed for this preliminary study. Participants have worked in various domains and countries, with 4.8 years of professional experience on average (minimum of 2 years and maximum of 8 years). Table I summarises the demographic information of the participants.
#### III-B2 Interviewing process
The first author conducted a series of interviews with 5 interviewees, and each interview was completed within 40 minutes. The interviews were semi-structured and divided into two parts, with more time dedicated to the latter part. In the first part, we asked some demographic questions, such as the interviewees' experience in software development, testing, and project management. In the second part, we then asked questions to understand their opinions on the human value of transparency in software development practice and showed them sample user reviews containing reports of human values violations. The interview questions were designed to explore the research questions related to the perceptions of the value of transparency, violations of transparency, and the process of fixing reported values-violations. Below are some pertinent examples of the interview questions:
1. Do you think human values should be considered in the development of software applications? Why or why not?
2. What does the value of transparency mean to you as a person and as a developer?
3. What do you think of the violation of the value of transparency?
4. If several value violations are reported to you and your team, how would you prioritise which ones to fix?
5. How would you go about fixing the violations of transparency and test that they have been fixed?
#### III-B3 Data Analysis
We transcribed the interview recordings using Pacific Transcription Services1 and then read the transcripts and conducted a thematic coding analysis of the transcripts. We included sentences during the coding process that are related to transparency topics. We followed the thematic analysis approach [24] to analyse and categorise the interview textual data.
Footnote 1: [https://www.pacifictranscription.com.au/](https://www.pacifictranscription.com.au/)
The first author read the transcripts and coded the contents of the interviews using the NVIVO2 tool for analysing the qualitative data, and discussed the codes with the second author in Zoom meetings to verify the codes and topics. The transcripts were interpreted in small chunks of words (codes), with recurrent codes grouped into themes. The identified themes were reviewed and refined through an iterative process. This involved examining the data within each theme, making comparisons, and ensuring that the themes accurately represented the content of the transcripts. Themes were revised as necessary to capture the variations in the data.
Footnote 2: [https://lumivero.com/](https://lumivero.com/)
## IV Results
In this section, we present the main themes and highlight the results of our preliminary study.
### _RQ1: How do developers perceive the value of transparency in the development of software applications?_
#### IV-A1 Transparency as a Core Value
All participants perceive transparency as a fundamental value in software development, often closely linked with honesty and the need for accountability. They believe that being open and clear in communication and actions is essential to the process of developing software. For example, participant P1 discusses the importance of keeping stakeholders informed about progress, suggesting that transparency involves clear communication about the development process, _"...we keep in communication ensuring honesty and transparency at the same time... we would generally keep in communication to see how the project is - how the result is going on."_ Participant P4 comments, _"...They [transparency] should be part of anything that we design, that's what I feel"_, while P5 says, _"...Software development, I think the first thing should be transparency I would say..."_
#### IV-A2 Balancing Transparency with Practical and Ethical Considerations
While transparency is important, it sometimes needs to be balanced with other considerations, both practical, e.g., the changing scope of a project, and ethical, e.g., respecting user privacy. Developers believe that maintaining transparency is a complex task that involves navigating various challenges and trade-offs. For example, P5 suggests that transparency is important, but sometimes certain things need to be hidden from the client due to the changing scope of the project, _"on a personal level transparency is important, but if you're really involved in the project transparency is - see, I'm not advocating to hide something, but at some stage of the project you have to hide something to the client because the scope always changes."_
### _RQ2: How do developers perceive the violation of the value of transparency in the development of software applications?_
#### IV-B1 Subjectivity of Violations
Developers perceive a "transparency violation" as something that can vary among individuals. For example, P1 mentions that the violation of values is a grey area where the right and wrong perspectives will be different for each person interpreting the problem: _"the violation value - it is also a grey area where there is no right and wrong. So, the right and wrong perspectives will be different to each person who is interpreting the problem."_ This suggests that developers recognise the complexity and subjectivity involved in identifying and addressing violations.
#### IV-B2 Detecting Violations
Developers have strategies for identifying reported transparency value violations. For example, they consider the number of similar complaints about transparency-related issues. For example, P5 suggests that if multiple people report the same complaint, it indicates that something is wrong, _"...if we get a complaint on something, if 10 people report the same complaint that means that something's wrong."_ P2 corroborates this theme: "_If that particular thing is happening to a lot of users, then I'll definitely [do] a security patch saying that we encountered this problem or some users highlighted this problem here."_
#### IV-B3 Consequences of Violations
Violations of the value of transparency can have significant consequences for both developers and organisations. For example, P3 suggests that violations of transparency and honesty can harm the organisation and the individual developer, _"They might be blaming us for not giving the proper information and I believe we are not being transparent about our work... we haven't informed them. We haven't had that transparency, thereby we might be getting some cases raised from the clients."_ This suggests that violations are not just theoretical issues, but can have real-world impacts. This is consistent with the results of the recent study by Obie et al. [10].
### _RQ3: How do developers address reported violations of transparency?_
#### IV-C1 Investigation and Root Cause Analysis
Developers first engage in a process of investigation and root cause analysis. They recognise the importance of understanding the underlying factors contributing to the violation in order to effectively address it and prevent its recurrence. For example, P4 emphasises the need to investigate the root cause of a transparency violation to prevent its recurrence: _"in order to identify what's the reason behind this issue, I would do a root cause analysis to determine what's the actual issue."_
#### IV-C2 Corrective Action Planning
Developers develop corrective action plans to address reported violations of values. They believe that formulating strategies and actions are necessary to rectify the violation and prevent its future occurrence. This theme highlights the proactive planning and implementation of actions. For instance, P1 mentions the importance of developing corrective action plans to address transparency violations and ensure transparency is upheld; _"Because with transparency we let the users know what we have been up to. What kind of things we're fixing. They will know that - what kinds of data may have been gathered or what are the possible actions that has been done."_ P3 also says, _"Each and every task within our system we had a column for criticality of that task. So we are picking the tasks by looking at the criticality of it...usually have to assign a priority or maybe criticality value for it. It would be one, two and three. So one would mean this needs to be fixed within a week. Two means it's okay if it's fixed within months. Three means it's not that much important."_
#### IV-C3 Collaborative Problem-Solving
Developers emphasise collaborative problem-solving in fixing reported values-violations. They recognise that involving relevant stakeholders, such as team members, users, or clients, leads to more effective problem-solving and resolution. Collaboration enhances the collective knowledge and expertise in addressing values violations. For example, P5 highlights the importance of considering various perspectives: _"Everyone should be treated equally because when you come to the programmers of the actual software... those are the people who will be working with the project most of the time. So the values should be valued... it's not about one particular thing on values. It is about the combination of values, then we can evaluate them and commit to a framework that we know will bring them all together in the best interests of the [project] execution."_ While P3 discusses the value of involving the development team in problem-solving when addressing reported violations; _"So there's a team...They will be put into this frontline and those BAs or software engineers will be looking at these customer issues or the issues raised by the customers....all the team will get together and fix this because this is a problem that's going on their live system."_
#### IV-C4 Testing and Verification
Developers recognise the importance of testing and verification in the process of fixing reported values-violations. They believe that conducting tests and verification activities ensures that the implemented solution effectively addresses the violation and restores the desired transparency. This theme highlights the significance of ensuring the effectiveness of the implemented solution. For instance, P3 mentions the importance of testing the implemented solution to ensure that the (transparency) violation has been fixed; _"...what I feel is that when you fix that particular issue, before you push it into the production environment you could always inform quality assurance tested about this particular violation and have it as a test case."_
## V Discussion and Implications
We aimed to conduct a preliminary empirical study to explore developers' perceptions and experiences regarding human values in software development, with a specific focus on the value of transparency. Our findings shed light on several key aspects related to developers' understanding of transparency, violations of this value, and the process of addressing such reported values-violations.
Regarding RQ1, our findings revealed that developers recognise the importance of transparency as a fundamental human value in software development. They perceive transparency as crucial for building trust with users and stakeholders, promoting accountability, and fostering ethical practices. This aligns with previous research highlighting the significance of values in software development [5, 10, 16, 17].
For RQ2, we found that developers are aware of the potential violations of transparency in software applications. The themes identified were the subjectivity of values violations depending on the individuals, systemic patterns for addressing violations, and consequences of violations for both individual developers and their organisations. These findings emphasise the need for developers to proactively address and prevent such violations through ethical coding practices and robust quality assurance processes.
In answering RQ3, we uncovered several strategies employed by developers. The findings indicated that developers engage in investigation and root cause analysis to understand the underlying factors contributing to value violations. They develop corrective action plans, engage in collaborative problem-solving with relevant stakeholders, and conduct testing and verification to ensure that the reported values-violations are effectively addressed. These approaches reflect the commitment of developers to rectify violations and uphold transparency in their software applications. Figure 1 summarises these findings.
Our findings have broader societal implications. Transparency in software applications is essential for building trust with users and stakeholders and ensuring the ethical and responsible use of technology. By understanding the perceptions, challenges, and strategies related to transparency, stakeholders such as regulatory bodies, policymakers, and consumer advocacy groups can develop guidelines, regulations, and standards that promote transparency in software development. This can lead to increased accountability, improved user experiences, and a more ethical and trustworthy digital environment.
## VI Limitations
_Sample Size._ The sample size for our study is relatively small, which may limit the generalisability of the findings. Additionally, the study relied on self-reported perceptions and experiences, which are subject to biases and limitations. Future research could expand the sample size, include a more diverse range of participants, and utilise mixed-methods approaches to gain a more comprehensive understanding of the topic.
_Social Desirability Bias._ Participants may have provided responses that they believed were socially desirable, rather than fully reflecting their true perceptions and experiences.
Fig. 1: The summary of the findings. How developers perceive transparency as a value in software development, how they perceive the violation of transparency, and how they address reported transparency violations.
To minimise this bias, participants were assured of the confidentiality and anonymity of their responses. The use of open-ended questions and encouraging honest and candid responses helped reduce the potential for social desirability bias.
_Researcher Bias._ The analysts' background and understanding of human values, and interpretations may have influenced the analysis and findings of this study. To address this potential bias, the analysts examined the literature on human values with a focus on the value of transparency in both the social sciences and values studies in software engineering. Furthermore, two analysts were involved in the coding and theme development process to enhance objectivity and reduce individual biases.
## VII Conclusion and Future Work
This paper explored developers' perceptions, and experiences related to human values, particularly the value of transparency, in software application development. Our findings revealed that developers highly value transparency as a fundamental human value in software development. Developers demonstrated an awareness of potential violations of transparency and acknowledged the negative impact of these violations on user trust and the overall user experience. We also provide insights into the strategies employed by developers to fix reported values-violations, including investigation and root cause analysis, corrective action planning, collaborative problem-solving, and testing and verification.
Building upon our preliminary results, there are several avenues for future research. Firstly, expanding the sample size and diversifying the participants across different software development domains, experience levels, and cultural contexts could provide a more comprehensive understanding of developers' perceptions and experiences regarding human values and transparency. Furthermore, exploring the perspectives of other stakeholders, such as users, clients, and regulatory bodies, could provide a holistic view of the significance of transparency in software development. Understanding their expectations, concerns, and experiences would contribute to the development of guidelines, standards, and policies that promote transparency and ethical practices in software application development.
## Acknowledgements
This work is supported by ARC Discovery Grant DP200100020. Madampe and Grundy are supported by ARC Laureate Fellowship FL190100035.
|
2302.14617 | Quantum battery charging by non-equilibrium steady-state currents | We present an analysis of the availability and maximum extractable work of
quantum batteries in the presence of charge and/or heat steady-state currents.
Quantum batteries are modelled as non-interacting open quantum systems
(mesoscopic systems) strongly coupled to two thermal and particle reservoirs
within the framework of non-equilibrium Green's function theory in a
steady-state regime. We found that the battery can be charged manifestly by a
steady-state charge current compared to heat one, especially, in an
off-resonant transport regime. It allows us to reliably access the performance
of the quantum batteries in the high bias-charging regime. | F. H. Kamin, Z. Abuali, H. Ness, S. Salimi | 2023-02-28T14:56:01Z | http://arxiv.org/abs/2302.14617v2 | # Quantum battery charging by non-equilibrium steady-state currents
###### Abstract
We present an analysis of the availability and maximum extractable work of quantum batteries in the presence of charge and/or heat steady-state currents. Quantum batteries are modeled as non-interacting open quantum systems (mesoscopic systems) strongly coupled to two thermal and particle reservoirs within the framework of non-equilibrium Green's function theory in a steady-state regime. We found that the battery can be charged manifestly by a steady-state charge current compared to heat one, especially, in an off-resonant transport regime. It allows us to reliably access the performance of the quantum batteries in the high bias-charging regime.
## I Introduction
Quantum thermodynamics is concerned with the interchange of energy and matter between microscopic systems and their environments, as well as their description in terms of thermodynamic quantities such as heat, work, entropy, etc. [1]. In recent decades, quantum transport has received a lot of attention, e.g., heat and charge transport through molecular junctions [2; 3; 4]. At the atomic level, a temperature (chemical potential) gradient causes charge carriers in materials to disperse from hot (high potential) to cold (low potential) and this effect can be utilized to measure temperature, generate electricity, and so on. It is no secret that transport phenomena are of great importance to various types of scientific research, including physics. Also, quantum transport has been extensively studied in order to continue progress in nanofabrication. Moreover, recent advances in nanoscale fabrication techniques have led to the theoretical and experimental developments of non-equilibrium (NE) quantum impurity systems [5; 6; 7; 8; 9]. Quantum impurities are commonly known as quantum dots. In this type of system with an initial NE state, energy and particles are exchanged between the system and the environment to restore equilibrium. This equilibrium is well understood for classical systems, where it usually leads to thermal stability. Therefore, a NE steady-state current occurs across a quantum dot (central region) when it is connected to several leads at different temperatures and chemical potentials. Studies of NE steady-states have shown that they continuously dissipate energy to their surroundings, in contrast with equilibrium states. Consequently, this leads to continuous entropy production and a time-reversal symmetry breakdown.
Today, quantum batteries (QBs) represent a vital field of research that concerns designing optimal energy storage protocols for the transfer to quantum devices. To date, a variety of theoretical efforts have been made, ranging from examining how quantum resources affect QB performance [10; 11; 12; 13; 14; 15; 16] and presenting models for achieving optimal battery mechanisms such as high charging and capacity [17; 18; 19; 20; 21] and slow erosion [22; 23], to discussing the environmental effects on charging and discharging of QBs [24; 25; 26; 27; 28; 29; 30]. Furthermore, several experimental platforms have been studied to realize operational quantum batteries [31; 32; 33; 34; 35; 36]. In this regard, one can mention the use of an organic semiconductor composed of two-level systems connected to a microcavity [31]. Alternatively, QBs can be represented by semiconductor quantum dots embedded within optical microcavities, where energy is exchanged between the solid-state qubit and light fields during charging and discharging [32]. Superconducting circuits are another field of experimental research for quantum batteries [33; 34]. An example is the transmon qutrit QB, which is composed of a three-level transmon coupled to an external field. In this model, to avoid unwanted spontaneous discharge or attenuation, a stimulated Raman adiabatic passage is incorporated into the charging to ensure a stable charging process [33]. Moreover, IBM quantum chips have been introduced as stable and optimal quantum batteries regarding charging time and stored energy [35], and nuclear magnetic resonance (NMR) architecture has been employed to investigate quantum benefits in collectively charging spin systems [36].
In a variety of different scenarios, a cyclic unitary process is usually employed as the best method of maximum work extraction. Nonetheless, stable charging and optimal energy transfer processes are critical for QBs. Typically, the quantum system, which is a battery or charger, interacts with the external environment, leading to decoherence and quantum resource destruction. Due to this interaction, the entropy level of the battery increases, and so unitary evolution applied to the system tends not to be sufficient to rectify any entropy production and ultimately stabilize the system. In fact, the presence of decoherence effects of the environment during the charging process plays a negative role in the performance of operational QBs [23; 24; 25; 26; 27; 28; 29; 30]. Moreover, the self-discharge phenomenon is the result of such interactions [28; 37]. So far, attempts have been made to avoid the inevitable interactions of the QB with the environment, which may lead to its deactivation over time [23]. However, some approaches can change the destructive role of the environment from negative to positive. Indeed, compared with a cyclic unitary process, non-unitary discharging can provide more extractable work through availability or exergy [38]. In such situations, the steady-state of an open quantum system provides the desired quantum resource. It is therefore possible to design QBs that do not wear out in the presence of environmental effects. These considerations open up a new path for using quantum impurity models to study quantum battery
charging by converting energy into extractable work. Such transformations are executed in many steady-state mesoscopic or nanoscale systems through a steady-state current of microscopic particles such as electrons and photons [39; 40]. Thus, new insights into steady-state electron currents can be used to discover how various QBs are charged. In this sense, for example, batteries can be characterized as energy-converting devices. [41; 42; 43; 44]. Also, it is worthwhile to emphasize that the protocol described here can be applied to practically all existing impurity systems, from single quantum dots, to double quantum dots, etc. As a matter of fact, finding ways to fully charge quantum batteries from a quantum thermodynamic perspective will be highly crucial, which is our main purpose.
Typically, we consider open batteries with a few degrees of freedom, but presume that they interact strongly with macroscopic heat reservoirs. We present a model for charge and heat transfer based on the non-equilibrium Green's function (NEGF) formalism [45; 46]. Moreover, we fix the boundary values (\(T_{L},\mu_{L}\)) and (\(T_{R},\mu_{R}\)) for the temperature and chemical potential of the left and right reservoirs, respectively. In light of these statements, we investigate the charging process of a QB via a transport setup consisting of a central quantum system (quantum battery) in contact with a pair of electron reservoirs in local equilibrium. We suggest an optimal bias-charging process of the QBs by applying the appropriate bias (chemical potential difference), and consequently the charge current, in a specific transport regime.
The paper is organized as follows. In Sec. II we present the general procedure for construction of the work extraction for the QB. The special model of battery and reservoirs is discussed in Sec. III. In Sec. IV we discuss some illustrative results of model. Conclusions are presented in Sec. V. In Appendix A, we compare our method with an alternative approach based on reduced density matrices. Finally, a detailed discussion of charge and energy currents is included in Appendix B.
## II Figure of merit
A QB with non-equilibrium steady-state characteristics is proposed in this work. Where, by interacting with the environment, it makes use of resources such as heat and/or charge currents to accomplish useful and accessible work.
To start, we consider a simplified model of the QB as a central conductor connecting two electron reservoirs \(L\) and \(R\) in their own local thermal equilibrium state. The reservoirs are characterized by a density matrix \(\hat{\rho}_{i}=Z_{i}^{-1}e^{-\beta_{i}(\hat{H}_{i}-\mu_{i}\hat{N}_{i})}\) (\(i=L,R\)) at two different inverse temperatures \(\beta_{L}=\frac{1}{T_{L}}\) and \(\beta_{R}=\frac{1}{T_{R}}\) and with two chemical potentials \(\mu_{L}\) and \(\mu_{R}\) (see Fig. 1). Here, \(Z_{i}=tr_{i}[e^{-\beta_{i}(\hat{H}_{i}-\mu_{i}\hat{N}_{i})}]\) is the partition function, and \(\hat{H}_{i}\) and \(\hat{N}_{i}\) are the Hamiltonian and particle number operators for each reservoir \(i\), respectively. The initial density matrix \(\hat{\rho}_{0}\) of the decoupled L-QB-R (left reservoir-battery-right reservoir) system is given by the product state \(\hat{\rho}_{0}=\hat{\rho}_{L}\otimes\hat{\rho}_{QB}\otimes\hat{\rho}_{R}\), where \(\hat{\rho}_{QB}\), \(\hat{\rho}_{L}\), and \(\hat{\rho}_{R}\) denote the density matrices of the battery, left and right reservoir, respectively. Since the quantum battery QB is not in the thermodynamic limit, \(\hat{\rho}_{QB}\) is assumed to be arbitrary. Once the battery is connected to the reservoirs, the time-evolution of the entire system is ruled by the full Hamiltonian \(\hat{H}=\hat{H}_{0}+\hat{H}_{int}\), where \(\hat{H}_{0}=\hat{H}_{L}+\hat{H}_{QB}+\hat{H}_{R}\) is the sum of the independent free Hamiltonians of each part and \(\hat{H}_{int}\) characterizes the interaction between the QB and the \(i\)th reservoir. In the long time limit, the reservoirs drive the system into a global non-equilibrium (NE) steady-state.
The NE steady-state regime is described as the Gibbs-like ensembles that can be obtained either by using the McLennan-Zubarev approaches [47; 48; 49; 50] or the NE density matrix approach developed by Hershfield in Ref. [51], which provides a thorough description of the NE steady-state behavior. Recently, the full equivalence between the McLennan-Zubarev NE statistical operator and Hershfield's approach for the NE steady-state has been shown in Ref. [52; 53]. By definition, the NE density matrix is given by \(\hat{\rho}^{NE}=\hat{\Omega}^{(+)}\hat{\rho}_{0}\hat{\Omega}^{(+)^{+}}\), where \(\hat{\Omega}^{(+)}=\lim_{\tau\to\infty}e^{i\hat{H}\tau}e^{-i\hat{H}_{0}\tau}\) is the Moeller operator [54; 55; 56] and characterizes the asymptotic steady state.
Moreover, NE steady-state can be distinguished by its non-zero entropy production rate as well as its ability to maintain non-zero mean currents in the system [57; 58]. Where the NE entropy production rate \(\sigma\) is directly related to the asymptotic NE steady-state current of particles \(I_{Q}\) and energy \(J_{E}\) as \(\sigma=\Delta_{\mu}I_{Q}-(\beta_{L}-\beta_{R})J_{E}\) with \(\Delta_{\mu}=\beta_{L}\mu_{L}-\beta_{R}\mu_{R}\)[59; 60].
Recent studies have shown that the battery performance suffers from some energy loss due to the non-unitary effects on the QB when we consider the coupling with external heat reservoirs [38]. As a result, cyclic unitary transformations cannot retrieve part of the total energy in the system since part of it is not stored as ergotropy. This new perspective assesses the amount of residual energy that cannot be extracted as useful work from open quantum batteries by unitary processes. A non-unitary extraction process can provide maximum work through availability or exergy [38; 61]. In thermodynamic terms, an arbitrary system (associated with a density matrix \(\hat{\rho}\)) that is out of equilibrium with its environment can exchange work and heat with its surroundings. Furthermore, it can transfer pure work energy to a third external system. In this manner, the maximum available work is the so-called availability or exergy [61], which is defined here as follows
\[W_{ext}=\Lambda(\hat{\rho})-\Lambda(\hat{\rho}^{eq}). \tag{1}\]
where \(\Lambda(\hat{\rho})=E-\mu N-TS(\hat{\rho})\) is the non-equilibrium grand potential with \(E=tr(\hat{\rho}\ \hat{H})\), \(S(\hat{\rho})=-tr(\hat{\rho}\ln\hat{\rho})\), and \(N=tr(\hat{\rho}\ \hat{N})\) being the energy, von Neumann entropy, and particle number of the system, respectively. The superscript "\(eq\)" designates the values belonging to the equilibrium state \(\hat{\rho}^{eq}=Z^{-1}e^{-\beta\ (\hat{H}-\mu\hat{N})}\) of the system at the inverse temperature \(\beta=1/kT\). As a result, it can be concluded that \(\Lambda(\hat{\rho}^{eq})=-\beta^{-1}\ln Z\) for the equilibrium state.

Figure 1: A mesoscopic system, composed of a quantum conductor (QB) connected to two particle and heat reservoirs, left \(L\) and right \(R\), at their own local equilibrium.
Assuming that \(\hat{\rho}\) represents a momentary state of the system, we obtain, following Ref. [61] (with \(k=1\)),
\[S(\hat{\rho}\parallel\hat{\rho}^{eq})=\frac{1}{T}(E-E^{eq})-\frac{\mu}{T}(N-N^{eq})-(S-S^{eq})\, \tag{2}\]
where the relative entropy \(S(\hat{\rho}\parallel\hat{\rho}^{eq})=tr[\hat{\rho}(\ln\hat{\rho}-\ln\hat{ \rho}^{eq})]\) is the information gain of system. Therefore one can find that [61]
\[\beta W_{ext}=S(\hat{\rho}\parallel\hat{\rho}^{eq}). \tag{3}\]
Evidently, availability (exergy) is the information gain, up to a factor \(\beta\), and it is considered a fundamental quantity both in statistics and physics.
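Since Eqs. (1)-(3) underpin the rest of the analysis, a small numerical check is instructive. The sketch below is purely illustrative (the two-level Hamiltonian, temperature, chemical potential and test state are arbitrary choices, not values from this work): it builds the grand-canonical Gibbs state, picks an out-of-equilibrium state \(\hat{\rho}\), and verifies that \(\beta[\Lambda(\hat{\rho})-\Lambda(\hat{\rho}^{eq})]\) equals the relative entropy \(S(\hat{\rho}\parallel\hat{\rho}^{eq})\).

```python
import numpy as np
from scipy.linalg import expm, logm

# Arbitrary two-level example (all values are illustrative assumptions).
H = np.diag([0.0, 1.3])       # Hamiltonian
Nop = np.diag([0.0, 1.0])     # particle-number operator
T, mu = 0.7, 0.4
beta = 1.0 / T

# Grand-canonical equilibrium state rho_eq = exp(-beta (H - mu N)) / Z
K = expm(-beta * (H - mu * Nop))
rho_eq = K / np.trace(K)

# An arbitrary out-of-equilibrium state rho (unit trace, positive)
rho = np.array([[0.85, 0.20],
                [0.20, 0.15]])

def grand_potential(r):
    """Lambda(rho) = E - mu*N - T*S(rho), cf. Eq. (1)."""
    E = np.trace(r @ H).real
    Np = np.trace(r @ Nop).real
    S = -np.trace(r @ logm(r)).real
    return E - mu * Np - T * S

W_ext = grand_potential(rho) - grand_potential(rho_eq)      # Eq. (1)
S_rel = np.trace(rho @ (logm(rho) - logm(rho_eq))).real     # information gain

print(beta * W_ext, S_rel)  # the two numbers coincide, cf. Eqs. (2)-(3)
```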
The next step is to achieve a NE steady-state reduced density matrix for the central system by partial trace of global NE steady-state \(\hat{\rho}^{NE}\). However, it may be challenging to determine the exact form of \(\hat{\rho}_{QB}\) in a long time limit. In this paper, we use a NE thermodynamic description of the system based on the non-equilibrium Green functions (NEGF) theory. This method provides standard definitions for energy, charge and heat currents, system entropy, and exergy (see Sec. III).
## III Model
A simple QB can be modeled as a single-level quantum dot connected to two one-dimensional electron reservoirs (tight-binding approach to non-interacting electrons). The free Hamiltonian of the L-QB-R system is given by
\[H_{0}=\varepsilon_{QB}\hat{d}^{\dagger}\hat{d}+\sum_{\alpha=L,R} \sum_{k=0}^{\infty}\varepsilon_{\alpha}\hat{c}_{\alpha,k}^{\dagger}\hat{c}_{ \alpha,k}-h_{\alpha}(\hat{c}_{\alpha,k-1}^{\dagger}\hat{c}_{\alpha,k}+c.c.)\, \tag{4}\]
with the energy electron level \(\varepsilon_{QB}\) and the creation and annihilation operators \(\hat{d}^{\dagger},\hat{d}\) of an electron on the QB. \(\varepsilon_{\alpha}\) is the energy and \(\hat{c}_{\alpha,k}\), \(\hat{c}_{\alpha,k}^{\dagger}\) are the fermionic annihilation and creation operators of \(k\)th mode of the \(\alpha=L,R\) reservoir, respectively. The central system (QB) interacts with the reservoirs by electron tunneling term
\[H_{int}=-\sum_{\alpha=L,R}\nu_{\alpha}(\hat{c}_{\alpha 0}^{\dagger}\hat{d}+\hat{d}^{ \dagger}\hat{c}_{\alpha 0})\, \tag{5}\]
where \(\nu_{\alpha}\) is the interaction strength between the QB and the \(\alpha=L,R\) reservoir. It is worth noting that our proposed scenario can be used to model the charge transmission phenomenon. The electrons in the reservoirs are described by Fermi-Dirac (FD) equilibrium distribution functions \(f_{\alpha}^{eq}(\omega;\mu_{\alpha},T_{\alpha})=[e^{\beta_{\alpha}(\hbar\omega -\mu_{\alpha})}+1]^{-1}\). Applying a temperature gradient \(\Delta T=T_{L}-T_{R}\) or/and bias voltage \(\Delta\mu=\mu_{L}-\mu_{R}\) leads to heat or/and electron transfer between the reservoirs.
We analyze the typical transport in which a QB is connected to two reservoirs that are first stored at different temperatures and chemical potentials. Over a long period of time, the system reaches a NE steady-state with a mean rate of non-interacting quantum charge current \(I_{Q}\) and energy current \(J_{E}\). The Landauer-Buttiker formalism describes the NE steady-state regime by a transmission probability
\[\tau(\omega)=G^{r}(\omega)\Gamma_{L}(\omega)G^{a}(\omega)\Gamma_{R}(\omega)\, \tag{6}\]
for charge and energy transport at energy \(\omega\) (\(\hbar=1\)). It can be obtained from the retarded Green function \(G^{r}(\omega)=[\omega-\varepsilon_{QB}-\Sigma_{L}^{r}(\omega)-\Sigma_{R}^{r}(\omega)]^{-1}\) and advanced Green function \(G^{a}(\omega)=[G^{r}(\omega)]^{*}\) of the QB and the so-called leads (reservoirs) self-energies \(\Sigma_{\alpha}^{r}(\omega)\), where \(\Sigma_{\alpha}^{r}(\omega)=\nu_{\alpha}^{2}e^{-ik_{\alpha}(\omega)}/h_{\alpha}\) using the energy dispersion relation \(\omega=\varepsilon_{\alpha}-2h_{\alpha}\cos(k_{\alpha})\) of the \(\alpha=L,R\) reservoir [40; 59]. Each reservoir is associated with a spectral function \(\Gamma_{\alpha}(\omega)=-2\operatorname{Im}(\Sigma_{\alpha}^{r}(\omega))\).
The charge \(I_{Q}\) and energy \(J_{E}\) currents are given by (\(e=1,\hbar=1\)) [59; 62; 39]
\[I_{Q}=\frac{1}{2\pi}\int\ d\omega\ \tau(\omega)\ (f_{L}(\omega)-f_{R}(\omega))\, \tag{7}\]
and
\[J_{E}=\frac{1}{2\pi}\int\ d\omega\ \tau(\omega)\ \omega\ (f_{L}(\omega)-f_{R}( \omega))\, \tag{8}\]
respectively. Thus, we can define the heat current as \(J_{H}^{\alpha}=J_{E}-\mu_{\alpha}I_{Q}\) for each reservoir \(\alpha\).
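To make Eqs. (6)-(8) concrete, the minimal numerical sketch below evaluates the transmission and the steady-state currents for this single-level model. The lead self-energy is computed from the textbook surface Green's function of a semi-infinite tight-binding chain, and every numerical value (level position, band parameters, coupling, temperatures, chemical potentials) is an illustrative assumption rather than a parameter taken from this paper.

```python
import numpy as np

# Illustrative parameters (hbar = e = k_B = 1); identical L and R leads.
eps_qb = 0.3                     # QB level
eps_a, h_a, nu = 0.0, 2.0, 0.5   # lead band centre, hopping, QB-lead coupling
T_L, T_R = 0.1, 0.1
mu_L, mu_R = 0.4, 0.0            # applied bias Delta_mu = 0.4

def fermi(w, mu, T):
    return 1.0 / (np.exp((w - mu) / T) + 1.0)

def sigma_r(w):
    """Retarded self-energy nu^2 * g_surf(w) of a semi-infinite 1D tight-binding lead."""
    z = w + 1e-12j - eps_a
    g_surf = (z - np.sqrt(z**2 - 4.0 * h_a**2)) / (2.0 * h_a**2)
    return nu**2 * g_surf

w = np.linspace(eps_a - 2 * h_a, eps_a + 2 * h_a, 4001)   # lead band
S = sigma_r(w)
Gr = 1.0 / (w - eps_qb - 2.0 * S)       # retarded GF of the QB (two identical leads)
Gamma = -2.0 * S.imag                   # Gamma_L = Gamma_R here
tau = Gamma * np.abs(Gr)**2 * Gamma     # transmission, Eq. (6)

df = fermi(w, mu_L, T_L) - fermi(w, mu_R, T_R)
I_Q = np.trapz(tau * df, w) / (2.0 * np.pi)        # charge current, Eq. (7)
J_E = np.trapz(w * tau * df, w) / (2.0 * np.pi)    # energy current, Eq. (8)
print(f"I_Q = {I_Q:.4f}, J_E = {J_E:.4f}, J_H^L = {J_E - mu_L * I_Q:.4f}")
```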
We are interested in calculating the NE thermodynamical properties of the QB connected to the reservoirs. The number of particles, the energy, the entropy and the exergy in the QB can be obtained from the reduced density matrix \(\hat{\rho}_{QB}=\operatorname{Tr}_{L,R}[\hat{\rho}^{NE}]\), which is derived from the trace of the full density matrix \(\hat{\rho}^{NE}\) (defined in Section II) over the degrees of freedom of the \(L\) and \(R\) reservoirs. As mentioned above, summing over the infinite number of degrees of freedom in the reservoirs can be a difficult task (this is even more true in the presence of particle interaction in the QB). An alternative way is to use the so-called NE distribution function of the QB connected to the reservoirs. Such a distribution function is well defined within the NEGF formalism [63; 64], and it describes the correct statistics of the electron(s) in the QB under the NE (steady-state) conditions. Note that, in Appendix A, we discuss in detail the differences between calculations performed with the correct NE distribution function and with an approximated expression for the reduced density matrix \(\hat{\rho}_{QB}\).
For the model system we consider here, i.e. a single electron level connected to two reservoirs, the NE distribution function \(f_{QB}^{NE}\) of the QB is given by a linear combination of the FD distributions of reservoirs weighted by the "strength" of the coupling of the QB and the reservoirs. It is expressed as follows [63; 64]:
\[f_{QB}^{NE}(\omega)=\frac{\Gamma_{L}(\omega)f_{L}(\omega)+\Gamma_{R}(\omega)f_{R}( \omega)}{\Gamma_{L}(\omega)+\Gamma_{R}(\omega)}\, \tag{9}\]
where the reservoir spectral functions \(\Gamma_{L,R}\) are defined above.
We can now calculate the number of particles \(N_{QB}^{\rm NE}\), the energy \(E_{QB}^{\rm NE}\), and the entropy \(S_{QB}^{\rm NE}\) of the QB using the spectral function \(A_{QB}(\omega)=-\operatorname{Im}G^{r}(\omega)/\pi\) and \(f_{QB}^{NE}\) as follows [62]:
\[N_{QB}^{\rm NE} =\int\mathrm{d}\omega\;A_{QB}(\omega)\;f_{QB}^{\rm NE}(\omega) \tag{10}\] \[E_{QB}^{\rm NE} =\int\mathrm{d}\omega\;\omega\;A_{QB}(\omega)\;f_{QB}^{\rm NE}(\omega) \tag{11}\]
and
\[S_{QB}^{\rm NE} =-\int\mathrm{d}\omega\;A_{QB}(\omega)\] \[\left[f_{QB}^{\rm NE}(\omega)\ln f_{QB}^{\rm NE}+(1-f_{QB}^{\rm NE }(\omega))\ln\!\left(1-f_{QB}^{\rm NE}\right)\right]\;. \tag{12}\]
Finally, from the general definitions Eqs. (1) and (2) and following Ref. [62], one obtains a compact expression for the exergy \(W_{QB}^{ext}\), expressed in terms of the spectral function \(A_{QB}\), the NE distribution function \(f_{QB}^{NE}\) and the equilibrium FD distribution \(f_{QB}^{eq}=f_{\alpha}^{eq}\) of the QB
\[\beta W_{ext} =\int d\omega\;A_{QB}(\omega)\] \[\left[f_{QB}^{NE}\ln\left(\frac{f_{QB}^{NE}}{f_{QB}^{eq}}\right) +(1-f_{QB}^{NE})\ln\left(\frac{1-f_{QB}^{NE}}{1-f_{QB}^{eq}}\right)\right]\;. \tag{13}\]
Now we have all the expressions needed to perform numerical calculations. We are looking for understanding the NE effects of charge and heat currents on the performance of the QB. More specifically, we want to know the advantages of applying a bias along with (or without) a temperature gradient on the charging protocol of the QBs, and find the best conditions to optimize the exergy (and/or the entropy) in the QB. Ultimately, we would like to provide an optimal charging protocol for the QBs.
## IV Results
Our model system contains several adjustable parameters. For convenience, we choose that the parameters \(\varepsilon_{\alpha}\), \(h_{\alpha}\), \(\nu_{\alpha}\) describing the \(\alpha=L,R\) reservoir are the same for both \(L\) and \(R\) reservoirs. Additionally, in order to avoid atypical results due to the (electron-hole) symmetry of the spectral function \(A_{QB}(\omega)\), namely \(A_{QB}(\omega)=A_{QB}(-\omega)\), and/or of the distribution functions, we choose that the equilibrium chemical potential \(\mu_{eq}\) is different from the band-center \(\varepsilon_{\alpha}\) of the reservoirs.
### The equilibrium case
First, it is instructive to consider the equilibrium case, for which \(\mu_{L}=\mu_{R}=\mu_{eq}\) and \(T_{L}=T_{R}=T_{eq}\). At equilibrium, there is no currents since hence \(f_{L}=f_{R}=f_{\alpha}^{eq}\). There is also no exergy in the QB since \(f_{QB}^{NE}=f_{QB}^{eq}=f_{\alpha}^{eq}\).
However, the number of particles, the energy and the entropy of the QB have a finite value, which depends on the position of the energy level \(\varepsilon_{QB}\) relative to the equilibrium Fermi level \(\mu_{eq}\).
We define three regimes: (1) the resonant transport regime where \(\varepsilon_{QB}=\mu_{eq}\), i.e. the equilibrium chemical potential \(\mu_{eq}\) is located at the peak of the spectral function \(A_{QB}(\omega)\), which is also the peak of the transmission probability \(\tau(\omega=\varepsilon_{QB})\sim 1\). One should note that, for the one energy-level model considered here, the spectral function \(A_{QB}(\omega)\) as well as the transmission \(\tau(\omega)\) are essentially peaked functions, with a maximum at \(\omega=\varepsilon_{QB}\) and a width given approximatively by \(\sim(\Gamma_{L}(\varepsilon_{QB})+\Gamma_{R}(\varepsilon_{QB}))/2\). (2) the off-resonant-empty regime where \(\varepsilon_{QB}\gg\mu_{eq}\). The equilibrium chemical potential \(\mu_{eq}\) is located in the (ascending) tail of the spectral function \(A_{QB}(\omega)\), and the energy level \(\varepsilon_{QB}\) is mostly empty at equilibrium, i.e. \(\int\mathrm{d}\omega f_{QB}^{NE}A_{QB}\ll 1\). (3) the off-resonant-full regime where \(\varepsilon_{QB}\ll\mu_{eq}\). The equilibrium chemical potential \(\mu_{eq}\) is located in the (descending) tail of the spectral function \(A_{QB}(\omega)\), and the energy level \(\varepsilon_{QB}\) is almost full at equilibrium, i.e. \(\int\mathrm{d}\omega f_{QB}^{NE}A_{QB}\sim 1\).
Fig. 2 shows the equilibrium entropy versus \(\varepsilon_{QB}\). The equilibrium entropy has a maximum value for half filling of the electronic level, i.e. when \(\varepsilon_{QB}=\mu_{eq}\), as expected. The maximum value is close to the value of \(-\ln(1/2)=\ln(2)\sim 0.69\) given by Landauer's principle. The entropy is zero when the electronic level is completely empty (\(\varepsilon_{QB}\gg\mu_{eq}\)) or completely full (\(\varepsilon_{QB}\ll\mu_{eq}\)). Note that this is simply a quantum electron equivalent of a classical problem in statistical mechanics 1.
Footnote 1: For the classical problem of \(N\) “particles” put on \(M\) sites, the entropy is given by the logarithm of the number of possible combinations to arrange the \(N\) particles on the \(M\) sites. The entropy is maximum when \(N\) is at/around half the number of sites. When all sites are occupied (or empty), there is only one combination possible, and therefore the entropy is zero.
Driving the QB out of equilibrium by applying a bias \(\Delta\mu=\mu_{L}-\mu_{R}\) or a temperature gradient \(\Delta T=T_{L}-T_{R}\) will lead to particle/energy transfer between the two reservoirs.
There will be an energy window for which \(f_{L}\neq f_{R}\). Most of the transfer of particle/energy happens within this energy window. The dependence of the entropy (and of the exergy) versus \(\varepsilon_{QB}\) will be modified from the equilibrium case when the energy level \(\varepsilon_{QB}\) is located within this energy window as shown in the next section.
### Non-equilibrium cases
As a first step towards understanding the QB charging, we will consider that the bias and/or the temperature gradient is applied only on one side of the L-QB-R junction. The chemical potential \(\mu_{R}\) and the temperature \(T_{R}\) of the right reservoir are kept constant and equal to their equilibrium values, i.e. \(\mu_{R}=\mu_{eq}\) and \(T_{R}=T_{eq}\), while a finite bias \(\Delta\mu\) and/or temperature gradient \(\Delta T\) is applied to the left reservoir, with chemical potential \(\mu_{L}=\mu_{eq}+\Delta\mu\) and temperature \(T_{L}=T_{eq}+\Delta T\).
We have performed calculations for the three regimes (mentioned in the previous section) in the presence of a temperature gradient (with/without applied bias) and in the presence of an applied bias (with/without temperature gradient). All the results for the charge and energy/heat currents are shown in Appendix B.
As expected, applying a bias \(\Delta\mu\) or a temperature gradient \(\Delta T\) will generate a charge or an energy current through the QB. This, in turn, increases the entropy and the exergy in the battery. The increase in entropy originates from the increase of possible electron/hole combinations in the QB due to the applied bias or temperature gradient. The increase in exergy originates from the charge and energy currents since the latter can be seen as extra work created in the QB by the NE conditions. As a general trend, the larger \(\Delta\mu\) or \(\Delta T\), the larger the currents (see Appendix B) and consequently the larger the entropy and the exergy.
Figure 3 shows the NE entropy in the QB versus the energy level \(\varepsilon_{QB}\) for an applied bias. In comparison to the equilibrium case of Fig. 2, the NE entropy increases for \(\varepsilon_{QB}\) values included in an energy window corresponding to the applied bias. This is to be expected since, for these energies \(\omega\), we have \(f_{L}(\omega)\neq f_{R}(\omega)\) and correlatively \(f_{QB}^{\rm NE}(\omega)\neq f_{\alpha}^{eq}(\omega)\), with values allowing for a wider range of energies for which the QB energy level is quasi half-filled.
#### iii.2.1 Applied temperature gradient \(\Delta T\)
The exergy results for different temperature gradients are shown in Fig. 4. As expected, the exergy increases with increasing temperature gradients. However, the behavior of \(\beta W_{ext}\) (\(\beta=\beta_{eq}\)) versus \(\Delta T\) depends strongly on the position of the energy level of the QB.
For the off-resonant cases, the charge current is dominated by one “type of particle”, i.e. by electrons for the off-resonant-empty regime and by holes for the off-resonant-full regime. Both thermal and charge currents have opposite signs in the two regimes (compare Fig. 11a and Fig. 13a). Regardless of the sign of the thermal currents, the currents increase with increasing \(\Delta T\), which therefore leads to an increase of the exergy (see Fig. 4a and 4c).
For the resonant case, there is a competing contribution from electron and hole processes and hence a strong reduction of the thermal charge current (see Fig. 9a), which also corresponds to a reduction of the exergy (see Fig. 4b) in comparison to the off-resonant cases.
Adding a small bias \(\Delta\mu\) to the temperature gradient \(\Delta T\) leads to a modification of the charge current. With our convention, a positive (negative) bias \(\Delta\mu=\mu_{L}-\mu_{R}\) implies electron transfer from left to right (right to left). This leads to a positive (negative) contribution to the thermal currents.
For the off-resonant-empty regime, an additional positive (negative) bias increases (decreases) the charge current and similarly for the exergy.
For the off-resonant-full regime, an additional positive (negative) bias increases (decreases) the charge current. However, because of the sign of the current, this corresponds to an opposite behavior for the absolute value of the current. And therefore, an additional positive (negative) bias decreases (increases) the exergy.
#### iii.2.2 Applied bias \(\Delta\mu\)
We now turn to the effects of small to large applied biases on the exergy (in the presence or not of a small temperature gradient). With our set-up, a positive bias favors electron transfer from left to right. Therefore positive biases lead to larger currents in the off-resonant-empty regime in comparison to the resonant and off-resonant-full regimes (compare Fig. 12a and Fig. 10a to Fig. 14a). Consequently, the exergy is larger for the off-resonant-empty regime in comparison to the resonant and off-resonant-full regimes (see Fig. 5).
Our calculations show that one obtains the largest values for the exergy for positive applied bias (in the off-resonant-empty regime) in comparison to temperature gradients. Based on these considerations, it seems feasible to develop optimal bias-charging protocols for QBs. Moreover, a negative temperature gradient in high bias-charging regimes in Fig. 5c improves battery performance. As a result, these observations provide a great deal of control over optimal battery charging.
Interestingly, it is possible to further optimize the exergy by adapting the position of the QB energy level to the NE conditions (positive applied bias). The dependence of the NE exergy versus \(\varepsilon_{QB}\) is shown in Fig. 6. The exergy reaches a maximum value when the QB energy level is located around the value of the applied bias \(\Delta\mu\).
Figure 3: Non-equilibrium entropy in the QB versus the energy level \(\varepsilon_{QB}\), for an applied bias \(\Delta\mu=0.5\) and \(T_{L}=T_{R}=T_{eq}\). Other parameters are the same as in Fig. 2.
Adding a (positive or negative) temperature gradient does not change much the position of the maximum of the exergy versus \(\varepsilon_{QB}\), as can be seen in Fig. 7.
## V Conclusion
We have presented a model for the study of the charging process and thermodynamics of QBs strongly coupled to their environments (heat and particle reservoirs). In our model, the battery is charged by applying a temperature gradient and/or bias to create NE steady-state currents of heat and particles. Our model uses the standard NE Green's function expressions for the quantum transport. We have found optimal regimes for battery charging by studying the battery performance using a NE distribution function approach.
By using this scenario, interesting results were obtained in regimes where the model parameters are delicately balanced. It has been shown that the charging of a quantum battery can be significantly improved by increasing the bias (chemical potential difference), compared to a temperature gradient, more specifically in the off-resonant-empty transport regime.
Figure 4: Maximum available work \(\beta W_{ext}\) as a function of \(\Delta T\) (with additional applied bias), for the different regimes: (4a) off-resonant-full \(\varepsilon_{QB}=-0.2\), (4b) resonant \(\varepsilon_{QB}=0.1\), and, (4c) off-resonant-empty \(\varepsilon_{QB}=0.4\). Other parameters are \(\varepsilon_{a}=0,\ h_{a}=1.0,\ \nu_{a}=0.12\), \(T_{eq}=0.1\) and \(\mu_{eq}=0.1\).
Figure 5: Maximum available work \(\beta W_{ext}\) as a function of \(\Delta\mu\) (with additional temperature gradient), for the different regimes: (5a) off-resonant-full \(\varepsilon_{QB}=-0.2\), (5b) resonant \(\varepsilon_{QB}=0.1\), and (5c) off-resonant-empty \(\varepsilon_{QB}=0.4\). The other parameters are the same as in Fig. 4.
Figure 6: Maximum available work \(\beta W_{ext}\) versus the energy level \(\varepsilon_{QB}\) for the different applied biases \(\Delta\mu\) and \(T_{L}=T_{R}=T_{eq}\). The exergy has a maximum value for \(\varepsilon_{QB}\sim\Delta\mu\). The other parameters are the same as in Fig. 4.
Figure 7: Maximum available work \(\beta W_{ext}\) versus the energy level \(\varepsilon_{QB}\) for an applied bias \(\Delta\mu=0.5\) and different small temperature gradients \(\Delta T\). The exergy has a maximum value for \(\varepsilon_{QB}\sim\Delta\mu\), and the presence of an additional temperature gradient does not affect too much the position of the maximum exergy. The other parameters are the same as in Fig. 4.
Additionally, it is also possible to enhance the battery charging performance by applying a small bias on top of a temperature gradient (or a small additional temperature gradient on top of an applied bias).
More importantly, one can further increase the exergy of the QB by adapting the position of the QB energy level in the NE conditions corresponding to applied biases. A maximum exergy is obtained for the QB energy level being equal to the applied bias. A three-terminal device where the QB is gated, in a similar way as in conventional transistors, could be designed to allow optimum exergy, by varying the gate voltage, when a charge current flows between the source and drain electrodes.
###### Acknowledgements.
This work has been supported by the University of Kurdistan. S. Salimi acknowledges research funding from the Iran National Science Foundation (INSF) under project No. 4003162.
## Appendix A An approximation for the reduced density matrix
Following the results presented in Ref. [65], it is possible to obtain (for small temperature gradients and biases) an explicit expression for the density matrix (in the NE steady-state regime) for the single-level system (QB-region). Following Sec. III in Ref. [65], such a density matrix is given by
\[\hat{\varrho}^{NE}_{QB}=\frac{\exp\!\left(-a\hat{d}^{\dagger}\hat{d}\right)}{ Z_{QB}}=\frac{\exp\!\left(-a\ \hat{d}^{\dagger}\hat{d}\right)}{1+\exp\!\left(-a\right)}\;. \tag{A1}\]
with \(a=\ln\!\left(d_{QB}^{-1}-1\right)\) and \(d_{QB}=\langle\hat{d}^{\dagger}\hat{d}\rangle=\int\mathrm{d}\omega\,A_{QB}( \omega)f^{\mathrm{NE}}_{QB}(\omega)\).
One should note that the definition of \(\hat{\varrho}^{\mathrm{NE}}_{QB}\) is completely different from the reduced density matrix \(\hat{\rho}^{\mathrm{NE}}_{QB}=\mathrm{Tr}_{L,R}[\hat{\rho}^{\mathrm{NE}}]\) in our approach. Some effects of NE conditions for the coupled QB region are taken into account via the quantities \(a\) and \(d_{QB}\). However \(\hat{\varrho}^{\mathrm{NE}}_{QB}\) has clearly the form of a density matrix for an isolated single-level system.
The elimination of the \(L\) and \(R\) degrees of freedom, in the calculation of \(\hat{\varrho}^{\mathrm{NE}}_{QB}\), is a difficult task. \(\hat{\rho}^{\mathrm{NE}}\) is built from the total Hamiltonian of the coupled system, and the total Hamiltonian does not commute with the individual Hamiltonians \(H_{L,QB,R}\) because of the coupling operators \(H_{int}\) between the QB and the \(L,R\) regions. Therefore the calculations based on \(\hat{\varrho}^{\mathrm{NE}}_{QB}\) are only valid in the weak-coupling (between the QB and the reservoirs) regime.
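For concreteness, the approximate density matrix of Eq. (A1) can be built explicitly in the two-state Fock basis \(\{|0\rangle,|1\rangle\}\). The sketch below uses an assumed occupation \(d_{QB}\) and simply checks the normalization of Eq. (A3) and the entropy formula of Eq. (A4).

```python
import numpy as np

d_qb = 0.37                                   # assumed occupation <d^dagger d>
a = np.log(1.0 / d_qb - 1.0)

# Fock basis {|0>, |1>}: the number operator d^dagger d is diag(0, 1)
rho = np.diag(np.exp(-a * np.array([0.0, 1.0])))   # exp(-a d^dagger d)
rho /= np.trace(rho)                                # Z_QB = 1 + exp(-a), Eq. (A3)

S_rho = -np.sum(np.diag(rho) * np.log(np.diag(rho)))               # -Tr[rho ln rho]
S_formula = -(1 - d_qb) * np.log(1 - d_qb) - d_qb * np.log(d_qb)   # Eq. (A4)
print(np.trace(rho), rho[1, 1], S_rho, S_formula)   # trace 1, occupation d_qb, matching entropies
```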
From Eq. (A1), we can evaluate the maximum available work in Eq. (3) for the QB region
\[\beta W^{o}_{\mathrm{ext}}=\mathrm{Tr}\left[\hat{\varrho}^{\mathrm{NE}}_{QB} \ln\left(\frac{\hat{\varrho}^{\mathrm{NE}}_{QB}}{\hat{\varrho}^{\mathrm{eq}}_ {QB}}\right)\right] \tag{A2}\]
In the definition of \(\hat{\varrho}^{\mathrm{NE}}_{QB}\), the partition function \(Z_{QB}\) is the trace \(Z_{QB}=\mathrm{Tr}[\exp\!\left(-a\hat{d}^{\dagger}\hat{d}\right)]=1+\exp(-a)\) taken over only two states corresponding to the QB energy level being empty or occupied by one electron
\[\begin{split} Z_{QB}&=\mathrm{Tr}[\exp\!\left(-a \hat{d}^{\dagger}\hat{d}\right)]\\ &=\sum_{N=0,1}\langle N|\exp\!\left(-a\hat{d}^{\dagger}\hat{d} \right)|N\rangle=1+\exp(-a)\end{split} \tag{A3}\]
where \(\langle 1|\hat{d}^{\dagger}\hat{d}|1\rangle=1\) and \(\langle 0|\hat{d}^{\dagger}\hat{d}|0\rangle=0\).
One can also define an entropy from \(\hat{\varrho}^{\mathrm{NE}}_{QB}\). By using the same trace as for the partition function, one easily gets
\[\begin{split} S^{\mathrm{NE}}_{\varrho}&=-\mathrm{Tr}[\hat{\varrho}^{\mathrm{NE}}_{QB}\ln\hat{\varrho}^{\mathrm{NE}}_{QB}]\\ &=-\frac{1}{1+e^{-a}}\ln\left(\frac{1}{1+e^{-a}}\right)-\frac{e^ {-a}}{1+e^{-a}}\ln\left(\frac{e^{-a}}{1+e^{-a}}\right)\\ &=-(1-d_{QB})\ln(1-d_{QB})-d_{QB}\ln(d_{QB})\end{split} \tag{A4}\]
And finally, the exergy in Eq. (A2) is given by the following expression
\[\beta W^{o}_{ext}=d_{QB}\ln\left(\frac{d_{QB}}{f^{\mathrm{eq}}_{QB}}\right)+( 1-d_{QB})\ln\left(\frac{1-d_{QB}}{1-f^{\mathrm{eq}}_{QB}}\right) \tag{A5}\]
Now, a few comments about the differences between our expressions for the NE entropy, Eq. (12), and maximum available work, Eq. (13), and the approximated Eqs. (A4)-(A5) are in order.
Our expressions Eq. (12) and Eq. (13) are obtained from an energy integration, where each energy corresponds to one “scattering process”. Such an energy integration does not appear in Eq. (A4) and Eq. (A5), or only indirectly via the definition of \(d_{QB}\). Moreover, one can interpret the spectral function \(A_{QB}\) as a probability distribution since \(\int\mathrm{d}\omega\,A_{QB}(\omega)=1\). Therefore, one term in our NE entropy expression can be rewritten as follows
\[\int\mathrm{d}\omega\,A_{QB}(\omega)\left[f^{\mathrm{NE}}_{QB}(\omega)\ln f^{ \mathrm{NE}}_{QB}(\omega)\right]=\langle f^{\mathrm{NE}}_{QB}\ln f^{\mathrm{ NE}}_{QB}\rangle_{A_{QB}}\;, \tag{A6}\]
which is simply an average over the distribution \(A_{QB}(\omega)\).
The corresponding term in \(S^{\mathrm{NE}}_{\varrho}\), Eq. (A4), is rewritten as
\[d_{QB}\ln d_{QB}=\langle f^{\mathrm{NE}}_{QB}\rangle_{A_{QB}}\ln\langle f^{\mathrm{NE}}_{QB}\rangle_{A_{QB}}\;. \tag{A7}\]
From probability theory, the average of a function \(\langle f(X)\rangle\) is (in the most general case) never equal to the function of the average, i.e. \(\langle f(X)\rangle\neq f(\langle X\rangle)\). Therefore the results from Eq. (12) or Eq. (13) will always differ from those of Eq. (A4) or Eq. (A5).
However, there is only one case for which Eq. (12) or Eq. (13) gives the same results as Eq. (A4) or Eq. (A5). It is the case of weak coupling to the leads (\(\nu_{L,R}\to 0\)), for which \(A_{QB}(\omega)\rightarrow\delta(\omega-\varepsilon_{QB})\), leading to \(\langle f^{\mathrm{NE}}_{QB}\ln f^{\mathrm{NE}}_{QB}\rangle_{A_{QB}}=f^{\mathrm{NE}}_{QB}(\varepsilon_{QB})\ln f^{\mathrm{NE}}_{QB}(\varepsilon_{QB})\) and \(\langle f^{\mathrm{NE}}_{QB}\rangle_{A_{QB}}=f^{\mathrm{NE}}_{QB}(\varepsilon_{QB})\).
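This difference between the average of a function and the function of the average is easy to check numerically. The sketch below (assuming a Lorentzian \(A_{QB}\) and symmetric, energy-independent couplings, with illustrative parameter values) compares the energy-integrated NE entropy of Eq. (12) with the approximate entropy of Eq. (A4) for decreasing coupling strength; the two converge in the weak-coupling limit, as stated above.

```python
import numpy as np

def fermi(w, mu, T):
    return 1.0 / (np.exp((w - mu) / T) + 1.0)

eps_qb, mu_L, mu_R, T = 0.4, 0.6, 0.1, 0.1      # assumed illustrative parameters
w = np.linspace(-6.0, 6.0, 200001)
dw = w[1] - w[0]

for gam in (0.24, 0.12, 0.06, 0.01):            # total coupling Gamma_L + Gamma_R
    A = (gam / (2 * np.pi)) / ((w - eps_qb) ** 2 + (gam / 2) ** 2)
    f = 0.5 * (fermi(w, mu_L, T) + fermi(w, mu_R, T))      # Eq. (9), symmetric couplings
    f = np.clip(f, 1e-12, 1 - 1e-12)
    S_int = -np.sum(A * (f * np.log(f) + (1 - f) * np.log(1 - f))) * dw   # Eq. (12)
    d = np.sum(A * f) * dw                                                # d_QB
    S_apx = -(1 - d) * np.log(1 - d) - d * np.log(d)                      # Eq. (A4)
    print(f"Gamma = {gam:5.2f}:  S_integrated = {S_int:.4f}   S_approx = {S_apx:.4f}")
```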
In Fig. 8, we show the maximum available work and NE entropy, as a function of \(\Delta\mu\), calculated from the density matrix \(\hat{\varrho}^{\mathrm{NE}}_{QB}\) (\(W^{o}_{ext},S^{NE}_{\varrho}\)) and from the correct distribution function \(f^{\mathrm{NE}}_{QB}\) (\(W_{ext},S^{\mathrm{NE}}_{QB}\)). Figs. 8a and 8b correspond to the intermediate coupling regime \(\nu_{\alpha}=0.12\) and the weak coupling regime \(\nu_{\alpha}=0.06\), respectively.
Calculations from Eq. (13) and Eq. (12) differ from the maximum work and NE entropy obtained from Eq. (A5) and Eq. (A4), as expected (in the general case). In the limit of weak coupling to the \(L\) and \(R\) reservoirs, both approaches provide similar results. This is to be expected as the effective form of the density matrix \(\hat{\varrho}_{QB}^{\text{NE}}\) is that of an isolated (i.e. not coupled to the reservoirs) system.
## Appendix B Results for the charge and energy currents
The analysis of the \(\Delta T\) and \(\Delta\mu\) dependence of the currents, entropy and exergy leads to the following:
The exergy (divided by the equilibrium temperature) \(\beta W_{ext}\) has larger values for an applied bias \(\Delta\mu\) (with no temperature gradient \(T_{L}=T_{R}\)) than for an applied temperature gradient \(\Delta T\) (with no bias \(\mu_{L}=\mu_{R}\)).
The largest values for \(\beta W_{ext}\) are obtained for the off-resonant-empty regime with an applied bias \(\Delta\mu\) (\(T_{L}=T_{R}\)), see Fig. 5. Such values can be increased by applying an additional negative temperature gradient \(\Delta T<0\) (where the heat flow goes from right to left, \(T_{L}<T_{R}\)) when \(\Delta\mu>\varepsilon_{QB}-\mu_{eq}=0.3\). Below the threshold \(\Delta\mu<\varepsilon_{QB}-\mu_{eq}=0.3\), one can increase the values of \(\beta W_{ext}\) by applying an opposite temperature gradient \(\Delta T>0\) (where the heat flow goes from left to right, \(T_{L}>T_{R}\)). Note that the crossover point in the exergy and in the charge current \(I_{Q}\), energy current \(J_{E}\) and heat current \(J_{H}^{L}\) (see Figs. 14a, 14b and 14c, respectively) corresponds to the chemical potential \(\mu_{L}\) reaching the maximum of the transmission (and of the spectral function \(A_{QB}(\omega)\)), i.e. \(\mu_{L}=\varepsilon_{QB}\). Surprisingly, a reverse temperature gradient in high bias-charging regimes improves battery performance. Consequently, the NE heat current can boost the battery performance or diminish it. A great deal of control over battery charging is therefore possible.
The values of the charge current \(I_{Q}\) in the off-resonant-full regime for different applied biases \(\Delta\mu\), see Fig. 12a, are rather small. This is because the transport occurs via the (descending) tail of the transmission peak where the transmission values are small (\(\ll 1\)). The maximum in transmission \(\tau\approx 1\) occurs for energies around \(\omega\sim\varepsilon_{QB}=-0.2\). This is also reflected in the small values of the energy and heat currents, see Figs. 12b and 12c, and of the exergy, Fig. 5a. For the off-resonant-empty regime, the applied bias \(\Delta\mu\) window sweeps the entire transmission peak. An increasing bias \(\Delta\mu\) leads to an increase in the charge \(I_{Q}\) and energy \(J_{E}\) currents, as well as in the exergy.
As far as the \(\Delta\mu\) dependence of the different quantities is concerned, one can see that the NE entropy has smaller amplitudes for the off-resonant-full regime compared to the off-resonant-empty regime, see Fig. 15. A similar behavior is also obtained for the exergy, compare Fig. 5a and Fig. 5c. Given that the exergy or extractable work is a part of the entropy production spent to reach the NE steady-state, the similar behavior of the NE entropy and of the exergy is reasonable and expected. |
2309.16372 | Aperture Diffraction for Compact Snapshot Spectral Imaging | We demonstrate a compact, cost-effective snapshot spectral imaging system
named Aperture Diffraction Imaging Spectrometer (ADIS), which consists only of
an imaging lens with an ultra-thin orthogonal aperture mask and a mosaic filter
sensor, requiring no additional physical footprint compared to common RGB
cameras. Then we introduce a new optical design that each point in the object
space is multiplexed to discrete encoding locations on the mosaic filter sensor
by diffraction-based spatial-spectral projection engineering generated from the
orthogonal mask. The orthogonal projection is uniformly accepted to obtain a
weakly calibration-dependent data form to enhance modulation robustness.
Meanwhile, the Cascade Shift-Shuffle Spectral Transformer (CSST) with strong
perception of the diffraction degeneration is designed to solve a
sparsity-constrained inverse problem, realizing the volume reconstruction from
2D measurements with Large amount of aliasing. Our system is evaluated by
elaborating the imaging optical theory and reconstruction algorithm with
demonstrating the experimental imaging under a single exposure. Ultimately, we
achieve the sub-super-pixel spatial resolution and high spectral resolution
imaging. The code will be available at: https://github.com/Krito-ex/CSST. | Tao Lv, Hao Ye, Quan Yuan, Zhan Shi, Yibo Wang, Shuming Wang, Xun Cao | 2023-09-27T16:48:46Z | http://arxiv.org/abs/2309.16372v1 | # Aperture Diffraction for Compact Snapshot Spectral Imaging
###### Abstract
We demonstrate a compact, cost-effective snapshot spectral imaging system named Aperture Diffraction Imaging Spectrometer (ADIS), which consists only of an imaging lens with an ultra-thin orthogonal aperture mask and a mosaic filter sensor, requiring no additional physical footprint compared to common RGB cameras. We then introduce a new optical design in which each point in the object space is multiplexed to discrete encoding locations on the mosaic filter sensor by diffraction-based spatial-spectral projection engineering generated from the orthogonal mask. The orthogonal projection is uniformly accepted to obtain a weakly calibration-dependent data form to enhance modulation robustness. Meanwhile, the Cascade Shift-Shuffle Spectral Transformer (CSST), with strong perception of the diffraction degeneration, is designed to solve a sparsity-constrained inverse problem, realizing the volume reconstruction from 2D measurements with a large amount of aliasing. Our system is evaluated by elaborating the imaging optical theory and reconstruction algorithm, and by demonstrating experimental imaging under a single exposure. Ultimately, we achieve sub-super-pixel spatial resolution and high spectral resolution imaging. The code will be available at: [https://github.com/Krito-ex/CSST](https://github.com/Krito-ex/CSST).
## 1 Introduction
Snapshot spectral imaging (SSI) refers to the acquisition of a 3D spatial-spectral data cube containing spectral information at each spatial location in a single exposure [1]. Since the spectrum is a fundamental property that encodes the physical characteristics of a scene, visual and discriminative capability along the spectral and temporal dimensions leads to unparalleled high-dimensional perception [2]. Hence, the acquisition of high temporal-spatial-spectral resolution data can provide a more comprehensive and refined observation and measurement of dynamic objects or processes.
Compared with scanning strategies of traditional imaging spectrometers along the spatial or spectral dimension, SSI methods perform specific system designs [3, 4, 5, 6] based on the intrinsic sparsity of the spatial-spectral information of a scene through predefined and well-calibrated modulation or projection paradigms, which can achieve video-level capture of spectral data and have the potential for a wide range of applications in various scenarios such as combustion dynamics [7], cellular dynamics [8], industrial monitoring [9].
However, shortcomings in the compactness, the spatial-temporal-spectral resolution of the imaging system, and the robustness of the modulation limit the application of SSI where portability is paramount [10, 11]:
SSI systems based on computational imaging methods recover the spectral cube by encoding the incident light and solving an underdetermined, sparsity-constrained inverse problem. However, the current prevailing designs rely on bulky relay systems, prisms, or other dispersive elements that result in massive and complex optical systems [10].
Figure 1: (a) illustrates the CTIS acquisition method and its strategy of using a long optical path and sacrificing spatial resolution, while ADIS reconstructs from aliasing; (b) depicts different imaging methods for mosaic filter sensors.
Among these, dispersive methods exemplified by CTIS [4] obviate the need for spatial modulation at the relay system's focal plane, offering the potential for compact designs. However, as shown in Figure 1(a), CTIS adopts a long optical path and sacrifices spatial resolution to reduce the degree of data aliasing. In contrast, we propose a framework that utilizes a single mask at a non-focal-plane location to achieve diffractive effects previously accomplished with complex gratings. The orthogonal mask, which consists of two sets of parallel lines in orthogonal directions, generates multiplexed spatial-spectral projections through diffraction, from which 3D data cubes can be reconstructed without sacrificing system integration. Overall, ADIS greatly improves the compactness of spectral imaging systems, with the same level of integration and manufacturing cost as common RGB or monochrome cameras.
The filter array-based SSI schemes have a compact architecture, but as shown in Figure 1(b), the filter array itself is a sampling trade-off in spatial-spectral dimensions, sacrificing the spatial or spectral resolving ability of imaging systems [12]. The encoding potential of the filter array, however, opens the door to an inverse solution process in ADIS. So a novel encoding scheme is adopted, treating the filter array as a sub-super-resolution encoding array with periodicity. Further, we establish a Transformer-based deep unfolding method, CSST, with orthogonal degradation perception that simultaneously captures local contents and non-local dependencies to meet the challenge of reconstructing convincing sub-super-resolution results from highly confounded measurements.
Additionally, existing SSI technologies rely on multiple optical devices to complete optical encoding in physical space, and their accuracy in practical applications depends on the spatial-spectral mapping relationship determined by the calibration position of the optical components, while the proposed ADIS maintains spatial invariance. Under arbitrary perturbation of the aperture mask, it still uniformly maintains the mixed spectral encoding generated by the optical multiplexer, solving the movement problem faced in actual measurements. Furthermore, when the physical parameters of the optical device are determined, the distance between the optical multiplexer and the sensor is the only variable that affects the spectral mapping. Therefore, ADIS reconstruction only relies on the constant parameters of the system and the distance between the system and the imaging plane, without any complicated calibration.
In summary, specific contributions are:
\(\bullet\) A novel SSI framework with an optical multiplexer, enabling high-fidelity and compact snapshot spectral imaging, offering greater resilience against extraneous perturbations.
\(\bullet\) A novel diffraction-projection-guided algorithm for hyperspectral reconstruction, capturing the intricate dependencies of diffraction-guided spatial-spectral mapping.
\(\bullet\) A prototype device demonstrating excellent hyperspectral acquisition and reconstruction performance.
\(\bullet\) Theoretical derivation, structural analysis and necessary trade-offs for system and algorithm design.
## 2 Related Work
**Coded aperture methods** involve the utilization of a coded aperture, which selectively blocks incident light either in the focal plane or the rainbow plane [13]. Over the past few decades, various representative systems such as CASSI [3, 14], PMVIS [5] and HVIS [15] have been developed to code the light field in the image plane using an occlusion mask, while employing a dispersive element to realize spectral modulation. Additionally, several improvement schemes have been proposed [16, 17, 18] to enhance the effectiveness of the coding process. Despite their efficacy, these systems suffer from the limitations of a bulky optical relay system and the lack of robustness in calibration due to environmental disturbances. In contrast, our system highlights modulation robustness, which is achieved through a clean architecture comprising solely a mosaic sensor and lenses in combination with an optical multiplexer.
**Dispersive methods** use prisms or diffractive optics to encode the spectral information. For example, CTIS [4] sacrifices spatial resolution for spectral resolution and suffers from cone loss; other methods use a single dispersion to blur the scene, but this leads to a highly ill-conditioned problem and low reconstruction accuracy because the spectral encoding is only at the edges of the objects in the scene [19]; or, to further improve the compactness of the system, diffractive optics such as DOEs are used to reconstruct 3D hyperspectral information based on the sparsity assumption. However, the modulation robustness of these approaches is still limited by the created anisotropic PSF [20]. In contrast, our system preserves compactness while substantially enhancing its potential for portable application scenarios.
**Filter-array-based methods** commonly recover desired channels by utilizing tiled spectral filter arrays in conjunction with the sensor, which incorporate a unique layout of super-pixels periodically arranged in the plane, leading to a reduction in spatial resolution as the number of sampled channels increases [12]. While some demosaicing techniques may be used in combination with filter-array-based methods, they rely on data that is not initially captured by the sensor [21]. Although constrained by detector and filter dimensions, narrow-band filter-based spectrometers possess a distinct advantage in terms of miniaturization [10]. Various design solutions, such as thin-films [22], planar photonic crystals [23], and metasurfaces [24], have been demonstrated in laboratory settings for the development of filter-array-based spectrometers. In this study, we utilize an orthogonal mask to multiplex information from a single point to different sensor locations for encoding purposes.
Furthermore, our approach can be applied to other hardware solutions for mosaic encoding designs, thus extending its potential applications.
**Reconstruction Algorithm.** In the field of hyperspectral image (HSI) reconstruction, traditional iterative decoding approaches encounter significant challenges in terms of the time-consuming encoding process and the requirement for prior knowledge [25, 26]. To address these challenges, end-to-end deep-learning approaches have been proposed and have demonstrated remarkable potential in optimizing complex ill-posed problems in various snapshot imaging systems [27, 28, 29, 30]. Notably, \(\lambda\)-net [29] and TSA-net [28] have proposed dual-stage generative models and self-attention, respectively, to map HSI images from a single-shot measurement while modeling spatial and spectral correlation with reasonable computation cost. Recently, Transformer-based methods [31, 32, 33] have emerged as superior alternatives, outperforming previous methods and greatly improving reconstruction efficiency. Additionally, some studies have combined the strengths of both iterative and deep-learning methods by utilizing deep unfolding networks for HSI reconstruction [34, 33]. However, most of these methods rely on a structural mathematical representation of the inverse process, which is absent in ADIS, making the above methods inapplicable or ineffective; a Transformer-based deep unfolding method, CSST, is therefore designed to meet the requirements of ADIS inverse solving.
## 3 System overview
This section introduces the proposed SSI system, ADIS, covering its basic configuration, principles, and mathematical logic for determining the system imaging model and device parameters. We also discuss design trade-offs of system parameters and analyze the system's robustness to external perturbations.
### System Configuration
Figure 2 illustrates the configuration of our aperture diffraction imaging spectrometer system, comprising a special lens featuring an orthogonal mask on the principal plane. Alternatively, the lens can be substituted with two plano-convex lenses and orthogonal masks during experimentation. The system is completed with a mosaic array filter camera. When a field point with a smooth reflectance distribution is captured, the system disperses spectral information across different spectral bands in an orthogonal pattern. This pattern directs the information to various sub-pixel positions on the mosaic filter-encoded array. As a result, each sub-pixel on the sensor collects different bands from different spatial positions, enabling sub-super pixel resolution snapshot spectral imaging.
### Imaging Forward Model
We now consider a multi-slit diaphragm consisting of \(N\) parallel rectangular slits with rectangular apertures of width \(a\) and length \(b\). The distance between two adjacent slits is \(d\). A simplified schematic of ADIS is shown in Figure 3(a). According to the Huygens-Fresnel principle, each point on a wavefront can be considered a new secondary wave source. Thus, we can treat each rectangular aperture as a point source for the multi-slit diaphragm. By superposing the waves generated by each of these point sources, we can derive the complete wave pattern of the entire diaphragm:
\[E_{p}=E_{0}\frac{\sin\beta_{1}}{\beta_{1}}\frac{\sin N\gamma_{1}}{\sin\gamma_{ 1}}\frac{\sin\beta_{2}}{\beta_{2}}\frac{\sin N\gamma_{2}}{\sin\gamma_{2}} \tag{1}\]
Where \(\theta_{1}\) and \(\theta_{2}\) are the diffraction angles in x- and y-directions respectively, \(\beta_{1}=\frac{1}{2}kb\sin\theta_{1}\), \(\beta_{2}=\frac{1}{2}ka\sin\theta_{2}\), \(\gamma_{1}=\frac{1}{2}kd\sin\theta_{1}\), \(\gamma_{2}=\frac{1}{2}kd\sin\theta_{2}\). Further, by utilizing the paraxial approximation in far-field imaging, the angular relationship can be transformed into a position relationship (\(\sin\theta_{1}\approx\tan\theta_{1}=\frac{x}{f_{2}}\), \(\sin\theta_{2}\approx\tan\theta_{2}=\frac{y}{f_{2}}\)). As a result,
Figure 2: Illustration of ADIS architecture and reconstruction pipeline. In the upper-left, the equivalence between the two complementary masks is depicted. The PSF show in the middle is obtained by ADIS through monochromatic illumination and multiband superimposition.
the intensity and position relationship of the diffraction pattern can be represented as follows:
\[I(x,y,\lambda)=I_{0}\cdot D(x,y,\lambda)\cdot P(x,y,\lambda) \tag{2}\]
\[D(x,y,\lambda)=\sin{c^{2}(\frac{b}{\lambda f_{2}}x)}\sin{c^{2}(\frac{a}{\lambda f _{2}}y)} \tag{3}\]
\[P(x,y,\lambda)=\Bigg{[}\frac{\sin(N\frac{\pi d}{\lambda f_{2}}x)}{\sin(\frac{ \pi d}{\lambda f_{2}}x)}\Bigg{]}^{2}\Bigg{[}\frac{\sin(N\frac{\pi d}{\lambda f _{2}}y)}{\sin(\frac{\pi d}{\lambda f_{2}}y)}\Bigg{]}^{2} \tag{4}\]
Where \(D(x,y,\lambda)\) is the diffraction factor, describing the diffraction effect of each rectangular aperture, and \(P(x,y,\lambda)\) is the interference factor, describing the effect of multi-slit interference. \((x,y)\) denotes the spatial coordinates on the receiving screen, while \(f_{2}\) denotes the distance between the diffraction array and the sensor.
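As a small illustration, Eqs. (2)-(4) can be evaluated directly on a pixel grid to obtain the single-wavelength diffraction pattern produced by the orthogonal mask. In the sketch below, \(a\), \(b\) and \(d\) follow the values given later in this section, while \(N\), \(f_{2}\) and the sensor patch size are assumed values chosen only for illustration.

```python
import numpy as np

def grating_factor(u, N):
    """sin(N*u)/sin(u), with its limiting value (+-N) at the zeros of sin(u)."""
    num, den = np.sin(N * u), np.sin(u)
    small = np.abs(den) < 1e-12
    return np.where(small, N * np.cos(N * u) / np.cos(u),
                    num / np.where(small, 1.0, den))

def adis_pattern(lam, a=5e-6, b=5e-6, d=10e-6, N=100, f2=25e-3,
                 half_size=2e-3, n_pix=1001):
    """Intensity I(x, y, lam) of Eq. (2) on an n_pix x n_pix sensor patch (I0 = 1)."""
    x = np.linspace(-half_size, half_size, n_pix)
    X, Y = np.meshgrid(x, x, indexing="ij")
    # diffraction (envelope) factor of Eq. (3); np.sinc(z) = sin(pi z) / (pi z)
    D = np.sinc(b * X / (lam * f2)) ** 2 * np.sinc(a * Y / (lam * f2)) ** 2
    # interference factor of Eq. (4)
    u, v = np.pi * d * X / (lam * f2), np.pi * d * Y / (lam * f2)
    P = grating_factor(u, N) ** 2 * grating_factor(v, N) ** 2
    return D * P

psf_550 = adis_pattern(550e-9)              # single-wavelength pattern at 550 nm
print(psf_550.shape, psf_550.max())         # peak value ~ N**4 at the zero order
```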
Therefore, given our design with a lens generating orthogonal diffraction in front of the sensor, the forward model of ADIS can be considered as a combination of projection modulation and intensity encoding:
\[L[x,y]=\sum_{\lambda=0}^{K-1}F_{\lambda}[x,y]\cdot Q[x,y,\lambda] \tag{5}\]
Where \(F_{\lambda}[x,y]\) denotes the modulation of optical multiplexer, which is comprehensively conveyed via Equation 2, while \(Q[x,y,\lambda]\) denotes filtering and coding influence of mosaic filter sensors.
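A minimal sketch of Eq. (5) is given below: the per-band patterns \(F_{\lambda}\) (e.g. produced with the diffraction sketch above) are weighted by a mosaic filter code \(Q\) and summed over wavelength. The \(2\times 2\) RGGB-like mosaic layout and the smooth filter responses used here are placeholders, not the actual sensor calibration.

```python
import numpy as np

def mosaic_code(H, W, n_bands, pattern=((0, 1), (1, 2))):
    """Build Q[x, y, lambda]: each pixel of a 2x2 mosaic sees one assumed filter curve."""
    centers = np.linspace(0, n_bands - 1, 3)
    response = np.exp(-0.5 * ((np.arange(n_bands)[None, :] - centers[:, None])
                              / (n_bands / 4.0)) ** 2)      # 3 smooth placeholder filters
    filt_idx = np.empty((H, W), dtype=int)
    for i in range(2):
        for j in range(2):
            filt_idx[i::2, j::2] = pattern[i][j]
    return response[filt_idx]                                # shape (H, W, n_bands)

def adis_measurement(cube_F):
    """cube_F[x, y, lambda]: the scene already projected by the per-band patterns F_lambda."""
    H, W, K = cube_F.shape
    return (cube_F * mosaic_code(H, W, K)).sum(axis=-1)       # Eq. (5)

cube = np.random.rand(256, 256, 28)       # stand-in for F_lambda[x, y]
print(adis_measurement(cube).shape)       # (256, 256) snapshot measurement
```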
### Orthogonal Mask Parameters
Through the analytical formula of aperture diffraction in the image plane, we can analyze the relationship between the different diffraction orders and the parameters of the mask. We adjust the aperture mask parameters to increase the intensity of first-order diffraction while suppressing low-order diffraction. Increasing the diffraction intensity of one order can add more spectral information to the image plane, while suppressing other diffraction orders can reduce the stray intensity information during image processing. The dispersion function of the aperture mask is uniquely determined by the imaging focal length \(f_{2}\) and period \(d\), where the dispersion distance for the first-order diffraction is: \(\Delta x_{m}=\frac{f_{2}}{d}\left(\lambda_{\max}-\lambda_{\min}\right)\). Notably, expanding the dispersion distance enhances the system's spectral resolution, yet it also magnifies the PSF dispersion, exacerbating the underdetermination of the reconstruction. Combining the effects of dispersion distance and PSF discretization on the reconstruction of the system, we choose an appropriate square-hole period \(d=10\mu m\), which is within our manufacturing capability. We then calculate and compare the intensity distributions of zero-order and first-order diffraction.
For the zero-order diffraction:
\[I=I_{0}\bigg{(}\frac{\sin{\beta_{1}}}{\beta_{1}}\bigg{)}^{2}\bigg{(}\frac{\sin {\beta_{2}}}{\beta_{2}}\bigg{)}^{2}N^{4}=I_{0}N^{4} \tag{6}\]
For the first-order diffraction:
\[I^{\prime}=I_{0}N^{4}\bigg{(}\frac{\sin{\beta_{1}}}{\beta_{1}}\bigg{)}^{2}=I_ {0}N^{4}\bigg{[}\frac{d}{b\pi}\sin{\bigg{(}\frac{b}{d}\pi\bigg{)}}\bigg{]}^{2} \tag{7}\]
Let \(m=\frac{d}{b}\), so that \(I^{\prime}/I=\left[\frac{m}{\pi}\sin\left(\frac{\pi}{m}\right)\right]^{2}\). According to this calculation, the intensity contrast between the zero-order diffraction and the first-order diffraction depends entirely on \(\frac{d}{b}\), i.e. the ratio between the opening aperture and the spacing of the square holes.
Furthermore, considering the diffraction pattern defined in Equation 2, varying \(m\) also influences the diffraction pattern. When we set \(a=b=5\mu m\) for the case of \(d=10\mu m\), all the even orders are missing, which appropriately reduces the projection complexity. \(I_{D_{x}-D_{y}}=I_{0}N^{4}A_{D_{x}}A_{D_{y}}\) expresses the intensity relation of the different diffraction orders, where \(D_{x}\), \(D_{y}\) denote the diffraction orders in the two orthogonal directions respectively.
\[A_{D_{x}}=\begin{cases}1&,D_{x}=0\\ \frac{4}{D_{x}^{2}\pi^{2}}&,D_{x}=1,3,5,...\\ 0&,D_{x}=2,4,6,...\\ \end{cases} \tag{8}\]
\(A_{D_{x}}\), \(A_{D_{y}}\) have the same mathematical form and together define the projection form of ADIS. Furthermore, the complementary form of the \(N\times N\) square aperture array can be employed using the Babinet principle, thereby elevating the light throughput efficiency from \(25\%\) to \(75\%\).
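A quick numerical check of these relations, under the stated choice \(a=b=5\mu m\) and \(d=10\mu m\) (i.e. \(m=2\)), is sketched below: the first order carries \(4/\pi^{2}\approx 0.41\) of the zero-order intensity, and all even orders vanish as in Eq. (8).

```python
import numpy as np

m = 2.0                                                  # d = 10 um, b = 5 um
print("I(1st) / I(0th) =", (m / np.pi * np.sin(np.pi / m)) ** 2)   # 4/pi^2 ~ 0.405

D = np.arange(0, 7)
denom = np.where(D == 0, 1, D) ** 2 * np.pi ** 2         # dummy 1 avoids division by zero
A_D = np.where(D == 0, 1.0, np.where(D % 2 == 1, 4.0 / denom, 0.0))
print(dict(zip(D.tolist(), np.round(A_D, 4))))           # even orders are exactly zero
```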
### Modulation Robustness
The maintenance of spatial invariance in optical systems is an indispensable characteristic for effectively addressing interference-related issues. Since depth invariance is the main concern for an imaging system, here we address the depth invariance of ADIS. Suppose a monochromatic incident wave field \(u_{0}\) with amplitude \(A_{0}\) and phase \(\phi_{0}\) passes through the optical multiplexer:
Figure 3: (a) illustrates the simplified schematic of the ADIS’s profile; (b) shows the PSF of the system at different depths.
\[u_{0}(x_{m},y_{m})=A_{0}(x_{m},y_{m})e^{i\phi_{0}(x_{m},y_{m})} \tag{9}\]
An amplitude encoding and phase shift occurs by the optical multiplexer:
\[u_{1}(x_{m},y_{m})=u_{0}(x_{m},y_{m})A_{1}(x_{m},y_{m})e^{i\phi_{1}(x_{m},y_{m})} \tag{10}\]
When ADIS is illuminated by a point light source located at depth \(Z\), the spherical wave field \(u_{0}\) emitted by the source and incident on the optical multiplexer can be represented by:
\[u_{0}(x_{m},y_{m};Z)\propto\frac{1}{\xi}e^{ik(\xi-Z)} \tag{11}\]
Where \(\xi=\sqrt{{x_{m}}^{2}+{y_{m}}^{2}+Z^{2}}\). Since the aperture size is negligibly small compared with the imaging depth, the following relationship holds: \(\xi\approx Z\). Then the wave field \(u_{1}\) modulated by the optical multiplexer can be expressed as:
\[u_{1}(x_{m},y_{m};Z)\propto\frac{1}{Z}A_{1}(x_{m},y_{m})e^{i\{k(\xi-Z)+\phi_{ 1}(x_{m},y_{m})\}} \tag{12}\]
Since \(\xi\approx Z\), the point source is relatively close to optical infinity, and \((\xi-Z)\ll\phi_{1}(x_{m},y_{m})\) holds in Equation 12. Then Equation 12 can be approximated as Equation 10. The above derivation confirms the depth invariance of ADIS within a specific depth range. We validated this by capturing ADIS PSFs at various depths using a \(550nm\) laser (Figure 3(b)), revealing consistent invariance beyond the imaging focal length.
Moreover, ADIS demonstrates resilience to \((x,y)\)-direction device perturbations, provided the modulation plane remains within the imaging optical path. Here, we assume a positional shift \(p\) in the y-direction of the mask. Then we obtain: \(E_{p}=E_{0}\frac{\sin\beta_{1}}{\beta_{1}}\frac{\sin N\gamma_{1}}{\sin\gamma_{1}}\frac{\sin\beta_{2}}{\beta_{2}}\frac{\sin N\gamma_{2}}{\sin\gamma_{2}}\cdot e^{ikp\sin\theta}\). Taking the amplitude of the electric field yields \(|E_{p}|=E_{0}\frac{\sin\beta_{1}}{\beta_{1}}\frac{\sin N\gamma_{1}}{\sin\gamma_{1}}\frac{\sin\beta_{2}}{\beta_{2}}\frac{\sin N\gamma_{2}}{\sin\gamma_{2}}\), which verifies the modulation robustness of the system.
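The in-plane robustness argument can also be verified numerically: laterally shifting the mask only multiplies the far field by a unit-modulus phase factor, leaving the intensity unchanged. The sketch below uses an assumed pixelated 1D slit mask and approximates the Fraunhofer far field by a discrete Fourier transform; it is an illustration of the argument, not the system model.

```python
import numpy as np

n, period, open_width = 4096, 16, 8                 # assumed pixelated 1D slit mask
mask = ((np.arange(n) % period) < open_width).astype(float)

def far_field_intensity(t):
    # Fraunhofer far field ~ Fourier transform of the aperture transmission
    return np.abs(np.fft.fftshift(np.fft.fft(t))) ** 2

shifted = np.roll(mask, 37)                          # lateral shift of the mask
err = np.max(np.abs(far_field_intensity(mask) - far_field_intensity(shifted)))
print(f"max intensity change after shifting the mask: {err:.2e}")   # machine-precision level
```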
## 4 Hyperspectral Reconstruction
Drawing on the benefits of self-attention for simultaneously capturing both short- and long-ranged dependencies and dynamic weighting, the Transformer architecture has demonstrated exceptional performance in a range of tasks [31, 32, 33, 35, 36, 37]. In parallel, the deep unfolding framework shows considerable promise through the utilization of multi-stage networks to map measurement outcomes onto the HSI, coupled with layer-by-layer optimization of the imaging system's prior model. This approach affords a more seamless integration between the deep unfolding framework and the imaging model.
In this paper, we present the Cascade Shift-Shuffle Spectral Transformer (CSST) algorithm, which is designed to improve network degradation perception by leveraging shift and shuffle operations that conform to the physical model of imaging and possess a strong perception of orthogonal diffraction projection patterns.
### COPF
To tackle the aforementioned challenges, we develop a Cascaded Orthogonal Perception Framework (COPF) that utilizes a deep unfolding framework to address the aperture diffraction degradation process. The COPF is illustrated in Figure 4. First, a lightweight Quantitative Parameter Estimation network (QPENet) is designed to estimate key cues for later iterations from the system's measurements and prior information such as the filter-encoded spectral response and the orthogonal diffraction patterns. Notably, the computed PSF exhibits greater spatial extent than the input filter function. To tackle data redundancy, we first downsample the PSF's spatial resolution and the filter function's channel dimension. Figure 4 illustrates the architecture of QPENet, which includes a \(conv1\times 1\), a \(conv3\times 3\), and three fully connected layers. The estimated parameter \(\beta=\{\beta_{1},\beta_{2},...,\beta_{k}\}\) is a multichannel feature map that has the same resolution as the input features, whose number of channels is kept consistent with the number of iterations, allowing the estimated parameters to guide and optimize the reconstruction process pixel by pixel. Subsequently, COPF adaptively adjusts the feature map information to guide the iterative learning by inputting \(\beta\) channel by channel at different levels of iterations. The initial values for the iterative process in COPF are acquired through a multi-scale integration of system measurements and prior knowledge. During the iterative learning process, the denoiser is cascaded with different cue information input directly in the iterative framework to fully utilize the guiding role of \(\beta\).
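A possible PyTorch sketch of QPENet, following the description above (a \(conv1\times 1\), a \(conv3\times 3\) and three fully connected layers producing a \(k\)-channel map \(\beta\) at the input resolution), is given below. The channel widths, the input channel count and the per-pixel application of the fully connected layers are our assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class QPENet(nn.Module):
    """Hypothetical QPENet sketch: conv1x1 -> conv3x3 -> three FC layers -> beta."""
    def __init__(self, in_ch=7, mid_ch=32, n_stages=9):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, mid_ch, kernel_size=1)
        self.conv3 = nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1)
        self.fc = nn.Sequential(                      # applied per pixel along channels
            nn.Linear(mid_ch, mid_ch), nn.ReLU(inplace=True),
            nn.Linear(mid_ch, mid_ch), nn.ReLU(inplace=True),
            nn.Linear(mid_ch, n_stages),
        )

    def forward(self, x):                             # x: (B, in_ch, H, W) measurement + priors
        feat = self.conv3(torch.relu(self.conv1(x)))
        feat = feat.permute(0, 2, 3, 1)               # (B, H, W, C) for the linear layers
        beta = self.fc(feat).permute(0, 3, 1, 2)      # (B, n_stages, H, W)
        return beta                                   # beta_k is fed to stage k of COPF

beta = QPENet()(torch.randn(1, 7, 64, 64))
print(beta.shape)                                     # torch.Size([1, 9, 64, 64])
```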
Figure 4: Illustration of COPF architecture with k stages. Theoretically, the SST in the COPF can be replaced with a different denoiser.
### Shift-Shuffle Transformer
The utilization of transformer models for global and local perception encounters challenges of a restricted receptive field and computationally intensive processing. So we propose a novel denoiser, Shift-Shuffle Transformer (SST) as shown in Figure 5, to be inserted in COPF. SST employs channel-shuffle and shift operations with fixed length (set to 1), introduced at the feature map level in the orthogonal direction. These operations improve the model's ability to perceive blending generated by aperture diffraction, while also facilitating the modeling of both short and long distances via the shift operation's function as a special token mixer. It is worth noting that the incorporation of the shift operation does not result in an increase in the total number of algorithm parameters.
Similar to [31, 33], we utilize a three-layer U-shaped structure as the base framework of SST, as shown in Figure 5(a). Firstly, SST uses a \(conv3\times 3\) to map the reshaped input \(X_{k}\), concatenated with the stretched \(\beta_{k}\), the filter function \(\sigma\in\mathbb{R}^{H\times W\times 3}\) and the PSF \(\varsigma\in\mathbb{R}^{H\times W\times 3}\), into the feature \(X_{0}\in\mathbb{R}^{H\times W\times C}\). Secondly, \(X_{0}\) passes through the encoder, bottleneck, and decoder to be embedded into the deep feature \(X_{f}\in\mathbb{R}^{H\times W\times C}\). The basic unit, the Shift-Shuffle Attention Block (SSAB), composes the encoder and decoder.
**Shift-Shuffle Attention Block** consists of two layer normalization (LN) layers, an SS-MSA, and a Feed-Forward Network (FFN), following the classic design. The most important part of SSAB is the Shift-Shuffle Multi-head Self-Attention (SS-MSA), which has two stages:
**First Stage**. In the first stage of SST, only shift operations \(\Upsilon\left(\cdot\right)\) are performed on the channels. For input tokens \(X_{in}\in\mathbb{R}^{H\times W\times C}\):
\[A_{1}^{i}=\mathrm{softmax}(\Upsilon(\frac{Q_{1}^{i}K_{1}^{i^{T}}}{\sqrt{d_{h} }}+P_{1}^{i}))V_{1}^{i} \tag{13}\]
Where \(h=1\), \(d_{h}=C\), and \(\Upsilon\left(\cdot\right)\) denotes shifting the input feature map by one pixel in each of its last two dimensions. The output of the first stage is \(S(X_{in})_{1}=\sum\limits_{i=1}^{h}A_{1}^{i}W_{1}^{i}\).
**Second Stage**. Q, K, V will be split into two equal parts along the channel dimension as: \(Q_{2}=[Q_{2f},Q_{2s}],K_{2}=[K_{2f},K_{2s}],V_{2}=[V_{2f},V_{2s}]\). The two parts perform different operations separately and get the corresponding results:
\[A_{2f}^{i}=\mathrm{softmax}(\Upsilon(\frac{Q_{2f}^{i}K_{2f}^{i^{ T}}}{\sqrt{d_{h}}}+P_{2f}^{i}))V_{2f}^{i} \tag{14}\] \[A_{2s}^{i}=\Theta^{T}(\mathrm{softmax}(\Upsilon(\frac{\Theta(Q_ {2s}^{i})\Theta(K_{2s}^{i^{T}})}{\sqrt{d_{h}}}+P_{2s}^{i}))\Theta(V_{2s}^{i})) \tag{15}\]
Where \(h=1\), \(d_{h}=\frac{C}{2}\), and \(\Theta\left(\cdot\right)\) denotes the channel shuffle operation, as in ShuffleNet [38] and DAHST [33]. The output of the second stage is:
\[S(X_{in})_{2}=\sum\limits_{i=1}^{h}A_{2f}^{i}W_{2f}^{i}+\sum\limits_{i=1}^{h}A _{2s}^{i}W_{2s}^{i} \tag{16}\]
Then we reshape the result of Equation 16 to obtain the output \(X_{out}\in\mathbb{R}^{H\times W\times C}\). The global employment of shift operations, without any supplementary computational overhead, conforms with the ADIS imaging paradigm and, combined with the shuffle operations, enhances CSST's perceptual capabilities.
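For clarity, the two token-mixing primitives used in SS-MSA can be sketched as follows; this illustrates only the shift \(\Upsilon(\cdot)\) and shuffle \(\Theta(\cdot)\) operations (with step 1 and two groups assumed), not the full attention block.

```python
import torch

def spatial_shift(x, step=1):
    """Upsilon(.): roll a tensor by `step` along its last two dimensions."""
    return torch.roll(x, shifts=(step, step), dims=(-2, -1))

def channel_shuffle(x, groups=2):
    """Theta(.): interleave channels across `groups`, as in ShuffleNet."""
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

feat = torch.randn(2, 8, 16, 16)
print(spatial_shift(feat).shape, channel_shuffle(feat).shape)   # both keep the tensor shape
```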
## 5 Experimental analysis
Similar to [28, 42, 43, 44, 31, 32, 33], 28 wavelengths are selected from 450nm to 650nm and derived by spectral interpolation manipulation for the HSI data. However, ADIS creates a wide-area, band-by-band form of PSF.
Figure 5: (a) Diagram of SST with three-layer U-shaped structure; (b) SSAB consists of a FFN, a SS-MSA and two LN layers.
This means that we need HSIs of larger spatial size to create measurements of a certain scale in simulation to conduct experiments. Real experiments and simulation experiments with different methods and different mosaic patterns are conducted.
### Simulation Experiments
**Simulation Dataset.** We adopt two datasets, i.e., CAVE-1024 [28] and KAIST [45] for simulation experiments. The CAVE-1024 consists of 205 HSIs with spatial size 1024x1024 obtained by interpolating and splicing from the CAVE [46] dataset. The KAIST dataset contains 30 HSIs of spatial size \(2704\times 3376\). 10 scenes from the KAIST dataset are selected for testing, while the CAVE-1024 dataset and another 20 scenes from the KAIST dataset are selected for training.
**Implementation Details.** The dispersion step of the primary diffraction is \(0.5\) spatial pixels, while the simulation experiment is deployed in the range of \(400nm\) to \(670nm\), which means that \(586\times 586\times 28\) data cubes are needed to generate \(256\times 256\) resolution measurements for conducting experiments while preserving the tertiary diffraction. We implement CSST in PyTorch. All CSST models are trained with the Adam [47] optimizer (\(\beta_{1}=0.9\) and \(\beta_{2}=0.999\)) using the cosine annealing scheme [48] for 300 epochs on an RTX 3090 GPU. The initial learning rate is \(4\times 10^{-4}\).
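For reference, a minimal PyTorch sketch of the stated optimizer and schedule (Adam with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), cosine annealing, 300 epochs, initial learning rate \(4\times 10^{-4}\)) is given below; the model and training loop are placeholders.

```python
import torch

model = torch.nn.Conv2d(1, 28, 3, padding=1)   # placeholder standing in for CSST
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)

for epoch in range(300):
    # ... one training pass over the CAVE-1024 / KAIST crops would go here ...
    scheduler.step()
```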
**Quantitative Analysis.** Table 1 compares the results of CSST and 7 methods, including four reconstruction methods (lambda-Net [29], TSA-Net [28], HDNet [30] and MST++ [41]) and three super-resolution algorithms (Restormer [35], MPRNet [40], MIRNet [39]), on 10 simulation scenes. CSST shows the best experimental results on the ADIS spectral reconstruction task, i.e., 34.08dB in PSNR and 0.958 in SSIM. CSST-9stg significantly outperforms two recent SOTA methods, Restormer and MST++, by 0.79dB and 1.85dB, demonstrating the effectiveness and acceptability of the imaging system.
**Qualitative Analysis.** Figure 6 illustrates the comparative performance of our CSST and other methods in the HSI reconstruction of ADIS on the same scene. Visual inspection of the image reveals that the CSST-9stg method provides more intricate details, sharper textures, and well-defined structures. Conversely, the previous approaches produce either overly smooth results that compromise the
\begin{table}
\begin{tabular}{|c|c c c c c c c c c c c c c|} \hline Algorithm & Inference Time & Params & GFLOPS & S1 & S2 & S3 & S4 & S5 & S6 & S7 & S8 & S9 & S10 & Avg \\ \hline HDNet [30] & 2ms & 2.37M & 144.16 & 29.55 & 27.82 & 24.45 & 31.38 & 27.54 & 27.75 & 24.43 & 31.81 & 33.07 & 24.13 & 28.19 \\ & & & 0.879 & 0.862 & 0.821 & 0.883 & 0.825 & 0.856 & 0.812 & 0.906 & 0.893 & 0.834 & 0.857 \\ \hline MIRNet [39] & 2ms & 2.04M & 14.26 & 30.266 & 29.09 & 25.10 & 33.04 & 27.52 & 28.46 & 24.66 & 31.94 & 33.31 & 26.22 & 28.96 \\ & & & 0.907 & 0.888 & 0.846 & 0.909 & 0.860 & 0.871 & 0.822 & 0.913 & 0.903 & 0.854 & 0.877 \\ \hline lambda-Net [29] & 2ms & 32.72M & 23.10 & 30.77 & 28.79 & 26.73 & 31.85 & 28.25 & 28.69 & 27.89 & 32.54 & 34.76 & 25.96 & 29.62 \\ & & & 0.919 & 0.872 & 0.851 & 0.792 & 0.835 & 0.827 & 0.832 & 0.901 & 0.909 & 0.862 & 0.860 \\ \hline TSA-Net [28] & 5ms & 44.23M & 91.19 & 23.81 & 30.26 & 27.13 & 34.47 & 28.58 & 30.35 & 26.95 & 33.98 & 35.73 & 26.80 & 30.71 \\ & & & 0.948 & 0.923 & 0.900 & 0.923 & 0.901 & 0.913 & 0.865 & 0.941 & 0.926 & 0.914 & 0.915 \\ \hline MPRNet [40] & 3ms & 2.95M & 77.30 & 32.38 & 30.91 & 27.34 & 34.53 & 29.24 & 30.49 & 28.98 & 33.97 & 35.90 & 27.02 & 31.08 \\ & & & 0.941 & 0.931 & 0.912 & 0.930 & 0.907 & 0.924 & 0.879 & 0.942 & 0.941 & 0.923 & 0.923 \\ \hline MST++ [41] & 3ms & 1.33M & 17.45 & 33.75 & 31.78 & 28.87 & 35.51 & 29.95 & 32.34 & 28.01 & 35.03 & 38.53 & 28.49 & 32.23 \\ & & & 0.962 & 0.952 & 0.942 & 0.941 & 0.921 & 0.948 & 0.900 & 0.958 & 0.960 & 0.942 & 0.942 \\ \hline Restormer [35] & 10ms & 15.12M & 87.87 & **35.42** & 32.62 & 29.97 & 36.82 & 30.19 & 33.41 & **30.71** & 36.00 & 38.75 & 28.99 & 33.29 \\ & & & 0.970 & 0.959 & 0.951 & 0.942 & 0.926 & 0.956 & 0.909 & 0.961 & 0.962 & 0.945 & 0.948 \\ \hline
**CSST-9stg (Ours)** & 34ms & 6.56M & 70.44 & **34.72** & **34.75** & **31.28** & **36.91** & **31.601** & **33.378** & 30.58 & **36.68** & **39.29** & **31.06** & **34.08** \\ & & & **0.971** & **0.974** & **0.964** & **0.948** & **0.936** & **0.964** & **0.921** & **0.970** & **0.969** & **0.961** & **0.958** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of reconstruction results of different algorithms,Inference time, Params, FLOPS, PSNR (dB) and SSIM are reported.
Figure 6: Qualitative comparison of reconstruction results of different algorithms. Zoomed-in patches of the HSI in the fuchsia box are presented in the lower-left of the figure.
underlying structure or introduce color artifacts and speckled textures. Moreover, the lower left corner of the figure presents the spectral profile of the intensity-wavelength corresponding to the fuchsia square. The CSST-9stg spectral profile exhibits the highest correlation and overlap with the reference curve, demonstrating the superiority of our approach in achieving spectral dimensional consistency reconstruction and the effectiveness of ADIS.
### Real HSI Reconstruction
**Implementation Details.** Firstly, we develop a prototype system utilizing an orthogonal mask with \(25\%\) light-throughput and a Bayer array, as illustrated in Figure 7(a). This prototype includes additional filters with a wavelength range of \(450nm-650nm\) to restrict the operating band, and an adjustable diaphragm. The small footprint of the system enables high-dimensional information acquisition. The orthogonal mask utilized in the prototype is created by overlapping two sets of parallel lines, each with a width and interval of 5\(\upmu\)m, and the width uniformity accuracy is 0.2\(\upmu\)m. The mask has a diameter of \(25.4mm\) and includes a \(12mm\times 12mm\) modulation surface. It is custom-priced at $80 per unit, with costs below $5 per unit for commercial volume production. Once the physical setup of the system is determined, all projection relationships can be easily computed by Equation 2, even under disturbances.
**Training Dataset.** We train CSST-5stg with the real configuration on CAVE-1024 and KAIST datasets jointly. Meanwhile, to address the disparity between real-world experiments and simulations arising from inherent noise and our omission of higher-order low-intensity diffraction, we incorporated randomized noise into the training data for model training, thereby bridging the aforementioned gap.
**Experimental analysis.** The performance of real HSI reconstruction is demonstrated in Figure 7(b), which presents the measurements and the reconstructed data with a size of \(1056\times 1536\times 24\). The reconstructed spectral data exhibit well-structured content, clear textures, and minimal artifacts. Notably, the predicted spectral curves of the two marker points closely match the curves collected using a point spectrometer. These results provide compelling evidence for the correctness and effectiveness of the mathematical model, design framework, and reconstruction algorithm architecture.
**Dynamic Performance Verification.** In Figure 7(c), the snapshot performance of ADIS is demonstrated through dynamic flame video reconstruction(35 fps).
### Ablation Study
Here we conduct ablation experiments on each component of the proposed CSST algorithm to demonstrate its necessity.
We first remove the global shift operations in SS-MSA and COPF from CSST-3stg to conduct the break-down ablation shown in Table 2. We then conduct a comparative analysis to investigate the impact of the shift step size used in the shift operations on the effectiveness of CSST reconstruction. The results presented in Table 2 demonstrate a decreasing trend in the reconstruction efficacy of CSST with increasing shift step size. However, it
\begin{table}
\begin{tabular}{c c c c} \hline COPF & Shift (x,y) & PSNR(dB) & SSIM \\ \hline ✓ & ✗ & 27.77 & 0.870 \\ ✓ & (1,1) & **28.74** & **0.885** \\ ✗ & (1,1) & 27.83 & 0.871 \\ ✓ & (2,2) & 28.44 & 0.879 \\ ✓ & (3,3) & 28.19 & 0.873 \\ ✓ & (4,4) & 28.30 & 0.876 \\ ✓ & (5,5) & 28.01 & 0.868 \\ \hline \end{tabular}
\end{table}
Table 2: Break-down ablation results for SS-MSA and COPF, and performance comparison of CSST with different shift steps.
Figure 7: (a) shows the prototype of ADIS; (b) illustrates the ADIS measurements acquired from the real world and images of different spectral bands recovered by CSST-5stg; (c) shows the measurements and reconstruction results of four frames of a dynamic flame captured by ADIS; (d) compares the recovered spectral curves and ground truth at the two markers.
is noteworthy that all the CSST algorithms with the shift operation outperform the algorithm that lacks the shift operations.
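For readers unfamiliar with the operation being ablated, the following is a minimal sketch of a global spatial shift on feature maps with a configurable step, assuming a cyclic (roll-based) implementation; the exact shift used inside SS-MSA and COPF may differ in detail.

```python
import torch

def global_shift(x, step=(1, 1)):
    """Sketch of a global spatial shift of feature maps by `step` pixels.
    Assumes a cyclic (roll-based) shift; x has shape (B, C, H, W)."""
    return torch.roll(x, shifts=(step[0], step[1]), dims=(2, 3))

# Example: shift a feature map by one pixel along both spatial axes,
# corresponding to the (1,1) setting in the ablation of Table 2.
shifted = global_shift(torch.randn(1, 32, 64, 64), step=(1, 1))
```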
### Simulation with Different Mosaic Patterns
This section investigates the adaptability of the ADIS architecture to diverse mosaic arrays; here we utilize CSST-5stg for the comparative experiments. The experimental setups employed in this study remain consistent with Section 5.1, except that the encoding form of the sensor mosaic array is altered. Three distinct mosaics, including a \(2\times 2\) pattern with 3 channels, a \(3\times 3\) pattern with 4 channels, and a \(4\times 4\) pattern with 9 channels, are used for comparative experimentation, as demonstrated in Figure 8. As the encoding capability of the mosaic filter improves, the imaging performance of ADIS improves as well.
### Evaluation of Real System
**Spectral accuracy.** We capture a spectrally interesting, texture-rich scene containing a ColorChecker under D65 source illumination to evaluate the spectral accuracy of the hyperspectral image. The measurement captured by our prototype camera and the reconstruction result in the \(495.4nm\) channel are shown in Figure 9(a). We also demonstrate excellent agreement between the reconstructed spectra at heart-shaped markers with intricate texture details and the corresponding ground truth spectra.
**Spatial resolution.** In Figure 9(b), measurements of letters on the ColorChecker are compared with the reconstructed \(495.4nm\) channel; the reconstruction markedly improves the MTF. Figure 9(c) demonstrates the successful reconstruction of the image within the yellow box, revealing clear textures in each band and restoring high-frequency details from aliased data.
**Tradeoff between accuracy and spectral resolution.** In ADIS, spectral resolution hinges on the dispersion distance, while reconstruction accuracy is related to PSF concentration. A higher PSF dispersion decreases the inter-band correlation of the spectral data, thereby alleviating the underdetermination of the inverse problem. Hence, future efforts should center on jointly optimizing the system parameters and the algorithm to enhance overall performance.
**Sparse Propensity of Reconstruction.** Comparing the reconstruction results of different scenes in Figures 7(b) and 9(c), we observe that artifacts within ADIS reconstructions escalate as texture and spectral complexity intensify; this could potentially be mitigated by increasing the complexity and diversity of the training data.
## 6 Conclusion
A compact diffractive optical system comprising an ultra-thin aperture mask and a conventional imaging lens forms a discrete coding pattern on a mosaic sensor. The Cascaded Shift-Shuffle Spectral Transformer (CSST) algorithm is used to decode the diffraction pattern for high-resolution hyperspectral imaging. Meanwhile, the system's spatial invariance ensures pattern robustness, and its diffraction efficiency is improved to 75% using Babinet's principle. Further work is needed to improve imaging quality and spectral resolution while maintaining high diffraction efficiency. Furthermore, there is a need to investigate ADIS's potential for fulfilling large-FOV demands.
## 7 Acknowledgments
This work is supported by the National Natural Science Foundation of China (No.62025108), the Leading Technology of Jiangsu Basic Research Plan (No.BK20192003), and the Key R&D Plan of Jiangsu Province (No. BE2022155).
Figure 8: (a) Different mosaic patterns with different filter functions; (b) illustrates the reconstruction results of ADIS combined with different mosaic filter sensor simulations; (c) illustrates recovered spectral curves and ground-truth in the green box.
Figure 9: (a) Measurement and reconstruction results of a spectral-interesting, texture-complex scene, with a comparison of reconstructed spectra and ground truth spectra at the heart-shaped markers; (b) MTF comparison of the images before and after reconstruction; (c) reconstruction results of the scene in various bands.
## References
* [1] Xun Cao. Hyperspectral/multispectral imaging. In _Computer Vision: A Reference Guide_, pages 592-598. Springer, 2021.
* [2] Quan Yuan, Qin Ge, Linsen Chen, Yi Zhang, Yuhang Yang, Xun Cao, Shuming Wang, Shining Zhu, and Zhenlin Wang. Recent advanced applications of metasurfaces in multi-dimensions. _Nanophotonics_, (0), 2023.
* [3] Ashwin Wagadarikar, Renu John, Rebecca Willett, and David Brady. Single disperser design for coded aperture snapshot spectral imaging. _Applied optics_, 47(10):B44-B51, 2008.
* [4] Michael Descour and Eustace Dereniak. Computed-tomography imaging spectrometer: experimental calibration and reconstruction results. _Applied optics_, 34(22):4817-4826, 1995.
* [5] Xun Cao, Hao Du, Xin Tong, Qionghai Dai, and Stephen Lin. A prism-mask system for multispectral video acquisition. _IEEE transactions on pattern analysis and machine intelligence_, 33(12):2423-2435, 2011.
* [6] Qionghai Dai, Chenguang Ma, Jinli Suo, and Xun Cao. Computational hyperspectral imaging. In _JSAP Annual Meetings Extended Abstracts The 75th JSAP Autumn Meeting 2014_, pages 3821-3821. The Japan Society of Applied Physics, 2014.
* [7] Jacek Hunicz and Dariusz Piernikarski. Investigation of combustion in a gasoline engine using spectrophotometric methods. In _Optoelectronic and Electronic Sensors IV_, volume 4516, pages 307-314. SPIE, 2001.
* [8] Adrian Tartuttis and Vasilis Ntziachristos. Advances in real-time multispectral optoacoustic imaging and its applications. _Nature photonics_, 9(4):219-227, 2015.
* [9] Nathan Hagen. Survey of autonomous gas leak detection and quantification with snapshot infrared spectral imaging. _Journal of Optics_, 22(10):103001, 2020.
* [10] Zongyin Yang, Tom Albrow-Owen, Weiwei Cai, and Tawfique Hasan. Miniaturization of optical spectrometers. _Science_, 371(6528):eabe0722, 2021.
* [11] Xia Hua, Yujie Wang, Shuming Wang, Xiujuan Zou, You Zhou, Lin Li, Feng Yan, Xun Cao, Shumin Xiao, Din Ping Tsai, et al. Ultra-compact snapshot spectral light-field imaging. _Nature communications_, 13(1):2732, 2022.
* [12] Pierre-Jean Lapray, Xingbo Wang, Jean-Baptiste Thomas, and Pierre Gouton. Multispectral filter arrays: Recent advances and practical implementation. _Sensors_, 14(11):21626-21659, 2014.
* [13] Xun Cao, Tao Yue, Xing Lin, Stephen Lin, Xin Yuan, Qionghai Dai, Lawrence Carin, and David J Brady. Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world. _IEEE Signal Processing Magazine_, 33(5):95-108, 2016.
* [14] Michael E Gehm, Renu John, David J Brady, Rebecca M Willett, and Timothy J Schulz. Single-shot compressive spectral imaging with a dual-disperser architecture. _Optics express_, 15(21):14013-14027, 2007.
* [15] Xun Cao, Xin Tong, Qionghai Dai, and Stephen Lin. High resolution multispectral video capture with a hybrid camera system. In _CVPR 2011_, pages 297-304. IEEE, 2011.
* [16] Xing Lin, Gordon Wetzstein, Yebin Liu, and Qionghai Dai. Dual-coded compressive hyperspectral imaging. _Optics letters_, 39(7):2044-2047, 2014.
* [17] Xing Lin, Yebin Liu, Jiamin Wu, and Qionghai Dai. Spatial-spectral encoded compressive hyperspectral imaging. _ACM Transactions on Graphics (TOG)_, 33(6):1-11, 2014.
* [18] Claudia V Correa, Henry Arguello, and Gonzalo R Arce. Compressive spectral imaging with colored-patterned detectors. In _2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 7789-7793. IEEE, 2014.
* [19] Seung-Hwan Baek, Incheol Kim, Diego Gutierrez, and Min H Kim. Compact single-shot hyperspectral imaging using a prism. _ACM Transactions on Graphics (TOG)_, 36(6):1-12, 2017.
* [20] Daniel S Jeon, Seung-Hwan Baek, Shinyoung Yi, Qiang Fu, Xiong Dun, Wolfgang Heidrich, and Min H Kim. Compact snapshot hyperspectral imaging with diffracted rotation. 2019.
* [21] Sofiane Mihoubi, Olivier Losson, Benjamin Mathon, and Ludovic Macaire. Multispectral demosaicing using pseudo-panchromatic image. _IEEE Transactions on Computational Imaging_, 3(4):982-995, 2017.
* [22] Shao-Wei Wang, Changsheng Xia, Xiaoshuang Chen, Wei Lu, Ming Li, Haiqian Wang, Weibo Zheng, and Tao Zhang. Concept of a high-resolution miniature spectrometer using an integrated filter array. _Optics letters_, 32(6):632-634, 2007.
* [23] Nadia K Pervez, Warren Cheng, Zhang Jia, Marshall P Cox, Hassan M Edrees, and Ioannis Kymissis. Photonic crystal spectrometer. _Optics express_, 18(8):8277-8285, 2010.
* [24] Andreas Tittl, Aleksandres Leitis, Mingkai Liu, Filiz Yesilkoy, Duk-Yong Choi, Dragomir N Neshev, Yuri S Kivshar, and Hatice Altug. Imaging-based molecular barcoding with pixelated dielectric metasurfaces. _Science_, 360(6393):1105-1109, 2018.
* [25] Xin Yuan. Generalized alternating projection based total variation minimization for compressive sensing. In _2016 IEEE International conference on image processing (ICIP)_, pages 2539-2543. IEEE, 2016.
* [26] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. _Foundations and Trends(r) in Machine learning_, 3(1):1-122, 2011.
* [27] Ziyi Meng, Mu Qiao, Jiawei Ma, Zhenming Yu, Kun Xu, and Xin Yuan. Snapshot multispectral endomicroscopy. _Optics Letters_, 45(14):3897-3900, 2020.
* [28] Ziyi Meng, Jiawei Ma, and Xin Yuan. End-to-end low cost compressive spectral imaging with spatial-spectral self-attention. In _Computer Vision-ECCV 2020: 16th European
Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIII 16_, pages 187-204. Springer, 2020.
* [29] Xin Miao, Xin Yuan, Yunchen Pu, and Vassilis Athitsos. l-net: Reconstruct hyperspectral images from a snapshot measurement. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 4059-4069, 2019.
* [30] Xiaowan Hu, Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Hd-net: High-resolution dual-domain learning for spectral compressive imaging. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 17542-17551, 2022.
* [31] Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Mask-guided spectral-wise transformer for efficient hyperspectral image reconstruction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 17502-17511, 2022.
* [32] Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Coarse-to-fine sparse transformer for hyperspectral image reconstruction. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XVII_, pages 686-704. Springer, 2022.
* [33] Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Henghui Ding, Yulun Zhang, Radu Timofte, and Luc Van Gool. Degradation-aware unfolding half-shuffle transformer for spectral compressive imaging. _arXiv preprint arXiv:2205.10102_, 2022.
* [34] Lizhi Wang, Chen Sun, Maoqing Zhang, Ying Fu, and Hua Huang. Dnu: Deep non-local unrolling for computational spectral imaging. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1661-1671, 2020.
* [35] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5728-5739, 2022.
* [36] Z Liu, Y Lin, Y Cao, H Han, Y Wei, Z Zhang, S Lin, and B Guo. Hierarchical vision transformer using shifted windows. 2021.
* [37] Guangting Wang, Yucheng Zhao, Chuanxin Tang, Chong Luo, and Wenjun Zeng. When shift operation meets vision transformer: An extremely simple alternative to attention mechanism. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36, pages 2423-2430, 2022.
* [38] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 6848-6856, 2018.
* [39] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Learning enriched features for real image restoration and enhancement. In _Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXV 16_, pages 492-511. Springer, 2020.
* [40] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 14821-14831, 2021.
* [41] Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Radu Timofte, and Luc Van Gool. Mst++: Multi-stage spectral-wise transformer for efficient spectral reconstruction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 745-755, 2022.
* [42] Xin Yuan, David J Brady, and Aggelos K Katsaggelos. Snapshot compressive imaging: Theory, algorithms, and applications. _IEEE Signal Processing Magazine_, 38(2):65-88, 2021.
* [43] Ziyi Meng, Shirin Jalali, and Xin Yuan. Gap-net for snapshot compressive imaging. _arXiv preprint arXiv:2012.08364_, 2020.
* [44] Tao Huang, Weisheng Dong, Xin Yuan, Jinjian Wu, and Guangming Shi. Deep gaussian scale mixture prior for spectral compressive imaging. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 16216-16225, 2021.
* [45] Inchang Choi, MH Kim, D Gutierrez, DS Jeon, and G Nam. High-quality hyperspectral reconstruction using a spectral prior. Technical report, 2017.
* [46] Jong-Il Park, Moon-Hyun Lee, Michael D Grossberg, and Shree K Nayar. Multispectral imaging using multiplexed illumination. In _2007 IEEE 11th International Conference on Computer Vision_, pages 1-8. IEEE, 2007.
* [47] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [48] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. _arXiv preprint arXiv:1608.03983_, 2016.
**Aperture Diffraction for Compact Snapshot Spectral Imaging**
**Supplementary Material --**
## 1 Overview
In the supplementary material, we first provide a comprehensive account of the imaging model in Section 2, elucidating how the PSF varies with different parameters. We then expound on the test set employed in the simulation experiments of this paper in Section 3. Additionally, a detailed discussion and comparison of simulation experiments at lower exposures is presented in Section 4. Finally, additional results of ADIS's reconstruction from real acquisitions are illustrated in Section 5.
## 2 Detailed process of imaging model
**Imaging forward model.** We now consider a multi-slit aperture comprising \(N\times N\) parallel rectangular apertures, with each rectangular aperture having a width of \(a\), a length of \(b\), and a center-to-center distance between adjacent slits of \(d\). The Fraunhofer diffraction formula, a fundamental calculation method in optics, is utilized to characterize the diffraction phenomenon when light passes through an aperture. When light passes through a finite-sized aperture, it generates a series of interference and diffraction patterns within the far-field region. The Huygens-Fresnel principle states that each point on a wave surface can be treated as a new secondary wave source, and the wave surface can be considered as a superposition of spherical waves emitted by an infinite number of point sources.
Linear systems possess an essential characteristic known as the principle of superposition, which asserts that the resulting output of a linear system, when multiple input signals are applied, is the linear superposition of these input signals. This principle is applicable to the phenomenon of aperture mask diffraction, wherein each aperture behaves as a point source. The waves emanating from every point source combine constructively and destructively to produce the output wave of the entire aperture. This process of wave superposition is linear, implying that the spatial distribution of the output wave corresponds to the superposition of waves generated by all point sources as the number of apertures increases. Consequently, each rectangular aperture in the aperture mask can be viewed as a point source, and the waves produced by all point sources can be added coherently to yield the diffraction pattern across the entire aperture mask.
The initial form of Fraunhofer diffraction is:
\[E_{p}=c\int\limits_{A}e^{ikr}dA \tag{1}\]
Considering a single square hole mask with a width of \(a\) and a length of \(b\), we can write its imaging distribution on the Fourier surface as:
\[E_{p}=c\int\limits_{0}^{b}\int\limits_{0}^{a}e^{ik(r_{0}+x\sin\phi+y\sin\theta) }dxdy \tag{2}\]
Therefore, for a multi-slit mask comprising \(N\times N\) parallel rectangular apertures, we can write the diffraction formula as follows:
\[\begin{split} E_{p}&=ce^{ikr_{0}}\left[\int\limits_{0 }^{b}e^{iky\sin\theta_{1}}dy+\cdots+\int\limits_{(N-1)d}^{(N-1)d+b}e^{iky\sin \theta_{1}}dy\right]\\ &\quad\times\left[\int\limits_{0}^{a}e^{ikx\sin\theta_{2}}dx+ \cdots+\int\limits_{(N-1)d}^{(N-1)d+a}e^{ikx\sin\theta_{2}}dx\right]\end{split} \tag{3}\]
Evaluating the integrals, we obtain:
\[\begin{split} E_{p}&=ce^{ikr_{0}}\frac{e^{ikb\sin \theta_{1}}-1}{ik\sin\theta_{1}}\times\frac{1-e^{ikNd\sin\theta_{1}}}{1-e^{ikd \sin\theta_{1}}}\\ &\quad\times\frac{e^{ika\sin\theta_{2}}-1}{ik\sin\theta_{2}} \times\frac{1-e^{ikNd\sin\theta_{2}}}{1-e^{ikd\sin\theta_{2}}}\end{split} \tag{4}\]
Figure 1: Simplified schematic of the ADIS’s profile
To simplify the parameters, let:
\[\begin{split}\beta_{1}&=\frac{1}{2}kb\sin\theta_{1}, \beta_{2}=\frac{1}{2}ka\sin\theta_{2},\\ \gamma_{1}&=\frac{1}{2}kd\sin\theta_{1},\gamma_{2}= \frac{1}{2}kd\sin\theta_{2}\end{split} \tag{5}\]
\[\begin{split} E_{p}&=ce^{ikr_{0}}ab\frac{e^{2i\beta_{1}}-1}{2i\beta_{1}}\times\frac{1-e^{2iN\gamma_{1}}}{1-e^{2i\gamma_{1}}}\\ &\qquad\times\frac{e^{2i\beta_{2}}-1}{2i\beta_{2}}\times\frac{1-e^{2iN\gamma_{2}}}{1-e^{2i\gamma_{2}}}\end{split} \tag{6}\]
Then we get:
\[E_{p}=E_{0}\frac{\sin\beta_{1}}{\beta_{1}}\frac{\sin N\gamma_{1}}{\sin\gamma_{ 1}}\frac{\sin\beta_{2}}{\beta_{2}}\frac{\sin N\gamma_{2}}{\sin\gamma_{2}} \tag{7}\]
Since focusing is performed under paraxial conditions, we can use the approximations \(\sin\theta_{1}\approx\tan\theta_{1}=\frac{x_{m}}{f_{2}}\) and \(\sin\theta_{2}\approx\tan\theta_{2}=\frac{y_{m}}{f_{2}}\), which yields:
\[I(x_{m},y_{m},\lambda)=I_{0}\cdot D(x_{m},y_{m},\lambda)\cdot P(x_{m},y_{m},\lambda) \tag{8}\]
\[D(x_{m},y_{m},\lambda)=\sin c^{2}(\frac{\pi b}{\lambda f_{2}}x_{m})\sin c^{2} (\frac{\pi a}{\lambda f_{2}}y_{m}) \tag{9}\]
\[P(x_{m},y_{m},\lambda)=\left[\frac{\sin(N\frac{\pi d}{\lambda f_{2}}x_{m})}{ \sin(\frac{\pi d}{\lambda f_{2}}x_{m})}\right]^{2}\times\left[\frac{\sin(N \frac{\pi d}{\lambda f_{2}}y_{m})}{\sin(\frac{\pi d}{\lambda f_{2}}y_{m})} \right]^{2} \tag{10}\]
Here, \(D(x_{m},y_{m},\lambda)\) is the diffraction factor, which describes the diffraction effect of each rectangular aperture, and \(P(x_{m},y_{m},\lambda)\) is the interference factor, which describes the effect of multi-slit interference. \((x_{m},y_{m})\) denotes the spatial coordinates on the receiving screen, while \(f_{2}\) denotes the distance between the diffraction array and the sensor.
Finally, we get the formula under orthogonal aperture diffraction:
\[I=I_{0}\bigg{(}\frac{\sin\beta_{1}}{\beta_{1}}\bigg{)}^{2}\bigg{(}\frac{\sin N \gamma_{1}}{\sin\gamma_{1}}\bigg{)}^{2}\bigg{(}\frac{\sin\beta_{2}}{\beta_{2} }\bigg{)}^{2}\bigg{(}\frac{\sin N\gamma_{2}}{\sin\gamma_{2}}\bigg{)}^{2} \tag{11}\]
**Mask with different \(b/d\)**. According to Equation 8, the contrast in intensity between the zero-order diffraction and the first-order diffraction is entirely reliant on the ratio between the aperture opening and the spacing of the square holes. To simulate different ratios, we designed masks and present the simulation results under various aperture mask parameters (\(b/d\)) in Figure 3. After comparing the diffraction patterns, we selected the parameter values of \(d=10\) and \(a=b=5\) for the mask. This aperture mask exhibits a first-order diffraction intensity that is half of the zero-order diffraction and a second-order diffraction that is precisely situated in the suppressed region, creating a missing order.
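To make the dependence on the mask parameters concrete, the following NumPy sketch evaluates the factorized intensity of Equations 8-10 on the sensor plane. The slit sizes follow the chosen \(a=b=5\) and \(d=10\) (interpreted in micrometres, consistent with the 5\(\upmu\)m line width stated earlier), while the number of slits \(N\), the distance \(f_{2}\), and the example wavelength are purely illustrative assumptions.

```python
import numpy as np

def adis_psf_intensity(x_m, y_m, wavelength, a=5e-6, b=5e-6, d=10e-6,
                       N=100, f2=50e-3, I0=1.0):
    """Far-field intensity of an N x N orthogonal multi-slit mask (Eqs. 8-10).
    x_m, y_m: sensor-plane coordinates [m]; a, b, d: slit width, length,
    and pitch [m]; f2: mask-to-sensor distance [m]."""
    coef = np.pi / (wavelength * f2)
    beta1, beta2 = coef * b * np.asarray(x_m), coef * a * np.asarray(y_m)
    gamma1, gamma2 = coef * d * np.asarray(x_m), coef * d * np.asarray(y_m)

    def sinc2(u):                       # (sin u / u)^2 with the u -> 0 limit
        return np.sinc(u / np.pi) ** 2

    def grating2(u):                    # (sin(N u) / sin u)^2, safe near u = m*pi
        s = np.sin(u)
        safe = np.where(np.abs(s) < 1e-12, 1.0, s)
        return np.where(np.abs(s) < 1e-12, float(N) ** 2,
                        (np.sin(N * u) / safe) ** 2)

    D = sinc2(beta1) * sinc2(beta2)            # single-aperture diffraction factor
    P = grating2(gamma1) * grating2(gamma2)    # multi-slit interference factor
    return I0 * D * P

# Example: a 1D intensity cut along x_m at 550 nm, analogous to Figure 3.
x = np.linspace(-2e-3, 2e-3, 4001)
profile = adis_psf_intensity(x, 0.0, 550e-9)
```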
## 3 Test set of simulation experiment
Here we show our test set of 10 scenes selected from the KAIST [11] dataset as depicted in Figure 2. The 256*256
Figure 3: The intensity distribution of light on the x-axis for various mask parameters is represented by the red curve.
Figure 2: Illustrations of the test set and the rendering of the generated measurements
measurements are generated from the 586*586*28 HSI via the PSF. Meanwhile, to improve the visualization of the ADIS dispersion pattern, we perform RGB interpolation to render the measurements.
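As a rough illustration of this rendering step, the sketch below convolves each spectral band with an (assumed shift-invariant) per-band PSF, applies the mosaic filter response, and sums over bands; the tensor shapes and the valid-convolution cropping are simplifying assumptions rather than the exact pipeline used to produce the test measurements.

```python
import torch
import torch.nn.functional as F

def render_measurement(hsi, psfs, mosaic):
    """Sketch of measurement rendering under simplifying assumptions.
    hsi    : (L, Hs, Ws) hyperspectral cube (e.g. 28 x 586 x 586)
    psfs   : (L, k, k)   per-band diffraction PSFs
    mosaic : (L, H, W)   per-pixel spectral response of the filter array,
             with H = Hs - k + 1 and W = Ws - k + 1 (valid convolution).
    Returns the (H, W) sensor measurement."""
    L = hsi.shape[0]
    # Depthwise (per-band) convolution of the cube with its PSFs.
    dispersed = F.conv2d(hsi.unsqueeze(0), psfs.unsqueeze(1), groups=L)
    # Spectral encoding by the mosaic filter array, then summation on the sensor.
    return (dispersed.squeeze(0) * mosaic).sum(dim=0)
```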
## 4 Simulation Experiments (Low Exposure)
Unlike the main text's simulation experiments conducted with regular exposure, here, the measurements' amplitude is scaled down to approximately one-fourth of the original to simulate varying exposure conditions. Similar to [8, 12, 13, 14, 15, 16, 17], 28 wavelengths are selected from 450nm to 650nm and derived by spectral interpolation manipulation for the HSI data.
**Simulation Dataset.** We adopt two datasets, i.e., CAVE-1024 [8] and KAIST [11] for simulation experiments. The CAVE-1024 consists of 205 HSIs with spatial size 1024x1024 obtained by interpolating and splicing from the CAVE [18] dataset. The KAIST dataset contains 30 HSIs of spatial size \(2704\times 3376\). 10 scenes from the KAIST dataset are selected for testing, while the CAVE-1024 dataset and another 20 scenes from the KAIST dataset are selected for training.
**Implementation Details.** The dispersion step of the primary diffraction is \(0.5\) spatial pixels, while the simulation experiment is deployed in the range of \(400nm\) to \(670nm\), which means that \(586\times 586\times 28\) data cubes are needed to generate \(256\times 256\) resolution measurements for conducting experiments while preserving the tertiary diffraction. We
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline
**Algorithms** & **S1** & **S2** & **S3** & **S4** & **S5** & **S6** & **S7** & **S8** & **S9** & **S10** & **Avg** \\ \hline
**U-Net**[1] & 21.43 & 23.89 & 19.87 & 18.82 & 20.72 & 22.08 & 21.16 & 28.51 & 28.40 & 22.31 & 22.72 \\ & 0.8162 & 0.8157 & 0.7554 & 0.5626 & 0.7269 & 0.7800 & 0.7894 & 0.8505 & 0.7493 & 0.7965 & 0.764 \\ \hline
**HSCN+**[2] & 24.35 & 25.54 & 21.53 & 25.14 & 19.37 & 24.85 & 23.94 & 28.65 & 23.53 & 19.68 & 23.66 \\ & 0.8074 & 0.8078 & 0.7313 & 0.7932 & 0.6931 & 0.8063 & 0.7795 & 0.8615 & 0.6281 & 0.7324 & 0.764 \\ \hline
**HDNet**[3] & 22.42 & 25.62 & 20.72 & 19.49 & 23.38 & 23.68 & 24.43 & 29.21 & 30.43 & 22.50 & 24.19 \\ & 0.8502 & 0.8428 & 0.7964 & 0.6112 & 0.7958 & 0.8157 & 0.8193 & 0.8730 & 0.8328 & 0.8342 & 0.807 \\ \hline
**BIRNAT**[4] & 25.18 & 26.49 & 22.57 & 20.99 & 18.34 & 24.94 & 24.45 & 30.03 & 29.44 & 22.92 & 24.54 \\ & 0.8664 & 0.8533 & 0.7990 & 0.7376 & 0.7467 & 0.8297 & 0.8267 & 0.8884 & 0.7918 & 0.8365 & 0.818 \\ \hline
**MIRNet**[5] & 22.87 & 27.16 & 22.69 & 25.74 & 19.08 & 23.85 & 25.45 & 30.22 & 30.27 & 23.27 & 25.06 \\ & 0.7794 & 0.8375 & 0.7735 & 0.7848 & 0.7285 & 0.8149 & 0.8061 & 0.8908 & 0.8024 & 0.8139 & 0.803 \\ \hline
**lambda-Net**[6] & 29.65 & 27.19 & 24.67 & 24.70 & 24.89 & 25.61 & 26.65 & 31.16 & 33.79 & 23.72 & 27.20 \\ & 0.8943 & 0.8323 & 0.8052 & 0.5535 & 0.7772 & 0.7412 & 0.8056 & 0.8711 & 0.8998 & 0.8167 & 0.800 \\ \hline
**MPRNet**[7] & 29.25 & 29.84 & 25.68 & 29.12 & 26.99 & 27.58 & 26.63 & 32.74 & 33.45 & 26.63 & 28.79 \\ & 0.9167 & 0.9157 & 0.8925 & 0.8897 & 0.8799 & 0.8782 & 0.8604 & **0.9723** & 0.9030 & 0.9096 & 0.897 \\ \hline
**TSA-Net**[8] & 29.58 & 29.22 & 25.88 & 28.18 & 27.65 & 27.60 & 27.55 & 32.76 & 34.25 & 25.49 & 28.82 \\ & 0.9240 & 0.9028 & 0.8833 & 0.8757 & 0.8834 & 0.792 & 0.8633 & 0.9232 & 0.8981 & 0.8826 & 0.892 \\ \hline
**MST++**[9] & 32.52 & 30.76 & 26.24 & 28.68 & 28.01 & 28.24 & 25.81 & 33.31 & 36.24 & 27.45 & 29.70 \\ & 0.9426 & 0.9175 & 0.9076 & 0.8911 & 0.8959 & 0.9067 & 0.8793 & 0.9387 & 0.9309 & 0.9248 & 0.914 \\ \hline
**Restormer**[10] & 32.86 & 30.57 & 26.99 & 29.85 & 28.26 & 28.52 & 28.78 & 33.76 & 36.46 & 26.68 & 30.32 \\ & 0.9555 & 0.9303 & 0.9133 & 0.8969 & 0.9073 & 0.9095 & 0.8925 & 0.9436 & 0.9371 & 0.9277 & 0.921 \\ \hline CSST-9stg (Ours) & **34.18** & **33.47** & **29.20** & **30.76** & **30.79** & **30.53** & **29.36** & **35.84** & **38.55** & **28.87** & **32.16** \\ & **0.9623** & **0.9632** & **0.9477** & **0.9178** & **0.9296** & **0.9450** & **0.9056** & 0.9630 & **0.9610** & **0.9461** & **0.944** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison of reconstruction results of different algorithms at low exposure. PSNR (dB) and SSIM are reported.
Figure 4: Qualitative comparison of reconstruction results of different algorithms at low exposure. Zoomed-in patches of the HSI in the fuchsia box are presented in the lower-left of the figure.
implement CSST in PyTorch. All CSST models are trained with the Adam [19] optimizer (\(\beta_{1}=0.9\) and \(\beta_{2}=0.999\)) using a Cosine Annealing schedule [20] for 300 epochs on an RTX 3090 GPU. The initial learning rate is \(4\times 10^{-4}\).
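For reference, the stated optimization setup corresponds to the following PyTorch configuration; the model object here is only a stand-in placeholder for the CSST network.

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in placeholder for the CSST network
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999))
# Cosine annealing over the 300 training epochs (stepped once per epoch).
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
```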
**Quantitative Analysis.** Table 1 compares the results of CSST and 10 methods, including one baseline method (U-Net [1]), six reconstruction methods (lambda-Net [6], HDNet [3], BIRNAT [4], TSA-Net [8], HSCNN+ [2] and MST++ [9]), and three super-resolution algorithms (Restormer [10], MPRNet [7], MIRNet [5]), on 10 simulation scenes at low exposure. CSST shows the best experimental results on the ADIS spectral reconstruction task, i.e., 32.16dB in PSNR and 0.944 in SSIM. CSST-9stg significantly outperforms the two recent SOTA methods Restormer and MST++ by 1.84dB and 2.46dB, respectively, demonstrating stronger reconstruction performance compared to previous methods under low exposure conditions and robustness against exposure variations.
**Qualitative Analysis.** Figure 4 illustrates the comparative performance of our CSST and other methods in the HSI reconstruction of ADIS on the same scene at low exposure. Visual inspection of the image reveals that the CSST-9stg method provides more intricate details, sharper textures, and well-defined structures. Conversely, the previous approaches produce either overly smooth results that compromise the underlying structure or introduce color artifacts and speckled textures. Moreover, the lower left corner of the figure presents the spectral profile of the intensity-wavelength corresponding to the fuchsia square.
## 5 Additional real reconstruction results
Here we further show the reconstruction outcomes of various scenes captured by ADIS in Figure 5. These results exhibit distinct textures and well-structured edges, thereby corroborating the efficacy of ADIS in snapshot sub-super-pixel resolution spectral imaging.
|
2306.06082 | Augmentation-aware Self-supervised Learning with Conditioned Projector | Self-supervised learning (SSL) is a powerful technique for learning robust
representations from unlabeled data. By learning to remain invariant to applied
data augmentations, methods such as SimCLR and MoCo are able to reach quality
on par with supervised approaches. However, this invariance may be harmful to
solving some downstream tasks which depend on traits affected by augmentations
used during pretraining, such as color. In this paper, we propose to foster
sensitivity to such characteristics in the representation space by modifying
the projector network, a common component of self-supervised architectures.
Specifically, we supplement the projector with information about augmentations
applied to images. In order for the projector to take advantage of this
auxiliary conditioning when solving the SSL task, the feature extractor learns
to preserve the augmentation information in its representations. Our approach,
coined Conditional Augmentation-aware Self-supervised Learning (CASSLE), is
directly applicable to typical joint-embedding SSL methods regardless of their
objective functions. Moreover, it does not require major changes in the network
architecture or prior knowledge of downstream tasks. In addition to an analysis
of sensitivity towards different data augmentations, we conduct a series of
experiments, which show that CASSLE improves over various SSL methods, reaching
state-of-the-art performance in multiple downstream tasks. | Marcin Przewięźlikowski, Mateusz Pyla, Bartosz Zieliński, Bartłomiej Twardowski, Jacek Tabor, Marek Śmieja | 2023-05-31T12:24:06Z | http://arxiv.org/abs/2306.06082v2 | # Augmentation-aware Self-supervised Learning with Guided Projector
###### Abstract
Self-supervised learning (SSL) is a powerful technique for learning robust representations from unlabeled data. By learning to remain invariant to applied data augmentations, methods such as SimCLR and MoCo are able to reach quality on par with supervised approaches. However, this invariance may be harmful to solving some downstream tasks which depend on traits affected by augmentations used during pretraining, such as color. In this paper, we propose to foster sensitivity to such characteristics in the representation space by modifying the projector network, a common component of self-supervised architectures. Specifically, we supplement the projector with information about augmentations applied to images. In order for the projector to take advantage of this auxiliary guidance when solving the SSL task, the feature extractor learns to preserve the augmentation information in its representations. Our approach, coined **C**onditional **A**ugmentation-aware **S**elf-**supervised **L**earning (CASSLE), is directly applicable to typical joint-embedding SSL methods regardless of their objective functions. Moreover, it does not require major changes in the network architecture or prior knowledge of downstream tasks. In addition to an analysis of sensitivity towards different data augmentations, we conduct a series of experiments, which show that CASSLE improves over various SSL methods, reaching state-of-the-art performance in multiple downstream tasks.2
Footnote 2: We share our codebase at [https://github.com/gmun/CASSLE](https://github.com/gmun/CASSLE).
## 1 Introduction
Artificial neural networks have proven to be a successful family of models in several domains, including, but not limited to, computer vision [29], natural language processing [10], and solving problems at the human level with reinforcement learning [42]. This success is attributed largely to their ability to learn useful feature representations [27] without additional effort spent on input signal preparation. However, training large deep learning models requires extensive amounts of data, which can be costly to prepare, especially when human annotation is needed [3; 33].
High-quality image representations can be acquired without relying on explicitly labeled data by utilizing self-supervised learning (SSL). A self-supervised model is trained once on a large dataset without labels and then transferred to different downstream tasks. Initially, self-supervised methods addressed well-defined pretext tasks, such as predicting rotation [26] or determining patch position [21]. Recent studies in SSL proposed contrastive methods learning representations that remain invariant when subjected to various data augmentations [32; 13; 64] leading to impressive results that have greatly diminished the disparity with representations learned in a supervised way [11].
Nevertheless, contrastive methods may perform poorly when a particular downstream task relies on features affected by augmentation [66]. For example, color jittering can result in a representation space invariant to color shifts, which would be detrimental to the task of flower classification (see Figure 1). Without prior knowledge of possible downstream tasks, this effect is hard to mitigate in contrastive learning [58; 66]. Solutions for retaining information about used data augmentations in the feature extractor representation include forcing it explicitly with a modified training scheme [66; 37; 67], or by preparing a feature extractor to be adapted to a specific downstream task, e.g., with hypernetworks [12]. However, these approaches often involve significant modifications either to the contrastive model architecture [66], training procedure [37; 67], or costly training of additional models [12].
In this work, we propose a new method called **C**onditional **A**ugmentation-aware **S**elf-**s**upervised **L**earning (CASSLE) that mitigates augmentation invariance of representations without major changes to the network architecture or modifications to the self-supervised training objective. We propose to use the augmentation information during SSL training as additional guidance for the projector network. This encourages the feature extractor network to retain information about augmented image features in its representation. CASSLE can be applied to any joint-embedding SSL method regardless of its objective, provided that it utilizes a projector network [14; 13; 64; 70; 15]. The outcome is a general-purpose, augmentation-aware encoder that can be directly used for any downstream task. CASSLE presents improved results in comparison to other augmentation-aware SSL methods, improving transferability to downstream tasks where invariance of the model representation to specific data changes could be harmful.
**The main contributions of our work are threefold:**
* We propose a simple yet effective method for Conditional Augmentation-aware Self-supervised Learning (CASSLE). Using our guided projector enables preserving more information about augmentations in representations than in existing methods.
* CASSLE is a general modification that can be directly applied to existing joint-embedding SSL approaches without introducing additional objectives and major changes in the network architecture.
* In a series of experiments we demonstrate that CASSLE reaches state-of-the-art performance with different SSL methods for robust representation learning and improves upon the performance of previous augmentation-aware approaches. Furthermore, our analysis indicates that CASSLE learns representations with increased augmentation sensitivity compared to other approaches.
Figure 1: In the traditional self-supervised setting, contrastive loss minimization pulls the representations of augmented image views closer in the latent space of the projector (left). This may also reduce the distance between their feature extractor representations (right). Thus, the representation becomes invariant to augmentation-induced perturbations, which may hinder the performance on downstream tasks. In contrast, the self-supervised objective of CASSLE draws together joint representations of images and their augmentations in the projector space (bottom row). By conditioning the projector with augmentation information, image representations retain more sensitivity to perturbations in the feature extractor space. This proves to be beneficial when solving downstream tasks.
## 2 Related work
Self-supervised learning (SSL) is a paradigm of learning representations from unlabeled data that can later be used for downstream tasks defined by human annotations [2; 4]. Despite learning artificial _pretext tasks_ instead of data-defined ones, SSL models have achieved tremendous success in a plethora of domains [20; 63; 56; 7]. This includes computer vision, where a variety of pretext tasks have been proposed [21; 71; 45; 26]. However, arguably the most prominent and successful SSL technique to emerge in recent years is the training of joint-embedding models for augmentation invariance [6; 60], defined by objectives such as the contrastive InfoNCE loss [32; 13; 14], self-distillation [30; 13; 46] or Canonical Correlation Analysis [39; 70; 5]. These objectives are often collectively referred to as _contrastive objectives_ [59; 4]. A common component of joint-embedding architectures is the _projector network_, which maps representations of the feature extractor into the space where the contrastive objective is imposed [13; 14]. The usefulness of the projector has been explained through the lens of transfer learning, where it is often better to transfer intermediate network representations to reduce the biases from the pretraining task [68; 8]. The projector also helps to mitigate the noisy data augmentations and enforces some degree of pairwise independence of image features [4; 41].
Augmentation invariance of self-supervised models is a natural consequence of training them with contrastive objectives, and it is crucial to choose augmentations that allow the model to form useful representations [13]. While a common set of augmentations that typically works well on natural images in SSL has been established in the literature [32; 13; 64; 39; 72], the optimal choice of augmentations varies between specific tasks [58; 23]. Xiao et al. find that augmentation invariance can hinder the performance of the model on downstream tasks which require attention to precisely those traits that it had been previously trained to be invariant to [66]. This inspired several techniques for retaining augmentation-specific information in joint-embedding models, such as projectors sensitive to different augmentation types [66; 23], adding an objective of explicit prediction of augmentation parameters [37], weighting the contrastive objective with the augmentation strength [67], as well as task-specific pretraining [52; 61]. The above approaches produce general-purpose feature extractors which can be transferred to downstream tasks without further tuning of their parameters. However, they often involve complex modifications either to the SSL model architecture [66], the training procedure [37; 67], or simply tedious task-specific pretraining [61]. Another line of work proposes to train Hypernetworks [28] which produce feature extractors invariant to chosen subsets of augmentations - a more flexible, but considerably harder to train approach [12]. Following [66; 37], we produce a general-purpose feature extractor and utilize augmentation information similarly to [37; 12]. Contrary to the above methods, we inject the information about the applied augmentations directly into the projector and make no modification either to the contrastive objective or the feature extractor.
## 3 Method
In this section, we present our approach, Conditional Augmentation-aware Self-supervised learning (CASSLE). Section 3.1 provides a background on joint-embedding self-supervised methods and their limitations. Section 3.2 explains the essence of CASSLE and how it leverages augmentation information to improve the quality of learned representations. Section 3.3 describes the details of CASSLE and its implementation.
### Preliminaries
A typical contrastive framework used in self-supervised learning consists of an augmentation function \(t_{\omega}\) and two networks: feature extractor \(f\) and projector \(\pi\). Let \(\mathbf{v}_{1}=t_{\omega_{1}}(\mathbf{x}),\mathbf{v}_{2}=t_{\omega_{2}}( \mathbf{x})\) be two augmentations of a sample \(\mathbf{x}\sim X\) parameterized by \(\omega_{1},\omega_{2}\sim\Omega\). The feature extractor maps them into the embedding space, which is the representation used in downstream tasks. To make the representation invariant to data augmentations, \(\mathbf{e}_{1}=f(\mathbf{v}_{1})\) is forced to be similar to \(\mathbf{e}_{2}=f(\mathbf{v}_{2})\)3. Instead of imposing similarity constraints directly on the embedding space of \(f\), we use a projector \(\pi\), which transforms the embeddings into target space for applying the contrastive loss \(\mathcal{L}\). This trick, known as _Guillotine Regularization_, helps the feature extractor to better generalize to downstream tasks, due to \(f\) not being directly affected by \(\mathcal{L}\)[68; 13; 14; 8].
Minimizing \(\mathcal{L}(\pi(\mathbf{e}_{1}),\pi(\mathbf{e}_{2}))\) directly leads to reducing the distance between embeddings \(\pi(\mathbf{e}_{1})\) and \(\pi(\mathbf{e}_{2})\). However, \(\mathcal{L}\) still indirectly encourages intermediate network representations (including the output of the feature extractor \(f\)) to also conform to the contrastive objective to some extent, which means that the feature extractor tends to erase the information about augmentation from its output representation. This behavior may however be detrimental for certain downstream tasks (see Figures 1 and 4), which rely on features affected by augmentations. For instance, learning invariance to color jittering through standard contrastive methods may lead to degraded performance on the downstream task of flower recognition, which is not a color-invariant task [58; 66]. Consequently, the success of typical SSL approaches depends critically on a careful choice of augmentations used for model pretraining [13; 58].
### Cassle
To overcome the above limitations of SSL, we guide the feature extractor to encode the information about augmentations in its output representation. In consequence, the obtained representation will be more informative for downstream tasks which depend on features modified by augmentations.
CASSLE achieves this goal by conditioning the projector \(\pi\) on the parameters of augmentations used to perturb the input image. Specifically, we introduce the Guiding network \(\gamma\) which transforms augmentation parameters \(\omega\sim\Omega\) into _augmentation embeddings_\(\mathbf{g}=\gamma(\omega)\). Moreover, we modify \(\pi\) to transform the joint image and augmentation representations \((\mathbf{e},\mathbf{g})\) into the space where the objective \(\mathcal{L}\) is imposed. We do not alter the \(\mathcal{L}\) itself; instead, training relies on minimizing the contrastive loss \(\mathcal{L}\) between \(\pi(\mathbf{e}_{1},\omega_{1})\) and \(\pi(\mathbf{e}_{2},\omega_{2})\). Thus, \(\pi\) learns to draw \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) together in its representation space _on condition of_\(\mathbf{g}_{1}\)_and_\(\mathbf{g}_{2}\). We visualize the architecture of CASSLE in Figure 2.
Let us explain why CASSLE reduces the detrimental effect of augmentations on the representations of feature extractor \(f\). If \(\pi\) minimizes \(\mathcal{L}\) on embeddings enriched with information about augmentation, then \(f\) is encouraged to preserve augmentation information in its representation. Intuitively, by retaining the information about augmented features in feature extractor, we facilitate the image and augmentation representations to counter one another in the projector. Without this guidance, the feature extractor and projector could be considered as a single network which could erase information about augmentation at any intermediate layer. Since CASSLE guides the feature extractor to encode the augmented features, its performance is not as vulnerable to incorrectly selected augmentations as in the vanilla SSL approaches. In particular, CASSLE pretrained with color jittering can still be used in the downstream task of flower recognition, while for vanilla approaches this augmentation should be prohibited.
Figure 2: Overview of CASSLE. We extend the typical self-supervised learning approaches by incorporating the information of augmentations applied to images into the projector network. The Guiding network infers the representations of augmentations and passes them to the projector network as conditional inputs. In CASSLE, the SSL objective is thus imposed on joint representations of images and the augmentations that had been applied to them. This way, CASSLE enables the feature extractor to be more aware of augmentations than the methods that do not condition the projector network.
CASSLE can be applied to a variety of joint-embedding methods, as the only modification it makes is changing the projector network to utilize the additional input produced by the guiding network. We do not modify any additional aspects of the extended self-supervised approaches, such as objective functions, which is appealing from a practical perspective. Last but not least, the architecture of the feature extractor in CASSLE is not affected by the introduced augmentation guidance. Since we only modify the input to the projector, which is discarded after the pretraining, the feature extractor can be directly used in downstream tasks similar to vanilla SSL techniques.
### Practical implementation of the guidance mechanism
We construct augmentation information \(\omega\) by concatenating vectors \(\omega^{aug}\) describing the parameters of each augmentation type [37]. In this work, we focus on a set of augmentations used commonly in the literature [13; 14; 64], listed below along with descriptions of their respective parameters \(\omega^{aug}\):
* Random cropping: \(\omega^{c}\in[0,1]^{4}\) describes the normalized coordinates of the cropped image center and the cropping sizes.
* Color jittering: \(\omega^{j}\in[0,1]^{4}\) describes the normalized intensities of brightness, contrast, saturation, and hue adjustment.
* Gaussian blurring: \(\omega^{b}\in[0,1]\) is the standard deviation of the Gaussian filter used during the blurring operation.
* Horizontal flipping: \(\omega^{f}\in\{0,1\}\) indicates whether the image has been flipped.
* Grayscaling: \(\omega^{g}\in\{0,1\}\) indicates whether the image has been reduced to grayscale.
To enhance the projector's awareness of the color changes in the augmented images, we additionally enrich \(\omega\) with information about **color difference** - \(\omega^{d}\in[0,1]^{3}\), which is computed as the difference between the mean values of color channels of the image before and after the color jittering operation. We empirically demonstrate that inclusion of \(\omega^{d}\) in \(\omega\) improves the performance of CASSLE (see Section 4.4).
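A minimal sketch of assembling such an augmentation vector is shown below; the ordering of the components and the exact normalization are illustrative assumptions, while the resulting 14 dimensions (4+4+1+1+1+3) follow from the description above.

```python
import torch

def build_omega(crop, jitter, blur_sigma, flipped, grayscaled,
                img_before_jitter, img_after_jitter):
    """Assemble the augmentation vector omega (a sketch; component order and
    normalization are illustrative). crop / jitter: 4 normalized values each;
    blur_sigma, flipped, grayscaled: scalars; img_*: (3, H, W) tensors taken
    immediately before and after color jittering."""
    color_diff = (img_before_jitter.float().mean(dim=(1, 2))
                  - img_after_jitter.float().mean(dim=(1, 2)))       # omega^d (3)
    return torch.cat([
        torch.as_tensor(crop, dtype=torch.float32),                  # omega^c (4)
        torch.as_tensor(jitter, dtype=torch.float32),                # omega^j (4)
        torch.tensor([blur_sigma, float(flipped), float(grayscaled)],
                     dtype=torch.float32),                           # omega^b/f/g (3)
        color_diff,                                                  # omega^d (3)
    ])                                                               # 14-dim omega
```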
Afterwards, the Guiding network \(\gamma\) transforms \(\omega\) into augmentation embeddings \(\mathbf{g}\). For the architecture of \(\gamma\), we choose a Multi-layer Perceptron. We condition the projector \(\pi\) on \(\mathbf{g}\) by concatenating \(\mathbf{g}\) to the image embeddings \(\mathbf{e}\) before feeding them to \(\pi\). We also explore other alternatives for combining \(\mathbf{g}\) and \(\mathbf{e}\) to condition \(\pi\) (see Section 4.4).
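The guidance mechanism can then be summarized by the sketch below; the hidden sizes are placeholder assumptions rather than the exact configuration, with the input dimension 14 matching the \(\omega\) defined above and 2048 the ResNet-50 feature dimension.

```python
import torch
import torch.nn as nn

class GuidedProjector(nn.Module):
    """Projector conditioned on an augmentation embedding (a sketch;
    layer sizes are illustrative placeholders)."""
    def __init__(self, feat_dim=2048, aug_dim=14, guide_dim=64, out_dim=128):
        super().__init__()
        self.guide = nn.Sequential(                 # guiding network gamma
            nn.Linear(aug_dim, guide_dim), nn.ReLU(),
            nn.Linear(guide_dim, guide_dim))
        self.proj = nn.Sequential(                  # projector pi
            nn.Linear(feat_dim + guide_dim, 2048), nn.ReLU(),
            nn.Linear(2048, out_dim))

    def forward(self, e, omega):
        g = self.guide(omega)                       # augmentation embedding g
        return self.proj(torch.cat([e, g], dim=-1))

# Usage in a joint-embedding method (the objective L itself is unchanged):
#   z1 = projector(f(v1), omega1); z2 = projector(f(v2), omega2); loss = L(z1, z2)
```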
## 4 Experiments
In Section 4.1, we evaluate CASSLE's performance on downstream tasks such as classification, regression, object detection, and image retrieval4. In Section 4.2, we analyze CASSLE's sensitivity to augmentations and conditioning, as well as its effect on the process of contrastive pretraining in Section 4.3. Finally, we discuss the choice of hyperparameters of CASSLE in Section 4.4. In all experiments, unless specified otherwise, we utilize the ResNet-50 architecture [29] and conduct the self-supervised pretraining on ImageNet-100 - a 100-class subset of the ILSVRC dataset [53] used commonly in the literature [58; 66; 37; 12]. We use the standard set of augmentations including horizontal flipping, random cropping, grayscaling, color jittering and Gaussian blurring [32; 37; 30]. For consistency in terms of hyperparameters, we follow [37] for MoCo-v2 and SimSiam, and [12] for SimCLR. We describe additional details of training and evaluation in the supplementary material.
Footnote 4: We compare CASSLE to a number of recently proposed methods [66; 37; 12]. We report the performance of those methods from the literature [58; 66; 37; 12], given that the code for [66] and [12] was not made available at the time of writing. As for the results of baseline SSL models and AugSelf [37], we report their results from the literature except when our runs of those methods yielded results different by at least 2 pp. We mark such cases with \(\dagger\).
### Evaluation on downstream tasks
We begin the experimental analysis by addressing the most fundamental question - how does CASSLE impact the ability of models to generalize? In order to answer it, we evaluate models pretrained via CASSLE and other self-supervised techniques on a variety of downstream visual tasks, such as classification, regression, object detection, and image retrieval.
**Linear evaluation.** We evaluate the performance of pretrained networks on the downstream tasks of classification on a wide array of datasets: CIFAR10/100 (C10/100) [36], Food101 (Food) [9], MIT67 (MIT) [51], Oxford-IIIT Pets (Pets) [48], Oxford Flowers-102 (Flowers) [44], Caltech101 (Caltech) [25], Stanford Cars (Cars) [35], FGVC-Aircraft (FGVCA) [40], Describable Textures (DTD) [17], SUN-397 (SUN) [65], as well as regression on the 300 Faces In-the-Wild (300W) dataset [55]. We follow the linear evaluation protocol [34; 13; 37], described in detail in the supplementary material. We evaluate multiple self-supervised methods extended with CASSLE, as well as other recently proposed extensions which increase sensitivity to augmentations [37; 66; 12]. The results, averaged over 3 random seeds, are reported in Table 1. We find that in the vast majority of cases, CASSLE improves the performance of vanilla joint-embedding methods. Moreover, in the case of InfoNCE-based (SimCLR, MoCo) and CCA-based (Barlow Twins) approaches, CASSLE generally achieves better downstream results than the other SSL extensions [37; 12]. On the other hand, CASSLE gives a lower performance boost to SimSiam [64] than AugSelf [37], nevertheless improving upon the vanilla approach.
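For completeness, a generic sketch of the linear probing step is given below; the optimizer, schedule, and other hyperparameters are placeholders rather than the exact protocol of the cited works.

```python
import torch
import torch.nn as nn

def linear_probe(feature_extractor, train_loader, num_classes,
                 feat_dim=2048, epochs=100, lr=0.1, device="cuda"):
    """Fit a linear classifier on frozen features (a sketch of the protocol)."""
    feature_extractor.eval().to(device)
    head = nn.Linear(feat_dim, num_classes).to(device)   # 2048 = ResNet-50 feature dim
    optimizer = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():                         # feature extractor stays frozen
                features = feature_extractor(images.to(device))
            loss = criterion(head(features), labels.to(device))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return head
```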
**Object detection.** We next evaluate the pretrained networks on the more challenging task of object detection on the VOC 2007 dataset [24]. We follow the training scheme of [32; 14], except that we only train the object detector modules and keep the feature extractor parameters fixed during detection training to better compare the pretrained representations. We report the Average Precision (AP) [38] of models pretrained through MoCo-v2 and SimCLR [13] with AugSelf [37] and CASSLE extensions in Table 2. The compared approaches yield similar results, with the CASSLE representation slightly surpassing the vanilla methods and AugSelf.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline
**Method** & C10 & C100 & Food & MIT & Pets & Flowers & Caltech & Cars & FGVCA & DTD & SUN & CUB & 300W \\ \hline \multicolumn{11}{c}{_SimCLR_[13]} \\ \hline Vanilla & 81.80 & 61.40 & 56.59\({}^{\dagger}\) & 61.26\({}^{\dagger}\) & 69.10 & 81.58\({}^{\dagger}\) & 75.95\({}^{\dagger}\) & 31.20\({}^{\dagger}\) & 38.68\({}^{\dagger}\) & 64.99\({}^{\dagger}\) & 46.37\({}^{\dagger}\) & 28.87\({}^{\dagger}\) & 88.47\({}^{\dagger}\) \\ AugSelf [37]\({}^{\dagger}\) & 84.30 & 63.47 & 60.76 & 63.43 & **71.86** & **86.59** & 79.88 & 36.56 & **42.90** & 66.59 & 48.84 & 34.46 & 88.79 \\ AI [12] & 83.90 & 63.10 & – & – & 69.50 & 68.30 & 74.20 & – & – & 53.70 & – & **38.60** & 88.00 \\
**CASSLE** & **85.61** & **64.09** & **61.00** & **63.58** & 71.43 & 85.98 & **80.62** & **37.97** & 42.26 & **67.07** & **49.42** & 33.91 & **89.05** \\ \hline \multicolumn{11}{c}{_MoCo-v2_[32; 14]} \\ \hline Vanilla & 84.60 & 61.60 & 59.67 & 61.64 & 70.08 & 82.43 & 77.25 & 33.86 & 41.21 & 64.47 & 46.50 & 32.20 & \(88.77^{\dagger}\) \\ AugSelf [37] & 85.26 & 63.90 & 60.78 & 63.36 & 73.46 & 85.70 & 78.93 & 37.35 & 39.47 & 66.22 & 48.52 & 37.00 & \(89.49^{\dagger}\) \\ AI [12] & 81.30 & 64.60 & – & – & **74.00** & 81.30 & 78.90 & – & – & **68.80** & – & **41.40** & **90.00** \\ LooC [66] & – & – & – & – & – & – & – & – & – & – & 39.60 & – \\
**CASSLE** & **86.32** & **65.29** & **61.93** & **63.86** & 72.86 & **86.51** & **79.63** & **38.82** & **42.03** & 66.54 & **49.25** & 36.22 & 88.93 \\ \hline \multicolumn{11}{c}{_Barlow Twins_[70]} \\ \hline Vanilla\({}^{\dagger}\) & 85.90 & 66.10 & 59.41 & 61.72 & 72.30 & 87.13 & 81.95 & 41.54 & 44.40 & 65.85 & 49.18 & 35.02 & 89.04 \\ AugSelf [37]\({}^{\dagger}\) & **87.28** & 66.98 & 60.52 & 63.96 & 72.11 & 86.68 & 81.73 & 39.88 & 44.23 & 65.21 & 47.71 & 37.02 & 88.88 \\
**CASSLE** & 87.03 & **67.27** & **62.19** & **65.08** & **72.75** & **87.99** & **82.56** & **41.68** & **46.63** & **66.31** & **50.09** & **38.25** & **89.52** \\ \hline \multicolumn{11}{c}{_SimSiam_[64]} \\ \hline Vanilla & 86.89 & 66.33 & 61.48 & 65.75 & 74.69 & 88.06 & 84.13 & 48.20 & 48.63 & 65.11 & 50.60 & 38.40 & 89.01 \\ AugSelf [37] & **88.80** & **70.27** & **65.63** & **67.76** & **76.34** & **90.70** & **85.30** & 47.52 & **49.76** & **67.29** & **52.28** & **45.30** & **92.84** \\
**CASSLE** & 87.38 & 67.36 & 63.27 & 66.84 & 75.02 & 88.95 & 84.86 & **48.51** & 49.35 & 66.81 & 51.62 & 39.47 & 89.37 \\ \hline \multicolumn{11}{c}{_MoCo-v3_[32; 15] (ViT-Small feature extractor [22] pretrained on the full ImageNet dataset.)} \\ \hline Vanilla\({}^{\dagger}\) & 83.17 & 62.40 & 56.15 & 53.28 & 62.29 & 81.48 & 69.63 & 28.63 & 32.84 & 57.18 & 42.16 & 35.00 & 87.42 \\ AugSelf [37]\({}^{\dagger}\) & 84.25 & 64.12 & **58.28** & **56.12** & **63.93** & **83.13** & 72.45 & 29.64 & 32.54 & **60.27** & 43.22 & **37.16** & 87.85 \\
**CASSLE** & **85.13** & **64.67** & 57.30 & 55.90 & 63.88 & 82.42 & **73.53** & **30.92** & **35.91** & 58.24 & **43.37** & 36.09 & **88.53** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Linear evaluation on downstream classification and regression tasks. CASSLE consistently improves representations formed by vanilla SSL approaches and performs better or comparably to other techniques of increasing sensitivity to augmentations [66; 37; 12].
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Method** & _MoCo-v2_ & _SimCLR_ \\ \hline Vanilla & 45.12 & 44.78 \\ AugSelf [37] & 45.20 & 44.44 \\
**CASSLE** & **45.90** & **45.02** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average Precision of object detection on VOC dataset [24; 38]. CASSLE extension of MoCo-v2 and SimCLR outperforms the vanilla approaches and AugSelf extension by a small margin.
**Image retrieval.** Finally, we benchmark the pretrained models on the task of image retrieval. We select query images from the Cars and Flowers datasets and find four examples closest to the queries in terms of cosine similarities of feature extractor representations. We compare the images retrieved by MoCo-v2, AugSelf [37] and CASSLE in Figure 3. CASSLE selects pictures of cars that are the most consistent in terms of color. In the case of flowers, the nearest neighbor retrieved by the vanilla model is a different species than that of the query image, whereas both CASSLE and AugSelf select the first two nearest neighbors from the same species but then retrieve images of flowers with similar shapes, but different colors. This again indicates greater reliability of features learned by CASSLE.
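The retrieval itself reduces to a cosine-similarity nearest-neighbor search over feature extractor representations, e.g. as in the following sketch.

```python
import torch
import torch.nn.functional as F

def retrieve(query_feature, gallery_features, k=4):
    """Return indices of the k gallery images closest to the query in terms of
    cosine similarity of feature extractor representations.
    query_feature: (D,), gallery_features: (N, D)."""
    q = F.normalize(query_feature, dim=-1)
    g = F.normalize(gallery_features, dim=-1)
    similarities = g @ q                 # (N,) cosine similarities
    return similarities.topk(k).indices
```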
### Analysis of representations formed by CASSLE
**Sensitivity to augmentations.** We investigate the awareness of augmentation-induced data perturbations in the intermediate and final representations of pretrained networks. As a proxy metric for measuring this, we choose the InfoNCE loss [60; 13]. The value of InfoNCE is high if embeddings of pairs of augmented images are less similar to one another than to embeddings of unrelated images, and low if positive pairs of embeddings are on average separated correctly, and thus the given representation is invariant to augmentations. We report the mean InfoNCE loss values under different augmentation types at subsequent stages of ResNet-50 and the projectors of MoCo-v2, AugSelf [37] and CASSLE in Figure 4.
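Concretely, the proxy is the standard InfoNCE loss evaluated on batches of paired embeddings taken from a given network stage, e.g. as in the sketch below; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """InfoNCE over a batch of paired embeddings, used here only as a proxy for
    how augmentation-invariant a given representation is.
    z1, z2: (B, D) embeddings of two augmented views of the same B images."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)       # high value -> augmentation-sensitive
```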
In all networks, the augmentation awareness decreases gradually throughout the feature extractor and projector stages. In CASSLE, we observe a much softer decline in the feature extractor stages and a sharper one in the projector. Representations of the CASSLE feature extractor are on average more difficult to match together than those of vanilla MoCo-v2 and AugSelf [37]. This implies that the CASSLE feature extractor is indeed more sensitive to augmentations than its counterparts.
Figure 4: A comparison of InfoNCE loss measured on different kinds of augmentations at subsequent stages of the ResNet-50 and projectors pretrained by vanilla, AugSelf [37] and CASSLE variants of MoCo-v2. Feature extractor representation of CASSLE yields higher InfoNCE values which suggests that it is more susceptible to augmentations.
Figure 3: Example of image retrieval performed by feature extractors pretrained via MoCo-v2 as well as AugSelf [37] and CASSLE on Cars and Flowers images. The CASSLE feature extractor retrieves images with more consistent color scheme and object shape than other approaches.
On the other hand, representations of all projectors, including CASSLE's, are similarly separable. This suggests that the conditioning mechanism helps the CASSLE projector to better amortize the augmentation-induced differences between the feature extractor embeddings.
The above observations indicate that in the vanilla and (to a slightly lesser extent) AugSelf approaches, both the projector and the intermediate representations are enforced to be augmentation-invariant. On the other hand, in CASSLE, the task of augmentation invariance is solved to a larger degree by the projector, and to a smaller degree by the feature extractor, allowing it to be more augmentation-aware. As shown in Section 4.1, this sensitivity does not prevent the CASSLE feature extractor from achieving similar or better performance than its counterparts when transferred to downstream tasks.
**Dependency of CASSLE projector on conditioning.** We next verify that the CASSLE projector indeed relies on the guidance mechanism. In Figure 5, we compare cosine similarities of projector representations of images and augmentation embeddings. We consider true pairs of images and embeddings of augmentations that were applied to them (green), as well as shuffled pairs where the augmentation vector does not correspond to its respective image embedding (red). We can see that incorrect conditioning has a negative effect on the projector, decreasing the similarities of image pairs by a non-trivial margin. This indicates that the CASSLE projector indeed depends on augmentation conditioning in order to perform its function well.
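A minimal sketch of this diagnostic is shown below. It assumes a guided projector callable as `projector(features, aug_embedding)` and batched tensors of paired view features and augmentation embeddings; all names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

def conditioning_sensitivity(projector, feats_a, feats_b, aug_a, aug_b):
    """Mean pair similarity under correct vs. shuffled augmentation conditioning."""
    # Correct conditioning: each view is projected with its own augmentation embedding.
    true_sim = F.cosine_similarity(projector(feats_a, aug_a),
                                   projector(feats_b, aug_b)).mean()

    # Shuffled conditioning: augmentation embeddings taken from unrelated images.
    perm = torch.randperm(feats_a.size(0))
    shuf_sim = F.cosine_similarity(projector(feats_a, aug_a[perm]),
                                   projector(feats_b, aug_b[perm])).mean()
    return true_sim.item(), shuf_sim.item()
```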
### Analysis of the contrastive learning procedure
We next compare the training of MoCo-v2 [32; 14] with and without CASSLE or AugSelf [37] extensions, and plot the contrastive loss values measured throughout training in the left part of Figure 6, and on the right, the values of losses relative to the baseline MoCo-v2. CASSLE minimizes the contrastive objective faster than the other two variants, in particular early in the training procedure. This suggests that augmentation information provides helpful guidance for a model not yet fully trained to align augmented image pairs and thus, CASSLE learns to depend on this information.
### Ablation Study
CASSLE is parametrized by several hyperparameters, described below. To select them optimally, we train different variants of MoCo-v2+CASSLE and evaluate them on the same classification and regression tasks as in Section 4.1. We rank the models from best to worst performance on each task and report the average ranks in Table 3. We provide the full results in the supplementary material.

Figure 5: Similarities of CASSLE projector representations when conditioned with augmentation information from either their respective images (green) or other, randomly chosen images (red). Solid lines denote the mean values of similarities. Guiding the CASSLE projector with wrong augmentation information decreases its ability to draw image pairs together, indicating that it indeed relies on augmentation information to perform its task.

Figure 6: Absolute (left) and relative to Baseline (right) values of contrastive losses of Baseline, AugSelf [37], and CASSLE MoCo-v2 variants, measured during pretraining. CASSLE minimizes the contrastive objective faster than Baseline and AugSelf, in particular early in the training procedure.
**Augmentation information contents.** We compare conditioning the projector with different subsets of augmentation information. On average, the best representation is obtained with conditioning on all possible augmentation information. Moreover, using the additional **color difference** information further improves the results, indicating that it is indeed useful to consider not only augmentation parameters but also information about their effects.
**Conditioning the projector.** Apart from **(1) concatenation** of image embeddings \(\mathbf{e}\) and augmentation embeddings \(\mathbf{g}\), we consider several other methods of conditioning \(\pi\) with \(\mathbf{g}\): element-wise **(2) addition** or **(3) multiplication** of \(\mathbf{e}\) and \(\mathbf{g}\), or a **(4) hypernetwork** [28] - generating the parameters of \(\pi\) with \(\gamma\) and processing the unmodified \(\mathbf{e}\) through \(\pi\). Conditioning through **concatenation** and **addition** yields on average the strongest performance on downstream tasks. We choose to utilize the **concatenation** method in our experiments, as it requires a slightly smaller Guiding network.
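The concatenation-based variant can be sketched as follows. The layer sizes, including an 8-dimensional augmentation parameter vector, are illustrative assumptions, while the guiding-network depth of 6 and hidden size of 64 follow the ablation discussed in the next paragraph; this is not the released implementation.

```python
import torch
import torch.nn as nn

class GuidedProjector(nn.Module):
    """Projection head conditioned on augmentation information by concatenation."""
    def __init__(self, feat_dim=2048, aug_dim=8, guide_hidden=64, guide_depth=6,
                 proj_hidden=2048, out_dim=128):
        super().__init__()
        # Guiding network gamma: an MLP embedding the augmentation parameter vector.
        layers, d = [], aug_dim
        for _ in range(guide_depth - 1):
            layers += [nn.Linear(d, guide_hidden), nn.ReLU(inplace=True)]
            d = guide_hidden
        layers.append(nn.Linear(d, guide_hidden))
        self.gamma = nn.Sequential(*layers)
        # Projector pi: consumes the image embedding concatenated with the augmentation embedding.
        self.pi = nn.Sequential(
            nn.Linear(feat_dim + guide_hidden, proj_hidden),
            nn.ReLU(inplace=True),
            nn.Linear(proj_hidden, out_dim),
        )

    def forward(self, e, aug_params):
        g = self.gamma(aug_params)                # augmentation embedding g
        return self.pi(torch.cat([e, g], dim=1))  # condition pi via concatenation
```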
**Depth and hidden size of the Guiding Network.** While CASSLE is robust to the size of the \(\gamma\) MLP, using a depth of 6 and a hidden size of 64 yields the strongest downstream performance. Given such an architecture of the guiding network, the computational overhead is negligible, as we increase the overall number of parameters by around \(0.1\%\).
## 5 Conclusion
In this paper, we propose a novel method for augmentation-aware self-supervised learning that retains information about data augmentations in the representation space. To accomplish this, we introduce the concept of the guided projector, which receives augmentation information while processing the representation vector. Our solution necessitates only small architectural changes and no additional auxiliary loss components. Therefore, the training concentrates on contrastive loss, which enhances overall performance.
We compare our solution with existing augmentation-aware SSL methods and demonstrate its superior performance on downstream tasks, particularly when augmentation invariance leads to the loss of vital information. Moreover, we show that it converges faster and obtains representations more susceptible to augmentations than the baseline methods.
Overall, our method offers a straightforward and efficient approach for retaining information about data augmentations in the representation space. It can be directly applied to SSL methods, contributing to the further advancement of augmentation-aware self-supervised learning.
**Limitations.** Since our approach relies on modifying the projector network of self-supervised models, such a component must be present in the architecture for CASSLE to be applicable. Recently proposed SSL methods typically meet this requirement [13; 14; 70; 64; 30; 39], but there are others, such as DirectCLR [31], which eliminate the projector and are thus incompatible with CASSLE. Moreover, in CASSLE, the projector is guided only by parametrized augmentations, which makes other types of augmentations [16; 19; 18; 69] difficult to encode.
Table 3: Ablation study of CASSLE parameters. CASSLE performs best when conditioned on all available augmentation information, by concatenating or adding the augmentation and image embeddings. |
2309.06607 | An Empirical Analysis of Racial Categories in the Algorithmic Fairness
Literature | Recent work in algorithmic fairness has highlighted the challenge of defining
racial categories for the purposes of anti-discrimination. These challenges are
not new but have previously fallen to the state, which enacts race through
government statistics, policies, and evidentiary standards in
anti-discrimination law. Drawing on the history of state race-making, we
examine how longstanding questions about the nature of race and discrimination
appear within the algorithmic fairness literature. Through a content analysis
of 60 papers published at FAccT between 2018 and 2020, we analyze how race is
conceptualized and formalized in algorithmic fairness frameworks. We note that
differing notions of race are adopted inconsistently, at times even within a
single analysis. We also explore the institutional influences and values
associated with these choices. While we find that categories used in
algorithmic fairness work often echo legal frameworks, we demonstrate that
values from academic computer science play an equally important role in the
construction of racial categories. Finally, we examine the reasoning behind
different operationalizations of race, finding that few papers explicitly
describe their choices and even fewer justify them. We argue that the
construction of racial categories is a value-laden process with significant
social and political consequences for the project of algorithmic fairness. The
widespread lack of justification around the operationalization of race reflects
institutional norms that allow these political decisions to remain obscured
within the backstage of knowledge production. | Amina A. Abdu, Irene V. Pasquetto, Abigail Z. Jacobs | 2023-09-12T21:23:29Z | http://arxiv.org/abs/2309.06607v1 | # An Empirical Analysis of Racial Categories in the Algorithmic Fairness Literature
###### Abstract.
Recent work in algorithmic fairness has highlighted the challenge of defining racial categories for the purposes of anti-discrimination. These challenges are not new but have previously fallen to the state, which enacts race through government statistics, policies, and evidentiary standards in anti-discrimination law. Drawing on the history of state race-making, we examine how longstanding questions about the nature of race and discrimination appear within the algorithmic fairness literature. Through a content analysis of 60 papers published at FAccT between 2018 and 2020, we analyze how race is conceptualized and formalized in algorithmic fairness frameworks. We note that differing notions of race are adopted inconsistently, at times even within a single analysis. We also explore the institutional influences and values associated with these choices. While we find that categories used in algorithmic fairness work often echo legal frameworks, we demonstrate that values from academic computer science play an equally important role in the construction of racial categories. Finally, we examine the reasoning behind different operationalizations of race, finding that few papers explicitly describe their choices and even fewer justify them. We argue that the construction of racial categories is a value-laden process with significant social and political consequences for the project of algorithmic fairness. The widespread lack of justification around the operationalization of race reflects institutional norms that allow these political decisions to remain obscured within the backstage of knowledge production.
racial categories, algorithmic fairness, state race-making
[MISSING_PAGE_POST]
critical literature (Bahdan et al., 2017; Chen et al., 2018), we argue that values from theoretical computer science play an equally important role in the construction of racial categories. These influences must be understood in order to assess the extent to which racial classification under algorithmic fairness frameworks departs from other institutional understandings of race. This shift has important implications for how the values and institutions in the algorithmic fairness community have shaped practices of racial classification. Moreover, misalignment between legal frameworks and algorithmic fairness frameworks has consequences for the utility and impact of algorithmic fairness interventions in real-world settings.
This work serves as a case study for understanding how normative values are adopted, embedded within, and obscured by analytic choices about how to measure, quantify, and represent social categories. In surfacing the relationship between values and analytic choices, this project represents the beginning of a research agenda to ensure that algorithmic fairness research is in fact working toward its intended anti-discrimination goals rather than uncritically reproducing existing power relations.
## 2. Related Work
### Racial Categories in Algorithmic Fairness
The literature on racial classification in algorithmic fairness frameworks highlights a lack of attention toward the nature of racial categories. Much work has been done in computer science to formally define fairness, including significant attention to the conflict between notions of group and individual fairness and to what is meant by "fairness." However, there has been less work on what is meant by "group" (Bahdan et al., 2017). Critiques of algorithmic fairness frameworks highlight the mistreatment of race as an individual trait rather than a relational system (Bahdan et al., 2017; Chen et al., 2018), insufficient attention to the situated and context-dependent nature of race (Bahdan et al., 2017; Chen et al., 2018; Chen et al., 2018), and the uncritical adoption of the "protected class" framework of race from U.S. anti-discrimination law (Bahdan et al., 2017; Chen et al., 2018). Moreover, this body of work argues that by failing to engage meaningfully with the meaning of social categories, algorithmic fairness frameworks are susceptible to adopting incoherent and dangerous notions of race that reduce racial distinctions to differences in biology or appearance (Chen et al., 2018; Chen et al., 2018).
The literature in this area reveals that this problem is not unique to group fairness. While some critiques focus primarily on the failures of group fairness to account for differences between groups, critiques of counterfactual fairness--the most popular formalization of individual fairness (Krishnan et al., 2016)- highlight the persistent problem of defining relevant categories of analysis (Chen et al., 2018; Chen et al., 2018). The counterfactual model of fairness proposes that a predictor is fair toward an individual if it would have given the same prediction in the counterfactual world where the individual had belonged to a different group, for example a racial group. Operationalizing this model of fairness requires confronting both what makes a counterfactual world similar enough for comparison and what it means for an individual to belong to a different racial group. Criticisms of the counterfactual model's treatment of race demonstrate that computer scientists cannot escape the thorny political work of racial classification by using a particular mathematical definition of fairness, even one that purports to center individual merit over group membership.
### Identifying Values in Machine Learning Research
In recent years, there has been growing interest in specifying and uncovering the values embedded in machine learning research. Researchers have highlighted the importance of such values in shaping seemingly technical decisions (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). In order to understand how normative values are embedded within decisions about how to operationalize race, we examine the values underlying algorithmic fairness research. Ethical considerations in technical research and AI include autonomy, beneficence, non-maleficence, justice, explicability, and legal compliance (Chen et al., 2018; Chen et al., 2018). However, in practice, machine learning research tends to under-emphasize these ethical principles in favor of values like performance and efficiency (Chen et al., 2018; Chen et al., 2018). While the FAccT community explicitly centers the ethical values of fairness, accountability, and transparency, it often overlooks other moral values such as respect and agency (Krishnan et al., 2016).
Prior work on values in machine learning research highlights the community's tendency to prioritize generalization, universality, and abstraction over values of contextuality and situatedness (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). This pattern exists in both the broad machine learning community and within the algorithmic fairness community in particular, where researchers often fail to name concrete harms and specific impacted groups, for example failing to directly address anti-Blackness (Chen et al., 2018). The literature highlights two fundamental tensions: the tension between ethical and performance values (Chen et al., 2018) and the tension between generalizability and contextuality (Chen et al., 2018). We focus on these key values to assess how values influence the adoption of racial categories.
## 3. Racial Classification and Institutions
Institutions play a critical role in race-making; science and the state have been particularly influential sites in the creation and designation of racial categories. We propose that the algorithmic fairness community is an emerging race-making institution that merits further attention. Although prior work has primarily highlighted the dangers of algorithmic fairness researchers uncritically reproducing legal and biological conceptualizations of race, we argue that it is equally important to understand how algorithmic fairness frameworks align with and depart from these traditional institutional influences. In particular, we emphasize that the algorithmic fairness community has its own values, goals, and practices that shape the adoption and construction of racial categories, which we explore in our analysis. For greater context, we first present an incomplete overview of this history to demonstrate how institutional contexts, values, and goals have shaped racial classification practices.
### Racial Classification in Scientific Research
The scientific enterprise engages in classification by identifying kinds of people (Krishnan et al., 2016), which serves as an important site of political and ethical work (Bahdan et al., 2017). Dorothy Roberts argues that modern racial classifications emerged jointly from the scientific revolution and colonialist expansion to create and bolster new state and scientific institutions (Krishnan et al., 2016). Race became a project of biological classification--whether to evidence or establish a scientific basis of racial differences (e.g., Linnaeus, Galton) or undermine it (Darwin)--that remains a foundation of scientific, social, and medical research (Krishnan et al., 2016; Chen et al., 2018). In the U.S., projects of governance (the census, voting, citizenship) and trade and political projects (from slavery and abolition (Krishnan et al., 2016) to
eugenics (Srivastava et al., 2017; Wang et al., 2018)) formed a route for scientific racism to become encoded in social projects (Wang et al., 2018).
On one hand we might observe social and cultural nuance: Roth (2018) theorizes _racial schemas_ as cognitive and cultural classification processes that can vary from person to person, even acknowledging that one person can hold multiple racial schemas at once. Despite variation, the construction of boundaries between groups is shaped by a variety of political and social factors including institutions, power, and political network structures (Srivastava et al., 2017). Yet within modern practices of science, we see the history of race as a governing technology play out today (Wang et al., 2018). For instance, when researchers use racial categories in their studies, race is frequently conceptualized as a fixed, and often biological, identity characteristic rather than a dynamic social and political phenomenon (Krishnan et al., 2017; Krishnan et al., 2017). Scientists may choose a given racial classification for a variety of reasons, including widespread acceptance, the ability to facilitate comparisons across studies, and stability (Srivastava et al., 2017). Inconsistencies in racial categories have been noted in many disciplines including survey methods (Krishnan et al., 2017), public health (Krishnan et al., 2017; Wang et al., 2018), and computer vision (Krishnan et al., 2017; Wang et al., 2018). Although differences in racial classification can affect research conclusions (Krishnan et al., 2017), researchers often fail to explain or justify their operationalizations of race (Krishnan et al., 2017; Wang et al., 2018). This has the potential to reify harmful conceptions of race and undermine the effectiveness of interventions intended to address racial disparities (Krishnan et al., 2017; Wang et al., 2018) and instead obscures that fundamentally arbitrary, inconsistent racial classifications are ideological and political (Krishnan et al., 2017).
### State Race-Making
The state plays an essential role in making and enforcing racial categories through censuses, legislation, and everyday governance (Krishnan et al., 2017). These categories serve as powerful tools for social stratification and reflect normative decisions about how states ought to allocate resources and rights. Brown (2017) identifies three institutional characteristics in particular which shape state racial classification: evidentiary standards for decision-making, record-keeping requirements, and incentive structures. We return to the role of these three institutional structures in algorithmic fairness in the discussion.
In their work on Indigenous statistics, Walter and Andersen (Walter and Andersen, 2018) draw an important link between the creation of such racial categories and quantification, noting that the statistical representation of Indigeneity is an explicit project of racialization. They highlight the role of power, and particularly of state power, in the formation of racial categories through data collection and statistical analysis. While Walter and Andersen focus on official population statistics, such as censuses, they note that quantification extends beyond this particular setting. Indeed, quantitative representations of race are central to the project of algorithmic fairness. Building on Walter and Andersen's analysis of state power, we propose that the algorithmic fairness community acts as an emerging site of power through its quantitative enactment of racial boundaries.
### Institutional Goals in Classification: The Case of Multiracial Identity
The field of critical mixed race studies provides a framework for engaging with historical and contemporary state efforts to construct race toward its own ends. We briefly discuss two examples where developments in the state project of race-making, towards ostensibly inclusive ends, were used to reinforce the dominant hierarchy. In the U.S. context, the political meaning of multiracialism has evolved from the legacy of the one-drop rule--a racial classification principle that asserts that a person with any Black ancestry should be classified as Black--to a celebration of multiracialism as emblematic of a post-racial American future during the introduction of multiracial identification on the 2000 census. This history brings to light several political values embedded within emergent conceptions of multiracialism. Multiracialism came to be depicted as an antidote to historical racial divides, closely linked to American national identity and the image of the U.S. as a "melting pot" of cultures. Moreover, multiracialism was associated with the future, positioning Black identity politics as dated and reinforcing the logic of white supremacy by creating a new economically, politically, and socially ascendant racial identity through its distance from Blackness (Krishnan et al., 2017). This trajectory is not unique to the U.S. context. In post-revolutionary Mexico, a new _mestizo_ identity was forged around modernity and nationalism (Krishnan et al., 2017). Following the Mexican Revolution, the Mexican middle class-which was primarily Indigenous--gained social and economic power. Rather than respond to their interests, however, the Mexican government formed a hybrid racial identity whose modern goals would align with the state's goals of industrialization and economic development. Moreover, implicit in this new identity was a distance from Indigeneity, which could be escaped through the adoption of technology and assimilation to the new mestizo identity. The historical construction of multiracial identity across both the U.S. and Mexican contexts demonstrates the political goals embedded within the decision of how multiracial individuals are racialized. In each case, new classification systems were ultimately used to reproduce and reinforce the structure of the dominant racial hierarchy.
## 4. Method
In order to empirically assess the construction of racial categories in the algorithmic fairness community, we performed qualitative coding on a set of papers from the algorithmic fairness literature, published between 2018 and 2020, that discuss race. While not exhaustive of the entirety of the fairness community, this allows us to identify and discuss emerging notions of race within such a community in its nascent years. Given our limited sample and the qualitative nature of our research, we make no claims about the generalizability of our findings beyond our sample, but, because of the criteria we used for selecting and analyzing our sample, we believe that the set is nevertheless a fair and telling snapshot of how race is conceptualized in recent literature on algorithmic fairness. The following subsections describe the details of the sample construction and the coding process. Qualitative coding of papers enables us to focus on realized research practices within the algorithmic fairness community. Future work might draw on interviews with researchers to examine how authors' perspectives and intentions interact with these practices, but this is beyond the scope of our work.
### Sample
Data collection and analysis were performed by the first author of this paper. The author constructed the sample by beginning with every paper published in a leading domain-specific conference within the algorithmic fairness community, the ACM Conference on Fairness, Accountability, and Transparency (FAccT, originally FAT*),
between 2018 and 2020. As a flagship publication venue for work on algorithmic fairness, FAccT sets the standards for work in this area and broadly represents the state of the field. The first three years of the conference were examined to understand the process by which categories emerge and become naturalized within a nascent community of practice. To target works primarily about algorithmic fairness, the author adapted the selection criteria proposed in Fabris et al.'s (Fabris et al., 2018) survey of data sets used in the algorithmic fairness literature and selected the subsample of these papers whose abstract contains at least one of the following strings, where the asterisk represents the wildcard character: "*fair*" (targeting, for example, "fairness" and "unfair"), "*bias*" ("biased", "debias"), "*discriminat*" ("anti-discrimination", "discriminatory"), "disparate", and "*parit*" ("parity", "disparities"). From this subsample, only papers that deal directly with race were retained, by restricting to papers that contain at least one of the following strings: "race", "racism", "antiracism", or "racial". Finally, a manual check of this set of papers was performed and any papers that use these keywords with a different meaning were removed. This left a subset of 65 papers, of which 5 were extended abstracts rather than full papers. The extended abstracts were excluded from analysis because space constraints significantly limit the extent to which authors can explain and justify their choice of race categories. Thus, the final sample comprised 60 full-length FAccT papers published between 2018 and 2020.
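As a rough illustration of this two-stage keyword filter (not the authors' actual selection code), the following sketch assumes each paper is available as a dictionary with plain-text 'abstract' and 'full_text' fields; the pattern lists and helper names are ours, and the final manual check for off-topic keyword uses is not automated here.

```python
import re

# Substring/wildcard keyword filters adapted from the selection criteria above.
FAIRNESS_PATTERNS = ["fair", "bias", "discriminat", "disparate", "parit"]
RACE_PATTERNS = ["race", "racism", "antiracism", "racial"]

def matches_any(text, patterns):
    """Return True if any pattern occurs as a substring of the lower-cased text."""
    text = text.lower()
    return any(re.search(p, text) for p in patterns)

def select_sample(papers):
    """Stage 1: fairness keywords in the abstract. Stage 2: race keywords anywhere in the paper."""
    fairness_subsample = [p for p in papers if matches_any(p["abstract"], FAIRNESS_PATTERNS)]
    return [p for p in fairness_subsample if matches_any(p["full_text"], RACE_PATTERNS)]
```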
### Analysis
To analyze the sample of papers, systematic qualitative coding was performed via ATLAS.ti. Following a semi-grounded theory approach, the author employed an iterative qualitative coding process including an initial coding stage, which draws upon the theoretical literature outlined above, and a subsequent focused coding stage where the author reorganized and synthesized the data coded in the initial stage in order to identify and verify emergent patterns. In the initial coding stage, each paper was coded line-by-line for conceptualizations of race and values. Initially, the author searched for conceptualizations of race emphasized in the existing literature (for example, legal constructions of race). When alternative ways of conceptualizing race appeared, these were coded using an in vivo coding approach, which emphasizes preserving the exact terms used in the text. A similar process was used for coding values in the annotated documents: the author initially drew on existing ethical frameworks for AI and prior work on values in machine learning described in section 2.2, focusing in particular on ethical and performance values, generalizability, and contextuality. Other values were added through in vivo coding. Following this open coding stage, the author used the initial list of codes to go through each document again to ensure that codes that emerged mid-process were applied to each paper. At this point, focused coding was employed to identify the most frequent and significant codes (Sund
and sometimes even within a single paper. Race is conceptualized at varying levels of abstraction and different sets of categories are deemed relevant for analysis. Figure 1 shows an overview; additional details can be found in Table 1 in the Appendix.
36 of the 60 FAccT papers analyzed provide some formal notion of racial categories. Of these, 12 (33.3%) leave the racial categories in question abstract, to be defined at the algorithm implementation stage. Typically, these abstractions not only allow for multiple racial classification schemas but also enable the substitution of any group in place of a racial group. Under this model, racial categories are seen as interchangeable with other social categories, legally protected groups, or groups of different sizes.
Among the 24 papers that define specific race categories, there are 14 distinct racial classification schemas used, which fall into 5 broad categories: Black/white, white/non-white, Black/non-Black, more than two races, and skin tone. The most common of these, used in 11 papers, is a binary classification schema that distinguishes between white and Black or closely related categories like Caucasian and African-American. Less commonly used are binary schemas that distinguish between white and non-white (3 papers) or Black and non-Black (2 papers). Eight papers adopted classification schemas with more than two race categories. Among these 8 papers, there were 9 different categorization schemas, indicating little agreement about what categories are relevant for analysis (_N.B._ some papers adopt multiple schemas). All of these schemas include Black and white categories, demonstrating a shared understanding of these categories' importance. Asian and Hispanic groups appeared in 5 and 7 of these schemas respectively indicating some agreement about their relevance. Finally, two papers, both in the computer vision setting, used a measure of skin tone as a proxy for race rather than directly adopting racial categories.
Decisions about how to delimit racial categories vary not only between papers but sometimes within a single paper. Four papers adopt multiple schemas for race, reporting different results using different sets of racial categories. In all but one case, the schemas in a given paper were subsets of one another. For example, one paper presents results using the following three classification schemas: 1. Asian, Black, Hispanic, Mixed, White, Other, Unknown; 2. Black, Mixed, White, Other, Unknown; and 3. Black, White. Typically the use of multiple schemas is not explained. In the previous example, for instance, it is unclear both whether the "Other" category is modified to include the Asian and Hispanic categories under Schema 2 and why the schemas differed between analyses. Only one paper presents a justification for using multiple classification schemas, citing statistical robustness as a reason to collapse groups with a small number of observations under a single umbrella.
These findings highlight a wide variety of racial classification schemas present across the literature. While there is not a consensus or standard view of the full set of racial categories that are relevant in algorithmic fairness research, there is widespread agreement about the importance of the Black and white categories. Interestingly, despite the historical and institutional importance of the Census Bureau in defining racial categories, no paper used the exact schema used to collect race data in the decennial Census. Rather than deferring to historically influential standards, current norms within the algorithmic fairness community grant researchers substantial autonomy in their ability to select a racial classification schema.
### Multiracialism is often elided
Multiracialism is rarely mentioned in the annotated papers. Of the 24 papers that use specific race categories, only 3 (12.5%) disclose how multiracial people are classified. Each of these papers uses a different process: one combines all multiracial observations into a single "mixed" label, one includes multiracial observations under the larger umbrella of "other", and one excludes multiracial observations from analysis.
The lack of a multiracial category in all but one paper reflects the tendency of papers within the literature to adopt binary classification schemas. However, even within these schemas it remains unclear how multiracial people are classified. In the frequently adopted Black/white schema, for example, it is unclear how data points representing multiracial Black and multiracial white individuals are treated. This decision enables significant analytic flexibility on the part of the researcher between several justifiable options. Researchers may choose to exclude multiracial observations from analysis; to count them with each group of which they are a member (for example, including a biracial Black and white person in analyses of both the Black population and the white population); or to count them only within the historically marginalized group, among other options. As the multiracial population grows, these decisions will increasingly influence the results of fairness analyses. While our findings show that current publishing norms allow these decisions to remain concealed, this obscurity may enable the manipulation of the multiracial category toward hidden ends.
### Group boundaries are constructed across many dimensions
The inconsistency in how researchers construct racial categories reflects deeper inconsistencies in how the algorithmic fairness community understands race. Differences between racial groups are conceptualized in a number of ways both across and within papers. Underpinning these inconsistencies are divergent views of what types of categories are relevant for analysis. We discuss the five most common conceptualizations of racial difference below: legal protection (45% of papers), social status, minority status (28.3%), sensitivity (28.3%), and social salience (16.7% of papers). For a full breakdown of all conceptualizations, including counts and example quotes from the data, see Table 3 in the Appendix.
#### 5.3.1. Legal Protection
Prior work on race in algorithmic fairness frameworks highlights the prevalence of conceptualizing racial groups as "protected classes" and establishing group boundaries in terms of legal protection (Brandt et al., 2018; Bretton et al., 2018). Indeed, this was the most common way of describing race in the literature, appearing in 27 (45%) of the annotated papers. As in the following example, recourse to protected classes often refers to U.S. law and views race as interchangeable with other legally protected attributes (in particular, gender):
"We consider **U.S. anti-discrimination laws**, which name **race, color, national origin, religion, sex, gender, sexual orientation, disability, age, military history, and family status** as protected attributes" - Yang et al. 2020 (p. 553)
Legal frameworks may be particularly useful for aligning algorithmic fairness interventions with existing anti-discrimination law, but are also adopted in papers that do not explicitly attempt to support legal interventions. Despite its prevalence, this framework appears in fewer than half of the annotated papers and is far from the only conceptualization of race in the literature.
#### 5.3.2. Social Status
Notions of race that emphasize status distinguish between advantaged groups and disadvantaged groups as the relevant populations for comparison. Advantage or disadvantage is understood in a number of ways, including power, privilege, vulnerability, and stigma. Yet, under this vision of race, algorithmic fairness interventions attempt to mitigate the very disadvantage they use to distinguish between groups.
#### 5.3.3. Minority Status
Although minority status is often closely linked to processes of disadvantage, the conceptualization of racial groups in algorithmic fairness as minority and majority groups emphasizes a _quantitative_ understanding of group boundaries rather than one situated in political and social relations. Distinguishing between minority and majority groups may be particularly relevant in algorithmic fairness settings where unfairness arises in part because of the under-representation of minority groups within the data.
#### 5.3.4. Sensitivity
The term "sensitive attribute" is commonly used within the theoretical computer science literature on privacy, where it refers to information that would be undesirable to disclose. However, it is unclear what makes a given attribute sensitive when this term is adopted outside of the privacy context. Consequently, sensitivity represents a flexible way to talk about group differences. While the annotated documents rarely address the meaning of sensitivity, sensitivity is occasionally defined in terms of other notions of race. In the following quotes, for example, sensitivity is aligned with legal protection, social status (in this case expressed as privilege), and minority status:
"Historical datasets often reflect historical prejudices; **sensitive or protected attributes** may affect the observed treatments and outcomes." -Madras et al. 2019 (p. 349)
"S is the sensitive attribute where [_S_ =1] is the **privileged class.**" -Friedler et al. 2019 (p. 332)
"An intentionally malicious--or unintentionally ignorant-- advertiser could leverage such data to preferentially target (i.e., include or exclude from targeting) users belonging to certain sensitive social groups (e.g., **minority race**, religion, or sexual orientation)." -Speicher et al. 2018 (p. 2)
Because of the ambiguous meaning of sensitivity, the sensitive attribute terminology provides an abstract and general way to discuss groups without engaging with the meaning of these groups.
#### 5.3.5. Social Salience
The critical literature on racial categories in the algorithmic fairness literature emphasizes the notion that racial categories should refer to "socially salient" groups (Bahdan et al., 2018; Bansal et al., 2019). In other words, racial groups should be studied according to the relevance of their group membership in a particular social context. This perspective has been adopted occasionally in empirical settings: in the following quote, for example, the authors indicate that a result should be ignored because the group in question is merely an artifact of the data and does not describe a relevant social group out in the world:
"the most significant difference--that between the "Unknown" category and the rest--is **not one that directly corresponds to a salient race/ethnicity group**" -Chouldechova et al. 2018 (p. 9)
#### 5.3.6. Uncommon Conceptualizations of Race
In addition to the conceptualizations of race that appear frequently, it is worth noting conceptualizations that remain largely absent. While numerous ways of conceptualizing race appear in the algorithmic fairness literature, we note that race is typically treated as a legal, social, or political axis of discrimination rather than an issue of personal identity. Only about 6.7% of papers conceptualize race according to personal identity. This aligns with the FAccT community's general orientation toward values of fairness and justice over values like dignity, respect, and self-determination (Zhou et al., 2018). We also note that biological notions of race appear infrequently: 10% of papers highlight observable differences and only 3.3% of papers discuss ethnic origin.
## 6. Researcher Justifications
We find that researchers typically fail to justify their choice of a particular racial categorization schema. In total, only 13 papers (21.7%) provide any reasoning behind their chosen schema. Even when restricting to the 24 papers that adopt a specific categorization schema, only 9 (37.5%) present some form of justification. When justifications are provided, they fall into five broad categories: data availability, technical factors, appeals to prior work, epistemic concerns, and relevance. These types of justifications are not mutually exclusive, and are often interrelated. We summarize each type of justification below, with relevant examples from the annotated papers. (See Table 2 in Appendix for all counts.)
### Data Availability
Researchers may adopt a particular racial categorization schema based on how race is presented in the data they use for analysis. Researchers may choose not to modify the schema used in their data for analytic simplicity, as in the following example:
"We use race and ethnicity as a combined field in this paper because **that is how the data was collected and organized** in the LA City Attorney's Office system." -Rodolfa et al. 2020 (p. 147)
In this case, the researchers default to the decision made by the Los Angeles City Attorney's Office for its own administrative purposes. Even if researchers choose not to adopt the same schema as in their data, they may still be affected by the choices of data collectors. To the extent that researchers use data that they did not originally collect, they may be limited by choices made at an earlier stage by someone else if relevant information is obscured under the original data collection schema. In the following example, race data are not collected at all, leading the researchers to use an arbitrary variable in its place:
"the first binary feature was used as a substitute sensitive feature since we **did not have access to sensitive features**." -Dwork et al. 2018 (p. 3)
Finally, distrust in data quality may lead researchers to choose a particular schema:
"**based on our analysis of the consistency of racial classification within the court data**, we have determined this categorization scheme introduces the fewest problems with inconsistent classification" - Lum et al. 2020 (p. 488)
Based on these examples, we argue that data collectors exert influence over the racial categories adopted in algorithmic fairness both directly (by foreclosing certain analyses) and indirectly (through defaults and varying data quality).
### Technical Factors
A number of technical considerations may influence a researcher's choice of racial categories. Algorithms used to identify or mitigate unfairness may be constrained in what types of inputs they can handle, particularly in the case of novel methods. This often leads to an emphasis on binary racial categorization schemas, such as the privileged/not-privileged dichotomy chosen in the example below:
"**Some algorithms additionally require that the sensitive attributes be binary** (e.g., "White" and "not White" instead of handling multiple racial categorizations) - for this version of the data (numerical+binary) we modify the given privileged group to be 1 and all other values to be 0." -Friedler et al. 2019 (p. 332)
Computational efficiency for complex algorithms or in the analysis of large data sets may also lead researchers to choose simpler categorization schemas. In the following example, the researchers once again choose a two-category schema, this time distinguishing between Black and white.
"To make brute force auditing **computationally tractable**, we designate only two attributes as protected; _petwhite_ and _pctblack_, the percentage of each community that consists of white and black people respectively." -Kearns et al. 2019 (pp. 106-108)
Finally, statistical robustness may motivate researchers to choose racial categorization schemas such that each category has a sufficiently large sample size. This could lead researchers to omit groups with small populations or to combine these groups into larger categories, as in the following example:
"[S]everal demographic categories appeared rarely, if at all, in the Twitter data. For the sake of **more robust statistical comparisons**, some analyses below collapse these race categories to, for example, {_White; Black; Hispanic; Other; Don't Know_}." - Borradaile et al. 2020 (p. 574)
In each case, technical constraints and desiderata lead researchers toward simpler categorization schemas with fewer racial categories. Though justifications for racial schemas are rare in the annotated papers, the prevalence of binary schemas suggests that technical motivations may play an important role in the adoption of racial categories within the algorithmic fairness community.
### Appeals to Prior Scientific Work
Some justifications draw on prior academic research. These justifications often draw from beyond the algorithmic fairness literature, which is relatively new and has fewer established standards compared to, for example, the dermatology community cited in the following case.
"We chose the Fitzpatrick six-point labeling system to determine skin type labels given its **scientific origins**. Dermatologists use this scale as the **gold standard** for skin classification and determining risk for skin cancer" - Buolamwini and Gebru 2018 (p. 6)
However, as the algorithmic fairness community begins to establish its own norms, publication standards, and notions of rigor, future work in the field may instead appeal to existing work from within the community. In the following example, the authors cite the Buolamwini and Gebru paper quoted above as justification for adopting a similar racial categorization schema in a similar context.
"**Similar to prior work**, skin color is used as a surrogate for race membership because it is more visually salient." -Yang et al. 2020 (p. 554)
This process of self-perpetuation and naturalization points to the importance of norms and standards within algorithmic fairness institutions.
### Epistemic Concerns
In addition to referencing specific scientific work, justifications of racial categorization schemas also draw on more general notions of scientific rigor by appealing to epistemic principles like reliability, consistency, objectivity, and precision:
"Importantly, we determined that different coders following this protocol could **reliably classify the race and gender of users." - Borradaile et al. 2020 (p. 574)
"Since race and ethnic labels are unstable, we decided to use skin type as a more **visually precise** label to measure dataset diversity. Skin type is one phenotypic attribute that can be used to more **objectively characterize datasets** along with eye and nose shapes." -Buolamwini and Gebru 2018 (p. 4)
As the algorithmic fairness community begins to establish its core epistemic values through publishing standards and methodological norms, these values will likely influence how researchers adopt racial categories and justify their choices.
### Contextual Relevance
Some papers justify their use of a particular categorization schema based on its relevance to the context of study. Researchers may adopt racial categories that reflect the cultural context in which the work is situated. In the case of the algorithmic fairness literature, researchers often draw on the U.S. context. As a result racial categories typically reflect notions of race stemming from the U.S.'s particular histories of slavery, segregation, and discriminatory policy. In the following justification, the researchers explicitly attempt to capture social understanding of race in the U.S. setting:
" Gender and race are fluid and socially constructed categories, and there are other possible ways of categorizing the gender and race of users. However, we believe these categories provide a reasonable, though necessarily simplified, **reflection of race and gender divisions in the US**." - Borradaile et al. 2020 (p. 574)
Though justifications are rarely given in the annotated documents, cultural context can explain the prevalence of categorization schemas that center Blackness and whiteness. These categories of analysis are particularly relevant due to legacies of anti-Black racism and white supremacy. Beyond the larger cultural setting, justifications may also focus on racial categories' relevance in a particular domain of study.
"Furthermore, skin type was chosen as a phenotypic **factor of interest** because default camera settings are calibrated to expose lighter-skinned individuals." - Buolamwini & Gebru 2018 (p. 4)
The contextual relevance approach to racial categorization highlights the fact that inconsistencies across or even within papers are not necessarily a problem. Differences in racial schemas may reflect important differences in the social groups that are relevant to understanding and intervening in discrimination.
## 7. Values in Classification
Racial classification is value-laden and political. Drawing on previous work establishing the values in machine learning and algorithmic fairness research, we identify the values that appear in the annotated documents in order to understand how normative goals drive the adoption of particular racial schemas. We find that the most frequently occurring values are performance-related (50% of papers), which encompasses accuracy, effectiveness, and efficiency. This is followed by justice (45%), which covers values like equity, equality, and merit; non-maleficence (36.7%), which encompasses harm-reduction, risk-reduction, privacy, and safety; real-world applicability (33.3%); epistemic values (28.3%), which includes certainty, consistency, objectivity, and precision; contextuality (20%); and generalizability (20%).
We examine co-occurrences of values with conceptions of race (defined in Section 5.3) in order to understand how values and racial categories interact within the algorithmic fairness setting (see Figure 2 in Appendix for a comprehensive overview of co-occurrences). Legal protection is a common way of conceptualizing race across values, reflecting critiques that the "protected class" framework is adopted uncritically within the literature. However, other notions of race tend to appear in conjunction with particular values. Specifically, papers that emphasize justice, non-maleficence, and contextuality conceptualize race as a status category more often than as a legal category. Meanwhile, the notion of race as a "sensitive attribute", which rarely appears in papers that emphasize justice, is often associated with papers that express performance-related values. Conversely, the social salience conception of race is rare among the papers that emphasize performance but appears frequently in papers that focus on justice and contextuality. These findings highlight that values are differentially expressed through different classification schemas. Consequently, illuminating hidden assumptions about the nature of race can help surface the values embedded in algorithmic fairness research.
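As a purely illustrative sketch of the co-occurrence counting behind this analysis (the annotation data, column names, and code below are hypothetical, not the study's materials), the tallies could be computed as follows.

```python
import pandas as pd

# Hypothetical paper-level annotations: one row per paper, listing the value codes
# and race-conception codes assigned during qualitative coding.
annotations = pd.DataFrame({
    "paper":       ["p1", "p2", "p3"],
    "values":      [["justice", "contextuality"], ["performance"], ["justice"]],
    "conceptions": [["social status"], ["sensitivity"], ["legal protection"]],
})

# Expand to (paper, value, conception) triples and tally co-occurrences.
pairs = annotations.explode("values").explode("conceptions")
cooccurrence = pd.crosstab(pairs["values"], pairs["conceptions"])
print(cooccurrence)
```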
## 8. Discussion
Our results highlight the fact that racial categories do not appear in a homogeneous way in the algorithmic fairness literature. Although legal anti-discrimination frameworks--and, in particular, the U.S. notion of protected classes--appear frequently in the literature, they have not produced a consensus view of race among algorithmic fairness researchers. While prior research has emphasized the algorithmic fairness community's over-reliance on protected classes (Bahdan et al., 2018; Chen et al., 2019), legal frameworks are only part of the story. In particular, the influence of academic computer science appears in numerous ways. Although algorithmic fairness brings together researchers from many fields, it has important ties to computer science and machine learning communities. FAccT was originally introduced as a workshop at the Conference on Neural Information Processing Systems, a prominent machine learning conference. Moreover, FAccT has been affiliated with the Association for Computing Machinery since 2019. We argue that this context has shaped the use of racial categories within algorithmic fairness frameworks in important ways.
While the algorithmic fairness community and FAccT have attempted to orient themselves around ethical principles, they are shaped by disciplinary values from within computer science. In particular, values of performance and generalizability appear often within the annotated papers. Race is frequently abstracted within algorithmic fairness frameworks so that these frameworks can be generalized to other settings. The term "sensitive attributes" appears with little attention to the meaning of sensitivity outside of its original context in privacy, rendering this term similarly abstracted from reality. Finally, technical considerations shape racial categories in important ways. Algorithmic limitations and performance concerns drive researchers toward racial schemas that involve fewer categories, often leading to binary operationalizations of race.
The multiracial case provides a clear illustration of both these findings and their consequences. In particular, the multiracial category is typically absent from analysis. This is in keeping with the tendency in computer science toward simple, often binary, classification schemas, as well as the tendency away from small population sizes. Moreover, when multiracial groups are mentioned, they are treated inconsistently, demonstrating the significant flexibility left in the hands of the researcher. This setting also highlights a persistent lack of justifications--or even explanation--around the adoption of racial categories. Despite the fact that multiracial people can be classified in a number of ways under most schemas, these decisions are rarely stated. The history of multiracial statistics highlights the potential to manipulate this flexibility and obscurity toward a number of political goals.
As Jacobs and Wallach (2018) argue, decisions about how to operationalize social constructs such as race are not merely an academic concern but have real, fairness-related consequences. They advocate for making these operationalization decisions explicit in order to make assumptions visible and testable in the name of transparency, accountability, and contestability. We argue that algorithmic fairness researchers should prioritize these visibility practices for defining race, a key construct in the literature that remains underestimated and inconsistently applied yet central to many of the harms the field purports to address. Meanwhile, details about operationalizations of fairness are often explicitly stated within the FAccT literature. Yet these fairness definitions rest on formalizations of social categories (including racial categories) that are rarely detailed or justified and threaten to undermine the project of algorithmic fairness altogether.
The history of state race-making and racial statistics outlined in Section 3.2 reveals that racial categories are susceptible to manipulation and have been weaponized to advance state goals. As the work of categorization falls into the hands of algorithmic fairness practitioners, care must be given to ensure that both old and new avenues for manipulation are addressed. The uncritical adoption of existing legal frameworks may reproduce longstanding power relations enacted by the state. On the other hand, the flexibility of racial categories can be leveraged toward other interests, for example, obscuring discrimination within a system by choosing a categorization schema that shows parity between the defined groups.
The government context offers key lessons for the algorithmic fairness community. Historically, state racialization has been driven by institutional factors within government including evidentiary standards, record-keeping requirements, and incentives (Bailley et al., 2012). We argue that institutions within the algorithmic fairness community will determine how racial categories are instantiated beyond the government context by creating new evidentiary standards, record-keeping requirements, and incentives. Within the nascent algorithmic fairness community, institutions including publishing venues, auditing bodies, and regulatory authorities will determine how race is conceptualized. Thus far, publishing standards and incentives, the persistence of proprietary data, and legal compliance incentives have given shape to a regime under which racial categories are adopted inconsistently and with little expectation of justification. This allows racial categories to be constructed in relative obscurity toward any number of ends, from finding significant scientific results to green-lighting corporate projects. Such flexibility and obscurity merit particular attention in light of recent concerns about corporate capture within FAccT stemming from institutional factors like funding and proprietary data access (Srivastava et al., 2017). Based on both the historical influence of institutions and the results of our analysis, we argue that institutional contexts and data access shape the adoption of racial categories. We propose that the algorithmic fairness community must work to produce institutional arrangements that center the redistribution of power and promote the community's intended values of fairness, accountability, and transparency.
FAccT, its reviewers, funding agencies, and the larger algorithmic fairness community all play an important institutional role. Interventions targeting racial discrimination ought to be assessed not only based on their technical details, but also on whether these interventions are built upon a meaningful and relevant understanding of race. In this paper we identify five types of justifications that are used to motivate the adoption of a racial classification schema: data availability, technical motivations, prior scientific work, epistemic concerns, and contextual relevance. While each of these justifications can provide important details, we propose that every justification should center the contextual relevance of its racial classification schema. This information is key to ensuring that readers can understand and evaluate research, use it in the appropriate context, and assess in whose interest a given racial classification schema was chosen.
The analysis presented in this paper is limited by several factors. First, we focus on what is written directly within the annotated papers. Some authors may choose not to include information about their decisions or include this information in supplemental materials due to space constraints. Because our analysis foregrounds the importance of visibility practices, we argue that the focus on what is included in the body of each paper is justified. However, interviews may elicit a different understanding of how researchers conceptualize race and greater insight into how racial schemas operate as a cognitive process. A second limitation comes from the decision to focus on academic work within a specific conference. This paper does not capture algorithmic fairness research published at other venues (for example, at traditional computer science conferences). However, we show that even when ethical values are explicitly prioritized, as in the case of FAccT, disciplinary values from computer science remain influential. By focusing on FAccT we highlight its role as a key location for institutional change, but this paper does not engage with the significant algorithmic fairness work that occurs outside of academia. Future work should examine how racial categories are implemented in industry and government settings as these are important sites of practice. Finally, this paper covers only the initial years of FAccT from 2018 to 2020. Since then, FAccT has matured and begun to cite a larger canon, including humanistic work and work previously published at FAccT itself. Additionally, the conference has adopted an increasingly reflexive and self-critical orientation, as exemplified by the existing critiques of racial categories in the algorithmic fairness literature (Bailley et al., 2012; Srivastava et al., 2017). While our work focused primarily on the external influences and foundations that the FAccT community drew from in the early years of the conference, current practices merit ongoing attention and reflection.
## 9. Conclusion
Important critiques of algorithmic fairness have highlighted the field's failure to account for the complex, socially situated, and political nature of racial categories (Bailley et al., 2012; Srivastava et al., 2017; Srivastava et al., 2017; Srivastava et al., 2017). We build on this work by examining how algorithmic fairness researchers use racial categories in practice, how they justify these decisions, and the values underlying their choices. Through a systematic qualitative analysis of the FAccT literature, we show that racial categories are inconsistently applied throughout the algorithmic fairness literature, with little justification or explanation. Despite recourse to the language of "protected classes" and the state's historical role in racial classification, we find abstract and binary racial schemas are commonly adopted while government racial schemas remain absent. We argue that this points to the importance of computer science, and its values of performance and generalizability, in shaping the field of algorithmic fairness. We also discuss the need for institutional reforms that center visibility practices and careful operationalizations of race. By highlighting the role of values and institutional factors in shaping racial categories, we hope that this work can enable the algorithmic fairness community to re-examine its practices around racial classification in order to align the field's interventions with its values.
## Acknowledgments
We are grateful to Matthew Bui, Dallas Card, Meera Desai, Jared Katzman, Lu Xian, and three anonymous FAccT reviewers for helpful comments on earlier versions of this paper.
|
2303.17989 | Unsupervised crack detection on complex stone masonry surfaces | Computer vision for detecting building pathologies has interested researchers
for quite some time. Vision-based crack detection is a non-destructive
assessment technique, which can be useful especially for Cultural Heritage (CH)
where strict regulations apply and, even simple, interventions are not
permitted. Recently, shallow and deep machine learning architectures applied on
various types of imagery are gaining ground. In this article a crack detection
methodology for stone masonry walls is presented. In the proposed approach,
crack detection is approached as an unsupervised anomaly detection problem on
RGB (Red Green Blue) image patches. Towards this direction, some of the most
popular state of the art CNN (Convolutional Neural Network) architectures are
deployed and modified to binary classify the images or image patches by
predicting a specific class for the tested imagery; 'Crack' or 'No crack', and
detect and localize those cracks on the RGB imagery with high accuracy. Testing
of the model was performed on various test sites and random images retrieved
from the internet and collected by the authors and results suggested the high
performance of specific networks compared to the rest, considering also the
small numbers of epochs required for training. Those results met the accuracy
delivered by more complex and computationally heavy approaches, requiring a
large amount of data for training. Source code is available on GitHub
https://github.com/pagraf/Crack-detection while datasets are available on
Zenodo https://doi.org/10.5281/zenodo.6516913 . | Panagiotis Agrafiotis, Anastastios Doulamis, Andreas Georgopoulos | 2023-03-31T12:07:23Z | http://arxiv.org/abs/2303.17989v1 | Article
###### Abstract
Computer vision for detecting building pathologies has interested researchers for quite some time. Vision-based crack detection is a non-destructive assessment technique, which can be useful especially for Cultural Heritage (CH) where strict regulations apply and, even simple, interventions are not permitted. Recently, shallow and deep machine learning architectures applied on various types of imagery are gaining ground. In this article a crack detection methodology for stone masonry walls is presented. In the proposed approach, crack detection is approached as an unsupervised anomaly detection problem on RGB (Red Green Blue) image patches. Towards this direction, some of the most popular state of the art CNN (Convolutional Neural Network) architectures are deployed and modified to binary classify the images or image patches by predicting a specific class for the tested imagery; "Crack" or "No crack", and detect and localize those cracks on the RGB imagery with high accuracy. Testing of the model was performed on various test sites and random images retrieved from the internet and collected by the authors and results suggested the high performance of specific networks compared to the rest, considering also the small numbers of epochs required for training. Those results met the accuracy delivered by more complex and computationally heavy approaches, requiring a large amount of data for training. Source code is available on GitHub: [https://github.com/pagraf/Crack-detection](https://github.com/pagraf/Crack-detection) while datasets are available on Zenodo: [https://doi.org/10.5281/zenodo.6516913](https://doi.org/10.5281/zenodo.6516913).
Cracking, Classification, Detection, CNN, RGB images, Cultural Heritage
## 1 Introduction and aims
Masonry structures represent the highest proportion of building stock worldwide, including Cultural Heritage assets and historical buildings. Currently, the structural condition of such structures, and especially of monuments, is predominantly manually inspected, which is a laborious, costly and subjective process [1]. With recent developments in Computer Vision and Machine Learning, there is a great opportunity to exploit RGB and/or Multi-Spectral imagery to speed up this process, with increased objectivity, high precision and accuracy. Computer Vision for detecting building pathologies has interested researchers for quite some time. Vision-based crack detection is a non-destructive assessment technique, which can be useful especially for Cultural Heritage where strict regulations apply and even simple interventions, such as placing crack-rules, are not permitted by the conservation authorities [1]. Lately, shallow and deep Machine Learning architectures applied on various types of imagery (RGB, RGB-Depth, multi-spectral etc.) are gaining ground due to the increased automation and accuracy they offer.
CNNs have recently received great attention because of their extensive applications in image classification, semantic segmentation, and other fundamental computer vision problems. They usually consist of the feature extraction part, which is made of convolutional layers and pooling layers, and the classification part, containing many stacked fully connected layers. In the first part, kernels in convolutional layers manipulate the input image, multiplying the weights in each kernel by the pixels' values and combining the sum to create a new image-like array passed to the next layer. Pooling layers play a role in down-sampling to reduce the amount of data and save computational resources. In the next part, the image first passes through a flatten layer to be converted to a one-dimensional array. The following fully connected layers use this array as input and produce the predicted label by applying the linear combination and the non-linear activation function. Because of their advantage of extracting deep features layer by layer, CNNs are nowadays widely used to solve real-world problems.
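As a concrete illustration of this generic structure, the following minimal sketch builds such a classifier with a Keras-style API; the layer sizes and counts are purely illustrative and do not correspond to any of the architectures evaluated later in this article.

```python
# Minimal sketch of a generic CNN classifier: convolution + pooling for
# feature extraction, then flatten + fully connected layers for classification.
# Layer sizes are illustrative only.
from tensorflow.keras import layers, models

def build_toy_cnn(input_shape=(224, 224, 3), num_classes=2):
    return models.Sequential([
        layers.Conv2D(16, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),          # down-sampling saves computation
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                     # convert feature maps to a 1D array
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
```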
CNN based image classification can be categorized into three types: (I) image or image patch classification (Figure 1a, c), (II) boundary box regression (Figure 1b) and (III) semantic segmentation (Figure 1d) [3]. In image classification, the image or image patch is labelled with a class. When boundary box regression is considered, a box bounds the detected object, which in the discussed cases is a crack, and reveals its position and boundaries. To achieve this, the weights of the last dense layer are exploited. These two classification techniques have been extensively used to detect cracks and other defects delivering very promising results [4, 5, 6, 7]. These techniques are implemented at block/patch level rather than at pixel level. A combination of the above two classification types is performed for crack detection in the presented research, since a class for the image or the image patch is predicted, also delivering the position of the cracks by projecting the weights of the last dense layer in the form of an attention map.
The exact location can also be determined in high accuracy using semantic segmentation methods. They can deliver the width or length of any defects/cracks since each pixel is assigned to a class label [8, 9, 10, 11]. Fully Convolutional Networks (FCNs), have been extensively used for semantic segmentation in many applications [15]. FCNs performed as an extended CNN, where the final prediction was a semantically segmented image instead of a class identification [1]. Recently, FCNs have been used widely for semantic segmentation on images containing cracks [16, 17, 37, 38]. Feature Pyramid Networks (FPNs) are typical model architectures to generate pyramidal feature representations for object detection. These architectures aim at extracting various features at different scales and then fuse them leading to pixel-level class predictions of higher accuracy [20]. FPNs have also been used widely for crack detection [21, 22]. However, training those networks requires a large amount of manually annotated data, a costly and time-consuming procedure.
To deal with the above issue, transfer learning has been extensively implemented on different fields of computer vision with remarkable results and is considered suitable when the training dataset is small allowing for better performance and much less computational effort. The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world [23]. CNNs utilizing transfer learning
Figure 1: **Crack detection with image classification (a), with boundary box regression (b), with image patch classification (c) and with pixel semantic segmentation (d)**
have been used extensively for image classification and semantic segmentation of cracks [1, 10, 16, 24, 25]. Lately, different studies obtained remarkable results in crack segmentation by implementing region proposal networks followed by algorithms for pixel-level crack detection [26, 27].
Despite the variability of the developed methods, they are mostly devoted to concrete, pavement, brick-walls or road crack detection problems, which are much different problems compared to crack detection on stone masonry walls of Cultural Heritage sites. In the context of HYPERION H2020 project "Development of a Decision Support System for Improved Resilience & Sustainable Reconstruction of historic areas to cope with Climate Change & Extreme Events based on Novel Sensors and Modelling Tools" ([https://www.hyperion.org/project.eu/](https://www.hyperion.org/project.eu/)), an easy to deploy crack detection model is sought.
With this in mind, the presented work focuses on testing and evaluating the most popular state of the art CNN architectures and presenting a simple and easy to deploy unsupervised methodology to detect cracks on complex stone masonry surfaces where joints' visual characteristics are very similar to cracks, thus providing new paradigms for the assessment of historical structures. Towards that direction, both a typical transfer-learning approach, i.e., building the model upon a pre-trained open-source model, and a training-from-scratch approach are presented for each architecture and each dataset, evaluating the results in depth, both in terms of classification and crack localization, and also comparing the required training time. The obtained results provide an insight for researchers working with deep learning-based algorithms for crack detection, while the developed model represents a stone masonry crack detection tool that can be scaled to more complex models, including multi-label classification. The source code of the CNN systems used is freely available on GitHub: [https://github.com/pagraf/Crack-detection](https://github.com/pagraf/Crack-detection), requiring only a few data to be fine-tuned, if needed. Datasets are also available on Zenodo: [https://doi.org/10.5281/zenodo.6516913](https://doi.org/10.5281/zenodo.6516913).
The rest of the paper is organized as follows: In Section 2, the datasets used are described, and afterwards the proposed methodology is presented in detail and justified. Section 3 discusses the tests performed and the experimental results are presented while Section 4 concludes the paper.
## 2 Dataset and Methodology
The tested CNN architectures and models and the presented methodology are intended to increase the automation level of masonry Cultural Heritage structures' inspection using RGB imagery, which until now has been performed manually, a laborious, costly, and subjective process. The crack detection problem is approached as an anomaly detection problem on RGB image patches, containing cracks or not. Towards this, in this paper the performance of the VGG16 [2], VGG19 [2], InceptionResNetV2 [31], MobileNetV3Small [32], MobileNetV3Large [32], DenseNet121 [33], DenseNet169 [33], DenseNet201 [33], ResNet50V2 [34], ResNet101V2 [34], and Xception [35] CNN architectures is compared and evaluated over different real-world datasets, and their efficiency and adaptability in classifying images or image patches of stone masonry walls, by delivering a specific class ("Crack" or "No crack") for the tested imagery and consequently detecting and localizing those cracks on the RGB imagery, is validated.
The experimental program comprises four phases:
1. Create the reference image datasets.
2. Establish the reference CNN architectures and models.
3. Implement the transfer-learning approach or the training-from-scratch approach.
4. Run the training experiments and evaluate the results.

The details on each phase are presented as follows.
### Datasets
To train and evaluate the aforementioned CNN models, various images with cracks are used from the test sites of the HYPERION H2020 project as well as other sites, not being CH sites and not related to the rest of the sites geographically, visually or chronologically. Specifically, square image patches of 224x224 pixels (default input size for VGG16 and VGG19 models) from the test sites of the Naillac pier, St. Nikolaos Fort as well as from the Roman bridge in Rhodini (Rhodes) are used, forming the _CRACK-CH: A Crack detection and classification dataset_ on complex stone masonry surfaces which is available on Zenodo: [https://doi.org/10.5281/zenodo.6516913](https://doi.org/10.5281/zenodo.6516913).
The Saint Nikolaos Fort is an important part of the great fortifications of the Medieval City of Rhodes located at the entrance of the Mandraki port. At this location, there was just a chapel dedicated to Saint Nikolaos until 1464, when it was turned into a fortification. Since then, it has undergone reinforcements and expansions in order to defend the city and the harbour. The outer walls were built in 1480 AD and in 1863 AD it was finally transformed into a lighthouse. The second study area is the Naillac Pier at Saint Paul's marp where a monumental tower was located as part of the fortification of the Commercial Harbor of Rhodes. It was constructed around 1400 AD on the Hellenistic Pier, but it was destroyed in 1863 after a severe earthquake. In 2017, the Naillac Tower was graphically reconstructed and presented as it stood until 1863, during the Ottoman rule. The Rhodini Roman Bridge is one of the few ancient bridges surviving in Greece and part of the Hellenistic fortification of the city, making it a monument of great importance. It was built across the stream of Rhodini, situated outside the Medieval City, and has two arched openings. The Roman Bridge has been in continuous use until today and its structural integrity has deteriorated, while the scaffoldings which now support the arches are gradually rusting and losing their efficiency [30]. While the Naillac and St. Nikolaos test sites formed the respective datasets (Fig. 2, Table 1), the Rhodini Roman bridge's images, due to their small number, were mixed with images crowd-sourced from the internet, forming the "Random" dataset. Those images were split into two separate categories: "Cracks" and "No cracks", facilitating the later training and evaluation of the model.
Sample images are shown in Fig. 2 and the number of images in each dataset in Table 1. In total, only 98 images were used. 56 of the images
Figure 2: **Sample images of the different test sites used.**
represent areas with various types and dimensions of cracks on various types of materials while the remaining 42 represent areas without cracks, but complex enough (see Fig. 2). To augment the available data during training, vertical and horizontal flipping, colour jittering as well as random rotation were applied to the images. For VGG16 and VGG19 the images were converted from RGB to BGR, then each colour channel was zero-centered with respect to the ImageNet dataset, without scaling. For ResNet50, ResNet101, MobileNet small and large, Xception and InceptionResNetV2, input pixel values were scaled between -1 and 1, sample-wise. Finally, for the DenseNet architectures, the input pixel values were scaled between 0 and 1 and each channel was normalized with respect to the ImageNet dataset.
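A minimal sketch of this augmentation and per-architecture preprocessing, assuming a TensorFlow/Keras-style pipeline, is given below; the jitter magnitudes and the placeholder image batch are illustrative and not the exact settings used in the experiments.

```python
# Sketch of the data augmentation (flips, colour jitter, random rotation) and
# of the per-architecture input scaling described above. Parameters are
# illustrative placeholders.
import tensorflow as tf
from tensorflow.keras.applications import vgg16, resnet_v2, densenet

def augment(image):
    image = tf.image.random_flip_left_right(image)   # horizontal flip
    image = tf.image.random_flip_up_down(image)      # vertical flip
    image = tf.image.random_brightness(image, 0.2)   # simple colour jitter
    image = tf.image.random_saturation(image, 0.8, 1.2)
    k = tf.random.uniform([], 0, 4, dtype=tf.int32)  # random multiple of 90 degrees
    return tf.image.rot90(image, k)

image_batch = tf.random.uniform((4, 224, 224, 3), 0, 255)        # placeholder batch
x_vgg = vgg16.preprocess_input(image_batch)       # RGB->BGR, zero-centred on ImageNet
x_resnet = resnet_v2.preprocess_input(image_batch)  # scaled to [-1, 1]
x_dense = densenet.preprocess_input(image_batch)    # [0, 1], ImageNet-normalised channels
```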
### CNN architectures and models used
Main goal of the implemented CNN models here is the classification of the images into "Crack" and "No crack" as well as the important task of detection and localization of the cracks, if any, on the imagery. To achieve this dual purpose, the most efficient method relies on a strong classifier. Initially, due to the small number of available imagery for training a full-scale model from scratch, this was decided to be achieved through a transfer learning approach. Transfer learning consists of taking features learned on one problem, and leveraging them on a new, similar problem. Next, training the models from scratch was performed to comparatively evaluate the accuracy results network-wise for both the approaches. The following deep learning models have been used. Their selection was based on accuracy and popularity criteria.
VGG16 is a convolutional neural network that is 16 layers deep while VGG19 is a similar network with a depth of 19 layers. The networks have an image input size of 224x224 pixels. Inception-ResNet-v2 is a convolutional neural architecture that builds on the Inception family of architectures but incorporates residual connections (replacing the filter concatenation stage of the Inception architecture). MobileNetV3 is a convolutional neural network that is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm, and then subsequently improved through novel architecture advances. This network was tested in order to show the potential of crack detection on mobile devices. A DenseNet is a type of convolutional neural network that utilizes dense connections between layers, through Dense Blocks, where all layers (with matching feature-map sizes) are connected directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. Residual Networks, or ResNets, learn residual functions with reference to the layer inputs, instead of learning un-referenced functions. Residual networks let stacked layers fit a residual mapping. They stack residual blocks on top of each other to form a network: e.g. a ResNet50 has fifty layers using these blocks and a ResNet101 has one hundred and one layers using these blocks. There is empirical evidence that these types of network are easier to optimize, and can gain accuracy from considerably increased depth [36]. Xception is a convolutional neural network architecture that relies solely on depth-wise separable convolution layers. While standard convolution performs the channel-wise and spatial-wise computation in one step, Depth-wise Separable Convolution splits the computation into two steps: depth-wise convolution applies a single convolutional filter for each input channel and point-wise convolution is used to create a linear combination of the output of the depth-wise convolution.
In the transfer learning approach performed, the above base models were instantiated and pre-trained weights on ImageNet [28, 29] were loaded into them to kick-start training. Then all the layers of the base models were frozen and, to avoid overfitting, the top layers were excluded from the original model and a new model was created on top of the output of the base model layers. This new model was created using a Global Average Pooling 2D layer, to keep the spatial dimension of the base models' outputs, followed by a dense classifier with two units with a softmax activation function. In the training from scratch approach all the layers of the base models were unfrozen. For training the models, a Stochastic Gradient Descent (SGD) optimizer was used for the VGG16 and VGG19 models while an Adam optimizer was used for the remaining models. Learning rates used in each case can be found in the Appendix. To compute the loss between the labels and predictions, the Categorical Cross Entropy loss was used. These selections reflect the best results achieved for each model, after intensive testing. Since models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates, a callback that monitors the validation loss was also used. If no improvement is seen for a 'patience' number of epochs, five in our case, the learning rate is reduced, down to a set minimum learning rate.
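The following sketch illustrates this setup with a Keras-style API, using ResNet50V2 as an example base model; the learning rate, epoch count and dataset objects are placeholders rather than the exact values reported in the Appendix.

```python
# Sketch of the transfer-learning setup: frozen ImageNet base model, a
# GlobalAveragePooling2D layer and a two-unit softmax classifier, trained with
# categorical cross-entropy and a ReduceLROnPlateau callback.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers, callbacks

base = tf.keras.applications.ResNet50V2(weights="imagenet", include_top=False,
                                        input_shape=(224, 224, 3))
base.trainable = False   # freeze all base layers; set True to train from scratch

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # pools the base model's feature maps
    layers.Dense(2, activation="softmax"),  # "Crack" / "No crack"
])

model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),   # SGD used for VGG16/VGG19
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Reduce the learning rate once the validation loss stagnates for 5 epochs.
reduce_lr = callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                        patience=5, min_lr=1e-6)
# model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[reduce_lr])
```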
To perform crack localization on the test imagery, many approaches can be used. The most common one is to replace the class score by bounding box location candidates. However, in the approach presented here, since a bounding box would contain greater areas of the image instead of only the detected crack or cracks, an attention map representation is used. This way, only the detected cracks are highlighted on the imagery in "red-ish" color, providing also additional useful information. To achieve this, the weights of the final dense layer of the CNN are exploited to form an activation map. This activation map is then bi-linearly up-sampled to have the same size as the original RGB image, and then it is projected on it, generating the resulting images.
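A possible implementation of this attention-map localization, following the class-activation-map idea and assuming the GlobalAveragePooling + Dense head sketched above, is given below; variable names and the choice of class index are illustrative.

```python
# Sketch of the attention-map localization: the final dense layer's weights
# are combined with the last convolutional feature maps and the result is
# bilinearly up-sampled to the input image size.
import numpy as np
import tensorflow as tf

def attention_map(model, base, image, class_idx):
    feats = base(image[None, ...]).numpy()[0]        # (h, w, c) feature maps
    w = model.layers[-1].get_weights()[0]            # (c, num_classes) dense weights
    cam = feats @ w[:, class_idx]                    # weighted sum over channels -> (h, w)
    cam = tf.image.resize(cam[..., None], image.shape[:2],
                          method="bilinear")         # up-sample to image size
    return cam.numpy()[..., 0]                       # overlay this map on the RGB image
```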
### Evaluation Metrics
To evaluate the different training and testing approaches, several metrics are used: precision, which gives the ability of a classification model to return only relevant instances; recall, which gives the ability of a classification model to identify all relevant instances; the F1 score, which is a single metric that combines recall and precision using the harmonic mean; and accuracy, which is the ratio of the correctly labelled subjects to the whole pool of subjects. While recall expresses the ability to find all relevant instances in a dataset, precision expresses the proportion of the data points labelled as relevant by the model that are actually relevant:

\[\textit{precision}=\frac{\textit{TP}}{\textit{TP}+\textit{FP}},\quad\textit{recall}=\frac{\textit{TP}}{\textit{TP}+\textit{FN}}\]

\[\textit{accuracy}=\frac{\textit{TP}+\textit{TN}}{\textit{TP}+\textit{TN}+\textit{FP}+\textit{FN}}\]
where TP are the true positives: data points labelled as "Crack" that are actually "Cracks", FP are the false positives: data points labelled as "Cracks" that are actually "No cracks", TN are the true negatives: data points labelled as "No cracks" that are actually "No cracks" and FN are the false negatives: data points labelled as "No cracks" that are actually "Cracks". Considering crack localization, visual evaluation was performed.
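For completeness, a short sketch of these metrics computed from binary labels (1 = "Crack", 0 = "No crack") is given below; it is a straightforward transcription of the definitions above.

```python
# Precision, recall, F1 and accuracy from binary ground-truth and predictions.
import numpy as np

def crack_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)       # harmonic mean
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f1, accuracy
```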
## 3 Experimental Results
The 3 available datasets were used to form six different test cases. This way, the generalization potential of the networks would be highlighted while limitations and drawbacks would come into light. Table 2 presents the 6 test cases and the training and testing images used for each one. All experiments were performed on an NVIDIA GTX 1070 GPU and a 2.20GHz Intel Core i7-8750HQ CPU.
| Dataset | Label | # Images |
| --- | --- | --- |
| Naillac | Crack | 22 |
| Naillac | No Crack | 14 |
| St. Nikolaos | Crack | 8 |
| St. Nikolaos | No Crack | 16 |
| Random images | Crack | 26 |
| Random images | No Crack | 12 |
| Total images | | 98 |

Table 1: Number of images for each dataset and each label
| Model | Parameters | Depth |
| --- | --- | --- |
| VGG16 [2] | 138.4M | 16 |
| VGG19 [2] | 143.7M | 19 |
| InceptionResNetV2 [31] | 55.9M | 449 |
| MobileNetV3Small [32] | 2.9M | 66 |
| MobileNetV3Large [32] | 5.4M | 217 |
| DenseNet121 [33] | 8.1M | 242 |
| DenseNet169 [33] | 14.3M | 338 |
| DenseNet201 [33] | 20.2M | 402 |
| ResNet50V2 [34] | 25.6M | 103 |
| ResNet101V2 [34] | 44.7M | 205 |
| Xception [35] | 22.9M | 81 |

Table 2: The implemented models, the number of their parameters in millions (M) and the depth, which is the topological depth of the network. This includes activation layers, batch norm. layers etc.
The same six test cases were followed both for the transfer learning and the training from scratch approach; however, below, training and validation accuracy/loss curves and typical predictions of only the latter are presented in detail, since when training from scratch, models achieved higher accuracy, especially in crack localization. Extensive details and results of all the tests performed can be found in the Appendix of this article.
### Experiments on Training - Test cases 0 and 1
The mixed dataset contains images from all the HYPERION datasets, including random images retrieved from the internet and images captured from non-CH masonry walls by the authors. 55 images of those are depicting areas of masonry walls with "No cracks" while 56 images are labelled as "Cracks", totalling 111 images. For the Test case 0, 35 images of each category were used for training the models while for the Test case 1, 28 images of each category used, forming the respective percentages of training-testing data as follows: 63% - 37% and 50% - 50% respectively. At this point it is highlighted that each image of the dataset is unique, and testing is performed always on unseen data. When following the transfer learning approach, for the Test case 0, ResNet50 with a learning rate equal to 0.000085 achieved the best accuracy 0.90, being trained for a total time of 74.141 seconds. ResNet101 and Inception-ResNet-v2 follow with 0.88 accuracy.
For the Test case 1, DenseNet121, DenseNet201 and ResNet50 achieved the best accuracy of 0.87, using 0.0001, 0.0001 and 0.000085 learning rates and being trained in 106.533, 197.251 and 73.580 seconds respectively. For the training from scratch approach over Test case 0, VGG16 and VGG19 achieved the highest accuracy scores of 0.98 in 138.383 and 191.179 seconds respectively, both with a learning rate of 0.0001. For the Test case 1, only VGG19 achieved the best accuracy score, reaching 0.96, being trained for a total time of 148.327 seconds with a learning rate of 0.0001. VGG16 follows with a 0.91 accuracy score and slightly less training time, 119.763 seconds.
As expected, due to the small number of available training data, differences in training and validation results are apparent between Test case 0 and Test case 1. Even though the maximum testing accuracy reached 0.96 in Test case 1 and 0.90 in Test case 0, the effect of reducing the training data from 63% to 50% of the total data is clear when observing Fig. 3 and Fig. 4. In the transfer learning approach of Test case 1, the validation loss never fell below 0.4, while in the training from scratch approach, there is a larger oscillation in the validation loss, compared to Test case 0. Additionally, the differences are more obvious in the attention maps where, even in the training from scratch approach of Test case 1, the crack in the second image patch is not fully detected, contrary to the results of Test case 0. Also, similar differences are apparent in the patches after the transfer learning approach.
### Experiments on Training - Test case 2
This Training-Test case is realized by training the models on all the imagery available, except for the images of the St. Nikolaos test site, and testing those models on it. This approach will highlight the generalization capabilities of the trained models. Of these images, 39 depicting areas of masonry walls with "No cracks" and 50 labelled as "Cracks", totalling 89 images, were used for training, while 16
| Test case | Label | # Training Images | # Testing Images |
| --- | --- | --- | --- |
| 0 | Crack | 35 | 21 |
| 0 | No Crack | 35 | 20 |
| 1 | Crack | 28 | 27 |
| 1 | No Crack | 28 | 27 |
| 2 | Crack | 50 | 8 |
| 2 | No Crack | 39 | 16 |
| 3 | Crack | 36 | 22 |
| 3 | No Crack | 41 | 14 |
| 4 | Crack | 30 | 26 |
| 4 | No Crack | 30 | 12 |
| 5 | Crack | 26 | 30 |
| 5 | No Crack | 12 | 30 |

Table 2: The six different test cases performed in this article.
Figure 4: Models’ training and validation accuracy, training and validation loss and typical images from the testing datasets, demonstrating the detected crack and the predicted image class. Results for ResNet101 are displayed on top while results for the VGG19 are displayed on the bottom. (Test case 1)
Figure 3: Models' training and validation accuracy, training and validation loss and typical images from the testing datasets, demonstrating the detected crack and the predicted image class. Results for ResNet101 are displayed on top while results for the VGG19 are displayed on the bottom (Test case 0).
and 8 images respectively were used for testing the trained models. When following the transfer learning approach, the InceptionResNetV2, MobileNetV3Small and MobileNetV3Large networks achieved the best accuracy scores, reaching 0.95 in 231.764, 45.558 and 47.121 seconds respectively. Learning rates were selected as follows: 0.0001, 0.000085, and 0.000085 respectively. By training the models from scratch, the VGG16, ResNet50 and ResNet101 networks achieved the best accuracy scores, reaching 0.95 in 154.351, 189.391 and 337.003 seconds respectively. Learning rates were selected as follows: 0.0001, 0.000085, and 0.000085 respectively.
### Experiments on Training - Test case 3
Training-Test case 3 is the training of the models on all the imagery available, except for the images of the Naillac test site, and testing those models on it.
This approach will also highlight the generalization capabilities of the trained models. Of these images, 41 depicting areas of masonry walls with "No cracks" and 36 labelled as "Cracks", totalling 77 images, were used for training, while 14 and 22 images respectively were used for testing the trained models. When following the transfer learning approach, the MobileNetV3Small network achieved the best accuracy score, reaching 0.94 in 110.159 seconds. The learning rate was selected equal to 0.000085. ResNet101 follows with an accuracy score of 0.91 and a total training time of 362.797 seconds. By training the models from scratch, the VGG19 network achieved the best accuracy score, reaching 1.0 in 173.208 seconds while VGG16 followed, reaching an accuracy score of 0.97 in 137.379 seconds. The learning rate was selected as 0.0001 for both networks.
### Experiments on Training - Test case 4
Training-Test case 4 is the training of the models on the imagery available from the HYPERION data and testing those models on the random imagery collected by the authors and some images retrieved from the internet. This approach will also highlight the generalization capabilities of the trained models. Of these images, 30 depicting areas of masonry walls with "No cracks" and 30 labelled as "Cracks", totalling 60 images, were used for training, while 12 and 26 images respectively were used for testing the trained models. When following the transfer learning approach, the ResNet50 network achieved the best accuracy score, reaching 0.84 in 71.210 seconds. The learning rate was selected equal to 0.000085. DenseNet201, VGG19, MobileNetV3Small and MobileNetV3Large follow with an accuracy score of 0.81 and total training times of 192.115, 68.845, 42.433 and 49.863 seconds respectively. By training the models from scratch, again, the VGG19 network achieved the best accuracy score, reaching 0.97 in 173.236 seconds while VGG16 follows, reaching an accuracy score of 0.95 in 138.661 seconds. The learning rate was selected as 0.0001 for both networks.
Figure 5: Models’ training and validation accuracy, training and validation loss and typical images from the testing datasets, demonstrating the detected crack and the predicted image class. Results for InceptionResNetV2 are displayed on top while results for the VGG16 are displayed on the bottom.
Figure 6: Models’ training and validation accuracy, training and validation loss and typical images from the testing datasets, demonstrating the detected crack and the predicted image class. Results for MobileNetV3Small are displayed on top while results for the VGG19 are displayed on the bottom.
### Experiments on Training - Test case 5
This final Training-Test case 5 is realized by training the models on the random imagery collected by the authors and some images retrieved from the internet, and testing those models on the imagery available from the HYPERION data. This approach will also highlight the generalization capabilities of the trained models. Of these images, 12 depicting areas of masonry walls with "No cracks" and 26 labelled as "Cracks", totalling 38 images, were used for training, while 30 and 30 images respectively were used for testing the trained models. When following the transfer learning approach, the Inception-ResNet-v2 network achieved the best accuracy score, reaching only 0.66 in 227.398 seconds. The learning rate was selected equal to 0.0001. By training the models from scratch, the VGG16 network achieved the best accuracy score, reaching 0.87 in 95.106 seconds while VGG19 and ResNet50 followed, with accuracy scores of 0.85 in 121.754 and 119.874 seconds respectively. Learning rates were selected as 0.0001 for the VGG networks and 0.000085 for ResNet50.
**Fig. 8 - Models' training and validation accuracy, training and validation loss and typical images from the testing datasets, demonstrating the detected crack and the predicted image class. Results for InceptionResNetV2 are displayed on top while results for the VGG19 are displayed on the bottom.**
### Comparative Results and Evaluation
In the figures below, results of all the performed training and test cases are presented comparatively. Fig. 9 depicts the testing accuracy of the models for each test case, after training only the additional Global Average Pooling (2D) and dense layers of the model (transfer learning approach). There, it is obvious that for the majority of the test cases, ResNet50, MobileNetV3Small and InceptionResNetV2 are achieving the best results, however, as can be seen in the Appendix Fig. 11-Fig. 12 and Tables 13-14, localization is not correct in most of the cases. On the other hand, Fig. 10 presents testing accuracy of the same networks after being trained from scratch. There, it is clear that VGG16 and VGG19 models are outperforming the rest, while ResNet50 and ResNet101 are following. Details can be also found in the Appendix Fig. 11-Fig. 13 and Tables 13-14. Fig. 11 and Fig. 12 comparatively present the computational time for training the described models. There, as expected, MobileNetV3Small network outperformed the rest, while MobileNetV3Large, VGG16 and VGG19 networks are following.
**Fig. 11 - Training time for training only the last layers**
**Fig. 10 - Testing accuracy after training from scratch**
**Fig. 7 - Models' training and validation accuracy, training and validation loss and typical images from the testing datasets, demonstrating the detected crack and the predicted image class. Results for ResNet50V2 are displayed on top while results for the VGG19 are displayed on the bottom.**
**Fig. 9 - Testing accuracy after training only the last layers**
### Experiments on full-scale real-world image data
To exploit the trained models over real-world data, instead of using them only with small image patches of 224x224 pixels, a sliding window approach was implemented. Sliding windows play an integral role in object detection, as they allow the localization of objects in the image. The step size of the sliding window indicates how many pixels are going to be skipped in both the (x, y) directions. Normally, looping over each and every pixel of the image (i.e., a step of 1) is not desirable, as this would be computationally prohibitive if we were applying an image classifier at each window. Instead, the step is determined on a per-dataset basis and is tuned to give optimal performance based on the dataset of images. In the examined real-world data here, input images of 5472x3648 pixels were used and a trained model was used to predict over a sliding window of 224x224 pixels with steps of 32 pixels.
Towards that direction, for each prediction, the weights of the final dense layer of the CNN were stored in an array having the same size as the original full-resolution real-world image. To avoid issues shown in Fig. 13 (a), a fusion approach was adopted and as such, the new predictions were stacked to previous ones, enabling false positive and false negative filtering and making also the process independent of the position of the "Crack" or "Non-crack" on the image patch (Fig. 13, b).
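A sketch of this sliding-window inference with fused (stacked) predictions is shown below; it reuses the `attention_map` sketch given earlier and the window/step values from the text, while the averaging used for fusion is one possible choice.

```python
# Sketch of sliding-window crack localization on a full-resolution image:
# a 224x224 window moves with a 32-pixel step, per-window attention maps are
# accumulated into a full-size array, and overlapping predictions are fused.
import numpy as np

def sliding_window_cam(model, base, image, window=224, step=32, class_idx=0):
    h, w = image.shape[:2]
    heat = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)      # coverage of each pixel
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            patch = image[y:y + window, x:x + window]
            cam = attention_map(model, base, patch, class_idx)  # see earlier sketch
            heat[y:y + window, x:x + window] += cam
            count[y:y + window, x:x + window] += 1.0
    # Fusing overlapping windows filters isolated false positives/negatives
    # and removes the dependence on where a crack falls inside a patch.
    return heat / np.maximum(count, 1.0)
```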
Figure 14 demonstrates the results of a VGG19 model trained on the CRACK-CH dataset over some real world images of cracks on stone masonry walls. Results indicate the high performance and high generalization potential of the proposed approach.
The approach was also implemented in real time scenarios using cameras connected to a portable computer presenting interesting results about the real time applicability of the frameworks (Fig. 14, bottom right image).
## 4 Discussion and Concluding remarks
Taking into account the small amount of available training, validation and testing data, a typical case for Cultural Heritage applications, the CNN delivered highly accurate results in terms of image/patch classification. It is shown that even when the model is trained using data from one test site and tested over another test site, resulting accuracy is very close to 90%, similar to the accuracies achieved in the vast majority of the state of the art methods.
Considering all the tests performed, it is deduced that VGG16 and VGG19 delivered the most accurate predictions in the majority of the cases when training the models from scratch while they require some of the shortest training times. Taking into account the small number of available data for training, this success may be attributed to their small depth, 16 and 19 respectively, facilitating easier training compared to the rest of the networks which are much deeper. The Inception-ResNet-v2 network provided mid accuracy levels in transfer learning tests and cracks were localized quite well, while when training the network from scratch, it was outperformed by most of the other networks and cracks weren't localized at all. Also, as observed in Fig. 11 and Fig. 12, in both approaches and for all the test cases performed it was the most expensive in terms of training time. The MobileNetV3Small and Large networks provided mid accuracy levels too, in both training approaches and for all the test cases performed. However, they didn't manage to localize the cracks in the majority of the cases, even if the image patch was classified correctly as "Crack" or "No Crack". As expected, those two networks required the least time for training, compared to the rest of the networks under investigation. The DenseNet121, DenseNet169 and DenseNet201 networks for all the transfer learning test cases provided less accuracy compared to the aforementioned networks, while for the majority of the training from scratch test cases they resulted in higher accuracy levels compared to the Inception-ResNet-v2 network and lower accuracy levels compared to both MobileNetV3 architectures. They also proved to be the most time-expensive networks for training, following the Inception-ResNet-v2 network and the Xception network, except DenseNet121. The ResNet50 and ResNet101 networks achieved very high accuracy levels in the transfer learning approach while they achieved mid accuracy levels when training from scratch. Both networks managed to detect some of the cracks in the first approach while they mostly failed in the second. As expected, ResNet50 was trained faster. Finally, the Xception network achieved high accuracy in the transfer learning approach and mid accuracy in the training from scratch approach. It required some of the least time to be trained in both cases while crack detection was better in the first case. It is worth noticing that after training only the last layers of the models, which is a faster process, as expected, cracks are not localized in the same detail, compared to the models trained from scratch, even if the image patch is classified correctly. This can be explained by the fact that when training the models from scratch, the extracted features are describing better the characteristics of the images used for training, thus highlighting cracks where apparent. In all the above tests it is observed that the deeper networks achieved higher accuracies and better crack localization results in the transfer learning approach. This is justified by the fact that these networks require a lot of data to be trained correctly. However, VGG16 and VGG19 proved that they can deliver better accuracy and much more accurate localization results when trained from scratch, while requiring little time
Figure 14: Various results over unseen data
Figure 13: The prediction of the sliding window before (a) and after (b) fusion of the results
to achieve this. This can be attributed mainly to their small depth, which facilitates a correct training of the model with so few data available.
Results suggested the high performance of the proposed approach, considering also the small numbers of epochs required for training. When training and testing were performed between different test sites, accuracy was slightly lower compared to the one achieved when using the shuffled data; however, this was expected due to the large variations between the cracks of the two sites. Nevertheless, those results met the accuracy delivered by more complex and computationally heavy approaches, requiring a large amount of data for training.
|
2309.04010 | An explicit multi-time stepping algorithm for multi-time scale coupling
problems in SPH | Simulating physical problems involving multi-time scale coupling is
challenging due to the need of solving these multi-time scale processes
simultaneously. In response to this challenge, this paper proposed an explicit
multi-time step algorithm coupled with a solid dynamic relaxation scheme. The
explicit scheme simplifies the equation system in contrast to the implicit
scheme, while the multi-time step algorithm allows the equations of different
physical processes to be solved under different time step sizes. Furthermore,
an implicit viscous damping relaxation technique is applied to significantly
reduce computational iterations required to achieve equilibrium in the
comparatively fast solid response process. To validate the accuracy and
efficiency of the proposed algorithm, two distinct scenarios, i.e., a nonlinear
hardening bar stretching and a fluid diffusion coupled with Nafion membrane
flexure, are simulated. The results show good agreement with experimental data
and results from other numerical methods, and the simulation time is reduced
firstly by independently addressing different processes with the multi-time
step algorithm and secondly decreasing solid dynamic relaxation time through
the incorporation of damping techniques. | Xiaojing Tang, Dong Wu, Zhengtong Wang, Oskar Haidn, Xiangyu Hu | 2023-09-07T20:26:08Z | http://arxiv.org/abs/2309.04010v1 | # An explicit multi-time stepping algorithm for multi-time scale coupling problems in SPH
###### Abstract
Simulating physical problems involving multi-time scale coupling is challenging due to the need of solving these multi-time scale processes simultaneously. In response to this challenge, this paper proposed an explicit multi-time step algorithm coupled with a solid dynamic relaxation scheme. The explicit scheme simplifies the equation system in contrast to the implicit scheme, while the multi-time step algorithm allows the equations of different physical processes to be solved under different time step sizes. Furthermore, an implicit viscous damping relaxation technique is applied to significantly reduce computational iterations required to achieve equilibrium in the comparatively fast solid response process. To validate the accuracy and efficiency of the proposed algorithm, two distinct scenarios, i.e., a nonlinear hardening bar stretching and a fluid diffusion coupled with Nafion membrane flexure, are simulated. The results show good agreement with experimental data and results from other numerical methods, and the simulation time is reduced firstly by independently addressing different processes with the multi-time step algorithm and secondly decreasing solid dynamic relaxation time through the incorporation of damping techniques.
keywords: Smoothed particle hydrodynamics, Multi-time scale coupling, Multi-time step algorithm, Dynamic damping, Multi-physics problem
## 1 Introduction
Smoothed Particle Hydrodynamics (SPH), a typical mesh-free method originally introduced by Lucy [1] and Gingold and Monaghan [2] for studying astrophysical problems, has been widely applied in recent years to simulate fluid flows [3; 4; 5; 6], solid mechanics [7; 8; 9; 10; 11], and fluid-structure interaction [12; 13; 14]. Comprehensive reviews can be found in Refs. [15; 16; 17; 18; 19]. Even with wide applications, SPH has some limitations when it comes to simulating multi-scale coupling problems existing in various engineering fields, particularly those involving solid dynamic response, which is a typically fast process [20]. The disparity in the time scales of fast and slow processes presents a continuing challenge to numerical simulations [21].
To solve multi-time scale problems, either an implicit or explicit scheme can be applied. The implicit scheme allows for a larger time step in the time integration [22; 23], enabling the monolithic scheme to solve the equations for all fast and slow processes simultaneously. For instance, Zhao [24] used an implicit Newmark scheme to model the flow through a porous elastic solid, where solid dynamics and fluid diffusion occur at different time scales. Gaston [25] employed an implicit scheme to analyze the fluid, chemistry, and structure coupling behavior in a reactor, which is a common phenomenon in the engineering field. However, since the inversion of the stiffness matrix used for solving equations is required for each time step [26; 27], this approach is quite expensive concerning both computation time and memory consumption [28].
The explicit approach is more favorable for solving multi-time scale coupling problems due to its direct time integration and simple numerical formulation [29; 30; 31; 32]. Some researchers have used this approach to simulate material stretching and necking, where the load is applied during a long time period while the material's dynamic response is instant and fast [33; 34; 35]. Since the realistic load is applied in a long time scale, a long physical simulation time is expected. However, with a quite small stable time step size allowed in explicit scheme for the fast process, usually millions of time steps are required to simulate the entire process, which is very often not feasible. To reduce the overall simulation time, loading rate is usually increased artificially [34]. However, high non-realistic loading rate may lead to certain limitations and inaccuracies in the simulation results [36].
This paper presents a multi-time stepping algorithm in SPH, where a large and a small time step are chosen according to the slow and fast processes in the simulation, respectively. Two loops, i.e., an outer and an inner loop, are arranged with these two time steps for time integration. Specifically, the slow process is integrated with a large time step in the outer loop, while the fast solid dynamic process is integrated with a much smaller time step in the inner loop. Since the time step size of the fast process is small, many iterations of the solid stress relaxation may occur within one outer loop and lead to low computational efficiency. To address this issue, a dynamic relaxation method based on an implicit operator splitting scheme [37] is adopted to accelerate the convergence rate of the fast dynamic process to an elastic equilibrium state. To assess the performance and computational efficiency of the proposed algorithm, the simulations of tensile tests, including two dimensional and three dimensional cases, are firstly carried out; and then the evolution of fluid diffusion in porous media coupling with elastic deformation is simulated. The latter fluid-structure coupling process occurs in chemical reactors, e.g., in the fuel cell of a battery, where a fluid mixture diffuses through a Nafion membrane, affecting the battery performance due to the varying fluid concentration and membrane deformation. The obtained results demonstrate that the proposed algorithm performs better both in accuracy and efficiency compared to previous numerical methods.
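To make the idea concrete, the following toy Python sketch (not the SPHinXsys implementation) sub-cycles a viscously damped single-degree-of-freedom "solid" with a small time step inside an outer loop that advances a slowly growing load with a much larger time step; all parameter values are illustrative.

```python
# Toy illustration of the two-loop multi-time stepping: the slow loading is
# advanced with dt_slow in the outer loop, while the fast, viscously damped
# solid response is sub-cycled with dt_fast in the inner loop.
def multi_time_step(t_end=1.0, dt_slow=1e-2, dt_fast=1e-4,
                    k=100.0, m=1.0, c=5.0, load_rate=1.0):
    u, v, load, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        load += load_rate * dt_slow            # outer loop: slow process (loading)
        t_inner = 0.0
        while t_inner < dt_slow:               # inner loop: fast solid dynamics
            a = (load - k * u - c * v) / m     # damping drives relaxation to equilibrium
            v += a * dt_fast
            u += v * dt_fast
            t_inner += dt_fast
        t += dt_slow
    return u, load / k                         # dynamic result vs. static equilibrium

print(multi_time_step())
```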
The remainder of this paper is organized as follows. Section 2 summarizes the theories and governing equations for nonlinear hardening plastic solid mechanics and fluid-structure interaction. Section 3 describes the corresponding SPH discretization. In Section 4, the proposed multi-time stepping algorithm coupled with the dynamic relaxation is detailed. Section 5 states the physical problems, and the results obtained using the proposed algorithm are compared with those from previous methods and experiments. Finally, Section 6 presents brief concluding remarks. The source code and data needed for this numerical simulation work can be found in SPHinXsys, an
open-source multi-physics SPH library, available at [https://www.sphinxsys.org](https://www.sphinxsys.org).
## 2 Governing equations
### Total Lagrangian solid dynamics
In this section, we provide a concise introduction to solid dynamics within the framework of the total Lagrangian formulation, along with the relevant notations and symbols that will be utilized in the subsequent models. The analysis focuses on a solid body \(\mathcal{B}\), which occupies two regions: \(\mathcal{R}_{0}\) and \(\mathcal{R}\), representing the body's configurations at time \(t_{0}\) (\(t=0\)) and \(t\) respectively. In the initial configuration \(\mathcal{R}_{0}\), the position vector of a material point is represented by \(\mathbf{X}\in\mathcal{R}_{0}\), while in the current configuration, it is denoted as \(\mathbf{x}\in\mathcal{R}\). The motion of the solid body is described by the invertible mapping \(\phi\), which transforms a material point \(\mathbf{X}\) to its corresponding vector \(\mathbf{x}=\phi(\mathbf{X},t)\), as illustrated in Figure 2.1. Based on this definition, the Lagrangian velocity of a material point is defined as \(\mathbf{v}(\mathbf{X},t)=\frac{d\phi(\mathbf{X},t)}{dt}\). The deformation gradient \(\mathbf{F}\), which characterizes the deviation of a material point from its initially undeformed position to its deformed position, can be computed from the displacement vector \(\mathbf{u}=\mathbf{x}-\mathbf{X}\) using the following equation:
\[\mathbf{F}=\frac{d\mathbf{x}}{d\mathbf{X}}=\nabla^{0}\mathbf{u}+\mathbf{I}, \tag{1}\]
where \(\mathbf{I}\) is the unit matrix, and the superscript \((\bullet)^{0}\) accounts for quantities in the initial reference configuration. The corresponding Jacobian determinant term \(J=\det(\mathbf{F})\) indicates the local volume gain \(J>1\) or loss \(J<1\).
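As a concrete illustration of these kinematic quantities, the following minimal Python sketch forms \(\mathbf{F}\) from Eq. (1) and evaluates \(J=\det(\mathbf{F})\); the displacement gradient used here is an assumed, purely illustrative value.

```python
import numpy as np

# Assumed (hypothetical) displacement gradient with respect to the initial
# configuration, nabla^0 u; any small 3x3 matrix serves for illustration.
grad0_u = np.array([[0.02, 0.01, 0.00],
                    [0.00, -0.01, 0.00],
                    [0.00, 0.00, 0.005]])

F = grad0_u + np.eye(3)        # Eq. (1): F = nabla^0 u + I
J = np.linalg.det(F)           # J > 1: local volume gain, J < 1: loss
print("F =\n", F, "\nJ =", J)
```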
The governing equations of solid deformation within the total Lagrange framework are derived as
\[\begin{cases}\rho=\rho^{0}\frac{1}{J}\\ \rho^{0}\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t}=\nabla^{0}\cdot\mathbf{P}^{T} \end{cases}, \tag{2}\]
where \(\rho\) and \(\rho^{0}\) are the densities in the current configuration \(\mathcal{R}\) and the initial configuration \(\mathcal{R}_{0}\), respectively, \(\mathbf{v}\) is the velocity and \(\mathbf{P}\) is the first Piola-Kirchhoff stress tensor.
Figure 2.1: Finite deformation process on a body \(\mathcal{B}\).
Different from the Cauchy stress \(\mathbf{\sigma}\), which points to the force measured in the deformed configuration, \(\mathbf{P}\) relates to stress within the initial configuration, and the two stresses are related by
\[\mathbf{P}=J\mathbf{\sigma}\mathbf{F}^{-T}=\mathbf{\tau}\mathbf{F}^{-T}, \tag{3}\]
where \(\mathbf{\tau}\) is the Kirchhoff stress tensor, which is obtained from the constitutive relation as given in Appendix A. In addition, using a multiplicative decomposition technique [38; 39], a hardening plastic model is given in Appendix A.
### Fluid-structure interaction
For the fluid diffusion in porous media coupling with elastic deformation of the porous membrane, we propose a fluid-structure interaction model, where the fluid diffuses in the porous solid, leading to an increased fluid pressure and solid deformation.
In this model, the heterogeneous body is considered as a continuous solid medium containing uniformly distributed small voids with a homogeneous porosity \(a\). When this medium comes into contact with a fluid, fluid flows into these small pores and diffuses inside the medium due to the presence of the fluid concentration gradient, resulting in the formation of a mixture comprising solid and fluid components, as illustrated in Figure 2. To simplify this model, we adopt the methodology proposed by Zhao [24] to present a mixture momentum equation while fluid behaviors follow the diffusion law.
#### 2.2.1 Mass and momentum equations
With a porosity \(a\) and fluid saturation level \(\widetilde{a}\) (see Appendix B.1), the locally effective fluid density \(\rho^{l}\) can be expressed as
\[\rho^{l}=\rho_{0}^{l}\widetilde{a}, \tag{4}\]
where \(\rho_{0}^{l}\) is the initial density of the fluid. The density conservation for the solid body is described as
\[\rho^{s}=\rho_{0}^{s}\frac{1}{J}, \tag{5}\]
where \(\rho^{s}\) and \(\rho_{0}^{s}\) are the solid densities defined in the current configuration \(\mathcal{R}\) and the initial configuration \(\mathcal{R}_{0}\), respectively. For a porous solid partially saturated by fluid, the total linear momentum \(\mathbf{M}\) in the region \(\mathcal{R}\) is the sum of the fluid momentum and the solid momentum
\[\mathbf{M}=\rho\mathbf{v}=\rho^{l}\mathbf{v}^{l}+\rho^{s}\mathbf{v}^{s}, \tag{6}\]
Figure 2: Partially saturated porous medium.
where \(\rho\), \(\mathbf{v}\) is the total density and velocity, \(\mathbf{v}^{l}\) the velocity of fluid, \(\mathbf{v}^{s}\) the velocity of dry porous solid. Due to the difference between \(\mathbf{v}^{l}\) and \(\mathbf{v}^{s}\), the fluid flux \(\mathbf{q}\) on the element boundary \(\partial V\) can then be expressed as
\[\mathbf{q}=\rho^{l}(\mathbf{v}^{l}-\mathbf{v}^{s}). \tag{7}\]
Obviously, if there is no fluid passing through the boundary, \(\mathbf{q}=0\), the fluid mass in an element is conserved. The transfer of fluid mass and momentum between micro-scale solid constituents happens when fluid flows from regions with higher fluid saturation to those with lower saturation. Therefore, within an element \(dV\) of the mixture, the balance of linear momentum implies that the time derivative of momentum \(\mathbf{M}\) is determined by two factors. One is the stress exerting on the element and the other one is the fluid flux of linear momentum \(\mathbf{v}^{l}\otimes\mathbf{q}\) on the boundary \(\partial V\), where the symbol \(\otimes\) means an outer product of two vectors or tensors. It follows that the conservation of total linear momentum of the mixture can be expressed as
\[\frac{D\mathbf{M}}{Dt}=\nabla\cdot\mathbf{\sigma}-\nabla\cdot\left(\mathbf{v}^{l} \otimes\mathbf{q}\right), \tag{8}\]
where \(\mathbf{\sigma}\) represents the cumulative Cauchy stress in the mixture acting on the solid. \(\mathbf{\sigma}\) is determined by Cauchy stress \(\mathbf{\sigma}^{s}\) and the pressure stress due to the presence of the fluid phase \(\mathbf{\sigma}^{l}\), which is detailed in Appendix B.2.
#### 2.2.2 Fick's law
In a partially saturated solid, the fluid saturation difference leads to the motion of fluid from regions of higher fluid fraction to lower ones, and the flux follows Fick's law
\[\mathbf{q}=-K\rho^{l}\nabla\widetilde{a}, \tag{9}\]
indicating that the fluid flux is proportional to the diffusivity \(K\), the effective fluid density \(\rho^{l}\) as well as the gradient of the fluid saturation \(\widetilde{a}\). Consequently, the time derivative of fluid mass in an element \(dV\) is due to the fluid flux \(\mathbf{q}\) on the element boundary \(\partial V\), written as
\[\frac{D\rho^{l}}{Dt}=-\nabla\cdot\mathbf{q}. \tag{10}\]
## 3 SPH implementation
In SPH, the continuum is represented by a set of Lagrangian particles that carry various properties, such as mass, position, velocity, and other attributes. A variable field is approximated using a kernel function that represents the influence of neighboring particles and the mechanics of the continuum are approximated by modeling the interactions between these particles. In this section, we transform the governing equations of two previously discussed models into SPH discretization.
### SPH discretization for solid dynamics
To discretize the solid mechanics, we employ the initial undeformed configuration as the reference. First, aiming to restore first-order consistency, a correction matrix \(\mathbf{B}^{0}\)[10; 40] of particle \(a\) is adopted as
\[\mathbf{B}^{0}_{a}=\left(\sum_{b}V_{b}\left(\mathbf{r}^{0}_{b}-\mathbf{r}^{0} _{a}\right)\otimes\nabla^{0}_{a}W_{ab}\right)^{-1}, \tag{11}\]
where \(V_{b}\) represents the volume of the neighboring particle \(b\), \(\mathbf{r}_{a}^{0}\) and \(\mathbf{r}_{b}^{0}\) denote the positions of particles \(a\) and \(b\) in the reference configuration, and \(\nabla_{a}^{0}W_{ab}\) is the gradient of the kernel function given by
\[\nabla_{a}^{0}W_{ab}=\frac{\partial W\left(|\mathbf{r}_{ab}^{0}|,h\right)}{ \partial|\mathbf{r}_{ab}^{0}|}\mathbf{e}_{ab}^{0}, \tag{12}\]
where \(\mathbf{e}_{ab}^{0}\) is a unit vector pointing from particle \(a\) to \(b\). In total Lagrangian formulation, the neighborhood of particle \(a\) is defined in the initial configuration, and this set of neighboring particles remains fixed throughout the entire simulation. Additionally, \(\mathbf{B}_{a}^{0}\) is computed only once under the initial reference configuration. The momentum conservation in Eq. (2) can be approximated in the strong form as
\[\frac{\mathrm{d}\mathbf{v}_{a}}{\mathrm{d}t}=\frac{2}{\rho_{a}}\sum_{b}V_{b}^{ 0}\tilde{\mathbf{P}}_{ab}\nabla_{a}^{0}W_{ab}, \tag{13}\]
where \(\rho_{a}\) represents the density of particle \(a\), \(\tilde{\mathbf{P}}_{ab}\) is the averaged first Piola-Kirchhoff stress of the particle pair \((a,b)\), stated as
\[\tilde{\mathbf{P}}_{ab}=\frac{1}{2}\left(\mathbf{P}_{a}\mathbf{B}_{a}^{0}+ \mathbf{P}_{b}\mathbf{B}_{b}^{0}\right). \tag{14}\]
Note that the first Piola-Kirchhoff stress tensor is dependent on the deformation tensor \(\mathbf{F}\), the time derivative of which is computed from
\[\frac{d\mathbf{F}_{a}}{dt}=\left(\sum_{b}V_{b}\left(\mathbf{v}_{b}-\mathbf{v} _{a}\right)\otimes\nabla_{a}^{0}W_{ab}\right)\mathbf{B}_{a}^{0}, \tag{15}\]
where \(\mathbf{v}_{a}\) and \(\mathbf{v}_{b}\) denote the velocities of particles \(a\) and \(b\). Considering the plastic response which may exist in the solid deformation, a return mapping algorithm is used to obtain the stress-strain evolution.
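To make this discretization concrete, the following minimal Python sketch evaluates Eqs. (11)-(15) for a single particle. It is an illustration under assumptions, not SPHinXsys code: a Gaussian kernel stands in for the actual kernel, the neighbour positions, velocities, volumes and smoothing length are placeholder values, the neighbours' correction matrices are taken equal to \(\mathbf{B}^{0}_{a}\), and a constant elastic stress replaces the hardening model of Appendix A.

```python
import numpy as np

def grad_w0(r0_ab, h):
    """Kernel gradient of Eq. (12), here with a Gaussian kernel (illustrative)."""
    d = np.linalg.norm(r0_ab)
    w = np.exp(-(d / h) ** 2) / (np.pi * h ** 2)          # 2D Gaussian kernel
    return (-2.0 * d / h ** 2) * w * r0_ab / (d + 1e-12)  # dW/d|r| times e_ab

# Hypothetical reference configuration: particle 'a' at the origin and four
# fixed neighbours; volumes, velocities and the smoothing length are assumed.
h, V_b = 0.13, 0.01
r0_a, v_a, rho_a = np.zeros(2), np.zeros(2), 7800.0
nbr_r0 = [np.array(p) for p in [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]]
nbr_v  = [np.array(v) for v in [(0.0, 1e-3), (0.0, -1e-3), (1e-3, 0.0), (-1e-3, 0.0)]]

# Correction matrix B0_a, Eq. (11), computed once in the initial configuration.
A = sum(V_b * np.outer(r0_b - r0_a, grad_w0(r0_b - r0_a, h)) for r0_b in nbr_r0)
B0_a = np.linalg.inv(A)

# Rate of the deformation gradient, Eq. (15).
dF_dt = sum(V_b * np.outer(v_b - v_a, grad_w0(r0_b - r0_a, h))
            for r0_b, v_b in zip(nbr_r0, nbr_v)) @ B0_a

# Acceleration, Eq. (13), with the averaged stress of Eq. (14); a constant
# placeholder stress is used for both particles of each pair for brevity.
P = np.diag([1.0e6, 0.5e6])          # assumed first Piola-Kirchhoff stress [Pa]
P_tilde = 0.5 * (P @ B0_a + P @ B0_a)
dv_dt = (2.0 / rho_a) * sum(V_b * P_tilde @ grad_w0(r0_b - r0_a, h) for r0_b in nbr_r0)
print("dF/dt =\n", dF_dt, "\ndv/dt =", dv_dt)
```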
### SPH discretization for fluid-structure interaction
In the fluid-structure interaction model discretization, each particle carries the location \(\mathbf{x}_{n}=\phi(\mathbf{X},t_{n})\) at time \(t_{n}\), along with an initial representative volume \(V^{0}\) that partitions the initial domain of the macroscopic solid. The deformation gradient \(\mathbf{F}_{n}\) of the solid phase is stored to update the solid current volume \(V_{n}\) and density \(\rho_{n}^{s}\). Additionally, the fluid mass \(m_{n}^{l}\), saturation \(\widetilde{a}_{n}\), and density-weighted velocity of the fluid relative to solid \(\mathbf{q}_{n}\) are stored. The fluid mass equation Eq. (10) of particle \(i\) is discretized as
\[\frac{\mathrm{D}m_{i}^{l}}{\mathrm{D}t}=2V_{i}\sum_{j}\frac{m_{j}}{\rho_{j}}( \mathbf{q}_{i}-\mathbf{q}_{j})\nabla_{i}W_{ij}. \tag{16}\]
Note that with Eq. (1), we have the relation between the gradient kernel functions in the total Lagrangian and updated Lagrangian frameworks, \(\nabla_{i}W_{ij}=\mathbf{F}^{-1}\nabla_{i}^{0}W_{ij}\). Once the fluid mass is updated, the locally effective fluid density \(\rho^{l}\) is obtained subsequently. According to Eq. (39) and Eq. (9), we update the fluid saturation \(\widetilde{a}\) and the fluid flux \(\mathbf{q}\) in the particle form
\[\mathbf{q}=-K\rho^{l}V_{i}\sum_{j}\frac{m_{j}}{\rho_{j}}(\widetilde{a}_{i}- \widetilde{a}_{j})\nabla_{i}W_{ij}. \tag{17}\]
With the fluid flux and the stress in hand, we obtain discrete formulations for the momentum balance equation Eq. (8) as
\[\frac{D\mathbf{M}_{i}}{Dt}=2\sum_{j}V_{j}(\mathbf{T}_{i}+\mathbf{T}_{j})\nabla_{ i}W_{ij}-2\sum_{j}V_{j}(\mathbf{v}_{i}^{l}\otimes\mathbf{q}_{i}+\mathbf{v}_{j}^{l} \otimes\mathbf{q}_{j})\nabla_{i}W_{ij}, \tag{18}\]
where \(\mathbf{T}_{i}\) and \(\mathbf{T}_{j}\) are the stress tensors between particles \(i\) and \(j\). We then compute the updated solid velocity \(\mathbf{v}^{s}\) using the total momentum definition Eq. (6), where the total density of the mixture is the sum of the solid and fluid densities \(\rho=\rho^{s}+\rho^{l}\), written as
\[\mathbf{v}^{s}=\frac{\mathbf{M}-\mathbf{q}}{\rho}=\frac{\mathbf{M}-\mathbf{q} }{\rho^{s}+\rho^{l}}. \tag{19}\]
Subsequently, the fluid velocity \(\mathbf{v}^{l}\) is calculated using Eq. (7) as
\[\mathbf{v}^{l}=\mathbf{v}^{s}+\frac{\mathbf{q} }{\rho^{l}}. \tag{20}\]
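A compact sketch of one pass through Eqs. (16)-(20) for a single particle \(i\) is given below. It is a minimal illustration under assumptions: a Gaussian kernel is used for the updated-Lagrangian gradient, the neighbour data, time step and diffusivity are placeholders, the locally effective fluid density is taken as \(m^{l}_{i}/V_{i}\), and the mixture stress \(\mathbf{T}\) of Appendix B is replaced by a constant isotropic pressure.

```python
import numpy as np

def grad_w(r_ij, h):
    """Updated-Lagrangian kernel gradient (Gaussian kernel, illustrative)."""
    d = np.linalg.norm(r_ij)
    w = np.exp(-(d / h) ** 2) / (np.pi * h ** 2)
    return (-2.0 * d / h ** 2) * w * r_ij / (d + 1e-12)

# Assumed state of particle i and two neighbours j (2D, placeholder values).
h, dt, K, rho0_l = 0.02, 1e-4, 1.0e-10, 1000.0
i = dict(r=np.zeros(2), V=1e-4, m_l=0.03, q=np.zeros(2),
         rho_s=2000.0, M=np.zeros(2), v_l=np.zeros(2))
nbrs = [dict(r=np.array([0.02, 0.0]), m=0.2, rho=2000.0, a_tilde=0.1,
             q=np.zeros(2), v_l=np.zeros(2), V=1e-4),
        dict(r=np.array([0.0, 0.02]), m=0.2, rho=2000.0, a_tilde=0.5,
             q=np.zeros(2), v_l=np.zeros(2), V=1e-4)]

# Eq. (16): rate of the fluid mass from the relative flux over the neighbourhood.
dm_dt = 2.0 * i["V"] * sum(n["m"] / n["rho"] * np.dot(i["q"] - n["q"],
                           grad_w(i["r"] - n["r"], h)) for n in nbrs)
i["m_l"] += dm_dt * dt
rho_l = i["m_l"] / i["V"]          # locally effective fluid density (assumed m_l/V)
a_tilde_i = rho_l / rho0_l         # saturation from Eq. (4)

# Eq. (17): Fick flux driven by the saturation difference.
i["q"] = -K * rho_l * i["V"] * sum(n["m"] / n["rho"] * (a_tilde_i - n["a_tilde"])
                                   * grad_w(i["r"] - n["r"], h) for n in nbrs)

# Eq. (18): momentum rate; the mixture stress T is replaced by a scalar pressure.
T_i = T_j = -1.0e3 * np.eye(2)     # placeholder stress [Pa]
dM_dt = (2.0 * sum(n["V"] * (T_i + T_j) @ grad_w(i["r"] - n["r"], h) for n in nbrs)
         - 2.0 * sum(n["V"] * (np.outer(i["v_l"], i["q"]) + np.outer(n["v_l"], n["q"]))
                     @ grad_w(i["r"] - n["r"], h) for n in nbrs))
i["M"] += dM_dt * dt

# Eqs. (19) and (7): split the total momentum into solid and fluid velocities.
v_s = (i["M"] - i["q"]) / (i["rho_s"] + rho_l)
v_l = v_s + i["q"] / rho_l
print("v_s =", v_s, " v_l =", v_l)
```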
## 4 Multi-time step algorithm
In multi-time scale coupling involving solid dynamic problems, different time scales simultaneously exist. A multi-time step algorithm using an explicit scheme to match processes of different time scales is introduced in this section. In this paper, the slow process, e.g. fluid diffusion, is integrated with larger time step sizes, while the fast solid dynamics is integrated with smaller ones. With a small time step size, the solid dynamics evolves to a quasi-equilibrium state to update the velocity, position and other solid information. Further, in order to reduce the stress relaxation time of the solid dynamics, a damping scheme is applied to accelerate the equilibrium process. For the following numerical simulations, stretch loading or fluid diffusion is performed with a larger time step size, while the dynamic stress relaxation coupled with a damping term is executed with a smaller time step size.
### Multi-time criteria
Since the explicit integration operator is conditionally stable, a time step criterion \(\Delta t_{s}\) in solid simulation is required when using explicit scheme, stated as
\[\Delta t_{s}=0.6\min\left(\frac{h}{c_{s}+|\mathbf{v}_{s}|_{max}},\sqrt{\frac{h }{|\frac{\mathrm{d}\mathbf{v}_{s}}{\mathrm{d}t}|_{max}}}\right), \tag{21}\]
where the artificial speed of sound of a solid structure \(c_{s}=\sqrt{K/\rho_{s}}\). In multi-time scale coupling problems, considering that the solid dynamic relaxation process is comparatively fast, \(\Delta t_{s}\) is usually limited under a small value. In comparison, the time step for internal diffusion evolution or stretching is allowed to be much larger. For the tensile test simulation, we divide the stretching process into \(N_{S}\) steps and the time step is
\[\Delta t_{l}=\frac{T_{t}}{N_{S}}, \tag{22}\]
where \(T_{t}\) is the entire process time of the tensile test and \(\Delta t_{l}\) is accordingly the time step for stretch loading. Similarly, for the fluid-structure interaction, according to Fick's law, the maximum time step allowed for explicit time stepping is characterized as [41]
\[\Delta t_{d}=0.5\frac{h^{2}}{D}, \tag{23}\]
stating that the time step is mainly limited by the diffusivity constant \(D\) and the kernel smoothing length \(h\). To address the difference between the time step sizes of these different time scale processes, we present a multi-time step algorithm to simulate these processes respectively with an iterative scheme.
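The three step-size estimates of Eqs. (21)-(23) are straightforward to evaluate; the short sketch below does so for assumed material and discretization values (the bulk modulus, density, smoothing length and diffusivity here are placeholders, not the settings of the later test cases).

```python
import numpy as np

# Assumed discretization and material data (placeholders).
h        = 3.3e-4        # smoothing length [m]
K_bulk   = 160.0e9       # bulk modulus [Pa]
rho_s    = 7800.0        # solid density [kg/m^3]
v_max    = 1.0e-3        # current maximum particle speed [m/s]
acc_max  = 10.0          # current maximum acceleration magnitude [m/s^2]
T_t, N_S = 100.0, 10000  # tensile-test duration [s] and number of load steps
D        = 1.0e-10       # diffusivity [m^2/s]

c_s  = np.sqrt(K_bulk / rho_s)                             # artificial sound speed
dt_s = 0.6 * min(h / (c_s + v_max), np.sqrt(h / acc_max))  # Eq. (21): solid step
dt_l = T_t / N_S                                           # Eq. (22): loading step
dt_d = 0.5 * h ** 2 / D                                    # Eq. (23): diffusion step
print(f"dt_s = {dt_s:.2e} s, dt_l = {dt_l:.2e} s, dt_d = {dt_d:.2e} s")
```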
### Iterative scheme
Figure 4.1 shows the iterative scheme of the proposed multi-time step algorithm schematically. It can be seen that this algorithm consists of two loops, where the outer loop indicates that the entire dynamic progress is controlled by the prescribed displacements or diffusion relaxation, which are executed incrementally with a subscript \(l\) denoting each increment. The inner loop describes the solid dynamics evolution with a subscript \(k\) signifying each stress relaxation step. The loading or diffusion criterion \(\Delta t_{l}\) or \(\Delta t_{d}\) controls the external force exerting or the fluid diffusion process, and \(\Delta t_{s}\) determines the frequency of solid stress relaxation. However, within one external loading time step \(\Delta t_{l}\) or diffusion time step \(\Delta t_{d}\), the time integration of the structure should be computed \(k_{0}=[\frac{\Delta t_{l/d}}{\Delta t_{s}}]+1\) times. With a limited \(\Delta t_{s}\) and much larger \(\Delta t_{l}\) and \(\Delta t_{d}\), \(k_{0}\) is supposed to be very large and the computation of solid dynamics will be trapped into a meaningless iteration, increasing the unnecessary computational cost.
Figure 4.1: Flowchart of the iterative scheme in multi-time step algorithm.
Once the solid dynamics achieves the static state, the inner loop can be finished to begin another outer loop. Therefore, in order to save computation time, the inner loop is executed with a damping term to dissipate the kinetic energy and accelerate the relaxation of the transient response. The solid governing equations with extra damping can be solved a small number of times \(k\) until the kinetic energy is reduced to a sufficiently small value \(E_{k}\). Specific criterion values of the kinetic energy are given in different cases. After the equilibrium state of the solid deformation is achieved in the inner steps, a new outer step begins and this procedure is performed once again until the physical computation time ends.
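The outer/inner structure of Figure 4.1 can be summarized in the following Python skeleton. The routines `advance_slow_process`, `relax_solid_once_with_damping` and `kinetic_energy` stand for the loading/diffusion update, one damped stress-relaxation step and the monitored energy; they are minimal stubs here, not library functions.

```python
# Placeholder physics hooks; in a real code these would be the SPH operators.
def advance_slow_process(state, dt):           # stretch loading or fluid diffusion
    state["load"] += dt

def relax_solid_once_with_damping(state, dt):  # one damped stress-relaxation step
    state["E_k"] *= 0.5                        # damping drains kinetic energy

def kinetic_energy(state):
    return state["E_k"]

def multi_time_step_run(state, t_end, dt_outer, dt_inner, E_ref, criterion=0.005):
    """Skeleton of the iterative scheme of Figure 4.1: one slow (outer) update
    per dt_outer, then inner damped relaxation until E_k < criterion * E_ref."""
    t = 0.0
    while t < t_end:
        advance_slow_process(state, dt_outer)
        state["E_k"] = 1.0                     # loading re-excites the solid (toy model)
        while kinetic_energy(state) >= criterion * E_ref:
            relax_solid_once_with_damping(state, dt_inner)
        t += dt_outer
    return state

print(multi_time_step_run({"load": 0.0, "E_k": 0.0}, t_end=1.0,
                          dt_outer=0.01, dt_inner=1e-6, E_ref=1.0))
```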
### Damping scheme
As we mentioned before, obtaining equilibrium for a dynamic system is excessively time-consuming in SPH method with explicit time-stepping. To address this issue, we apply a damping term into the stress relaxation to dissipate the extra kinetic energy inside the system and accelerate the convergence of stress relaxation process. Following Zhu et al. work [37], a viscous damping term \(\mathbf{f}^{v}\) is added in the solid momentum equation as
\[\frac{d\mathbf{v}}{dt}=\mathbf{f}^{s}+\mathbf{g}+\mathbf{f}^{v}, \tag{24}\]
where \(\mathbf{f}^{s}\) and \(\mathbf{g}\) represent the surface and body forces, and the added damping term \(\mathbf{f}^{v}\) can be discretized in the total Lagrangian form as
\[\mathbf{f}^{v}_{a}=\frac{\eta}{\rho_{a}}\nabla^{2}_{a}\mathbf{v}=\frac{2\eta} {m_{a}}\sum_{b}V_{a}V_{b}\mathbf{v}_{ab}\nabla^{0}_{a}W_{ab}, \tag{25}\]
where \(\eta\) is the dynamic viscosity, given separately in different cases; usually it depends on the characteristic length scale of the problem and the material parameters. \(\mathbf{v}_{ab}=\mathbf{v}_{a}-\mathbf{v}_{b}\) denotes the velocity difference between a particle pair \((a,b)\). This viscous force can reduce the system oscillation caused by a large velocity gradient and eliminates the extra kinetic energy. Therefore, the solid stress is relaxed much faster to an equilibrium state where the kinetic energy decreases below a criterion value. Also, a pairwise splitting scheme is adopted to update the velocity implicitly and locally, keeping the conservation of momentum in each particle pair. More detailed information can be found in Zhu's work [37].
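For illustration, a minimal sketch of the damping force of Eq. (25) for a single particle is given below. It reads the kernel term as the scalar derivative \(\partial W/\partial|\mathbf{r}_{ab}^{0}|\), as is common for SPH Laplacian-type operators, and all numerical values (kernel, mass, volumes, velocities) are assumed placeholders.

```python
import numpy as np

# Gaussian-kernel derivative dW/d|r| (illustrative choice of kernel).
def dw_dr(d, h):
    return (-2.0 * d / h ** 2) * np.exp(-(d / h) ** 2) / (np.pi * h ** 2)

eta, h = 1.0e4, 3.3e-4                     # damping ratio and smoothing length
m_a, V = 0.5, 6.6e-8                       # particle mass and volume (assumed)
v_a = np.array([1.0e-3, 0.0])
nbrs = [(np.array([2.6e-4, 0.0]), np.array([0.5e-3, 0.0])),
        (np.array([-2.6e-4, 0.0]), np.array([1.5e-3, 0.0]))]   # (r0_ab, v_b)

# Eq. (25): pairwise viscous damping force on particle a.
f_v = (2.0 * eta / m_a) * sum(V * V * (v_a - v_b) * dw_dr(np.linalg.norm(r_ab), h)
                              for r_ab, v_b in nbrs)
print(f_v)   # acts against the relative motion, draining kinetic energy
```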
## 5 Numerical examples
In this section, several tests including the stretching-necking and the fluid diffusion coupled solid deformation in two and three dimensions, are simulated using the present method to show its accuracy and efficiency.
### Necking of a two-dimensional bar
The standard tensile necking test simulation has been previously studied in several papers [42; 43; 44; 38] with experimental and numerical results to compare against. With a length of 53.334 mm and a width of 12.826 mm, the test sample is stretched from the surface under an increasing (uniaxial) load. A reduction in the width and thickness occurs, consistent with the elongation of the specimen. A slight imperfection of this sample
(1.8% reduction) is imposed initially in the center part as shown in Figure 5.1 to trigger the necking phenomenon. The specimen exhibits an elastic response depicted by the Neo-Hookean law and a plastic response by the nonlinear isotropic hardening law. The material parameters are given in Table 5.1. A total stretching of 10 mm is realized via symmetric displacement boundary conditions. Here, \(dp=PH/50\)=0.25652 mm. Three layers of particles are imposed with the aforementioned boundary condition. Consistent with the experimental time of around 2 minutes, the physical time in this simulation is set to \(t=100\) s; with \(N_{S}=10000\) stretching steps, the corresponding velocity is \(v=0.5\times 10^{-4}\) m/s. This is different from that in the reference papers, where the velocity is usually increased to about 1 m/s to reduce the physical time to \(1.5\times 10^{-3}\) s. After each step of stretch loading, stress relaxation coupled with damping is performed. The damping ratio is set to an empirical value of \(\eta=1.0\times 10^{4}\) based on the work of Zhu [37].
Figure 5.2 shows the deformation evolution colored by von Mises strain at different time instants. A clear necking pattern is observed in the center of the specimen, which is consistent with that observed in both experimental and other numerical works [42, 43, 44]. The specimen undergoes three distinct stages: elastic strain, followed by uniform plastic strain, and finally necking strain. Figure 5.3 plots the radius evolution of the central part where necking occurs as a function of the imposed stretching displacement. It is compared with the results from the reference of Elguedj and Hughes [44], where different mesh discretizations and element types (Q1, mixed Q1/P0, etc.) are used to model this test. As time progresses and the sample elongates, the radius displacement of the central part increases linearly, while after necking occurs, it experiences a rapid increase. Figure 5.4 depicts the evolution of the reaction force as time progresses. After a short elastic response, represented by the initial straight line, the specimen enters the stage of uniform plastic deformation with a smooth increase of the reaction force. During this stage, plastic deformation spreads slowly and shows a homogeneous state throughout the specimen. Eventually, when the boundary displacement reaches a certain value, necking occurs in the central part, and the reaction force reaches its
\begin{table}
\begin{tabular}{c c} \hline \hline Parameters & Value \\ \hline Shear modulus & 80.1938 GPa \\ Bulk modulus & 164.21 GPa \\ Initial flow stress & 450 MPa \\ Saturation flow stress & 715 MPa \\ Saturation exponent & 16.93 \\ Linear hardening coefficient & 129.24 MPa \\ \hline \hline \end{tabular}
\end{table}
Table 5.1: Necking test simulation: physical material parameters.
Figure 5.1: 2D tensile necking: geometry and initial and boundary condition setup.
Figure 5.2: 2D tensile necking: the deformation colored by von Mises strain at different time instants.
peak value. Subsequently, the deformation changes to a mode where the plastic effect is concentrated in the central zone, resulting in a decreasing reaction force, which is more obvious in the following three dimensional case.
To determine when equilibrium is achieved, we monitor the kinetic energy \(E_{k}\) until it is damped below a threshold value derived from the elastic energy \(E_{e}\). Here, \(E_{e}\) is calculated using the formula \(E_{e}=\frac{1}{2}F\Delta x\), where \(F\) is the load force of 8000 N deduced from Figure 5.4, and \(\Delta x\) is the stretching length of 10 mm. To investigate the effect of the kinetic energy threshold on the simulation results, we conducted a series of stretching simulations with varying criteria. Figure 5.5 plots the variation of the radius displacement and reaction force for different kinetic energy criteria. Initially, we chose a larger criterion value of \(E_{k}=5\%E_{e}\) and gradually decreased the criterion. The results reveal that when \(E_{k}\) is set to \(5\%E_{e}\), neither the radius displacement nor the loading force evolution is smooth enough, indicating that equilibrium is not achieved. This suggests that \(5\%E_{e}\) is too large as a criterion value. On the other hand, with too small criterion values, unnecessary calculation steps are performed, increasing computation time. The results demonstrate that for this 2D case, the appropriate kinetic energy criterion value is \(0.5\%E_{e}\).
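A quick arithmetic check of the reference energy, using only the values quoted above:

```python
F, dx = 8000.0, 10.0e-3      # load force [N] and total stretch [m] from the text
E_e = 0.5 * F * dx           # reference elastic energy: 40 J
print(E_e, 0.005 * E_e)      # the 0.5% criterion then corresponds to 0.2 J
```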
During the simulation, the evolution of the kinetic energy after one stretching at four different time instants, as evaluated by the elastic energy \(E_{e}\), is shown in Figure 5.6. As expected, due to the stretching force, there is a kinetic energy fluctuation. After each stretching event, the kinetic energy first increases, followed by a decrease to a certain criterion value of \(0.5\%E_{e}\), which is due to the damping effects. Throughout the simulation process, stress relaxation occurs with viscous damping immediately
Figure 5.3: 2D tensile necking: the evolution of the radial displacement as a function of the imposed vertical displacement of the central part.
Figure 5.4: 2D tensile necking: the evolution of the reaction force versus the imposed vertical displacement.
Figure 5.5: 2D tensile necking: radius displacement (a) and the loading force (b) convergence with different kinetic energy criteria.
after each stretching. The relative kinetic energy at the end of each stretching step approaches \(0.5\%E_{e}\), showing that the equilibrium is achieved.
With a physical time in simulation \(t=100\)s, due to the time step size limitation in explicit scheme, the performed stretching times \(N_{S}\) and stress relaxation times \(N_{s}\) are supposed to be \(N_{S}=N_{s}=t/\Delta t_{s}=2.58\times 10^{9}\). With this multi-time criteria algorithm, we firstly decrease the number of stretching time steps from \(2.58\times 10^{9}\) to \(N_{S}=1.0\times 10^{4}\). Secondly, we decrease the stress relaxation times from \(2.58\times 10^{9}\) to \(N_{s}=3.26\times 10^{5}\) by coupling the damping term to accelerate the equilibrium obtaining. Table 5.2 lists the stress relaxation iterations performed in straightforward and multi-time step algorithms respectively and gives the quantitative efficiency of the present algorithm compared against the straightforward one in terms of stretching \(N_{S}\) and stress relaxation iterations \(N_{s}\) with the same total particle number \(N_{p}\). It is obvious that the proposed algorithm yields a drastic reduction in computation time.
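The iteration counts above can be roughly checked from Eq. (21). The sketch below assumes a steel-like density and a smoothing length \(h=1.3\,dp\); both are assumptions made only for this estimate, since Table 5.1 does not list them.

```python
import numpy as np

# Rough order-of-magnitude check of the iteration counts above. The solid
# density (steel-like) and h = 1.3*dp are assumptions for this estimate only.
rho_s, K_bulk = 7850.0, 164.21e9
dp, t_total   = 0.25652e-3, 100.0
h    = 1.3 * dp
c_s  = np.sqrt(K_bulk / rho_s)
dt_s = 0.6 * h / c_s                          # Eq. (21) with velocity terms neglected
print(f"dt_s ~ {dt_s:.1e} s -> ~{t_total / dt_s:.1e} relaxation steps without "
      f"the multi-time step algorithm (same order as 2.58e9 quoted above)")
```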
### Necking of a three-dimensional bar
Further, a three-dimensional necking analysis of a cylindrical bar is carried out, which has been studied by Simo and Armero [38; 45], de Souza Neto et al. [42], Elguedj and Hughes [44]. The same geometry of radius 6.413 mm and length 53.334 mm with a slight reduction (1.8%) in the center of the bar as in the previous 2D case is considered. Loading is imposed using displacement control, with a total vertical displacement of 7 mm applied on both the top and bottom surface of the bar. The same material
\begin{table}
\begin{tabular}{c c c c c} \hline algorithm & \(N_{p}\) & \(N_{S}\) & \(N_{s}\) & \(N_{damping}\) \\ \hline straightforward algorithm & 10788 & \(2.58\times 10^{9}\) & \(2.58\times 10^{9}\) & - \\ multi-time step algorithm & 10788 & \(1.0\times 10^{4}\) & \(3.26\times 10^{5}\) & \(3.26\times 10^{5}\) \\ \hline \end{tabular}
\end{table}
Table 5.2: 2D tensile necking: quantitative validation of the efficiency of this multi-time step algorithm.
Figure 5.6: 2D tensile necking: evolution of kinetic energy evaluated by the elastic energy after one stretching at different time.
properties in Table 5.1 and the same elastic-plastic response as applied in the previous two-dimensional case are employed herein. In this work, the initial particle spacing is \(dp=0.3\) mm with a total particle number of approximately \(N_{p}=2.5\times 10^{5}\). With physical time \(t=100\) s and stretching steps \(N_{S}=10000\), the corresponding velocity is \(0.7\times 10^{-4}\) m/s, which allows the problem to be simulated at a realistic stretching rate. The damping ratio used here is \(\eta=1.0\times 10^{4}\).
Contour plots of the von Mises strain at different time instants from different views are shown in Figures 5.7-5.9. The last plots depict the deformed shape of the specimen at the final stage of the simulation, indicating the occurrence of necking in the center of the specimen. Based on these figures, we can deduce the deformation evolution of this specimen: initially, the boundary conditions enabled the specimen to maintain a uniform elastic response in the short stage of loading history; subsequently, in the post-peak regime, a diffuse necking mode emerged, which eventually led to the formation of shear bands at high strain levels. These bands accumulated plastic deformations, ultimately leading to the final necking and even failure of the specimen. The evolution of this pattern is well reproduced by the force and deformation data presented in Figures
Figure 5.7: 3D tensile necking: the deformation colored by von Mises strain at different time instants (top side view).
Figure 5.8: 3D tensile necking: the deformation colored by von Mises strain at different time instants (front view with half the specimen).
5.10 and 5.11, which agrees well with experimental findings.
Figure 5.9: 3D tensile necking: the deformation colored by von Mises strain at different time instants (top view with quarter the specimen).
Figure 5.10: 3D tensile necking: the evolution of radial displacement of the central part compared with the reference [42, 44, 49].
### Two-dimensional fluid-structure interaction
In this section, we perform a two-dimensional simulation of fluid diffusion coupled with porous solid deformation, with the model described in Section 2.2, to verify the efficiency of the presented method. As Figure 5.12 shows, a thin porous beam with a length of \(L=10.0\) mm and width of \(W=0.125\) mm is considered, with the left and right sides constrained to prevent any curling or movement. The simulation starts with a fluid droplet contacting the center part of the beam over a length of \(0.3L\), and this contact continues for 10 seconds while the total physical time is 100 seconds. Given the thin nature of the beam, we assume all pores in the upper half part are filled with fluid initially. As we stated before, the relationship between the fluid saturation \(\widetilde{a}\) and the solid porosity \(a\) is \(0\leq\widetilde{a}\leq a<1\). For this 2D case and the 3D case discussed later, we assume a solid porosity of \(a=0.4\), meaning that the fluid saturation \(\widetilde{a}\) in the central part (\(0.5W\times 0.3L\)) is constrained to \(\widetilde{a}=a=0.4\) for the initial 10 seconds, while in other regions \(\widetilde{a}_{0}=0.0\).
In accordance with the experimental setup, the solid material is considered as a porous and elastic Nafion membrane, with water serving as the fluid. The physical properties and material parameters of this membrane are listed in Table 5.4. The pressure coefficient C has been calibrated to fit the experimentally measured flexure curves, while other parameters are obtained from previous research papers [50; 51]. In the simulation, eight particles are placed in the vertical direction, with a particle
Figure 5.11: 3D tensile necking: the overall evolution of the reaction force versus the imposed vertical displacement compared with the reference [42; 44; 49].
Figure 5.12: 2D fluid-structure interaction: physical configuration of the thin porous beam.
spacing of \(dp_{y}=W/8=1.5625\times 10^{-2}\) mm. However, due to the high aspect ratio of the beam, using the same particle spacing \(dp\) in the horizontal \(x\) and vertical \(y\) directions would require a large number of particles, thus increasing the computation time. To address this issue, an anisotropic kernel algorithm is employed, with an anisotropic ratio of 4.0, meaning \(dp_{x}=4dp_{y}=0.0625\) mm. In this simulation, an empirical damping ratio of \(\eta=1.0\times 10^{3}\) in the damping term is utilized.
With the conditions given above, the simulation produces a deformed configuration colored by fluid saturation, as shown in Figure 5.13. Initially, the presence of a water droplet in the upper central region generates a fluid pressure, as explained in Eq. 43, leading to a localized bending in the central region. As time progresses, the saturation difference drives water to diffuse continuously, and the total water amount within the porous solid increases, causing a rising flexure. This is also depicted in Figure 5.14, which records the vertical position \(y\) versus the horizontal position \(x\) of the beam at different time instants. After the contact finishes, no more water is added into the beam, and the central water flows slowly into the side areas. Clearly, the fluid saturation shows a smooth transition from the center to the surrounding area in Figure 5.13. Accordingly, a more uniform pressure distribution develops, resulting in a smoother flexure of the beam in the later period, as shown in Figure 5.14.
For determining the density kinetic energy criterion \(E_{k}\), we use the pressure from water, \(p^{l}\), stated in Eq. 43, as the reference, since the fluid pressure induces the beam swelling. To evaluate the effect of the relative density kinetic energy threshold on the simulation results, a series of simulations are conducted using various criteria \(E_{k}\). The time evolution of the bending amplitude with different kinetic energy criteria is presented in Figure 5.15. With a relatively large criterion value of \(E_{k}=5\%p^{l}\), it is observed that the equilibrium state is not achieved: the energy is not fully eliminated and the resulting deformation is relatively light. On the other hand, using a very small criterion value leads to unnecessary calculation steps, increasing computation time. Therefore, it can be concluded that the appropriate density kinetic energy criterion value for this 2D case is \(0.05\%p^{l}\).
Referring to Figure 5.16, the evolution of the density kinetic energy within the diffusion period at \(t=20\) s, evaluated by the water pressure \(p^{l}\), is presented. Due to the water pressure, the density kinetic energy first reaches a peak after one diffusion step, followed by a decrease to the criterion value of \(0.05\%\)\(p^{l}\) set before, which is attributed to the damping effects. Throughout the simulation process, the stress relaxation takes place, accompanied by viscous damping, immediately after each diffusion relaxation event. The relative density kinetic energy at the end of each diffusion step approaches \(0.05\%p^{l}\), indicating that the velocity almost vanishes. This signifies that equilibrium is achieved at the end of each diffusion time step.
The efficiency of the proposed approach is demonstrated through Table 5.5, which presents a quantitative comparison of the algorithm against the straightforward approach in terms of diffusion and stress relaxation iterations \(N_{D}\), \(N_{s}\) with a total particle
\begin{table}
\begin{tabular}{c c c c c c} \hline Parameters & \(\rho\) (kg/m\({}^{3}\)) & K (m\({}^{2}\)/s) & Pressure coefficient C (Pa) & Young's modulus (Pa) & Poisson ratio \\ \hline Value & 2000 & \(1.0\times 10^{-10}\) & \(3.0\times 10^{6}\) & \(8.242\times 10^{6}\) & \(0.2631\) \\ \hline \end{tabular}
\end{table}
Table 5.4: Fluid-structure interaction: physical material parameters of the Nafion film. Data estimated from Motupally and Goswami [50, 51].
Figure 5.13: 2D fluid-structure interaction: the deformation colored by fluid saturation at different time instants.
Figure 5.14: 2D fluid-structure interaction: bending amplitude of the beam at different time instants.
Figure 5.15: 2D fluid-structure interaction: bending amplitude convergence with different density kinetic energy criteria.
number \(N_{p}\). The results reveal a great reduction in computation iterations, thus demonstrating the significant improvement in efficiency achieved by the proposed approach.
### Three-dimensional fluid-structure interaction
Next, we consider fluid diffusion coupled with swelling in a three-dimensional film, specifically the diffusion of water within a porous Nafion membrane. This system has been previously studied numerically by Zhao [24] and experimentally by Goswami [51]. This reference thin porous body is in the form of a polymer film with an x-y plane of dimensions \(L_{x}=10.0\) mm, \(L_{y}=10.0\) mm and a height of \(L_{z}=0.125\) mm. The four boundary sides are constrained to prevent any curling or movement. The physical parameters are taken to be the same as those listed in Table 5.4. The initial conditions are similar to those used in the two-dimensional case. The central square part of the membrane in contact with water occupies a region of dimensions \(0.3L_{x}\times 0.3L_{y}\times 0.5L_{z}\), and this contact lasts for 450 seconds, while the total physical time is 2500 seconds. No fluid is allowed to diffuse out from the membrane. The fluid saturation \(\widetilde{a}\) in the central square part is constrained to \(\widetilde{a}=a=0.4\) for the initial 450 seconds, while in other regions \(\widetilde{a}_{0}=0.0\). Similar to the previous two-dimensional case,
Figure 5.16: 2D fluid-structure interaction: the density kinetic energy variation within the diffusion period at \(t=20\) s, evaluated by the water pressure \(p^{l}\).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline algorithm & \(N_{p}\) & \(N_{D}\) & \(N_{s}\) & \(N_{d}\) \\ \hline straightforward algorithm & 1336 & \(1.58\times 10^{7}\) & \(1.58\times 10^{7}\) & - \\ \hline multi-time step algorithm & 1336 & 125 & \(2.76\times 10^{5}\) & \(2.76\times 10^{5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5.5: 2D fluid-structure interaction: quantitative validation of the efficiency of this multi-time step algorithm.
an anisotropic kernel algorithm is used to reduce the total particle number involved in this membrane simulation. Specifically, 8 particles are set in the vertical \(z\) direction, meaning that the particle spacing is \(dp_{z}=L_{z}/8=1.5625\times 10^{-2}\) mm. Here, the anisotropic ratio is 8.0, meaning \(dp_{x}=dp_{y}=8dp_{z}=0.125\) mm. In the stress relaxation process of the simulation, the empirical damping ratio is set to \(\eta=1.0\times 10^{4}\). In terms of the convergence study of the density kinetic energy criteria, using the same method as in the 2D case, the 3D case has a converged criterion value of \(E_{k}=0.1\%\)\(p^{l}\). In order to provide a more accurate representation of the experiment, the evaporation process is taken into consideration, i.e., water is lost as time progresses. Deformation flexure occurs during the initial period, and later, as the fluid mass is lost from the membrane, the film eventually returns to its original shape.
Figure 5.17 shows the membrane deformation colored by water saturation at different time instants. In the first 450 seconds, the water amount continues to increase as time progresses, leading to a rising flexure as depicted in Figure 5.18, which records the time history of the height \(z\) of the central point. Once the contact period finishes, no further water is added into the membrane, and the central water flows slowly into the side areas. At the same time, water evaporates from the membrane, resulting in a rapid decrease of the water pressure and a corresponding decrease of the flexure, as shown by the blue line in Figure 5.18 beyond 450 seconds. Figure 5.18 also includes the corresponding data points measured experimentally by Goswami [51] and results from other numerical models for the swelling degree of the very center point versus different time instants. Clearly, the present numerical simulation results exhibit good agreement with the experimental results in terms of the deformation amplitude pattern, reproducing the increasing flexure during the water contact period and the subsequent decrease after the contact finishes, consistent with the saturation variation.
Drawing from the previous discussion, the optimal large outer time step is determined by the diffusion constant and the smoothing length, while the small inner time step is dictated by the material properties of the solid. Ideally, the allowed outer time step is hundreds or thousands of times larger than the allowed inner time step size. However, in the standard explicit algorithm, the time step is limited to the smaller one, resulting in the execution of numerous stress relaxation steps and consuming a substantial amount of time. In the presented method, diffusion is performed with the larger time step, while stress relaxation is executed multiple times with damping effects until the kinetic energy criterion is reached. Our approach saves time in two ways. Firstly, the number of diffusion relaxation steps is reduced, since the multi-time step algorithm allows diffusion to be performed with its own time step as the outer loop. Secondly, once the kinetic energy criterion is satisfied, we consider the equilibrium achieved, and the inner loop is halted accordingly, avoiding unnecessary stress relaxation calculations. Figure 5.19 indicates the stress iterations \(N_{s}\) during this 3D simulation. There is an increase in the initial 450 seconds when the fluid is in contact with the film, and then a slower increase in the later stages. Table 5.6 presents the quantitative efficiency of our new algorithm compared to the straightforward one, by listing the diffusion iterations \(N_{D}\) and stress relaxation iterations \(N_{s}\) separately. As shown in the table, both iteration counts are obviously reduced, representing a significant improvement in saving computation time.
Figure 5.17: 3D fluid-structure interaction: the deformation colored by water saturation at different time instants.
Figure 5.18: 3D fluid-structure interaction: bending amplitude of the center point compared with experimental data and results from other numerical models.
Figure 5.19: 3D fluid-structure interaction: the stress iterations history during the whole simulation.
## 6 Conclusion
This paper proposed an approach employing a multi-time step algorithm to solve multi-time-scale coupling problems involving solid dynamics. In this algorithm, an explicit scheme is used for the time integration to simplify the solution of the equation system. Inner and outer loops with different time step sizes are carried out to match processes of different time scales. Another crucial feature of this algorithm is the utilization of a kinetic energy criterion to ascertain the attainment of equilibrium of the solid dynamics and a damping term to accelerate this equilibrium attainment process, thereby enabling the earlier termination of the inner loop of solid stress relaxation and avoiding redundant computations. Two types of multi-time-scale coupling problems, including the stretching of a nonlinear hardening bar and fluid diffusion in porous media coupled with solid deformation, are simulated to test the performance of this algorithm. The results demonstrate the accuracy and a significant decrease in computation time. Further, the application of this algorithm to practical fluid diffusion coupled with hydrogel deformation paves the way for simulating complex multi-physics problems with multiple time scales in the field of complex chemical reactions.
**Authorship contribution statement**
Xiaojing Tang made the methodology, designed the research, developed code and tested the present library components, performed the visualization and validation, and wrote the original draft of the manuscript. Dong Wu investigated the topic, made the methodology, developed code and tested the present library components, conducted the formal analysis, modified the draft. Zhentong Wang developed code and tested the present library components, and revised the manuscript. Oskar Haidn and Xiangyu Hu made the conceptualization, supervised and administered the project, and revised the manuscript.
**Statements and Declarations**
The authors have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
**Acknowledgments**
Xiaojing Tang was partially supported by the China Scholarship Council (Grant No. 201906120034). Dong Wu was partially supported by the China Scholarship Council (Grant No. 20190613018). Xiangyu Hu would like to express his gratitude to the Deutsche Forschungsgemeinschaft (DFG) for their sponsorship of this research (Grant No. DFG HU1527/12-4).
\begin{table}
\begin{tabular}{l c c c c} \hline algorithm & \(N_{p}\) & \(N_{D}\) & \(N_{s}\) & \(N_{d}\) \\ \hline straightforward algorithm & 60552 & \(1.5\times 10^{10}\) & \(1.5\times 10^{10}\) & - \\ \hline multi-time step algorithm & 60552 & \(1.25\times 10^{5}\) & \(2.89\times 10^{6}\) & \(2.89\times 10^{6}\) \\ \hline \end{tabular}
\end{table}
Table 5.6: 3D fluid-structure interaction: quantitative validation of the efficiency of this multi-time step algorithm. |
2310.00315 | Report on chaos bound outside Taub-NUT black holes | Positions of a charged particle's equilibrium orbits and spatial regions
where the chaos bound is violated are found through circular motions of the
particle around charged Taub-NUT black holes. Lyapunov exponent is gotten by
calculating eigenvalues of a Jacobian matrix in a phase space $(r,\pi_r)$. When
the particle's charge is fixed, the positions of the equilibrium orbits
gradually move away from the event horizons with the increase of the angular
momentum. The result shows that the bound is violated in the near-horizon
regions and at a certain distance from the horizons when the charge and NUT
parameter are fixed. The spatial regions increase with the increase of the NUT
parameter's value. | Yucheng He, Zeqiang Wang, Deyou Chen | 2023-09-30T09:10:33Z | http://arxiv.org/abs/2310.00315v1 | # Report on chaos bound outside Taub-NUT black holes
###### Abstract
Positions of a charged particle's equilibrium orbits and spatial regions where the chaos bound is violated are found through circular motions of the particle around charged Taub-NUT black holes. The Lyapunov exponent is obtained by calculating eigenvalues of a Jacobian matrix in a phase space \((r,\pi_{r})\). When the particle's charge is fixed, the positions of the equilibrium orbits gradually move away from the event horizons with the increase of the angular momentum. The result shows that the bound is violated in the near-horizon regions and at a certain distance from the horizons when the charge and NUT parameter are fixed. The spatial regions increase with the increase of the NUT parameter's value.
Keywords: Chaos bound, spatial regions, Taub-NUT black holes
###### Contents
* 1 Introduction
* 2 Circular motion of particles around Taub-NUT black holes
* 3 Bound on Lyapunov exponent
* 3.1 Lyapunov exponent in non-extremal charged Taub-NUT black holes
* 3.2 Lyapunov exponent in extremal charged Taub-NUT black holes
* 4 Conclusions
## 1 Introduction
Motions of particles near black holes convey important information on background spacetimes. For example, null geodesics effectively probe quasinormal modes of test fields, and these modes are related to both the internal information and surface quantization of black holes [1; 2; 3; 4; 5]. Tunneling behaviors of particles near the event horizons can reflect the temperature of black holes [6]. The formation of black holes' shadows can be understood by studying motions of photons around black holes [7; 8; 9]. Research on these motions can bridge black hole physics and information theory.
Due to the nonlinearity of the Einstein field equations, motions of particles may cause chaos. Chaos is a nonlinear phenomenon that is very sensitive to initial conditions. This sensitivity is characterized by a Lyapunov exponent. A lot of work has been done on chaos and this exponent [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Focusing on the near-horizon region of the black hole, Hashimoto and Tanahashi studied the chaos generated by the radial motion of the particle [30]. The particle is subjected to a strong external force so that it can come very close to the hole without falling into it. They found that the value of the exponent is independent of the strength and species of the external force and is determined by the surface gravity of the hole,
\[\lambda\leq\kappa, \tag{1}\]
where \(\lambda\) is the exponent and \(\kappa\) is the surface gravity. This result favorably supported the conjecture proposed recently by Maldacena, Shenker and Stanford [31]. In this seminal conjecture, they proposed that there is a general upper bound for the Lyapunov exponent of chaos in quantum field theory with a large number of degrees of freedom. The exponent satisfies
\[\lambda\leq\frac{2\pi T}{\hbar}, \tag{2}\]
where \(T\) is the system's temperature. The bound was originally motivated by thought experiments of shock waves near black holes' horizons [32]. For a black hole, its temperature is determined by the surface gravity. Therefore, Eqs. (1) and (2) are equivalent. This bound has been extensively studied and confirmed by a large amount of work. It was proved to be saturated in the Sachdev-Ye-Kitaev model, and it was speculated that this saturation is dual to Einstein gravity [33; 34; 35].
In recent work, Zhao et al. considered the particles' equilibrium in the near-horizon regions and expanded the exponent at the event horizons [36]. When the contribution of sub-leading terms was considered, they found that the upper bound of the exponent (the chaos bound) is violated. Violations of the chaos bound have also been found in [37; 38]. In [37; 38], Kan et al. found that the bound is violated when the influence of the particle's angular momentum on the exponent is taken into account. In their work, the particle's mass and charge are fixed at certain values and the exponent was obtained by the effective potential method. Also considering the contribution of the angular momentum, Lei and Ge et al. obtained the expression of the exponent by the matrix method, and studied the bound through the circular motion of the particle around the black hole and the exponent's expansion at the event horizon [39]. They found that the bound was violated both in the near-horizon regions and at a certain distance from the horizons of Reissner-Nordstrom and Reissner-Nordstrom anti-de Sitter black holes.
In this paper, we investigate the influence of the angular momentum of a charged particle around the charged Taub-NUT black holes on the Lyapunov exponent, and find spatial regions where the chaos bound is violated. Although the exponent is also obtained by the matrix method, we always fix the particle's charge as a constant in our calculation. This is different from the work of Ge et al., in which the charge is not fixed. Therefore, our calculation is a special case of their work. The NUT charge has both rotation-like and electromagnetic charge-like characteristics. Therefore, it is interesting to investigate its influence on the exponent and bound.
The rest of the paper is organized as follows. In the next section, we review the thermodynamics of the charged Taub-NUT black holes and derive the exponent by calculating the eigenvalues of the Jacobian matrix in the phase space \((r,\pi_{r})\). In Section 3, we investigate the influence of the particle's angular momentum and NUT charge on the exponent, and find the spatial regions where the bound is violated. The last section is devoted to our conclusions.
## 2 Circular motion of particles around Taub-NUT black holes
In this section, we investigate the circular motion of a charged particle around the charged Taub-NUT black hole to derive the Lyapunov exponent. As an anisotropic cosmological model, the Taub-NUT solution is characterized by a Misner string, which resembles a singularity on the axis. This black hole is a solution of the Einstein-Maxwell theory, whose action is
\[S=\frac{1}{16\pi}\int d^{4}x\sqrt{-g}(R-2\Lambda-F_{\mu\nu}F^{\mu\nu}). \tag{1}\]
From this action, the solution of the black hole is given by [40]
\[ds^{2}=-\frac{f(r)}{r^{2}+n^{2}}(dt+2n\cos\theta d\phi)^{2}+\frac{r^{2}+n^{2}} {f(r)}dr^{2}+(r^{2}+n^{2})(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{2}\]
with the electromagnetic potential
\[A_{\mu}=\frac{-Q_{0}r}{r^{2}+n^{2}}\left(dt+2n\cos\theta d\phi\right), \tag{3}\]
where \(f(r)=r^{2}-2Mr-n^{2}+Q_{0}^{2}\), and \(M\), \(n\), \(Q_{0}\) are the mass, NUT parameter, and electric parameter of the black hole, respectively. This black hole has two Misner string singularities located at \(\theta=0\) and \(\theta=\pi\). There are two roots of \(f(r)=0\), which yield the event and inner horizons,
\[r_{\pm}=M\pm\sqrt{M^{2}+n^{2}-Q_{0}^{2}}. \tag{4}\]
The surface gravity is
\[\kappa=\frac{r_{+}-M}{r_{+}^{2}+n^{2}}. \tag{5}\]
The Hawking temperature and the entropy are [41]
\[T=\frac{1}{4\pi r_{+}}\left(1-\frac{Q_{0}^{2}}{r_{+}^{2}+n^{2}}\right),\quad S =\pi\left(r_{+}^{2}+n^{2}\right). \tag{6}\]
The electric charge \(Q\) is expressed by the electric parameter \(Q_{0}\), which is
\[Q=\frac{Q_{0}(r_{+}^{2}-n^{2})}{r_{+}^{2}+n^{2}}. \tag{7}\]
Its electric potential is \(\Phi=\frac{Q_{0}r_{+}}{r_{+}^{2}+n^{2}}\). \(N\) is the Misner charge associated with the NUT parameter and \(\Psi\) is its conjugate quantity
\[N=-\frac{4\pi n^{3}}{r_{+}}\left(1-\frac{Q_{0}^{2}(n^{2}+3r_{+}^{2})}{(r_{+}^ {2}+n^{2})^{2}}\right),\quad\Psi=\frac{1}{8\pi n}. \tag{8}\]
The above thermodynamic quantities obey the first law of thermodynamics
\[dM=TdS+\Phi dQ+\Psi dN. \tag{9}\]
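Since the horizon and thermodynamic quantities above are given in closed form, they are easy to evaluate numerically. The short sketch below does so for the parameter set \(M=1\), \(n=0.8\), \(Q_{0}=0.5\) used later; this parameter choice is the only input, and the result reproduces the horizon radius quoted with Table 1 below.

```python
import numpy as np

M, n, Q0 = 1.0, 0.8, 0.5                      # parameters used later in Section 3
r_p = M + np.sqrt(M**2 + n**2 - Q0**2)        # event horizon, Eq. (4)
kappa = (r_p - M) / (r_p**2 + n**2)           # surface gravity, Eq. (5)
T = (1.0 - Q0**2 / (r_p**2 + n**2)) / (4.0 * np.pi * r_p)   # temperature, Eq. (6)
S = np.pi * (r_p**2 + n**2)                   # entropy, Eq. (6)
Q = Q0 * (r_p**2 - n**2) / (r_p**2 + n**2)    # electric charge, Eq. (7)
print(f"r_+ = {r_p:.4f}, kappa = {kappa:.4f}, T = {T:.4f}, S = {S:.4f}, Q = {Q:.4f}")
print(abs(T - kappa / (2.0 * np.pi)) < 1e-12)  # T = kappa/(2 pi), as expected
# r_+ ~ 2.1790 reproduces the horizon radius quoted with Table 1 below.
```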
When a particle with mass \(m\) and charge \(q\) moves in a circular motion in the equatorial plane of the charged black hole, its Lagrangian is
\[\mathcal{L}=\frac{1}{2}\left[-f\dot{t}^{2}+\frac{\dot{r}^{2}}{f}+(r^{2}+n^{2}) \dot{\phi}^{2}\right]-qA_{t}\dot{t}, \tag{10}\]
where \(f=\frac{f(r)}{r^{2}+n^{2}}\), \(\dot{x}^{\mu}=\frac{dx^{\mu}}{d\tau}\) and \(\tau\) is proper time. Using the above equation and the generalized momenta \(\pi_{\mu}=\frac{\partial\mathcal{L}}{\partial\dot{x}^{\mu}}\), we get
\[\pi_{t}=-f\dot{t}-qA_{t}=-E,\quad\pi_{r}=\frac{\dot{r}}{f},\quad\pi_{\phi}=(r^ {2}+n^{2})\dot{\phi}=L. \tag{11}\]
In the above equation, \(E\) and \(L\) represent the energy and angular momentum of the particle, respectively. The Hamiltonian is
\[H=\frac{-(\pi_{t}+qA_{t})^{2}+\pi_{r}^{2}f^{2}+\pi_{\phi}^{2}(r^{2}+n^{2})^{-1 }f}{2f}. \tag{12}\]
From the Hamiltonian, the equation of motion of the particle can be obtained, which is
\[\dot{t}=\frac{\partial H}{\partial\pi_{t}}=-\frac{\pi_{t}+qA_{t} }{f},\quad\dot{\pi_{t}}=-\frac{\partial H}{\partial t}=0,\quad\dot{r}=\frac{ \partial H}{\partial\pi_{r}}=\pi_{r}f, \tag{13}\] \[\dot{\pi_{r}}=-\frac{\partial H}{\partial r}=-\frac{1}{2}\left[ \pi_{r}^{2}f^{{}^{\prime}}-\frac{2qA_{t}^{{}^{\prime}}(\pi_{t}+qA_{t})}{f}+ \frac{(\pi_{t}+qA_{t})^{2}f^{{}^{\prime}}}{f^{2}}-\pi_{\phi}^{2}((r^{2}+n^{2}) ^{-1})^{{}^{\prime}}\right],\] \[\dot{\phi}=\frac{\partial H}{\partial\pi_{\phi}}=\frac{\pi_{\phi }}{(r^{2}+n^{2})},\quad\dot{\pi_{\phi}}=-\frac{\partial H}{\partial\phi}=0,\]
where "\({}^{\prime}\)" denotes a derivative with respect to \(r\). In this paper, we use the matrix method to derive the exponent in the phase space \((r,\pi_{r})\). Therefore, we need to get the radial coordinate and momentum at a coordinate time \(t\) from the equations of motion Eq.(13),
\[\begin{split}&\frac{\mathrm{d}r}{\mathrm{d}t}=\frac{\dot{r}}{ \dot{t}}=-\frac{\pi_{r}f^{2}}{\pi_{t}+qA_{t}}\\ &\frac{\mathrm{d}\pi_{r}}{\mathrm{d}t}=\frac{\dot{\pi}_{r}}{\dot {t}}=-qA_{t}^{{}^{\prime}}+\frac{1}{2}\left[\frac{\pi_{r}f^{2}f^{{}^{\prime}} }{\pi_{t}+qA_{t}}+\frac{(\pi_{t}+qA_{t})f^{{}^{\prime}}}{f}-\frac{\pi_{\phi}^{2} ((r^{2}+n^{2})^{-1})^{{}^{\prime}}f}{\pi_{t}+qA_{t}}\right].\end{split} \tag{14}\]
The four-velocity of a particle obeys the normalization condition \(g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=\eta\), where \(\eta=0\) describes the case of a massless particle and \(\eta=-1\) denotes the case of a massive particle. The particle considered in this paper is charged and massive, and the normalization condition yields \(\pi_{t}+qA_{t}=-\sqrt{f\left[1+\pi_{r}^{2}f+\pi_{\phi}^{2}(r^{2}+n^{2})^{-1} \right]}\). We define \(F_{1}=\frac{\mathrm{d}r}{\mathrm{d}t}\) and \(F_{2}=\frac{\mathrm{d}\pi_{r}}{\mathrm{d}t}\). Then Eq.(14) is rewritten as
\[\begin{split} F_{1}&=\frac{\pi_{r}f^{2}}{\sqrt{f \left(1+\pi_{r}^{2}f+\pi_{\phi}^{2}(r^{2}+n^{2})^{-1}\right)}},\\ F_{2}&=-qA_{t}^{{}^{\prime}}+\frac{(2\pi_{r}^{2}f+ 1)f^{{}^{\prime}}}{2\sqrt{f\left(1+\pi_{r}^{2}f+\pi_{\phi}^{2}(r^{2}+n^{2})^{ -1}\right)}}-\frac{\pi_{\phi}^{2}((r^{2}+n^{2})^{-1}f)^{{}^{\prime}}}{2\sqrt{f\left( 1+\pi_{r}^{2}f+\pi_{\phi}^{2}(r^{2}+n^{2})^{-1}\right)}}.\end{split} \tag{15}\]
In the phase space, the elements of the matrix is defined by
\[K_{11}=\frac{\partial F_{1}}{\partial r},\quad K_{12}=\frac{\partial F_{1}}{ \partial\pi_{r}},\quad K_{21}=\frac{\partial F_{2}}{\partial r},\quad K_{22}= \frac{\partial F_{2}}{\partial\pi_{r}}. \tag{16}\]
A particle on a circular equilibrium orbit satisfies \(\pi_{r}=\frac{d\pi_{r}}{dt}=0\). The location of the equilibrium orbit is calculated from this constraint and Eq.(15). By calculating the eigenvalues of the matrix, the Lyapunov exponent for the chaotic motion of the charged particle in this orbit is obtained as follows
\[\lambda^{2}=\frac{1}{4}\Big{[}\frac{f^{{}^{\prime}}+\pi_{\phi}^{2}((r^{2}+n^{ 2})^{-1}f)^{{}^{\prime}}}{1+\pi_{\phi}^{2}(r^{2}+n^{2})^{-1}}\Big{]}^{2}-\frac {1}{2}f\frac{f^{{}^{\prime\prime}}+\pi_{\phi}^{2}((r^{2}+n^{2})^{-1}f)^{{}^{ \prime\prime}}}{1+\pi_{\phi}^{2}(r^{2}+n^{2})^{-1}}-\frac{qA_{t}^{{}^{\prime \prime}}f^{2}}{\sqrt{f(1+\pi_{\phi}^{2}(r^{2}+n^{2})^{-1})}}. \tag{17}\]
When the Lyapunov exponent satisfies \(\lambda>0\), the motion of the charged particle is unstable and chaotic. When the charge of the particle is fixed, we will calculate the Lyapunov exponent for different parameters of the black hole and particle. Clearly, the charge and angular momentum of the particle affect the Lyapunov exponent. When the angular momentum is neglected, we get
\[\lambda^{2}=\frac{1}{4}(f^{{}^{\prime}})^{2}-\frac{1}{2}ff^{{}^{\prime\prime}} -qA_{t}^{{}^{\prime\prime}}f^{\frac{3}{2}}. \tag{18}\]
At the event horizon, \(f(r)=0\), and (17) is reduced to
\[\lambda^{2}=\frac{1}{4}(f^{{}^{\prime}})^{2}=\kappa^{2}. \tag{19}\]
Clearly, the bound is saturated at the horizon. This result is consistent with that obtained in [30].
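As an illustration of how Eqs. (15) and (17) are used, the following sketch transcribes them for given black-hole and particle parameters, locates an equilibrium radius from \(F_{2}(r,\pi_{r}=0)=0\) by a simple sign-change scan, and compares \(\lambda^{2}\) with \(\kappa^{2}\). The scan range is an assumption, derivatives are taken by finite differences, and the sketch is a direct transcription of the printed formulas rather than a validated reproduction of the tables in the next section.

```python
import numpy as np

# Black-hole and particle parameters (an assumed sample set, cf. Section 3).
M, n, Q0 = 1.0, 0.8, 0.5
q, L     = 15.0, 0.0

f   = lambda r: (r**2 - 2*M*r - n**2 + Q0**2) / (r**2 + n**2)
u   = lambda r: 1.0 / (r**2 + n**2)
A_t = lambda r: -Q0 * r / (r**2 + n**2)
uf  = lambda r: u(r) * f(r)

def d(g, r, eps=1e-6):     # first derivative by central differences
    return (g(r + eps) - g(r - eps)) / (2 * eps)

def dd(g, r, eps=1e-5):    # second derivative by central differences
    return (g(r + eps) - 2 * g(r) + g(r - eps)) / eps**2

def F2(r):                 # Eq. (15) with pi_r = 0: equilibrium condition
    S = np.sqrt(f(r) * (1 + L**2 * u(r)))
    return -q * d(A_t, r) + (d(f, r) - L**2 * d(uf, r)) / (2 * S)

def lam2(r):               # Eq. (17): squared Lyapunov exponent
    a = (d(f, r) + L**2 * d(uf, r)) / (1 + L**2 * u(r))
    b = (dd(f, r) + L**2 * dd(uf, r)) / (1 + L**2 * u(r))
    c = q * dd(A_t, r) * f(r)**2 / np.sqrt(f(r) * (1 + L**2 * u(r)))
    return 0.25 * a**2 - 0.5 * f(r) * b - c

r_p   = M + np.sqrt(M**2 + n**2 - Q0**2)
kappa = (r_p - M) / (r_p**2 + n**2)

# Locate an equilibrium radius outside the horizon by a simple sign-change scan.
rs   = np.linspace(r_p + 1e-3, r_p + 2.0, 4000)
vals = np.array([F2(r) for r in rs])
idx  = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
if idx.size:
    r0 = rs[idx[0]]
    print(f"r0 ~ {r0:.4f}, lambda^2 = {lam2(r0):.4e}, kappa^2 = {kappa**2:.4e}")
else:
    print("no sign change of F2 in the scanned range")
```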
## 3 Bound on Lyapunov exponent
### Lyapunov exponent in non-extremal charged Taub-NUT black holes
When a charged particle moves around the charged Taub-NUT black hole, we can place the particle's equilibrium orbit near the horizon by adjusting the charge-to-mass ratio and angular momentum of the particle. Different values of the Lyapunov exponent represent different motions: \(\lambda^{2}>0\), \(\lambda^{2}=0\) or \(\lambda^{2}<0\) denote unstable, critical or stable motions, respectively. When \(\lambda^{2}>\kappa^{2}\), the bound on the exponent is violated. We investigate the influence of the angular momentum of the particle on the exponent and find the range of the angular momentum and the spatial region where the bound is violated. We set \(M=1\) and \(q=15\). The positions \(r_{0}\) of the equilibrium orbits are listed in the following tables.
\begin{tabular}{c c c c c c c c c} \hline & L & 0 & 1 & 3 & 5 & 7 & 10 & 15 \\ \cline{2-9} \(r_{0}\) & \(Q_{0}\)=0.50 & 2.27589 & 2.28942 & 2.37311 & 2.47695 & 2.57133 & 2.68608 & 2.82149 \\ \cline{2-9} \(r_{0}\) & \(Q_{0}\)=0.80 & 2.03071 & 2.03672 & 2.07901 & 2.14326 & 2.21223 & 2.30866 & 2.43907 \\ \cline{2-9} \(r_{0}\) & \(Q_{0}\)=0.95 & 1.87711 & 1.88132 & 1.91193 & 1.96138 & 2.01774 & 2.10121 & 2.22118 \\ \cline{2-9} \(r_{0}\) & \(Q_{0}\)=1.28 & 1.04148 & 1.04239 & 1.0513 & 1.07936 & 1.13341 & 1.22565 & 1.35941 \\ \hline \end{tabular}
Table 1. Positions of equilibrium orbits of the charged particle around the charged Taub-NUT black hole. For \(n=0.80\), the event horizon is located at \(r_{+}=2.1789\) when \(Q_{0}=0.50\), at \(r_{+}=2.0000\) when \(Q_{0}=0.80\), at \(r_{+}=1.8587\) when \(Q_{0}=0.95\), and at \(r_{+}=1.0400\) when \(Q_{0}=1.28\).
\begin{tabular}{c c c c c c c c} \hline \(r_{0}\) & \(L=0\) & \(L=1\) & \(L=3\) & \(L=5\) & \(L=7\) & \(L=10\) & \(L=15\) \\ \hline \(Q_{0}=0.70\) & 1.8343 & 1.83861 & 1.87001 & 1.92062 & 1.97786 & 2.06121 & 2.17736 \\ \(Q_{0}=0.90\) & 1.59727 & 1.59933 & 1.61509 & 1.64327 & 1.67912 & 1.73848 & 1.83387 \\ \(Q_{0}=1.00\) & 1.40263 & 1.40386 & 1.41348 & 1.43164 & 1.45648 & 1.50154 & 1.58273 \\ \(Q_{0}=1.05\) & 1.24104 & 1.24178 & 1.24767 & 1.25935 & 1.27653 & 1.31119 & 1.38354 \\ \hline \end{tabular}
Table 2. Positions of equilibrium orbits of the charged particle around the charged Taub-NUT black hole. For \(n=0.40\), the event horizon is located at \(r_{+}=1.81854\) when \(Q_{0}=0.70\), at \(r_{+}=1.59161\) when \(Q_{0}=0.90\), at \(r_{+}=1.4000\) when \(Q_{0}=1.00\), and at \(r_{+}=1.23976\) when \(Q_{0}=1.05\).
\begin{tabular}{c c c c c c c c} \hline \(r_{0}\) & \(L=0\) & \(L=1\) & \(L=3\) & \(L=5\) & \(L=7\) & \(L=10\) & \(L=15\) \\ \hline \(n=0.30\) & 1.67851 & 1.68111 & 1.70073 & 1.73484 & 1.77677 & 1.84338 & 1.94522 \\ \(n=0.60\) & 1.86561 & 1.86989 & 1.90101 & 1.95129 & 2.0084 & 2.09226 & 2.21081 \\ \(n=0.80\) & 2.03071 & 2.03672 & 2.07901 & 2.14326 & 2.21223 & 2.30866 & 2.43907 \\ \(n=0.90\) & 2.12257 & 2.12958 & 2.17805 & 2.24951 & 2.32427 & 2.42668 & 2.56276 \\ \hline \end{tabular}
Table 3. Positions of equilibrium orbits of the charged particles around the charged Taub-NUT black hole. For \(Q_{0}=0.80\), the event horizon is located at \(r_{+}=1.6708\) when \(n=0.30\), at \(r_{+}=1.8485\) when \(n=0.60\), at \(r_{+}=2.0000\) when \(n=0.80\), and at \(r_{+}=2.0816\) when \(n=0.90\).
In the above tables, when the NUT parameter is fixed and the angular momentum of the particle is increased, the positions of the equilibrium orbits move away from the horizon and finally tend to a fixed position. When the angular momentum is zero or small enough, the equilibrium orbits lie close to the horizon. In Tables 1 and 2, when the angular momentum and NUT parameter are fixed, the equilibrium orbits move closer to the horizon as the value of the electric parameter increases.
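A minimal numerical sketch of the bound check performed in the following is given below. The metric function and gauge potential are placeholder Reissner-Nordström-NUT-type expressions, whose outer horizon \(r_{+}=M+\sqrt{M^{2}+n^{2}-Q_{0}^{2}}\) reproduces the values quoted in the table captions, but they should be replaced by the exact forms used in this paper; the orbit radius is taken from Table 1 rather than re-derived by solving \(\pi_{r}=\mathrm{d}\pi_{r}/\mathrm{d}t=0\) with a root finder.

```python
import numpy as np

M, n, Q0, q = 1.0, 0.80, 0.80, 15.0       # parameters of Table 1 and particle charge q = 15

# Placeholder metric function and gauge potential (Reissner-Nordstrom-NUT-like);
# substitute the exact expressions used in the paper.
def f(r):   return (r**2 - 2*M*r - n**2 + Q0**2) / (r**2 + n**2)
def At(r):  return -Q0 * r / (r**2 + n**2)

def d1(fun, r, h=1e-6):  return (fun(r + h) - fun(r - h)) / (2*h)
def d2(fun, r, h=1e-4):  return (fun(r + h) - 2*fun(r) + fun(r - h)) / h**2

def lyapunov_sq(r, L):
    """Direct numerical transcription of Eq. (17)."""
    g = 1.0 + L**2 / (r**2 + n**2)
    h = lambda x: f(x) / (x**2 + n**2)
    return (0.25 * ((d1(f, r) + L**2 * d1(h, r)) / g)**2
            - 0.5 * f(r) * (d2(f, r) + L**2 * d2(h, r)) / g
            - q * d2(At, r) * f(r)**2 / np.sqrt(f(r) * g))

r_plus = M + np.sqrt(M**2 + n**2 - Q0**2)  # outer horizon, r_+ = 2 for these parameters
kappa2 = (0.5 * d1(f, r_plus))**2          # surface gravity squared, cf. Eq. (19)

r0 = 2.07901                               # equilibrium orbit from Table 1 (Q0 = 0.80, L = 3)
print(lyapunov_sq(r0, L=3.0), kappa2)      # bound violated if the first number exceeds the second
```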
Using Eq. (17), we numerically calculate the values of the Lyapunov exponent at the equilibrium orbits and plot them in Figures 1-4. In Figure 1, when the parameter \(Q_{0}\) is small, the value of the exponent increases to a maximum with increasing angular momentum and then decreases to a constant value. However, when the parameter \(Q_{0}\) is large enough, the chaos bound is violated for any value of the angular momentum. In this case, the exponent does not have a maximum and tends to a certain value as the angular momentum increases. The range of the angular momentum where the bound is violated increases with the parameter \(Q_{0}\). For different values of \(Q_{0}\), the locations of the equilibrium orbits and the range of the angular momentum for the violation are different. The relative size of the spatial region of the particle's motion where the bound is violated is \(1.22368>\frac{r_{0}}{r_{+}}>1.06782\) when \(Q_{0}=0.80\), \(1.250729>\frac{r_{0}}{r_{+}}>1.0230483\) when \(Q_{0}=0.95\), and \(2.1763>\frac{r_{0}}{r_{+}}>1.0014\) when \(Q_{0}=1.28\). Therefore, the relative size increases with the value of the electric parameter.
When the NUT parameter is fixed at \(n=0.40\), we calculate values of the exponent at the equilibrium orbits with different values of the electric parameter. From Figure 2, we find that the violation still occurs when the electric parameter is relatively large. The range of the angular momentum and the spatial region for the violation increase with the increase of the electric parameter. The relative size of the spatial region is given by \(1.1626>\frac{r_{0}}{r_{+}}>1.0256\) when \(Q_{0}=0.90\), by \(1.5994>\frac{r_{0}}{r_{+}}>1.0316\) when \(Q_{0}=1.00\) and by \(1.75156>\frac{r_{0}}{r_{+}}>1.2449\) when \(Q_{0}=1.05\).
When the electric parameter is fixed at \(Q_{0}=0.80\), we calculate values of the exponent at the equilibrium orbits and plot them in Figure 3. In the figure, we find that a smaller value of the NUT parameter does not cause a violation of the bound, while a larger value of the parameter causes the violation when the angular momentum is within a certain range. For different values of the NUT parameter, the equilibrium orbital positions and the range of the angular momentum are different. The relative spatial region is \(1.1095>\frac{r_{0}}{r_{+}}>1.0429\) when \(n=0.60\), \(1.17062>\frac{r_{0}}{r_{+}}>1.04843\) when \(n=0.80\), and \(1.0995>\frac{r_{0}}{r_{+}}>1.0278\) when \(n=0.90\). From the figure, we find that the angular momentum's range and the spatial region increase with the NUT parameter's value. As this parameter increases, the black hole gradually approaches an extremal black hole. We infer that the extremal black hole is more likely to violate the bound. This will be discussed in the next section.
Figure 2: The Lyapunov exponent of chaos of the charged particle outside the charged Taub-NUT black hole when \(n=0.40\). The chaos bound is violated when \(Q_{0}=0.90\) and \(16.3314>L>4.6857\) (the corresponding spatial region is \(1.8569>r_{0}>1.6382\)), when \(Q_{0}=1\) and \(339.7641>L>3.1137\), (\(2.2391>r_{0}>1.4143\)) and when \(Q_{0}=1.05\) and \(L>2.30113\) (\(2.17152>r_{0}>1.24494\)).
When the electric parameter is fixed at \(Q_{0}=1.00\), we calculate values of the exponent at the equilibrium orbits and plot them in Figure 4. The minimum value of the angular momentum required for the violation, and the extent of the angular-momentum range over which it occurs, both decrease as the NUT parameter's value increases. For different values of the NUT parameter, the angular momentum range and the relative spatial region are different. The relative spatial region is given by \(1.7401>\frac{r_{0}}{r_{+}}>1.0065\) when \(n=0.20\), by \(1.5994>\frac{r_{0}}{r_{+}}>1.0316\) when \(n=0.40\), by \(1.3298>\frac{r_{0}}{r_{+}}>1.0134\) when \(n=0.60\), and by \(1.2904>\frac{r_{0}}{r_{+}}>1.0180\) when \(n=0.80\). Clearly, the relative spatial region decreases as the NUT parameter's value increases.
For the non-extremal charged Taub-NUT black hole, the violation of the chaos bound is affected by the electric parameter, the NUT parameter and the particle's angular momentum.
Figure 4: The Lyapunov exponent of chaos of the charged particle outside the charged Taub-NUT black hole when \(Q_{0}=1.00\). The chaos bound is violated in the range \(L>3.96016\) ( \(2.0895>r_{0}>1.2086\)) when \(n=0.20\), in the range \(339.7641>L>3.1137\) (\(2.2391>r_{0}>1.4143\)) when \(n=0.40\), in the range \(36.3545>L>2.5581\), \((2.1277>r_{0}>1.6215)\) when \(n=0.60\), and in the range \(26.2532>L>2.1749\) (\(2.3228>r_{0}>1.8325\)) when \(n=0.80\).
Figure 3: The Lyapunov exponent of chaos of the charged particle outside the charged Taub-NUT black hole when \(Q_{0}=0.80\). The chaos bound is violated when \(n=0.60\) for \(9.1781>L>4.7954\) (the corresponding spatial region is \(2.0700>r_{0}>1.9456\)), when \(n=0.80\) for \(11.1126>L>3.5954\)\((2.3412>r_{0}>2.0968)\), and when \(n=0.90\) for \(11.6743>L>3.2788\)\((2.4769>r_{0}>2.1816)\).
### Lyapunov exponent in extremal charged Taub-NUT black holes
For an extremal charged Taub-NUT black hole, the inner and outer (event) horizons coincide and the surface gravity is zero. In this case \(M^{2}+n^{2}=Q_{0}^{2}\) and \(r_{+}=1\). Using Eq. (13), we obtain the positions of the equilibrium orbits and list them in Table 5. The relationship between the Lyapunov exponent and the angular momentum is plotted in Figure 5.
For the extremal case, equilibrium orbits do not exist when the particle's angular momentum is small, and the particle drops into the black hole. A significant difference from the non-extremal black hole is that here the equilibrium orbit can approach the event horizon arbitrarily closely without the particle falling in, which does not occur in the non-extremal case. The exponent is always positive at the equilibrium orbits in the figure; since the surface gravity of the extremal black hole vanishes, this shows that the bound is always violated, both in the near-horizon region and at a certain distance from the horizon. The range of the angular momentum and the spatial region increase with the NUT parameter's value. Therefore, the extremal black hole is more likely to violate the chaos bound than the non-extremal one.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(r_{0}\) & \(L=0\) & \(L=10\) & \(L=20\) & \(L=30\) & \(L=40\) & \(L=50\) \\ \hline \(n=0.2,\ Q_{0}=\sqrt{1.04}\) & * & * & 1.16866 & 1.35506 & 1.47461 & 1.55774 \\ \(n=0.4,\ Q_{0}=\sqrt{1.16}\) & * & * & 1.23725 & 1.41428 & 1.53009 & 1.61158 \\ \(n=0.6,\ Q_{0}=\sqrt{1.36}\) & * & 1.06827 & 1.33819 & 1.50427 & 1.6155 & 1.69495 \\ \(n=0.8,\ Q_{0}=\sqrt{1.64}\) & * & 1.21692 & 1.46144 & 1.61723 & 1.72397 & 1.80138 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Positions of equilibrium orbits of a charged particle around the charged extremal Taub-NUT black hole. An asterisk indicates that an equilibrium orbit does not exist.
Figure 5: The Lyapunov exponent of chaos of the charged particle outside the extremal Taub-NUT black hole. The chaos bound is violated in the range \(L>14.0836\) (\(2.0197>r_{0}>1.0000\)) when \(n=0.20\) and \(Q_{0}=\sqrt{1.04}\), in the range \(L>11.6518\) (\(2.0771>r_{0}>1.0000\)) when \(n=0.40\) and \(Q_{0}=\sqrt{1.16}\), in the range \(L>8.1515\) (\(2.16616>r_{0}>1.0000\)) when \(n=0.60\) and \(Q_{0}=\sqrt{1.36}\), and in the range \(L>4.0176\) (\(2.28059>r_{0}>1.0000\)) when \(n=0.80\) and \(Q_{0}=\sqrt{1.64}\).
## Conclusions
In this paper, we investigated the influence of the particle's angular momentum and the NUT parameter on the Lyapunov exponent, and found the spatial regions where the chaos bound is violated by varying the angular momentum while fixing the other parameters. The Lyapunov exponent was obtained by calculating the eigenvalues of the Jacobian matrix. For the non-extremal black hole, when the particle's angular momentum is fixed and the electric parameter increases, the equilibrium orbits move closer to the event horizon. With the increase of the NUT parameter, the spatial region for the violation moves closer to the event horizon. Therefore, a relatively large electric parameter is more likely to lead to a violation of the bound in the near-horizon region. For the extremal case, equilibrium orbits do not exist when the angular momentum is small, and the bound is always violated at the orbits that do exist. The extremal black hole is more likely to violate the chaos bound than the non-extremal one.
There are two explanations for the violation of the chaos bound. In [39], Ge et al. argued that this violation is related to the stability of the black holes, and that it may be resolved through a study of their stability. In [42, 43], the authors found the violation by exploring the influence of minimum length effects on the chaotic motion. They argued that this result is not a violation of the conjecture proposed in [31], and that the bound can be corrected by the minimum length in the bulk. The weak gravity conjecture implies that the charge-to-mass ratio of a particle can be greater than 1. In this paper, we set the particle charge to 15; therefore, this conjecture is an implicit condition in our investigation. If this conjecture were not assumed, the result might change. In addition, the backreaction of the particle on the background spacetime has not been taken into account [39]. It would be meaningful to study the chaos bound with this effect included.
|
2309.11353 | Prospects for searches of $b \to s ν\barν$ decays at FCC-ee | We investigate the physics reach and potential for the study of various decays involving a $b \to s \nu \bar{\nu}$ transition at the Future Circular Collider running electron-positron collisions at the $Z$-pole (FCC-ee). Signal and background candidates, which involve inclusive $Z$ contributions from $b\bar{b}$, $c\bar{c}$ and $uds$ final states, are simulated for a proposed multi-purpose detector. Signal candidates are selected using two Boosted Decision Tree algorithms. We determine expected relative sensitivities of $0.53\%$, $1.20\%$, $3.37\%$ and $9.86\%$ for the branching fractions of the $B^{0} \to K^{*0} \nu \bar{\nu}$, $B^{0}_{s} \to \phi \nu \bar{\nu}$, $B^{0} \to K^{0}_{S} \nu \bar{\nu}$ and $\Lambda_{b}^{0} \to \Lambda^{0} \nu \bar{\nu}$ decays, respectively. In addition, we investigate the impact of detector design choices related to particle-identification and vertex resolution. The phenomenological impact of such measurements on the extraction of Standard Model and new physics parameters is also studied. | Yasmine Amhis, Matthew Kenzie, Méril Reboud, Aidan R. Wiederhold | 2023-09-20T14:34:52Z | http://arxiv.org/abs/2309.11353v2 | # Prospects for searches of \(b\to s\nu\overline{\nu}\) decays at FCC-ee
###### Abstract
We investigate the physics reach and potential for the study of various decays involving a \(b\to s\nu\overline{\nu}\) transition at the Future Circular Collider running electron-positron collisions at the \(Z\)-pole (FCC-ee). Signal and background candidates, which involve inclusive \(Z\) contributions from \(b\bar{b}\), \(c\bar{c}\) and \(uds\) final states, are simulated for a proposed multi-purpose detector. Signal candidates are selected using two Boosted Decision Tree algorithms. We determine expected relative sensitivities of 0.53%, 1.20%, 3.37% and 9.86% for the branching fractions of the \(B^{0}\to K^{*0}\nu\overline{\nu}\), \(B^{0}_{s}\to\phi\nu\overline{\nu}\), \(B^{0}\to K^{0}_{s}\nu\overline{\nu}\) and \(A^{0}_{b}\to\Lambda\nu\overline{\nu}\) decays, respectively. In addition, we investigate the impact of detector design choices related to particle-identification and vertex resolution. The phenomenological impact of such measurements on the extraction of Standard Model and new physics parameters is also studied.
DOI: 10.17181/6k4q7-veh06
EOS-2023-04
IPPP/23/51
\({}^{1}\)_Universite Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France_
\({}^{2}\)_European Organization for Nuclear Research (CERN), Geneva, Switzerland_
\({}^{3}\)_Cavendish Laboratory, University of Cambridge, Cambridge, UK_
\({}^{4}\)_IPPP, Durham University, Durham, UK_
\({}^{5}\)_Department of Physics, University of Warwick, Coventry, UK_
\({}^{\dagger}\)_Corresponding Author_
Email: [email protected], [email protected],
[email protected], [email protected]
###### Contents
* 1 Introduction
* 2 SM predictions
* 3 Experimental environment
* 3.1 FCC-ee
* 3.2 Detector Response
* 3.3 Simulation Samples
* 3.4 Analysis framework and implementation
* 4 Analysis
* 4.1 First-stage BDT
* 4.2 Detailed study of background contributions
* 4.3 Second-stage BDT
* 4.4 Sensitivity Estimate
* 4.5 Extrapolation to neutral modes
* 4.6 Study of particle-identification
* 4.7 Study of imperfect vertex seeding
* 5 Phenomenology
* 5.1 SM implications
* 5.2 NP implications
* 6 Conclusion
* A Form factors definition
* B Individual background contributions
## 1 Introduction
Flavor Changing Neutral Current (FCNC) processes are sensitive probes of New Physics (NP) effects since they are both loop- and CKM-suppressed in the Standard Model (SM). Over the past several years, an enormous effort has been made at the LHC [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] and the \(B\)-factories [12, 13, 14, 15] to precisely measure decays involving a \(b\to s\ell\ell\) transition. However, a challenge which prohibits full exploitation of this data is precise knowledge of the SM predictions of the relevant observables, which are in most cases plagued by hadronic uncertainties, see e.g. Ref [16, 17].
The main interest in studying the decays involving a \(b\to s\nu\overline{\nu}\) transition is that they are theoretically cleaner than their counterparts with charged leptons [18, 19, 20]. Charm loops do not contribute to \(b\to s\nu\overline{\nu}\) decays, which are, barring weak annihilation effects that we will discuss, dominated by short-distance effects that have been precisely computed, including subleading QCD and electroweak corrections [21, 22, 23, 24, 25]. The only remaining theoretical uncertainties originate from knowledge of the CKM factor \(V_{tb}V_{ts}^{*}\), which can be determined using CKM unitarity [19], as well as the relevant local form-factors, which can be computed by means of numerical simulations of QCD on the lattice [26]. Recently, it has also been shown that one could probe \(C\!P\)-violating effects via time-dependent analysis of \(B^{0}\to K_{\rm S}^{0}\nu\overline{\nu}\) and \(B^{0}\to K^{*0}\nu\overline{\nu}\) decays [27].
Another motivation to study the \(b\to s\nu\overline{\nu}\) transition is its sensitivity to NP contributions. Most importantly, \(b\to s\nu\overline{\nu}\) observables allow us to probe effective operators with couplings to \(\nu_{\tau}\), which are related by \(SU(2)_{L}\) gauge invariance to operators with left-handed \(\tau\)-leptons [28]. These operators are poorly constrained at low-energies due to the experimental difficulty of probing decays involving a \(b\to s\tau\tau\) transition [29]. Furthermore, \(b\to s\nu\overline{\nu}\) observables can be related by gauge invariance to the hints of lepton-flavor-universality violation in the \(b\to c\tau\nu\) transition, which are still to be clarified, cf. e.g. [30, 31].
Experimentally, the first evidence for the \(B^{+}\to K^{+}\nu\bar{\nu}\) decay has been found recently by the Belle-II collaboration [32] with a significance of \(3.6\sigma\). Interestingly, the measured branching ratio \(\mathcal{B}(B^{+}\to K^{+}\nu\bar{\nu})=(2.4\pm 0.7)\times 10^{-5}\) exceeds the Standard Model prediction by \(2.8\sigma\). The Belle-II experiment is also working on \(B^{0}\to K^{*0}\nu\overline{\nu}\) decays [33], for which only upper limits have been obtained so far. In the future, they are expected to measure the corresponding branching fractions with \(\mathcal{O}(10\%)\) experimental precision with \(50\) ab\({}^{-1}\) of data [34]. These measurements are particularly challenging experimentally due to the missing energy of the neutrinos, and are consequently ideally suited to the clean environment of an electron-positron collider. \(Z\)-boson factories such as the Future Circular Collider running at the \(Z\)-pole (FCC-ee) offer a unique opportunity to study these decays in the future and to significantly improve on the precision that will be achieved by Belle-II.
In this paper, we perform a sensitivity study of various \(b\to s\nu\overline{\nu}\) decays at FCC-ee. These include the \(B^{0}\to K_{\rm S}^{0}\nu\overline{\nu}\) and \(B^{0}\to K^{*0}\nu\overline{\nu}\) modes, which are accessible at Belle-II running at the \(\Upsilon(4S)\) resonance, but also \(B_{s}^{0}\to\phi\nu\overline{\nu}\) and \(\Lambda_{b}^{0}\to\Lambda\nu\overline{\nu}\) which can only be measured in a Tera-Z experiment such as FCC-ee. We employ a similar strategy to Ref. [35], in which we exploit the relatively large imbalance of missing energy between the signal hemisphere (which contains two neutrinos) and the non-signal hemisphere. We then train a sequence of two boosted decision trees (BDTs) to distinguish between signal-like and background-like events, the first focusing on global event information and the second on specific candidate information. We use these two BDTs to optimise selection cuts and thus estimate the expected sensitivity to the relevant signal.
The remainder of this paper is organized as follows. Section 2 describes SM predictions of branching fractions and form factors in \(b\to s\nu\overline{\nu}\) transitions. Section 3 describes the experimental environment of the FCC and the IDEA detector. Section 4 describes the analysis performed and provides results for the sensitivity estimates, along with some discussion on detector design implications. Section 5 provides the interpretation of the sensitivity estimates in terms of SM parameters and of the relevant effective field theory Wilson coefficients.
## 2 SM predictions
The Weak Effective Theory (WET) Hamiltonian describing the \(b\to s\nu\overline{\nu}\) transition can be written as
\[\mathcal{H}^{sb\nu\nu}_{\text{eff}}=-\frac{4G_{F}}{\sqrt{2}}\lambda_{t}\sum_{i }\mathcal{C}_{i}\mathcal{O}_{i}+\text{h.c.}, \tag{1}\]
where \(G_{F}\) denotes the Fermi constant and \(\lambda_{t}=V_{tb}V_{ts}^{*}\). In the SM, the only non-zero Wilson coefficient, \(\mathcal{C}_{L}\), is associated to the operator
\[\mathcal{O}_{L}^{\nu_{i},\nu_{j}}=\frac{e^{2}}{16\pi^{2}}\big{(}\bar{s}_{L} \gamma_{\mu}b_{L}\big{)}\big{(}\bar{\nu}_{i}\gamma^{\mu}(1-\gamma_{5})\nu_{j} \big{)}, \tag{2}\]
where \(\left.\mathcal{C}_{L}^{\nu_{i},\nu_{j}}\right|_{\text{SM}}=\delta_{ij}C_{L}^{ \text{SM}}\), with
\[C_{L}^{\text{SM}}=-\frac{1.462(17)(2)}{\sin^{2}\theta_{W}}, \tag{3}\]
where NLO QCD corrections and NNLO electroweak contributions are taken into account [23, 24, 25]. Using \(\sin^{2}\theta_{W}=0.23141(4)\)[36], one gets \(C_{L}^{\text{SM}}=-6.32(7)\), with the dominant source of uncertainty due to higher-order QCD corrections. These uncertainties are negligible when compared to the theory uncertainties that will be discussed below.
Several decay modes of \(b\)-hadrons can be induced by the effective Hamiltonian in Eq. (1). The only ones accessible at Belle-II are \(B\to K\nu\bar{\nu}\) and \(B\to K^{*}\nu\bar{\nu}\), with mesons that can be either electrically charged or electrically neutral [33]. All the other modes cannot be measured in any of the running and future experiments, except for FCC-ee, which, as we will show, can additionally access \(B_{s}\to\phi\nu\bar{\nu}\) and \(\Lambda_{b}^{0}\to\Lambda^{(*)}\nu\bar{\nu}\). In what follows, we will limit ourselves to the decays involving neutral mesons, namely \(B^{0}\to K_{S}^{0}\nu\overline{\nu}\), \(B^{0}\to K^{*0}\nu\overline{\nu}\), \(B_{s}^{0}\to\phi\nu\overline{\nu}\) and \(\Lambda_{b}^{0}\to\Lambda\nu\overline{\nu}\), collectively referred to as \(B\to Y\nu\overline{\nu}\) throughout this paper, for two reasons. First, they are not affected by weak annihilation contributions [21] which makes them theoretically cleaner. Second, they are experimentally easier to probe as the decay vertex of the neutral hadron into charged tracks is reconstructible.
The relevant decay rates can be written in the SM as follows [19, 37],
\[\frac{d\mathcal{B}(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu})_{\rm SM}}{dq^{2}} =3\,\tau_{B^{0}}|N_{B^{0}}|^{2}|C_{L}^{\rm SM}|^{2}|\lambda_{t}|^{2} \rho_{+}^{K^{0}_{\rm S}}\,, \tag{4}\] \[\frac{d\mathcal{B}(B^{0}\to K^{*0}\nu\overline{\nu})_{\rm SM}}{dq^{2}} =3\,\tau_{B^{0}}|N_{B^{0}}|^{2}|C_{L}^{\rm SM}|^{2}|\lambda_{t}|^{ 2}(\rho_{A_{1}}^{K^{*0}}+\rho_{A_{12}}^{K^{*0}}+\rho_{V}^{K^{*0}})\,,\] (5) \[\frac{d\mathcal{B}(B^{0}_{s}\to\phi\nu\overline{\nu})_{\rm SM}}{dq ^{2}} =3\,\tau_{B^{0}_{s}}|N_{B^{0}_{s}}|^{2}|C_{L}^{\rm SM}|^{2}|\lambda_{t }|^{2}(\rho_{A_{1}}^{\phi}+\rho_{A_{12}}^{\phi}+\rho_{V}^{\phi})\,,\] (6) \[\frac{d\mathcal{B}(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu})_{ \rm SM}}{dq^{2}} =3\,\tau_{A^{0}_{b}}|N_{A^{0}_{b}}|^{2}|C_{L}^{\rm SM}|^{2}|\lambda _{t}|^{2}(\rho_{f^{V}_{\perp}}^{A}+\rho_{f^{A}_{\perp}}^{A}+\rho_{f^{V}_{0}}^{A }+\rho_{f^{A}_{0}}^{A})\,, \tag{7}\]
where
\[N_{B_{q}}=\frac{G_{F}\alpha_{\rm em}}{16\pi^{2}}\sqrt{\frac{m_{B_{q}}}{3\pi}}. \tag{8}\]
In the above equations, \(\rho_{i}\equiv\rho_{i}(q^{2})\) are functions of the hadronic form factors defined in Appendix A,
\[\rho_{+}^{K^{0}_{\rm S}} =\frac{\lambda^{3/2}}{2m_{B^{0}}^{4}}\left(f^{K}_{+}(q^{2})\right) ^{2}, \tag{9}\] \[\rho_{V}^{K^{*0}} =\frac{2\,q^{2}\lambda^{3/2}}{(m_{B^{0}}+m_{K^{*0}})m_{B^{0}}^{4 }}\left(V^{K^{*}}(q^{2})\right)^{2},\] (10) \[\rho_{A_{1}}^{K^{*0}} =\frac{2\,q^{2}\lambda^{1/2}(m_{B^{0}}+m_{K^{*0}})^{2}}{m_{B^{0}} ^{4}}\left(A_{1}^{K^{*}}(q^{2})\right)^{2},\] (11) \[\rho_{A_{12}}^{K^{*0}} =\frac{64\,m_{K^{*0}}^{2}\lambda^{1/2}}{m_{B^{0}}^{2}}\left(A_{12 }^{K^{*}}(q^{2})\right)^{2},\] (12) \[\rho_{f^{V/A}_{\perp}}^{A} =\frac{32\,q^{2}\lambda^{1/2}((m_{A^{0}_{b}}\mp m_{\Lambda})^{2}- q^{2})}{m_{\Lambda^{0}_{b}}^{4}}\left(f^{V/A}_{\perp}(q^{2})\right)^{2},\] (13) \[\rho_{f^{V/A}_{0}}^{A} =\frac{16\,\lambda^{1/2}(m_{A^{0}_{b}}\pm m_{\Lambda})^{2}((m_{A^ {0}_{b}}\mp m_{\Lambda})^{2}-q^{2})}{m_{A^{0}_{b}}^{4}}\left(f^{V/A}_{0}(q^{2} )\right)^{2}, \tag{14}\]
where \(\lambda\equiv\lambda(q^{2},m_{1}^{2},m_{2}^{2})=(q^{2}-(m_{1}-m_{2})^{2})(q^ {2}-(m_{1}+m_{2})^{2})\). Moreover, the expressions for \(B^{0}_{s}\to\phi\) are obtained from \(B^{0}\to K^{*0}\) via trivial replacements. An angular analysis of these decays offers access to one additional observable for \(B^{0}\to K^{*0}\nu\overline{\nu}\) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) and two additional observables for \(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu}\). Following Refs. [18, 19, 38, 39] we define the mesonic longitudinal polarisation fractions as
\[F_{L}(B^{0}\to K^{*0}\nu\overline{\nu})_{\rm SM} =\frac{\rho_{A_{12}}^{K^{*0}}}{\rho_{A_{1}}^{K^{*0}}+\rho_{A_{12 }}^{K^{*0}}+\rho_{V}^{K^{*0}}}\,, \tag{15}\] \[F_{L}(B^{0}_{s}\to\phi\nu\overline{\nu})_{\rm SM} =\frac{\rho_{A_{12}}^{\phi}}{\rho_{A_{1}}^{\phi}+\rho_{A_{12}}^{ \phi}+\rho_{V}^{\phi}}\,. \tag{16}\]
The \(A_{b}^{0}\to\Lambda\nu\overline{\nu}\) longitudinal polarisation fractions and hadronic forward backward asymmetry are derived from Ref. [40] and read
\[F_{L}(\Lambda_{b}^{0}\to\Lambda\nu\overline{\nu})_{\rm SM} =\frac{\rho_{f_{0}^{V}}^{A}+\rho_{f_{0}^{A}}^{A}}{\rho_{f_{\perp}^{ V}}^{A}+\rho_{f_{\perp}^{A}}^{A}+\rho_{f_{0}^{V}}^{A}+\rho_{f_{0}^{A}}^{A}}\,, \tag{17}\] \[A_{\rm FB}^{A}(\Lambda_{b}^{0}\to\Lambda\nu\overline{\nu})_{\rm SM} =\frac{\alpha}{2}\frac{\tilde{\rho}_{\perp}^{A}+\tilde{\rho}_{0}^ {A}}{\rho_{f_{\perp}^{V}}^{A}+\rho_{f_{\perp}^{A}}^{A}+\rho_{f_{0}^{V}}^{A}+ \rho_{f_{0}^{A}}^{A}}\,, \tag{18}\]
where \(\alpha\) is the parity-violating decay parameter defined in Ref. [40] and we used
\[\tilde{\rho}_{\perp}^{A} =\frac{32\,q^{2}\lambda^{1/2}((m_{A_{b}^{0}}\mp m_{\Lambda})^{2}- q^{2})}{m_{A_{b}^{0}}^{4}}\,f_{\perp}^{V}(q^{2})f_{\perp}^{A}(q^{2}), \tag{19}\] \[\tilde{\rho}_{0}^{A} =\frac{16\,\lambda^{1/2}(m_{A_{b}^{0}}\pm m_{A})^{2}((m_{A_{b}^{ 0}}\mp m_{A})^{2}-q^{2})}{m_{A_{b}^{0}}^{4}}\,f_{0}^{V}(q^{2})f_{0}^{A}(q^{2}). \tag{20}\]
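For orientation, the rates above can be integrated numerically once the form factors are specified. The sketch below transcribes Eq. (4) for \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\); the constants are PDG-like placeholder values and the single-pole form factor is purely illustrative (not the LQCD parameterisation behind Table 1), so the output should only be read as an order-of-magnitude (\(\sim 10^{-6}\)) estimate.

```python
import numpy as np
from scipy.integrate import quad

# Placeholder numerical inputs (PDG-like); replace with the exact values used in the text.
GF, alpha_em = 1.1663787e-5, 1.0 / 128.0      # GeV^-2; the alpha_em scale choice is an assumption
mB, mK = 5.27966, 0.497611                    # GeV
tauB = 1.519e-12 / 6.582119569e-25            # B0 lifetime converted to GeV^-1
CL, lam_t = -6.32, 39.3e-3                    # C_L^SM and |V_tb V_ts*|

def kallen(q2, m1, m2):
    return (q2 - (m1 - m2) ** 2) * (q2 - (m1 + m2) ** 2)

def fplus(q2):
    """Crude single-pole B -> K vector form factor -- illustrative only."""
    return 0.33 / (1.0 - q2 / 5.4158 ** 2)

def dBdq2(q2):
    """Eq. (4), summed over the three neutrino flavours."""
    N2 = (GF * alpha_em / (16 * np.pi ** 2)) ** 2 * mB / (3 * np.pi)
    rho_plus = kallen(q2, mB, mK) ** 1.5 / (2 * mB ** 4) * fplus(q2) ** 2
    return 3 * tauB * N2 * CL ** 2 * lam_t ** 2 * rho_plus

BR, _ = quad(dBdq2, 0.0, (mB - mK) ** 2)
print(f"B(B0 -> KS0 nu nu) ~ {BR:.2e}")       # O(1e-6) with this placeholder form factor
```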
There are two main sources of uncertainties in the prediction of these decay rates: (i) the value of the CKM product \(\lambda_{t}\) and (ii) the hadronic form factors that need to be determined non-perturbatively, which will be discussed in the following.
The usual strategy to determine \(\lambda_{t}\) is to use the unitarity of the CKM matrix to relate it to \(|V_{cb}|\)[19]. However, the current discrepancy between the inclusive and exclusive determinations of \(|V_{cb}|\) introduces an ambiguity in the values that could be taken, see e.g. Ref. [20] for a recent discussion. An alternative is to extract \(|\lambda_{t}|\) from the mass-difference in the \(B_{s}-\overline{B_{s}}\) system using the product \(f_{B_{s}}\sqrt{\widehat{B_{s}}}\) of the \(B_{s}\) decay constant and bag parameter computed on the lattice [41, 42]. However, there is currently a disagreement between the determinations with \(2+1\) and \(2+1+1\) dynamical flavors [26], which leads again to an ambiguity. For the sake of definiteness, we will consider the value \(|\lambda_{t}|=(39.3\pm 1.0)\times 10^{-3}\) based on \(|V_{cb}|=(40.0\pm 1.0)\times 10^{-3}\) extracted from \(B\to D\ell\nu\) decays, which has a relative uncertainty of \(\approx 2.5\%\)[26]. However, it is clear that this puzzle needs to be solved by a combined theoretical and experimental effort to match the experimental precision foreseen at FCC-ee. In the phenomenological analysis of Sec. 5, we will also consider a hypothetical uncertainty of \(\approx 1.5\%\) which is quoted for exclusive \(|V_{cb}|\) determinations at Belle-II with \(50\) ab\({}^{-1}\)[33].
Regarding the hadronic form factors, the most reliable determinations are those based on numerical simulations of QCD on the lattice (LQCD). However, these results are only available for a few decay channels and only for large \(q^{2}\)-values. The SM predictions of the branching fractions thus rely on extrapolations of the form factors to the entire physical region, which are based on specific parameterisations. An alternative method, discussed in Sec. 5, consists of extracting ratios of form factors directly from the data, to guide the extrapolation at low \(q^{2}\).
In our phenomenological analysis, performed using the open-source EOS software [43] version v1.0.10 [44], we will consider two sets of form factors:
**2023**: For the mesonic modes \(B\to K^{(*)}\) and \(B_{s}\to\phi\), we follow the approach of Ref. [45] and parametrise the form factors with simplified series expansions [46]. We use the LQCD \(B\to K\) inputs of the FNAL/MILC [47] and HPQCD [48] collaborations. The \(B\to K^{*}\) and \(B_{s}\to\phi\) transitions are fitted on the LQCD inputs of Ref. [49] and the Light-Cone Sum Rules (LCSR)
estimations of Refs. [50, 51]. For the baryonic mode we follow Ref. [52], which uses the LQCD inputs of Ref. [53]. The predictions based on these inputs are quoted in Table 1. The uncertainties due to the form factors amount to 5% for the \(B\to K\) transition and 10% for the other transitions.
**Future**: The predictions will need to be considerably improved to match the experimental precision foreseen at FCC-ee. This would be particularly challenging for transitions featuring resonances, as they are notably harder to predict, especially if these resonances are broad. For the purposes of this analysis, we assume that the uncertainties will be reduced by a factor of ten over the coming decades. This scenario only serves as a reference to make the phenomenological analysis realistic. The uncertainties due to the form factors would therefore amount to less than a percent for \(B\to K\) and \(\sim 1\%\) for the other transitions.
## 3 Experimental environment
For our experimental analysis, we follow much of the procedure developed and outlined in Ref. [35]. Here we give a brief description of the collider and detector environment that has been assumed for this study.
### FCC-ee
The proposed Future Circular Collider (FCC) [54] is the next generation state-of-the-art particle research facility. The ongoing FCC feasibility study is investigating the benefits and physics reach of such a machine which would be built in a new 80 - 100 km tunnel, near CERN, with capabilities of running in successive stages of \(e^{+}e^{-}\), \(e\)-\(p\) or \(p\)-\(p\) mode. The \(e^{+}e^{-}\) machine (FCC-ee) [55] would run at centre-of-mass energies, \(\sqrt{s}\), in the range between 91 GeV (_i.e._ the Z-pole) and 365 GeV (_i.e._ the \(t\overline{t}\) threshold). FCC-ee offers an unprecedented opportunity to study every known particle of the SM in exquisite detail. Beyond its capabilities as an electroweak precision machine there is scope for world's-best measurements in the beauty (\(b\)-quark), charm (\(c\)-quark) and tau (\(\tau\)-lepton) sectors with the vast statistics anticipated to be taken at the Z-pole. This so-called "Tera-\(Z\)" run would produce \({\cal O}(10^{12})\)\(Z\)-bosons per experiment, which have a high branching fraction to both \(b\overline{b}\) (0.15) and \(c\overline{c}\)
\begin{table}
\begin{tabular}{l c c c} \hline \hline Decay mode & \({\cal B}/|\lambda_{t}|^{2}\,[10^{-3}]\) & \({\cal B}\,[10^{-6}]\) & Ref. \\ \hline \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\) & \(1.33\pm 0.04\) & \(2.02\pm 0.12\) & [45, 20] \\ \(B^{0}\to K^{*0}\nu\overline{\nu}\) & \(5.13\pm 0.51\) & \(7.93\pm 0.89\) & [45] \\ \(B^{0}_{s}\to\phi\nu\overline{\nu}\) & \(6.31\pm 0.67\) & \(9.74\pm 1.15\) & [45] \\ \(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu}\) & \(5.55\pm 0.56\) & \(8.57\pm 0.97\) & [52] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Current SM predictions for integrated decay rates summed over the three neutrino flavors. In the third column we used the exclusive determination of \(|V_{cb}|=(40.0\pm 1.0)\times 10^{-3}\), which yields \(|\lambda_{t}|=(39.3\pm 1.0)\times 10^{-3}\), as described in the text.
(0.12) pairs [36]. One of the advantages of a circular, as opposed to linear, collider layout is that collisions can be delivered to multiple interaction regions simultaneously, which allows for a variety of different detector design choices.
### Detector Response
Monte-Carlo (MC) event samples are used to simulate the response of the detector to various different physics processes. The procedure for event generation and simulation of the detector response is identical to that described in Ref. [35]. In summary, events are generated under nominal FCC-ee conditions using Pythia[56], with unstable particles decayed using EvtGen[57] and final-state radiation generated by Photos[58]. The detector configuration under consideration is the Innovative Detector for Electron-positron Accelerators (IDEA) concept [55]. The detector response is simulated using the DELPHES package with the configuration card in Ref. [59] interfaced to the common EDM4hep data format [60].
### Simulation Samples
Our study exploits various different MC simulation samples used to mimic the expected signal and background distributions at FCC-ee. We make use of inclusive samples of \(Z\to b\overline{b}\), \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\) (where \(q\) is one of the light quarks, \(q\in\{u,d,s\}\)) as proxies for the total expected background. We then make use of dedicated exclusive samples for each of the signal modes under study, namely the \(B^{0}\to K^{*0}\nu\overline{\nu}\), \(B^{0}_{s}\to\phi\nu\overline{\nu}\), \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\) and \(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu}\) decays. The simulated samples contain an admixture of both \(b\)-hadron flavours _i.e._ charge-conjugation is implied throughout. The \(K^{*0}\) resonance is assumed to be pure vector \(K^{*}(892)^{0}\to K^{+}\pi^{-}\) and the \(\phi\) resonance is assumed to be pure vector \(\phi(1020)\to K^{+}K^{-}\).
The signal decays are simulated using the PHSP EvtGen model which generates the \(B\) candidate decay children uniformly distributed in phase space. This does not accurately simulate the correct momentum transfer distribution in these decays. Consequently, we reweight our simulation samples using the MC truth invariant mass of the neutrino pair, \(q^{2}\), and the model predictions provided in Sec. 2. A comparison between the PHSP and theory prediction (LQCD+LCSR) for the \(q^{2}\) distribution is shown in Fig. 1 along with their ratio which is used for the reweighting of the simulation samples.
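A minimal sketch of this histogram-based reweighting in the MC-truth \(q^{2}\), assuming an illustrative binning and a generic theory-shape callable (not the exact configuration behind Fig. 1), is:

```python
import numpy as np

def q2_weights(q2_mc, theory_shape_fn, n_bins=50, q2_max=23.0):
    """Per-event weights = normalised theory q2 shape / normalised PHSP q2 shape.

    q2_mc          : array of MC-truth q2 values of the generated PHSP sample [GeV^2]
    theory_shape_fn: callable returning dB/dq2 (up to normalisation) for an array of q2 values
    """
    edges = np.linspace(0.0, q2_max, n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    phsp, _ = np.histogram(q2_mc, bins=edges, density=True)
    theory = theory_shape_fn(centres)
    theory = theory / np.trapz(theory, centres)
    ratio = np.divide(theory, phsp, out=np.zeros_like(theory), where=phsp > 0)
    return ratio[np.clip(np.digitize(q2_mc, edges) - 1, 0, n_bins - 1)]
```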
### Analysis framework and implementation
We make use of the same basic analysis framework as deployed in Ref. [35]. Our nominal analysis strategy (variations on these assumptions are further discussed below) assumes:
* **Perfect vertex seeding.** Whilst we take into account that vertex positions are not perfectly known, via the tracking system resolution, we assume that vertices can be perfectly seeded. In other words we always match the reconstructed vertex to the simulated vertex. The impact of this assumption is studied further below. High precision vertex finding will be a crucial aspect of the detector design to maximise the physics reach for \(b\to s\nu\overline{\nu}\).
* **Perfect particle identification.** We assume that the detector will have perfect discrimination between kaons and pions (and indeed protons and other species). This is particularly relevant for broader resonances that have both kaons and pions in the final state (for example the \(K^{*0}\)). The impact of this assumption is studied in further detail below, where we investigate the sensitivity at different values of the kaon-pion separation power.
Furthermore, due to the additional complexity required in reconstructing neutral final states, such as \(K^{0}_{\rm S}\to\pi^{+}\pi^{-}\) and \(\Lambda\to p\pi^{-}\), which fly some distance in the detector before producing charged tracks, we do not yet fully reconstruct these modes. We instead chose to focus on the modes which decay promptly, _i.e._ with \(K^{*0}\to K^{+}\pi^{-}\) and \(\phi\to K^{+}K^{-}\), and make sensitivity projections for the modes with neutrals based on assumptions about the neutral reconstruction. Reconstruction of neutral \(K^{0}_{\rm S}\) and \(\Lambda\) candidates has recently been developed for the IDEA detector at FCC-ee but was not available in time for our studies. A full study which includes neutral reconstruction will come at a later date.
## 4 Analysis
In order to obtain an estimate for the expected sensitivity to the various \(b\to s\nu\overline{\nu}\) decays under consideration, we optimise a two-stage selection procedure based on Boosted Decision Trees (BDTs). These are trained to distinguish between the signal candidates of interest and the inclusive backgrounds from \(Z\to b\overline{b}\), \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\), for \(q\in\{u,d,s\}\).
One of the key signatures of the signal decays is the presence of large missing energy in the direction of the \(B\) meson candidate due to the two neutrinos in the final state. Consequently a typical signal event will have a relatively large imbalance of missing energy between the signal side of the \(Z\to b\overline{b}\) event and the non-signal side. For a typical \(Z\to b\overline{b}\) background event any missing energy will be approximately the same on both sides. In order to determine the imbalance between the signal-side and the non-signal-side we divide events (on a per-event basis) into two hemispheres, each respectively corresponding to one of the two \(b\)-quarks produced from the \(Z\) decay.
The hemispheres, pictorially represented in Fig. 2, are defined using the plane normal to the thrust axis, which is defined by the unit vector, \(\hat{\bf n}\), that maximises,
\[T=\frac{\sum_{i}|{\bf p}_{i}\cdot\hat{\bf n}|}{\sum_{i}|{\bf p}_{i}|}, \tag{21}\]
Figure 1: A comparison between the generated \(q^{2}\) distribution (orange line) and the theory prediction provided in this paper (blue line and band) along with their ratio (green line) which is used to reweight the simulation samples in our analysis, for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) decay (left) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) decay (right).
where \({\bf p}_{i}\) is the momentum vector of the \(i^{\rm th}\) reconstructed particle in the event. This thrust axis provides a measure of the direction of the quark pair produced from the \(Z\) decay. Reconstructed particles from each event are then assigned to either hemisphere depending on the angle, \(\theta\), between their momentum vector and the thrust axis. A particle is considered to be in the signal hemisphere (that which is expected to have the least total energy) if \(\cos(\theta)>0\) and in the non-signal hemisphere if \(\cos(\theta)<0\).
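A compact numerical sketch of this hemisphere construction is given below. The brute-force direction scan is a stand-in for the thrust-axis finding of the full analysis framework, and the scan granularity is an arbitrary choice.

```python
import numpy as np

def thrust_axis(momenta, n_theta=60, n_phi=120):
    """Scan unit vectors on a grid and return the one maximising T of Eq. (21)."""
    p = np.asarray(momenta, dtype=float)                       # shape (N, 3)
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    tt, pp = np.meshgrid(theta, phi, indexing="ij")
    axes = np.stack([np.sin(tt) * np.cos(pp),
                     np.sin(tt) * np.sin(pp),
                     np.cos(tt)], axis=-1).reshape(-1, 3)
    T = np.abs(p @ axes.T).sum(axis=0) / np.linalg.norm(p, axis=1).sum()
    return axes[np.argmax(T)]

def hemisphere_energies(momenta, energies, axis):
    """Split particles by the sign of cos(theta) w.r.t. the thrust axis and
    return (signal-hemisphere energy, other-hemisphere energy), i.e. the
    minimum-energy hemisphere first."""
    p = np.asarray(momenta, dtype=float)
    e = np.asarray(energies, dtype=float)
    cos_t = (p @ axis) / np.linalg.norm(p, axis=1)
    e_pos, e_neg = e[cos_t > 0].sum(), e[cos_t < 0].sum()
    return (e_pos, e_neg) if e_pos < e_neg else (e_neg, e_pos)
```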
Signal candidates are constructed by requiring two opposite sign tracks originating from the same position and displaced from the primary interaction. A mass window cut, described in Table 2, is applied to the intermediate \(Y\) resonance.
Events are required to have at least one primary vertex, have at least one intermediate \(Y\) candidate and the momentum of the intermediate candidate must point towards the minimum energy hemisphere, _i.e._ the candidate must have \(\cos(\theta)>0\).
We train two different BDTs to isolate signal candidates from the background. The first is
\begin{table}
\begin{tabular}{l c c c} \hline Decay & Candidate & Candidate Children & Candidate Mass Range [GeV] \\ \hline \(B^{0}\to K^{*0}\nu\overline{\nu}\) & \(K^{*0}\) & \(K^{\pm}\pi^{\mp}\) & [0.65, 1.10] \\ \(B^{0}_{s}\rightarrow\phi\nu\overline{\nu}\) & \(\phi\) & \(K^{+}K^{-}\) & [1.00, 1.06] \\ \hline \end{tabular}
\end{table}
Table 2: The children PID and candidate mass range required for constructing the candidate particle for each signal decay.
Figure 2: A pictorial representation of the definition of the thrust axis and the two event hemispheres for a \(B^{0}\to K^{*0}\nu\overline{\nu}\) event.
designed to select based on the overall event topology and energy distribution. The second is designed to select based on specific information related to the intermediate candidate. The xgboost package [61] is used to train the BDTs using the \(k\)-fold cross validation method (with \(k=4\)) to avoid over-training and re-use of events. Separate trainings are performed for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) modes, with dedicated signal samples. The background training sample uses inclusive samples of \(Z\to b\overline{b}\), \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\) (with \(q\in\{u,d,s\}\)) appropriately weighted according to the known hadronic \(Z\) branching fractions: \(0.1512\) (\(Z\to b\overline{b}\)), \(0.1203\) (\(Z\to c\overline{c}\)) and \(0.4276\) (\(Z\to q\overline{q}\)) [36].
### First-stage BDT
The first stage BDT is trained using a sample of 1 million signal events and 1 million background events. The BDT is trained using the following input variables:
* The total reconstructed energy in each hemisphere,
* The total charged and neutral reconstructed energies of each hemisphere,
* The charged and neutral particle multiplicities in each hemisphere,
* The number of charged tracks used in the reconstruction of the primary vertex,
* The number of reconstructed vertices in the event,
* The number of candidates in the event
* The number of reconstructed vertices in each hemisphere,
* The minimum, maximum and average radial distance of all decay vertices from the primary vertex.
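A minimal sketch of such a \(k\)-fold training with xgboost, assuming illustrative (untuned) hyper-parameters, is:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import KFold

def train_stage1_bdt(X, y, n_folds=4):
    """k-fold training (k = 4) so that every event receives a BDT score from a
    model it was not trained on. X holds the event-level variables listed above
    (one row per event), y = 1 for signal and 0 for inclusive background."""
    scores = np.zeros(len(y))
    models = []
    for train_idx, test_idx in KFold(n_splits=n_folds, shuffle=True,
                                     random_state=0).split(X):
        model = xgb.XGBClassifier(n_estimators=400, max_depth=5,
                                  learning_rate=0.1, eval_metric="logloss")
        model.fit(X[train_idx], y[train_idx])
        scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]
        models.append(model)
    return models, scores
```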
Figure 3 shows the BDT response in each of the reconstructed channels and Fig. 4 shows the efficiency as a function of a cut on the minimum BDT response. It can be seen that the stage 1 BDT is effective at rejecting the inclusive backgrounds, particularly from the lighter quark species, although there is a small mis-identification rate at high BDT scores. The integrated ROC score is \(0.965\) for both the \(B^{0}\to K^{*0}\nu\overline{\nu}\) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) channels.
### Detailed study of background contributions
After the stage 1 BDT we introduce some loose pre-selection cuts which remove a large fraction of the inclusive backgrounds. These cuts are on the energy difference between the two hemispheres, \(E_{\rm diff}>5\) GeV, and on the stage 1 BDT, BDT1 \(>0.6\). The stage 1 BDT efficiency, shown in Fig. 4, demonstrates that the cut of BDT1 \(>0.6\) retains \(\sim 95\%\) of the signal whilst rejecting \(\sim 90\%\) of the inclusive background. The distribution of \(E_{\rm diff}\) is shown in Fig. 5, after the loose cut of BDT1 \(>0.6\) is applied. By studying in detail, via use of matching to the true MC candidates, the contributions from events which pass these loose cuts, we investigate what sort of backgrounds would be largest in a real-life study. The results are shown in Fig. 6. The dominant backgrounds are those which proceed via semi-leptonic \(b\to c\to s\) transitions and semi-leptonic prompt \(c\to s\) transitions. The most problematic of these are those that contain either real resonant \(K^{*}(892)^{0}\) or \(\phi(1020)^{0}\), which peak in the relevant invariant mass. A more detailed list of the specific exclusive background modes which contribute most significantly are provided in Appendix B. These specific backgrounds are not
Figure 4: First stage BDT response cut efficiencies for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) channel (left) and the \(B^{0}_{s}\to\phi\nu\overline{\nu}\) channel (right). The relevant signal mode response is shown as the orange line, the inclusive background sample responses are shown in red, blue and green for \(Z\to b\overline{b}\), \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\) (for \(q\in\{u,d,s\}\)), respectively.
Figure 3: First stage BDT response for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) channel (left) and the \(B^{0}_{s}\to\phi\nu\overline{\nu}\) channel (right). The relevant signal mode response is shown as the orange filled histogram, the inclusive background sample responses are shown in red, blue and green for \(Z\to b\overline{b}\), \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\) (for \(q\in\{u,d,s\}\)), respectively.
further studied in this work, although they are included as part of the inclusive samples we use to model our background. The dominant contributions would require dedicated treatment for future works aiming to maximise the sensitivity.
### Second-stage BDT
The second-stage BDT is trained using a sample of 1 million signal events and 1 million background events which pass the preselection criteria of \(E_{\rm diff}>5\,\)GeV and BDT1 \(>0.6\). The second-stage BDT is trained using the following input variables:
* The intermediate candidate's reconstructed mass
* The number of intermediate candidates in the event
* The intermediate candidate's flight distance and flight distance \(\chi^{2}\) from the primary vertex
* The \(x\), \(y\) and \(z\) components of the intermediate candidate's momentum
* The scalar momentum of the intermediate candidate
* The transverse and longitudinal impact parameter of the intermediate candidate
* The minimum, maximum and average transverse and longitudinal impact parameters of all other reconstructed decay vertices in the event
* The angle between the intermediate candidate and the thrust axis
* The mass of the primary vertex
* The nominal \(B\) candidate energy, defined as the \(Z\) mass minus all of the reconstructed energy apart from the candidate children
Figure 5: Distributions of the energy difference between the two hemispheres, after a loose cut on BDT1 \(>0.6\), for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) channel (left) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) channel (right). The relevant signal mode response is shown as the orange filled histogram, the inclusive background sample responses are shown in red, blue and green for \(Z\to b\overline{b}\), \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\) (for \(q\in\{u,d,s\}\)), respectively.
Figure 7 shows the second-stage BDT response in each of the reconstructed channels. The integrated ROC scores are 0.961 and 0.959 for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) channels, respectively.
### Sensitivity Estimate
In order to obtain an estimate of the overall sensitivity we need to find an optimal cut point in both BDT scores given a particular value of the expected \(B\to Y\nu\overline{\nu}\) branching fraction, where \(Y\) is the intermediate resonance, \(Y\in\{K^{*0},\phi\}\). Given that a combination of cuts on both BDTs is incredibly efficient at rejecting the background, we cannot get an accurate estimate of the cut efficiencies directly from the inclusive background samples, because so little of the MC statistics remain for the inclusive backgrounds at high BDT cut values. Consequently, we build a map of the signal and inclusive background efficiencies, \(\epsilon^{s}\) and \(\epsilon^{b}\), as a function of the two BDT score cut values and then use a bi-cubic spline to interpolate between points. We then use a figure of merit (FOM) defined as,
\[\text{FOM}=\frac{S}{\sqrt{S+B}}, \tag{22}\]
where \(S\) is the expected number of signal events and \(B\) is the expected number of background events based on the sum of contributions from \(Z\to b\overline{b}\), \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\) (for \(q\in\{u,d,s\}\)).
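The bi-cubic interpolation of the efficiency maps can be done, for instance, with SciPy; the grid variable names below are illustrative.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def efficiency_map(bdt1_cuts, bdt2_cuts, eff_grid):
    """Bi-cubic interpolation of an efficiency map tabulated on a grid of
    (BDT1 cut, BDT2 cut) values; kx = ky = 3 gives the bi-cubic spline."""
    return RectBivariateSpline(np.asarray(bdt1_cuts), np.asarray(bdt2_cuts),
                               np.asarray(eff_grid), kx=3, ky=3)

# usage: eps = efficiency_map(c1, c2, grid); eps(0.92, 0.85)[0, 0]
```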
The signal expectation is computed as,
\[S=2\,N_{Z}\,\mathcal{B}(Z\to b\overline{b})\,f_{B}\,\mathcal{B}(B \to Y\nu\overline{\nu})\,\mathcal{B}(Y\to f)\,\epsilon^{s}_{\text{pre}}\, \epsilon^{s}_{\text{BDTs}}, \tag{23}\]
where \(N_{Z}\) is the number of \(Z\) bosons produced, the factor of two accounts for the fact there are two \(b\)-quarks, \(f_{B}\) is the production fraction for the \(b\)-quark to hadronise into the relevant \(b\)-hadron,
Figure 6: Background contributions as a function of the intermediate resonance mass, for inclusive background events which pass the loose pre-selection, in the \(B^{0}\to K^{*0}\nu\overline{\nu}\) mode (left) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) mode (right). Contributions are summed over the \(Z\to b\overline{b}\), \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\) with appropriate weighting for their relative branching fractions and selection efficiencies. Each distribution contains two non-resonant components (S-wave) in blue and orange, and two resonant components (left: vector \(K^{*}(892)^{0}\), right: vector \(\phi(1020)\)) in green and red. The two further distinctions are made between decays originating from a \(b\)-hadron (blue and green), labelled \(X_{b}\), and those originating from a prompt \(c\)-hadron (orange and red), labelled \(X_{c}\). All of the dominant backgrounds originating from a \(b\)-hadron proceed via a secondary \(c\)-hadron.
\({\cal B}(B\to Y\nu\overline{\nu})\) is the predicted branching fraction for the decay of interest, \({\cal B}(Y\to f)\) is the branching fraction of the intermediate resonance to the final state \(f\), \(\epsilon_{\rm pre}^{s}\) is the signal efficiency of the pre-selection (including the reconstruction and the loose cut on BDT1), and \(\epsilon_{\rm BDTs}^{s}\) is the signal efficiency of the two BDT score cuts.
The background expectation is computed as,
\[B=\sum_{f\in\{b\overline{b},c\overline{c},q\overline{q}\}}N_{Z}\,{\cal B}(Z \to f)\,\epsilon_{f,{\rm pre}}^{b}\,\epsilon_{f,{\rm BDTs}}^{b}, \tag{24}\]
where \({\cal B}(Z\to f)\) are the relevant branching fractions for \(Z\to\) hadrons (either \(b\overline{b}\), \(c\overline{c}\) or \(q\overline{q}\)) and \(\epsilon_{f,{\rm pre}}^{b}\), \(\epsilon_{f,{\rm BDTs}}^{b}\) are the pre-selection and BDT cut efficiencies of the relevant background, respectively.
For our study we assume the following values of the parameters in Eqs. (23) and (24):
* \(N_{Z}=6\times 10^{12}\), the number of \(Z\)-bosons produced across all experiments during the entire Tera-\(Z\) run at FCC-ee.
* The production fraction of \(B\)-mesons from \(Z\to b\overline{b}\) decays are \(f_{B^{0}}=0.43\) and \(f_{B^{0}_{s}}=0.096\).
* The SM predictions of the relevant decay branching fractions are provided above in Table 1, although we also scan the sensitivity as a function of these branching fractions below (see Fig. 8).
* The intermediate resonance branching fractions are \({\cal B}(K^{*0}\to K^{+}\pi^{-})=0.9975\) and \({\cal B}(\phi\to K^{+}K^{-})=0.491\).
* The \(Z\to\) hadrons branching fractions are \({\cal B}(Z\to b\overline{b})=0.1512\), \({\cal B}(Z\to c\overline{c})=0.1203\) and \({\cal B}(Z\to q\overline{q})=0.4276\).
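Putting these ingredients together, Eqs. (22)-(24) can be evaluated directly. In the sketch below only the \(3.7\%\) total signal efficiency is taken from the text; the background efficiencies are placeholder numbers meant to illustrate the mechanics rather than reproduce the quoted sensitivities.

```python
import numpy as np

N_Z = 6e12
BF_Z = {"bb": 0.1512, "cc": 0.1203, "qq": 0.4276}

def expected_yields(br_sig, f_b, br_intermediate, eff_sig, eff_bkg):
    """Eq. (23) for the signal and Eq. (24) summed over Z -> bb, cc, qq."""
    S = 2 * N_Z * BF_Z["bb"] * f_b * br_sig * br_intermediate * eff_sig
    B = sum(N_Z * BF_Z[fl] * eff_bkg[fl] for fl in BF_Z)
    return S, B

# Placeholder efficiencies for B0 -> K*0 nu nu (eff_sig folds pre-selection and BDT cuts together).
S, B = expected_yields(br_sig=7.93e-6, f_b=0.43, br_intermediate=0.9975,
                       eff_sig=0.037,
                       eff_bkg={"bb": 1e-6, "cc": 1e-9, "qq": 1e-9})
print(f"FOM = {S / np.sqrt(S + B):.1f}, relative BR uncertainty = {np.sqrt(S + B) / S:.2%}")
```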
The sensitivity provided in units of \(\sqrt{S+B}/S\) (%), in other words the expected relative size of the \(1\sigma\) uncertainty on the measured branching fraction as a function of the hypothesised branching
Figure 7: Second stage BDT response for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) channel (left) and the \(B^{0}_{s}\to\phi\nu\overline{\nu}\) channel (right). The relevant signal mode response is shown as the orange filled histogram, the inclusive background sample responses are shown in red, blue and green for \(Z\to b\overline{b}\), \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\) (for \(q\in\{u,d,s\}\)), respectively.
fraction, is shown in Fig. 8. At the SM predictions the expected sensitivities are \(0.53\%\) for \(B^{0}\to K^{*0}\nu\overline{\nu}\) and \(1.20\%\) for \(B^{0}_{s}\to\phi\nu\overline{\nu}\). The expected signal-to-background ratios at the optimal cut points are \(0.17\) for \(B^{0}\to K^{*0}\nu\overline{\nu}\) and \(0.13\) for \(B^{0}_{s}\to\phi\nu\overline{\nu}\). The total signal efficiency of the full analysis chain is \(3.7\%\) (\(7.4\%\)) for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) (\(B^{0}_{s}\to\phi\nu\overline{\nu}\)) mode, with background rejection rates of \(\mathcal{O}(10^{-7})\) for \(Z\to b\overline{b}\) backgrounds and \(\mathcal{O}(10^{-9})\) for \(Z\to c\overline{c}\) and \(Z\to q\overline{q}\) in both modes.
Given the excellent expected precision to the branching fractions, it would also be feasible to fit the differential branching fractions as a function of \(q^{2}\), therefore allowing for direct measurements of \(F_{L}\). Based on projections made by the Belle-II collaboration for prospects in \(b\to s\nu\overline{\nu}\) decays [34] we expect that \(F_{L}\) could be measured with a relative uncertainty of \(\sim 2.5\%\) in the \(B^{0}\to K^{*0}\nu\overline{\nu}\) mode and \(\sim 5\%\) in the \(B^{0}_{s}\to\phi\nu\overline{\nu}\) mode at FCC-ee.
### Extrapolation to neutral modes
Recent studies of neutral reconstruction performance with IDEA at FCC-ee suggest that the \(K^{0}_{\rm S}\) and \(\Lambda\) reconstruction efficiency is \(\sim 80\%\) in the momentum range relevant for this analysis [62]. Based on the typical efficiencies of our analysis in the \(B^{0}\to K^{*0}\nu\overline{\nu}\) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) decays, along with an additional \(80\%\) reconstruction efficiency for the \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\) and \(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu}\) modes, we extrapolate our sensitivity estimates for the neutral modes using Eqs. (23) and (24), assuming the same background rejection rate can be achieved.
The numerical values used for the terms in Eqs. (23) and (24) are \(f_{\Lambda^{0}_{b}}=0.037\), \(\mathcal{B}(K^{0}_{\rm S}\to\pi^{+}\pi^{-})=0.692\) and \(\mathcal{B}(\Lambda\to p\pi^{-})=0.639\). This results in expected sensitivities (signal-to-background ratios), at the SM prediction, of \(3.37\%\) (\(0.04\)) for \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\) and \(9.86\%\) (\(0.015\)) for \(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu}\). The extrapolated sensitivity as a function of the hypothesised branching fraction for these modes is shown in Fig. 9.
### Study of particle-identification
As mentioned above, the sensitivity estimates provided in Fig. 8 are based on the assumption of perfect particle-identification performance. In other words it is assumed that all pions and kaons
can be perfectly distinguished by the detector and are thus given the correct mass hypothesis. This assumption is checked by recomputing the signal efficiencies, \(\epsilon^{s}_{\rm pre}\) and \(\epsilon^{s}_{\rm BDT}\) of Eq. (23), after making random mass hypothesis swaps of kaon \(\to\) pion and pion \(\to\) kaon, based on an assumed mis-identification rate, \(f_{\rm misid}\). This incorporates the effect of double mis-identifications and in most cases will cause events to fall outside of the mass window for the intermediate resonance, listed in Table 2.
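A sketch of this mass-hypothesis swapping for a two-track candidate is given below; the translation of a given kaon-pion separation power into the rate \(f_{\rm misid}\) happens upstream and is not reproduced here.

```python
import numpy as np

MASS = {"K": 0.493677, "pi": 0.139570}            # GeV

def candidate_mass_with_misid(true_species, momenta, f_misid,
                              rng=np.random.default_rng(7)):
    """Randomly swap K <-> pi mass hypotheses with probability f_misid and
    return the recomputed two-track invariant mass of the candidate."""
    swapped = []
    for s in true_species:
        if rng.random() < f_misid:
            swapped.append("pi" if s == "K" else "K")
        else:
            swapped.append(s)
    p = np.asarray(momenta, dtype=float)           # shape (2, 3), GeV
    e = np.array([np.sqrt(MASS[s]**2 + np.dot(pv, pv)) for s, pv in zip(swapped, p)])
    p_tot = p.sum(axis=0)
    return np.sqrt(max(e.sum()**2 - np.dot(p_tot, p_tot), 0.0))

# Candidates migrating outside the [0.65, 1.10] GeV window of Table 2 after the
# swap are counted as lost when the signal efficiencies are recomputed.
```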
The results of this study are shown in Fig. 10 in terms of the kaon-pion separation power in standard deviations, \(\sigma\), _vs._ the expected degradation to the sensitivity. These show that \(K-\pi\) separation of \(\sim 2\sigma\) would have a negligible impact on the uncertainty, although the performance rapidly degrades with worse separation.
### Study of imperfect vertex seeding
Furthermore, the sensitivity estimates provided in Fig. 8 assume perfect vertex seeding. Whilst the vertex resolution of the detector itself is incorporated, it is still assumed that each vertex is correctly identified. In practice this will not always be the case: for poorly resolved vertices, or for vertices in close proximity, the wrong vertex may be chosen instead. This effect is investigated by randomly selecting the wrong vertex, based on a value of the vertex resolution, and propagating its effect through the analysis pipeline. The results are shown in Fig. 11, which gives the secondary vertex identification rate as a function of the vertex resolution. This shows that the vertex resolution will need to be \(\lesssim 0.2\,\mathrm{mm}\) in order to sufficiently mitigate vertex mis-identification. However, this is far looser than the resolution requirement already imposed by vertex precision, \(\mathcal{O}(10\,\upmu\mathrm{m})\). Consequently, we do not expect any significant effect from vertex mis-association.
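The qualitative behaviour can be mimicked with a few lines of Monte Carlo. The sketch below is only schematic: two vertices are placed at an assumed typical separation of \(0.5\,\mathrm{mm}\) (an illustrative number, not taken from the simulation), both measured positions are smeared with a Gaussian resolution, and the seed is taken to be whichever measured vertex lies closer to the true signal vertex.

```python
import numpy as np

def correct_association_rate(resolution_mm, separation_mm=0.5, n=200_000, seed=1):
    """Toy estimate of how often the true secondary vertex is chosen when
    two vertices, separated by `separation_mm` along one axis, are smeared
    with a Gaussian resolution and the closest measured vertex is seeded."""
    rng = np.random.default_rng(seed)
    true_sv, other = 0.0, separation_mm
    meas_sv = true_sv + rng.normal(0.0, resolution_mm, n)
    meas_other = other + rng.normal(0.0, resolution_mm, n)
    return np.mean(np.abs(meas_sv - true_sv) < np.abs(meas_other - true_sv))

for res in (0.02, 0.1, 0.2, 0.5, 1.0):  # mm
    print(f"resolution {res:4.2f} mm -> correct association rate "
          f"{correct_association_rate(res):.3f}")
```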
Figure 11: The correct secondary vertex association rate as a function of the expected vertex resolution for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) decay (left) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) decay (right).
Figure 10: Degradation of the sensitivity to the branching fraction, with respect to the nominal sensitivity assuming perfect PID, as a function of the kaon-pion separation power for the \(B^{0}\to K^{*0}\nu\overline{\nu}\) decay (left) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) decay (right).
## 5 Phenomenology
In this section, we investigate the implications of measurements of the \(b\to s\nu\overline{\nu}\) observables with the expected sensitivities obtained in the previous sections, namely \(0.53\%\) for \(B^{0}\to K^{*0}\nu\overline{\nu}\), \(1.20\%\) for \(B^{0}_{s}\to\phi\nu\overline{\nu}\), \(3.37\%\) for \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\), and \(9.86\%\) for \(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu}\). We will consider the current uncertainties for the SM predictions quoted in Table 1, and we will study the impact of an improvement on these uncertainties by more precise and accurate determinations of \(|\lambda_{t}|\) and the hadronic form factors.
### SM implications
As discussed in Sec. 2, the two main sources of uncertainties are the product of CKM matrix elements \(|\lambda_{t}|\) and the form factors. We first assume that NP effects are absent and study how a precise measurement of \(b\to s\nu\overline{\nu}\) could provide us with information about these quantities.
#### Extraction of CKM elements.
As a first illustration of the potential of these measurements at FCC-ee, we study the precision in extracting \(|\lambda_{t}|^{2}\) from these decays by using the form factors determined from LQCD. The most convenient decay for this purpose is \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\), for which only a single form factor is needed and is already predicted with a \(\approx 5\%\) precision. In this case, we can write,
\[|\lambda_{t}|=(39.3\times 10^{-3})\biggl{[}\frac{{\cal B}(B^{0}\to K^{0}_{\rm S }\nu\overline{\nu})^{\rm exp}}{2.02\times 10^{-6}}\biggr{]}^{1/2}\biggl{[} \frac{\kappa_{+}}{24.8}\biggr{]}^{-1/2}\quad\mbox{where $\kappa_{+}=\int dq^{2}\rho_{+}^{K^{0}_{ \rm S}}(q^{2})$}. \tag{25}\]
Equation (25) clearly shows that the joint effort of both the lattice and the experimental communities will make \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\) decays a major player in the extraction of \(|\lambda_{t}|\). We now extend this study to the other modes using the **Future** form factors uncertainties described in Sec. 2. The results are shown in Fig. 12, where the extracted values of \(\lambda_{t}\) are compared to the current world average.
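A direct numerical transcription of Eq. (25), with simple uncorrelated error propagation, might look as follows; the uncertainties used in the example call (the projected \(3.37\%\) on the branching fraction and a \(5\%\) placeholder on \(\kappa_{+}\)) are illustrative inputs rather than fit results.

```python
import numpy as np

def lambda_t_from_Ksnunu(br_exp, kappa_plus):
    """Eq. (25): |lambda_t| from a measured B0 -> K0S nu nu branching
    fraction and the form-factor integral kappa_+ = int dq^2 rho_+."""
    return 39.3e-3 * np.sqrt(br_exp / 2.02e-6) * np.sqrt(24.8 / kappa_plus)

def lambda_t_with_errors(br, br_err, kappa, kappa_err):
    """Propagate uncorrelated relative uncertainties; each input enters
    |lambda_t| with power 1/2."""
    central = lambda_t_from_Ksnunu(br, kappa)
    rel = 0.5 * np.hypot(br_err / br, kappa_err / kappa)
    return central, central * rel

val, err = lambda_t_with_errors(2.02e-6, 0.0337 * 2.02e-6, 24.8, 0.05 * 24.8)
print(f"|lambda_t| = ({val * 1e3:.2f} +/- {err * 1e3:.2f}) x 10^-3")
```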
#### Extraction of hadronic form factors.
Conversely, assuming accurate knowledge of the CKM elements from other sources, the \(b\to s\nu\overline{\nu}\) decays allow for a simultaneous extraction of the form factors. The dependency on CKM elements can also be lifted by considering only the shape of the form factors [20]. Assuming the SM, an unnormalised binned likelihood fit of the differential branching ratios provides direct access to the shape of the scalar form factor \(\rho_{+}^{K}\) and to the combination of the vector form factors \(\rho_{V}+\rho_{A_{1}}+\rho_{A_{12}}\).
As an example, we extend the definition of the ratio \(r_{\rm lh}\) introduced in Ref. [20] to the other modes using
\[r_{\rm lh}^{Y}=\frac{{\cal B}(B\to Y\nu\bar{\nu})_{0<q^{2}<q_{\rm max}^{2}/2} }{{\cal B}(B\to Y\nu\bar{\nu})_{q_{\rm max}^{2}/2<q^{2}<q_{\rm max}^{2}}}. \tag{26}\]
Assuming the current uncertainties on the form factors, we predict
\[r_{\rm lh}^{K}=1.91(6),\quad r_{\rm lh}^{K^{*}}=0.84(6),\quad r_{\rm lh}^{ \phi}=0.96(9),\quad r_{\rm lh}^{\Lambda}=0.50(9). \tag{27}\]
With the benchmark **Future** form factors we get
\[r_{\rm lh}^{K}=1.91(1),\quad r_{\rm lh}^{K^{*}}=0.83(1),\quad r_{\rm lh}^{\phi}=0.82(1),\quad r_{\rm lh}^{\Lambda}=0.49(1). \tag{28}\]
The appeal of these ratios is clear from the reduced uncertainty found already with the current form factors. The effect is even more striking when the uncertainty on the form factors is smaller. This demonstrates that these ratios will provide valuable information when extrapolating the form factors from high \(q^{2}\), where the lattice QCD results are most precise, to the low \(q^{2}\) region. This method can eventually be extended, once the statistical power allows it, to a full unnormalised binned likelihood fit to all of the available differential observables.
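In practice, Eq. (26) simply compares the branching fraction integrated over the lower and upper halves of the \(q^{2}\) range. A minimal sketch of this integration from a binned differential branching fraction is given below; the spectrum used in the example is a made-up stand-in, not one of our predictions.

```python
import numpy as np

def r_lh(q2_edges, dbr_dq2):
    """Eq. (26): branching fraction integrated over the lower half of the
    q^2 range divided by that over the upper half, from binned dB/dq^2."""
    centers = 0.5 * (q2_edges[:-1] + q2_edges[1:])
    widths = np.diff(q2_edges)
    q2_mid = 0.5 * (q2_edges[0] + q2_edges[-1])
    low = np.sum(dbr_dq2[centers < q2_mid] * widths[centers < q2_mid])
    high = np.sum(dbr_dq2[centers >= q2_mid] * widths[centers >= q2_mid])
    return low / high

# Made-up spectrum shape on 0 < q^2 < q2_max, used only to exercise the code.
q2_max = 22.9
edges = np.linspace(0.0, q2_max, 51)
centers = 0.5 * (edges[:-1] + edges[1:])
toy_spectrum = (1.0 - centers / q2_max) * (1.0 + 2.0 * np.exp(-centers / 8.0))
print(f"r_lh (toy spectrum) = {r_lh(edges, toy_spectrum):.2f}")
```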
#### Ratio of charged and neutral leptons.
Finally, we emphasize the interest of ratios of the form
\[R_{Y}^{\ell/\nu}=\frac{\mathcal{B}(B\to Y\ell^{+}\ell^{-})}{\mathcal{B}(B \to Y\nu\bar{\nu})}, \tag{29}\]
where \(\ell\) is a charged lepton and the branching ratio can be integrated over the full kinematical range or, according to the experimental precision, over several bins. These ratios benefit from numerous uncertainty cancellations, both from the experimental side (fragmentation fraction, branching fraction of the normalization channel, experimental efficiency _etc._) and the theory side (CKM elements, local form factors _etc._) [20].
Experimentally, \(R_{K^{+}}^{\mu/\nu}\) can be reconstructed using the world average measurement of \(B\to K\mu^{+}\mu^{-}\) decays [36] and the combination of the searches and observation of \(B\to K\nu\overline{\nu}\) presented by the Belle-II collaboration [32]. Assuming uncorrelated uncertainties, we obtain
\[R_{K^{+}}^{\mu/\nu}|_{2023}=0.03\pm 0.01. \tag{30}\]
For the other modes, only lower limits can be set. Using again world averages [36], we get at 90% CL
\[R_{K^{*+}}^{\mu/\nu}|_{2023}>0.02,\qquad R_{K^{*0}}^{\mu/\nu}|_{2023}>0.07, \qquad R_{\phi}^{\mu/\nu}|_{2023}>2\times 10^{-4}. \tag{31}\]
Figure 12: 68% probability ranges assuming the branching ratios to be SM-like. We used the experimental uncertainties of Sec. 4 and the **Future** form factors uncertainties described in Sec. 2. The results are compared with the value derived from \(|V_{cb}|=(40.0\pm 1.0)\times 10^{-3}\), extracted from \(B\to D\ell\nu\) decays [26].
Reliable theoretical predictions of \(R_{Y}^{\ell/\nu}\) are challenged by long-range effects, dominated by the charm loops and addressed by several approaches [16, 17]. These effects give rise to a shift to the Wilson coefficient \(C_{9}^{\ell}\) that enters the Hamiltonian relevant to the \(bs\ell\ell\) sector of the WET. Any measurement of these ratios therefore provides invaluable information for understanding the non-local contributions. Assuming that the neutrino mode will dominate the experimental uncertainties, the sensitivities expected for FCC-ee will permit a direct extraction of the shift to \(C_{9}^{\ell}\) with an accuracy of \(8.7\%,13\%,22\%\) and \(37\%\) for the \(B\to K\), \(B\to K^{*}\), \(B_{s}\to\phi\) and \(\varLambda_{b}^{0}\to\varLambda\) transitions, respectively.
### NP implications
Assuming three massless, left-handed neutrino species below the electroweak scale, the dimension-6 effective Hamiltonian in Eq. (1) is augmented by only one additional contribution from potential NP beyond the SM [38]
\[\mathcal{O}_{R}^{\nu_{i},\nu_{j}}=\frac{e^{2}}{16\pi^{2}}\big{(}\bar{s}_{R} \gamma_{\mu}b_{L}\big{)}\big{(}\bar{\nu}_{i}\gamma^{\mu}(1-\gamma_{5})\nu_{j }\big{)}. \tag{32}\]
Assuming universal flavour conserving contributions only, the \(B\)-meson observables take the simple form [38]
\[\frac{d\mathcal{B}(B^{0}\to K_{\rm S}^{0}\nu\overline{\nu})}{dq^{2}} =3\,\tau_{B^{0}}|N_{B^{0}}|^{2}|C_{L}+C_{R}|^{2}|\lambda_{t}|^{2} \rho_{+}^{K_{\rm S}^{0}}\,, \tag{33}\] \[\frac{d\mathcal{B}(B^{0}\to K^{*0}\nu\overline{\nu})}{dq^{2}} =3\,\tau_{B}|N_{B^{0}}|^{2}|\lambda_{t}|^{2}\left(|C_{L}-C_{R}|^{2 }(\rho_{A_{1}}^{K^{*0}}+\rho_{A_{12}}^{K^{*0}})+|C_{L}+C_{R}|^{2}\rho_{V}^{K^{ *0}}\right)\,,\] (34) \[\frac{d\mathcal{B}(B_{s}^{0}\to\phi\nu\overline{\nu})}{dq^{2}} =3\,\tau_{B_{s}}|N_{B_{s}^{0}}|^{2}|\lambda_{t}|^{2}\left(|C_{L}-C_ {R}|^{2}(\rho_{A_{1}}^{\phi}+\rho_{A_{12}}^{\phi})+|C_{L}+C_{R}|^{2}\rho_{V}^{ \phi}\right)\,, \tag{35}\]
\[F_{L}(B^{0}\to K^{*0}\nu\overline{\nu}) =\frac{|C_{L}-C_{R}|^{2}\rho_{A_{12}}^{K^{*0}}}{|C_{L}-C_{R}|^{2 }(\rho_{A_{1}}^{K^{*0}}+\rho_{A_{12}}^{K^{*0}})+|C_{L}+C_{R}|^{2}\rho_{V}^{K^{ *0}}}\,, \tag{36}\] \[F_{L}(B_{s}^{0}\to\phi\nu\overline{\nu}) =\frac{|C_{L}-C_{R}|^{2}\rho_{A_{12}}^{\phi}}{|C_{L}-C_{R}|^{2}( \rho_{A_{1}}^{\phi}+\rho_{A_{12}}^{\phi})+|C_{L}+C_{R}|^{2}\rho_{V}^{\phi}}. \tag{37}\]
Setting the lepton masses to zero in Ref. [40], we also get
\[\frac{d\mathcal{B}(\varLambda_{b}^{0}\to\varLambda\nu\overline{\nu})}{dq^{2}}=3\,\tau_{\varLambda_{b}^{0}}|N_{\varLambda_{b}^{0}}|^{2}|\lambda_{t}|^{2}\left(|C_{L}-C_{R}|^{2}(\rho_{f_{\perp}^{A}}^{\varLambda}+\rho_{f_{0}^{A}}^{\varLambda})+|C_{L}+C_{R}|^{2}(\rho_{f_{\perp}^{V}}^{\varLambda}+\rho_{f_{0}^{V}}^{\varLambda})\right), \tag{38}\]
\[F_{L}(\varLambda_{b}^{0}\to\varLambda\nu\overline{\nu})=\frac{|C_{L}-C_{R}|^{2}\rho_{f_{0}^{A}}^{\varLambda}+|C_{L}+C_{R}|^{2}\rho_{f_{0}^{V}}^{\varLambda}}{|C_{L}-C_{R}|^{2}(\rho_{f_{\perp}^{A}}^{\varLambda}+\rho_{f_{0}^{A}}^{\varLambda})+|C_{L}+C_{R}|^{2}(\rho_{f_{\perp}^{V}}^{\varLambda}+\rho_{f_{0}^{V}}^{\varLambda})}, \tag{39}\]
\[A_{\rm FB}^{\varLambda}(\varLambda_{b}^{0}\to\varLambda\nu\overline{\nu})=\frac{\alpha}{2}\frac{\left(|C_{L}|^{2}-|C_{R}|^{2}\right)\left(\tilde{\rho}_{\perp}^{\varLambda}+\tilde{\rho}_{0}^{\varLambda}\right)}{|C_{L}-C_{R}|^{2}(\rho_{f_{\perp}^{A}}^{\varLambda}+\rho_{f_{0}^{A}}^{\varLambda})+|C_{L}+C_{R}|^{2}(\rho_{f_{\perp}^{V}}^{\varLambda}+\rho_{f_{0}^{V}}^{\varLambda})}. \tag{40}\]
Above, we assume that \(C_{L}^{\nu_{i},\nu_{j}}\equiv\delta_{\nu_{i}\nu_{j}}C_{L}\) and \(C_{R}^{\nu_{i},\nu_{j}}\equiv\delta_{\nu_{i}\nu_{j}}C_{R}\). The full \(B\)-meson expressions, including flavour violating contributions, can be found in Ref. [38]. The above expressions present a
global \((C_{L},C_{R})\to(-C_{L},-C_{R})\) symmetry and another symmetry, \((C_{L},C_{R})\to(C_{R},C_{L})\), which is only violated by \(A^{\varLambda}_{\rm FB}(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu})\).
The measurement of the \(b\to s\nu\overline{\nu}\) branching ratio converts into lines (for \(B\to K\nu\bar{\nu}\)) or ellipses (for the other channels) in the \((C_{L},C_{R})\) plane. On the other hand, longitudinal fractions give cross-shaped constraints. Up to the 4-fold degeneracy due to the symmetries of these two sets of observables, a BSM point can be unambiguously obtained only by a combined measurement of several branching ratios or longitudinal fractions. This is depicted in Fig. 13, where the left panel shows the benefit of such a combination in the case of \(B^{0}\to K^{*0}\nu\overline{\nu}\) decays. In the right panel, we compare the estimated constraints obtained at the end of the Tera-Z run to the current constraint derived from the experimental status of \(B\to K\nu\bar{\nu}\) [32] (we refer to Refs. [63, 64, 65, 66] for more complete WET studies). We stress that the Belle-II measurement is performed assuming SM-like distributions. Converting this measurement into a constraint on the Wilson coefficients therefore requires a proper reanalysis, which is beyond the scope of this paper.
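The shapes of these constraints are straightforward to reproduce from Eqs. (33)-(37). The sketch below scans the \((C_{L},C_{R})\) plane and flags points compatible with SM-like values of the \(B^{0}\to K^{*0}\nu\overline{\nu}\) branching fraction and longitudinal fraction; the integrated form-factor weights and the SM value of \(C_{L}\) used here are placeholder numbers chosen for illustration, so only the qualitative shapes (ellipse, cross and their intersection) are meaningful.

```python
import numpy as np

# Placeholder integrated form-factor weights and an indicative SM Wilson
# coefficient (illustrative numbers, not the values used in our fits).
R_V, R_A1, R_A12 = 1.0, 1.3, 0.9
CL_SM, CR_SM = -6.35, 0.0

def br_weight(cl, cr):
    """q^2-integrated version of Eq. (34): |C_L -+ C_R|^2 combinations
    weighted by the (placeholder) form-factor integrals."""
    return np.abs(cl - cr)**2 * (R_A1 + R_A12) + np.abs(cl + cr)**2 * R_V

def f_long(cl, cr):
    """Eq. (36) evaluated with the same integrated placeholder weights."""
    return np.abs(cl - cr)**2 * R_A12 / br_weight(cl, cr)

CL, CR = np.meshgrid(np.linspace(-10, 10, 401), np.linspace(-10, 10, 401))
ok_br = np.abs(br_weight(CL, CR) / br_weight(CL_SM, CR_SM) - 1.0) < 0.0053
ok_fl = np.abs(f_long(CL, CR) / f_long(CL_SM, CR_SM) - 1.0) < 0.025
print("grid points allowed by the branching fraction alone:", ok_br.sum())
print("grid points allowed by F_L alone                   :", ok_fl.sum())
print("grid points allowed by both                        :", (ok_br & ok_fl).sum())
```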
## 6 Conclusion
We carry out an initial performance study on the measurement of \(b\to s\nu\overline{\nu}\) decays at FCC-ee running at the \(Z\) pole. To achieve this, we produce updated SM predictions of the observables related to the decays \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\), \(B^{0}\to K^{*0}\nu\overline{\nu}\), \(B^{0}_{s}\to\phi\nu\overline{\nu}\) and \(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu}\), both with current and projected theory uncertainties.
We then study the expected sensitivity to these observables, under the assumption that \(6\times 10^{12}\) \(Z\) bosons are produced in the lifetime of FCC-ee "Tera-Z" running. We find that the uncertainties on the branching fractions, at the SM predicted values, are a relative 0.53%, 1.20%, 3.37% and 9.86%
Figure 13: Regions with 68% probability of the marginal posterior density, assuming that all observables are SM-like. We used the experimental uncertainties of Sec. 4 and the **Future** form factors uncertainties described in Sec. 2. **Left**: Regions constrained by a measurement of only the branching ratio (orange band), only the longitudinal fraction (blue band) and both (green ellipse) in the case of \(B^{0}\to K^{*0}\nu\overline{\nu}\) decays. **Right**: Comparison between the current constraints [32] (cyan band) and the sensitivities predicted at FCC-ee in this study (blue, orange, green and red bands).
for the \(B^{0}\to K^{*0}\nu\overline{\nu}\), \(B^{0}_{s}\to\phi\nu\overline{\nu}\), \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\) and \(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu}\) decays, respectively. The sensitivity estimates for the neutral \(B^{0}\to K^{0}_{\rm S}\nu\overline{\nu}\) and \(\Lambda^{0}_{b}\to\Lambda\nu\overline{\nu}\) modes are based on rather simplistic assumptions, but a full study of these modes was considered beyond the scope of this paper and will be revisited in future work.
In addition, we investigate the impact of particle-identification and vertex-identification performance on the sensitivity. For the former we find that the sensitivity is significantly degraded if the kaon-pion separation power is less than \(2\sigma\). For the latter we find no significant impact from imperfect vertex seeding, provided the vertex resolution is below 0.2 mm.
Finally, we investigate the impact such measurements would have on SM and beyond-SM interpretations. Not only do we find that these decays have a high potential for the extraction of CKM parameters, but we also show that they provide theoretically clean access to the form factors that enter the equivalent decays to charged leptons. Ratios of the branching fractions into charged and into neutral leptons may therefore be the only unambiguous probe of the hadronic effects that plague the interpretation of \(b\to s\ell\ell\) decays.
Our studies demonstrate that FCC-ee offers an unparalleled and probably unique opportunity to measure these incredibly rare, experimentally difficult, yet theoretically clean observables with exquisite precision.
## Acknowledgments
We would like to thank Olcyr Sumensari for his contributions to the early stages of the project as well as Danny van Dyk and Paula Alvarez Cartelle for comments on the manuscript. We would also like to thank Clement Helsens and Donal Hill for their help with setup, simulation and running of the FCC analysis code. We thank the FCC-ee Physics Performance Group for the fruitful discussions and helpful feedback on the analysis procedure and manuscript, in particular Guy Wilkinson, Stephane Monteil, Patrizia Azzi, Emmanuel Perez and Xunwu Zuo. We also thank our colleagues in the Warwick LHCb group for their helpful advice. M.R. thanks Stefan Meinel for the discussion on the future of form factor uncertainties. M.K is supported by the Science and Technology Facilities Council (STFC), UK, under grant #ST/R004536/3 and UK Research and Innovation under grant #EP/X014746/2. A.R.W. is supported by the STFC, UK.
## Appendix A Form factors definition
The three \(\bar{B}\to\bar{P}\) form factors are defined by
\[\langle\bar{P}(k)|J^{\mu}_{V}|\bar{B}(p)\rangle=\left[(p+k)^{\mu}- \frac{M_{B}^{2}-M_{P}^{2}}{q^{2}}q^{\mu}\right]f_{+}^{B\to P}+\frac{M_{B}^{2}-M_ {P}^{2}}{q^{2}}q^{\mu}f_{0}^{B\to P}, \tag{41}\] \[\langle\bar{P}(k)|J^{\mu}_{T}|\bar{B}(p)\rangle=\frac{if_{T}^{B \to P}}{M_{B}+M_{P}}\left[q^{2}(p+k)^{\mu}-(M_{B}^{2}-M_{P}^{2})q^{\mu}\right]. \tag{42}\]
The seven \(\bar{B}\to\bar{V}\) form factors are defined by
\[\langle\bar{V}(k,\eta)|J^{\mu}_{V}|\bar{B}(p)\rangle=\epsilon^{ \mu\nu\rho\sigma}\eta^{*}_{\nu}p_{\rho}k_{\sigma}\frac{2V^{B\to V}}{M_{B}+M_{V}}, \tag{43}\] \[\langle\bar{V}(k,\eta)|J^{\mu}_{A}|\bar{B}(p)\rangle=i\eta^{*}_{ \nu}\bigg{[}g^{\mu\nu}(M_{B}+M_{V})A_{1}^{B\to V}-(p+k)^{\mu}q^{\nu}\frac{A_{2 }^{B\to V}}{M_{B}+M_{V}}\] \[\qquad\qquad\qquad\qquad\qquad-2M_{V}\frac{q^{\mu}q^{\nu}}{q^{2}} (A_{3}^{B\to V}-A_{0}^{B\to V})\bigg{]},\] (44) \[\langle\bar{V}(k,\eta)|J^{\mu}_{T}|\bar{B}(p)\rangle=\epsilon^{ \mu\nu\rho\sigma}\eta^{*}_{\nu}p_{\rho}k_{\sigma}\,2T_{1}^{B\to V},\] (45) \[\langle\bar{V}(k,\eta)|J^{\mu}_{AT}|\bar{B}(p)\rangle=i\eta^{*}_{ \nu}\bigg{[}\Big{(}g^{\mu\nu}(M_{B}^{2}-M_{V}^{2})-(p+k)^{\mu}q^{\nu}\Big{)}T _{2}^{B\to V}\] \[\qquad\qquad\qquad\qquad-q^{\nu}\left(q^{\mu}-\frac{q^{2}}{M_{B} ^{2}-M_{V}^{2}}(p+k)^{\mu}\right)T_{3}^{B\to V}\bigg{]}, \tag{46}\]
where \(\eta\) is the polarisation four-vector of the vector meson, and we abbreviate
\[A_{3}^{B\to V}\equiv\frac{M_{B}+M_{V}}{2\,M_{V}}\,A_{1}^{B\to V}- \frac{M_{B}-M_{V}}{2\,M_{V}}\,A_{2}^{B\to V}. \tag{47}\]
The ten \(\Lambda_{b}\to\Lambda\) form factors are defined by [67]
\[\langle\Lambda(k,s_{\Lambda})|\overline{s}\,\gamma^{\mu}\,b|\Lambda_{ b}(p,s_{\Lambda_{b}})\rangle= \;\overline{u}_{\Lambda}(k,s_{\Lambda})\bigg{[}f_{t}^{V}(q^{2}) \,(m_{\Lambda_{b}}-m_{\Lambda})\frac{q^{\mu}}{q^{2}} \tag{48}\] \[\;+f_{0}^{V}(q^{2})\frac{m_{\Lambda_{b}}+m_{\Lambda}}{s_{+}} \left(p^{\mu}+k^{\mu}-(m_{\Lambda_{b}}^{2}-m_{\Lambda}^{2})\frac{q^{\mu}}{q^{2} }\right)\] \[\;+f_{\perp}^{V}(q^{2})\left(\gamma^{\mu}-\frac{2m_{\Lambda_{b}}}{ s_{+}}p^{\mu}-\frac{2m_{\Lambda_{b}}}{s_{+}}k^{\mu}\right)\bigg{]}u_{\Lambda_{b}}(p,s_{ \Lambda_{b}})\,,\] \[\langle\Lambda(k,s_{\Lambda})|\overline{s}\,\gamma^{\mu}\gamma_{ 5}\,b|\Lambda_{b}(p,s_{\Lambda_{b}})\rangle= \;-\overline{u}_{\Lambda}(k,s_{\Lambda})\,\gamma_{5}\bigg{[}f_{t }^{A}(q^{2})\,(m_{\Lambda_{b}}+m_{\Lambda})\frac{q^{\mu}}{q^{2}}\] (49) \[\;+f_{0}^{A}(q^{2})\frac{m_{\Lambda_{b}}-m_{\Lambda}}{s_{-}}\left( p^{\mu}+k^{\mu}-(m_{\Lambda_{b}}^{2}-m_{\Lambda}^{2})\frac{q^{\mu}}{q^{2}}\right)\] \[\;+f_{\perp}^{A}(q^{2})\left(\gamma^{\mu}+\frac{2m_{\Lambda}}{s_{- }}p^{\mu}-\frac{2m_{\Lambda_{b}}}{s_{-}}k^{\mu}\right)\bigg{]}u_{\Lambda_{b}}( p_{\Lambda_{b}},s_{\Lambda_{b}}),\] \[\langle\Lambda(k,s_{\Lambda})|\overline{s}\,i\sigma^{\mu\nu}q_{ \nu}\,b|\Lambda_{b}(p,s_{\Lambda_{b}})\rangle= \;-\overline{u}_{\Lambda}(k,s_{\Lambda})\bigg{[}f_{0}^{T}(q^{2}) \frac{q^{2}}{s_{+}}\left(p^{\mu}+k^{\mu}-(m_{\Lambda_{b}}^{2}-m_{\Lambda}^{2}) \frac{q^{\mu}}{q^{2}}\right)\] (50) \[\;+f_{\perp}^{T}(q^{2})\,(m_{\Lambda_{b}}+m_{\Lambda})\left( \gamma^{\mu}-\frac{2m_{\Lambda}}{s_{+}}\,p^{\mu}-\frac{2m_{\Lambda_{b}}}{s_{+ }}\,k^{\mu}\right)\bigg{]}u_{\Lambda_{b}}(p,s_{\Lambda_{b}})\,,\] \[\langle\Lambda(k,s_{\Lambda})|\overline{s}\,i\sigma^{\mu\nu}q_{ \nu}\gamma_{5}\,b|\Lambda_{b}(p,s_{\Lambda_{b}})\rangle= \;-\overline{u}_{\Lambda}(k,s_{\Lambda})\,\gamma_{5}\bigg{[}f_{0}^ {T5}(q^{2})\,\frac{q^{2}}{s_{-}}\left(p^{\mu}+k^{\mu}-(m_{\Lambda_{b}}^{2}-m_{ \Lambda}^{2})\frac{q^{\mu}}{q^{2}}\right)\] (51) \[\;+f_{\perp}^{T5}(q^{2})\,(m_{\Lambda_{b}}-m_{\Lambda})\left( \gamma^{\mu}+\frac{2m_{\Lambda}}{s_{-}}\,p^{\mu}-\frac{2m_{\Lambda_{b}}}{s_{- }}\,k^{\mu}\right)\bigg{]}u_{\Lambda_{b}}(p,s_{\Lambda_{b}})\,,\]
where we abbreviate \(\sigma^{\mu\nu}=\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]\) and \(s_{\pm}=(m_{\Lambda_{b}}\pm m_{\Lambda})^{2}-q^{2}\). The labelling of the ten form factors follows the conventions of Ref. [40].
## Appendix B Individual background contributions
Below we list the dominant background sources found in the \(B^{0}\to K^{*0}\nu\overline{\nu}\) and \(B^{0}_{s}\to\phi\nu\overline{\nu}\) analysis. All of these may come with additional neutrals (\(\pi^{0}\), \(\gamma\), \(\nu\)) in the final state.
\(B^{0}\to K^{*0}\nu\overline{\nu}\) backgrounds with real \(K^{*}(892)^{0}\):
* \(B^{+}\to D^{*0}\nu\ell^{+}\), \(D^{*0}\to D^{0}\pi^{0}/\gamma\), \(D^{0}\to K^{*0}\pi^{0}\)
* \(B^{0}\to D^{*-}\nu\ell^{+}\), \(D^{*-}\to D^{0}\pi^{-}\), \(D^{0}\to K^{*0}\pi^{0}\)
* \(B^{+}\to D^{0}\nu\ell^{+}\), \(D^{0}\to K^{*0}\pi^{0}/\eta\)
* \(D^{0}\to K^{*0}\pi^{0}/\eta\)
* \(D^{+}\to K^{*0}\ell^{+}\nu\)
* The above prompt charm decays but from \(D^{*0}\) and \(D^{*+}\)
\(B^{0}\to K^{*0}\nu\overline{\nu}\) backgrounds with fake \(K^{*0}\):
* \(B^{+}\to D^{*0}\nu\ell^{+}\), \(D^{*0}\to D^{0}\pi^{0}/\gamma\), \(D^{0}\to h^{+}h^{-}\pi^{0}(\pi^{0})\)
* \(B^{0}\to D^{*-}\nu\ell^{+}\), \(D^{*-}\to D^{0}\pi^{-}\), \(D^{0}\to h^{+}h^{-}\pi^{0}(\pi^{0})\)
* \(B^{+}\to D^{0}\nu\ell^{+}\), \(D^{0}\to h^{+}h^{-}\pi^{0}(\pi^{0})\)
\(B^{0}_{s}\to\phi\nu\overline{\nu}\) backgrounds with real \(\phi\):
* \(B^{+}\to D^{*0}\nu\ell^{+}\), \(D^{*0}\to D^{0}\pi^{0}/\gamma\), \(D^{0}\to\phi K^{0}\)
* \(B^{0}\to D^{*-}\nu\ell^{+}\), \(D^{*-}\to D^{0}\pi^{-}\), \(D^{0}\to\phi K^{0}\)
* \(B^{0}\to D^{0}\phi\)
* \(B^{0}_{s}\to D^{-}_{s}\nu\ell^{+}\), \(D^{-}_{s}\to\phi\ell^{-}\overline{\nu}\)
* \(D^{+}_{s}\to\phi\ell^{+}\overline{\nu}\)
* \(D^{0}\to\phi K^{0}\)
* \(D^{+}_{s}\to\phi\pi^{+}\)
* \(D^{+}\to\phi\pi^{+}\)
* \(D^{+}_{s}\to\phi\rho^{+}\)
* The above prompt charm decays but from \(D^{*0}\) and \(D^{*+}\)
\(B^{0}_{s}\to\phi\nu\overline{\nu}\) backgrounds with fake \(\phi\):
* \(B^{+}\to D^{*0}\nu\ell^{+}\), \(D^{*0}\to D^{0}\pi^{0}/\gamma\), \(D^{0}\to h^{+}h^{-}\pi^{0}(\pi^{0})\)
* \(B^{0}\to D^{*-}\nu\ell^{+}\), \(D^{*-}\to D^{0}\pi^{-}\), \(D^{0}\to h^{+}h^{-}\pi^{0}(\pi^{0})\)
* \(B^{+}\to D^{0}\nu\ell^{+}\), \(D^{0}\to h^{+}h^{-}\pi^{0}(\pi^{0})\)
|
2309.12882 | Floquet-Anderson localization in the Thouless pump and how to avoid it | We investigate numerically how onsite disorder affects conduction in the
periodically driven Rice-Mele model, a prototypical realization of the Thouless
pump. Although the pump is robust against disorder in the fully adiabatic
limit, much less is known about the case of finite period time $T$, which is
relevant also in light of recent experimental realizations. We find that at any
fixed period time and nonzero disorder, increasing the system size $L\to\infty$
always leads to a breakdown of the pump, indicating Anderson localization of
the Floquet states. Our numerics indicate, however, that in a properly defined
thermodynamic limit, where $L/T^\theta$ is kept constant, Anderson localization
can be avoided, and the charge pumped per cycle has a well-defined value -- as
long as the disorder is not too strong. The critical exponent $\theta$ is not
universal, rather, its value depends on the disorder strength. Our findings are
relevant for practical, experimental realizations of the Thouless pump, for
studies investigating the nature of its current-carrying Floquet eigenstates,
as well as the mechanism of the full breakdown of the pump, expected if the
disorder exceeds a critical value. | András Grabarits, Attila Takács, Ion Cosma Fulga, János K. Asbóth | 2023-09-22T14:13:35Z | http://arxiv.org/abs/2309.12882v1 | # Floquet-Anderson localization in the Thouless pump and how to avoid it
###### Abstract
We investigate numerically how onsite disorder affects conduction in the periodically driven Rice-Mele model, a prototypical realization of the Thouless pump. Although the pump is robust against disorder in the fully adiabatic limit, much less is known about the case of finite period time \(T\), which is relevant also in light of recent experimental realizations. We find that at any fixed period time and nonzero disorder, increasing the system size \(L\to\infty\) always leads to a breakdown of the pump, indicating Anderson localization of the Floquet modes. Our numerics indicate, however, that in a properly defined thermodynamic limit, where \(L/T^{\theta}\) is kept constant, Anderson localization can be avoided, and the charge pumped per cycle has a well-defined value - as long as the disorder is not too strong. The critical exponent \(\theta\) is not universal, rather, its value depends on the disorder strength. Our findings are relevant for practical, experimental realizations of the Thouless pump, for studies investigating the nature of its current-carrying Floquet eigenstates, as well as the mechanism of the full breakdown of the pump, expected if the disorder exceeds a critical value.
The Thouless pump [1] was instrumental in understanding the role topology plays in the theory of the quantum Hall effect. Its simplest form is that of a two-band, gapped fermionic chain whose parameters are slowly and periodically varied in time. In the half-filled state, where the lower/upper band is completely filled/empty, and in the adiabatic limit, an integer number \(Q\) of fermions is pumped in the lower band, where \(Q\) is a topological invariant, equal to the Chern number \(C\) of the pump sequence.
While it started out as a thought experiment, the Thouless pump can now be found in the lab [2]. After its demonstration in photonic systems [3; 4; 5; 6], topological pumping has been realized in a variety of platforms, such as mechanical metamaterials [7; 8], ultracold atoms [9; 10; 11], and other quantum systems [12]. More recently, an electrical circuit demonstration has been put forward [13].
The real-life Thouless pump has a finite size, \(L\), a finite period time, \(T\), and it is disordered. This raises the question whether these, either separately or in combination, can prove detrimental to its robustness, namely to the quantization of the pumped charge. For instance, even without disorder, a finite period time gives corrections to the quantized value of the pumped charge. For a quench-like switching on of the pump, Ref. [14] found these to scale as \(1/T^{2}\), but they should be greatly reduced for a smoother switching-on of the periodic driving cycle. The breaking of adiabaticity, however, is not always sufficient to destroy the Thouless pump. As shown in Ref. [15], if the disorder-free pump is made longer and longer, then adiabaticity will be strongly broken for any finite \(T\), no matter how large. In spite of this, they found that the quantization of the pumped charge survives in the steady-state regime, when the pump performs cycle after cycle. Similarly, in the adiabatic \(T\to\infty\) limit, the quantization of the pumped charge should be robust against small disorder, as discussed by Thouless and Niu [16]. Here, eigenstates of the time-dependent Hamiltonian are all Anderson localized, and in the adiabatic limit an intuitive (although possibly misleading) picture is that periodic modulation pumps charge between them.
In this Letter we focus on the effect of onsite disorder on an actual Thouless pump (finite \(T\) rather than the adiabatic limit). On the one hand, disorder can even result in a suppression of finite-\(T\) corrections, and a higher pumped charge [17]. On the other, adding too much disorder has to result in a breakdown of the pump, via an Anderson localization transition - a few works have already studied this numerically [17; 18].
Onsite disorder on the Thouless pump is particularly interesting because of the connection to the "levitation and annihilation" in Chern insulators [19]. Disorder in the Chern insulator localizes its eigenstates, but each topological band has (at least) one state that remains extended in the thermodynamic limit, which "carries the Chern number" [20; 21; 22]. As disorder is increased, full Anderson localization happens by these robustly extended states "levitating" towards each other in energy and "annihilating". Can such phenomena be observed in the Floquet states of the disordered Thouless pump? Numerical results [17] are consistent with this, and have even identified a critical exponent for this Anderson localization transition, obtained by scaling up the size of the pump at a constant (and large) period time.
There is an issue with the disordered Thouless pump in the thermodynamic limit, however, that to the best of our knowledge has not been directly addressed yet. One might think that for a thermodynamic limit, \(L\to\infty\) and \(T\to\infty\) should be taken one after the other. However, we argue this is inadequate. Taking \(T\to\infty\) first is problematic, since the charge pump becomes infinitely slow. Moreover, in this "ultra-adiabatic" limit transitions occur between distant Anderson localized eigenstates, thus computation of \(Q\) needs open boundary conditions [23]. Taking \(L\to\infty\) first is often (sometimes tacitly) assumed. However, as we show in the following, this leads to a breakdown of the pump due to the Anderson localization of the Floquet eigenstates. In this Letter we suggest a properly defined way to take \(L\to\infty\) and \(T\to\infty\).
_The model. --_ We consider the periodically driven Rice-Mele model [24] with an onsite potential disorder that is independent of time. Spinless fermions hop on a closed chain of \(L=2N\) sites, with the unitary time evolution governed by the Hamiltonian,
\[\hat{H}(t) =-\sum_{m=1}^{L}\left[J+(-1)^{m}\tilde{J}\cos\tfrac{2\pi t}{T} \right]\hat{c}_{m}^{\dagger}\hat{c}_{m+1}+\text{h.c.}\] \[-\sum_{m=1}^{L}\left[(-1)^{m}\Delta\sin\tfrac{2\pi t}{T}+W\zeta_ {m}\right]\hat{c}_{m}^{\dagger}\hat{c}_{m}, \tag{1}\]
where \(\hat{c}_{m}\) annihilates a fermion on site \(m\), with \(\hat{c}_{L+1}=\hat{c}_{1}\), i.e., periodic boundary conditions, \(J/\tilde{J}\) are uniform/staggered components of the nearest-neighbor hopping, \(\Delta\) is a staggered onsite potential, and \(t\) and \(T\) are time and period time. The onsite disorder has amplitude \(W\) and the \(\zeta_{m}\)'s are real random numbers uniformly distributed on \([-1/2,1/2]\). We set \(\hbar=1\) for convenience. In this noninteracting model, all quantities of interest can be computed from the single-particle \(L\times L\) Hamiltonian matrix \(H(t)\), with \(\hat{H}(t)=\sum_{l,m=1}^{L}\hat{c}_{l}^{\dagger}H_{lm}(t)\hat{c}_{m}\).
We use the basis of Floquet states: eigenstates \(\ket{\psi_{n}}\) of the single-particle (Floquet) unitary operator \(\hat{U}\) for one period of time evolution, \(\hat{U}\ket{\psi_{n}}=e^{-i\varepsilon_{n}T}\ket{\psi_{n}}\). Here \(\hat{U}=\mathcal{T}e^{-i\int_{0}^{T}\mathrm{d}t\hat{H}(t)}\), where \(\mathcal{T}\) is time ordering, \(n=1,2,\ldots,L\) is the eigenstate index and \(\varepsilon_{n}\) is the quasienergy. Floquet states evolve periodically in time, up to a phase factor:
\[\ket{\psi_{n}(t)}=\mathcal{T}e^{-i\int_{0}^{t}\mathrm{d}t^{\prime}\hat{H}(t^{ \prime})}\ket{\psi_{n}}=e^{i\varepsilon_{n}T}\ket{\psi_{n}(t+T)}. \tag{2}\]
If the disorder is weak and the pump is run slowly enough, Floquet states can be assigned to bands according to their average energy,
\[\overline{E_{n}}=\frac{1}{T}\int_{0}^{T}\mathrm{d}t\bra{\psi_{n}(t)}\hat{H}(t )\ket{\psi_{n}(t)}. \tag{3}\]
Floquet states carry current, whose integral over the time period gives the pumped charge in that state,
\[Q_{n}=2\int_{0}^{T}\mathrm{d}t\left(J+\tilde{J}\cos\tfrac{2\pi t}{T}\right) \mathrm{Im}[\psi_{n,2}^{*}(t)\psi_{n,1}(t)]. \tag{4}\]
Here we take the current between sites \(1\) and \(2\), but the position does not matter, due to the periodicity of the time evolution of Floquet states.
We calculate the charge pumped in the so-called sustained pumping limit of a filled lower band [17; 25]: The system is initialized at \(t=0\), with the \(L/2\) lowest energy eigenstates \(\ket{\phi_{l}}\) of the instantaneous Hamiltonian fully occupied, and then is time evolved. After many cycles, this results effectively in a Floquet diagonal ensemble [25], i.e., an incoherent mixture where Floquet states are populated with the same weights as at \(t=0\). Thus, the charge pumped per cycle in this limit is
\[Q=\sum_{n=1}^{L}Q_{n}\sum_{l=1}^{L/2}\left|\langle\phi_{l}|\psi_{n}\rangle\right|^{2}. \tag{5}\]
_The numerical method. --_ We compute the time evolution of the Floquet states, needed for Eq. (4), as a matrix product of time-slices of the timestep operator. For the time-slices, we used a recently-developed method based on the Chebyshev polynomial representation of skew Hermitian matrices, \(e^{-iHdt}\approx\alpha_{0}-iz_{0}H\mathrm{d}t-\alpha_{1}[H\mathrm{d}t]^{2}+ iz_{1}[H\mathrm{d}t]^{3}+\alpha_{2}[H\mathrm{d}t]^{4}-iz_{2}[H\mathrm{d}t]^{5}\), with constants specified to \(20\) decimals [26]. This gives the matrix exponential to numerical accuracy, as long as \(||H(t)\mathrm{d}t||_{1}<1.17\times 10^{-2}\). We could reach chain lengths up to \(L=10000\), more than a factor of \(10\) larger than in previous works [17]. We use the hopping \(J\) as our energy scale, and set parameters as
\[J=1;\qquad\qquad\tilde{J}=1/2;\qquad\qquad\Delta=1.5, \tag{6}\]
for a well-defined gap with a Chern number \(C=1\). Thus, \(Q=1\) in the adiabatic limit, as long as the instantaneous Hamiltonian is gapped, i.e., \(W\lesssim 3.5\)[17].
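To make the pipeline concrete, a minimal (and deliberately unoptimized) implementation of Eqs. (1)-(5) for a short chain is sketched below in Python. It replaces the Chebyshev time-slicing described above with plain matrix exponentials, so it is only practical for small \(L\); the current is evaluated from the instantaneous Hamiltonian matrix element on the bond between the first two sites, which is equivalent to Eq. (4) up to the sign convention for the pumping direction.

```python
import numpy as np
from scipy.linalg import expm, eig, eigh

J, Jt, Delta = 1.0, 0.5, 1.5   # parameters of Eq. (6), hbar = 1

def hamiltonian(t, T, L, W, zeta):
    """Single-particle matrix of Eq. (1) on a ring of L sites.
    Sites are 0-indexed here, so (-1)^m in the paper becomes (-1)^(m+1)."""
    H = np.zeros((L, L), dtype=complex)
    for m in range(L):
        hop = -(J + (-1)**(m + 1) * Jt * np.cos(2 * np.pi * t / T))
        H[m, (m + 1) % L] += hop
        H[(m + 1) % L, m] += hop
        H[m, m] = -((-1)**(m + 1) * Delta * np.sin(2 * np.pi * t / T) + W * zeta[m])
    return H

def pumped_charge(L=40, T=30.0, W=1.0, n_slices=2000, seed=0):
    """Charge pumped per cycle in the sustained-pumping limit, Eq. (5)."""
    rng = np.random.default_rng(seed)
    zeta = rng.uniform(-0.5, 0.5, L)
    dt = T / n_slices
    times = (np.arange(n_slices) + 0.5) * dt
    # Floquet operator as a time-ordered product of short time slices.
    U = np.eye(L, dtype=complex)
    for t in times:
        U = expm(-1j * hamiltonian(t, T, L, W, zeta) * dt) @ U
    _, floquet = eig(U)                   # columns are Floquet states at t = 0
    # Charge pumped by each Floquet state, cf. Eq. (4), bond between sites 0 and 1.
    Qn = np.zeros(L)
    psi = floquet.copy()
    for t in times:
        Ht = hamiltonian(t, T, L, W, zeta)
        Qn += 2.0 * np.imag(Ht[1, 0] * np.conj(psi[1, :]) * psi[0, :]) * dt
        psi = expm(-1j * Ht * dt) @ psi
    # Occupations of Eq. (5): lower half of the spectrum of H(t = 0) filled.
    _, phi = eigh(hamiltonian(0.0, T, L, W, zeta))
    weights = np.abs(phi[:, :L // 2].conj().T @ floquet)**2
    return float(np.sum(Qn * weights.sum(axis=0)))

print(f"Q per cycle (L=40, T=30, W=1): {pumped_charge():.3f}")
```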
_Results. --_ We found that the disordered Thouless pump breaks down when increasing the length \(L\) of the chain, keeping the period time \(T\) constant. Examples are shown in Fig. 1, for disorder \(W=2.5\), for \(T=8\) to \(50\), and \(L=80\) up to \(4480\). In these and all cases we studied, the pumped charge decreased as the length was increased. This suggests that Floquet-Anderson localization does not only set in when the disorder is large (\(W>3.5\)), but occurs for any onsite disorder, \(W>0\). The length \(L\) where the pumped charge decreases significantly (e.g., \(Q=1/2\), or \(Q=1/4\)) provides an estimate for the Floquet localization length \(\zeta_{F}\).
We found that slower driving (longer period times \(T\)) leads to more resilient Thouless pumps, with an apparent power-law relation
\[Q(L,T,W)=Q(L^{\prime},(L^{\prime}/L)^{1/\theta(W)}T,W). \tag{7}\]
This is suggested by the good collapse of the numerically measured \(Q\) values when using the above scaling relation, as shown in Fig. 1. Thus, the Floquet localization length appears to scale with the period time as \(\zeta_{F}\propto T^{\theta}\).
We found that the exponent \(\theta\) of Eq. (7) does not take on a universal value, but depends continuously on the disorder \(W\), as shown in Fig. 2. We extracted the exponent by three different methods. First and second, by identifying \(\zeta_{F}\) with the system size where the pumped charge is \(Q=1/2\), and \(Q=1/4\), respectively. Third, we took all the data for a fixed \(W\), and various \(T\) and \(L\) values, and fitted it with a three-parameter Ansatz - detailed in the Supplemental Material (SM, [27]). These methods agree, and give a disorder-dependent exponent \(\theta\), which approaches \(\theta\approx 2\) near the critical disorder \(W\approx 3.5\). Note, however, that the numerical evidence for the power-law scaling is strong only for the case of moderate disorder. For smaller disorders, \(W\lesssim 1.5\), we have \(1/\theta\ll 1\), thus in the numerically available range the evidence for the power-law scaling here is not conclusive, as discussed in the SM [27].
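A bare-bones version of the first extraction method can be written in a few lines: for each period time, interpolate the length at which the pumped charge drops to \(Q=1/2\) and fit \(\log\zeta_{F}\) against \(\log T\). The \(Q(L,T)\) table used below is synthetic, generated from the scaling form of Eq. (7) purely to exercise the code, and is not our data.

```python
import numpy as np

def floquet_loc_length(lengths, q_values, q_star=0.5):
    """System size at which Q(L) first drops below q_star, obtained by
    linear interpolation in log L; used as an estimate of zeta_F."""
    lengths, q_values = np.asarray(lengths, float), np.asarray(q_values, float)
    i = np.nonzero(q_values < q_star)[0][0]
    x0, x1 = np.log(lengths[i - 1]), np.log(lengths[i])
    y0, y1 = q_values[i - 1], q_values[i]
    return np.exp(x0 + (q_star - y0) * (x1 - x0) / (y1 - y0))

def exponent_theta(periods, lengths, q_table):
    """Fit zeta_F ~ T^theta from a table q_table[iT, iL] of pumped charges."""
    zetas = [floquet_loc_length(lengths, row) for row in q_table]
    slope, _ = np.polyfit(np.log(periods), np.log(zetas), 1)
    return slope

# Synthetic data from the scaling form Q = Q(L / T^theta), with theta = 4.
theta_true = 4.0
periods = np.array([15.0, 20.0, 25.0, 30.0])
lengths = np.array([80, 160, 320, 640, 1280, 2560, 5120])
q_table = np.array([[1.0 / (1.0 + (L / T**theta_true) / 3e-3) for L in lengths]
                    for T in periods])
print(f"recovered theta = {exponent_theta(periods, lengths, q_table):.2f} "
      f"(input {theta_true})")
```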
For a more complete picture of the breakdown of the Thouless pump as a function of disorder \(W\), chain length \(L\), and period time \(T\), we show the numerically obtained map of pumped charge \(Q\) in Fig. 3. The colors show \(Q\) values for \(L=320\), and results for other lengths are shown as \(Q=1/4\) isolines. These reveal four qualitatively different regimes of the charge pump. For small disorder, \(W\lesssim 0.5\), the Floquet localization length \(\zeta_{F}\) decreases sharply as the disorder is increased. For \(0.5\lesssim W\lesssim 2\), and period times \(10\lesssim T\lesssim 20\), \(\zeta_{F}\) does not depend much on the disorder strength. For larger disorder, \(2\lesssim W\lesssim W_{c}=3.5\), we have a sharp decrease of the \(\zeta_{F}\) as \(W\) is increased. Finally, above the critical disorder value, \(3.5\lesssim W\), we observe the charge pump breaking down completely.
_Thermodynamic limit._ -- For a deeper understanding of how disorder impacts the Thouless pump, we define an alternate thermodynamic limit: \(L\to\infty\) and \(T\to\infty\) together, with \(L/T^{\theta}\) kept constant. This is needed, e.g., to explore the extended/localized nature of the Floquet states, using the inverse participation ratio [17] (IPR) \(P_{2}=\sum_{m=1}^{L}\left|\psi_{n,m}\right|^{4}\). To show how this limit avoids the problem of the Anderson localization of Floquet states, see Fig. 4. We choose parameters so that for short lengths, \(L\lesssim 100\), the Thouless pump works well, and Floquet states form two well separated bands. If the length is increased at fixed \(T\) (panel a), the two bands merge and the pump begins to break down. In contrast, if \(L/T^{\theta}\) is kept constant (panel b), the
Figure 1: Pumped charge \(Q\) decreases as chain length \(L\) is increased from 80 to 4480, with disorder \(W=2.5\) (average of 20 disorder realizations). Results for various period times \(T\) fall onto each other if rescaling the system size as \(L/T^{\theta}\), with \(\theta=3.95\). Inset: unscaled data.
Figure 3: Colormap: Pumped charge \(Q\), for \(L=320\), for various period times \(T\) and disorders \(W\) (average over 20 disorder realizations). The Thouless pump works well (light area) for small \(W\) and large \(T\), and breaks down both if \(T\) decreases or \(W\) increases. For \(W\gg W_{c}\approx 3.5\), where there is no gap in the instantaneous energy spectrum, the pumped charge is very close to 0 for any \(T\). Black lines: \(T(W)\) curves along which \(Q=1/4\). These show that the pump breaks down easier if the chain is longer.
Figure 2: The scaling exponent \(\theta\), obtained by three numerical approaches for each value of disorder \(W\) (see main text for details). The exponent decreases as \(W\) increases, while it appears to diverge at \(W=0\), consistent with the pumped charge \(Q\) being independent of the system size in the clean case – this latter is shown in the inset. For a detailed description of how the exponent was extracted see the Supplemental Material [27].
spectrum of Floquet states shows no qualitative change up to the largest system sizes that we were able to access numerically. For each band, states in the band center are more extended (lower IPR, decreasing with \(L\)) and states at the band edges are more localized (higher IPR, independent of \(L\)).
_Discussion, conclusions. --_ We found that onsite potential disorder in the periodically driven Rice-Mele model results in a breakdown of the Thouless pump in the \(L\to\infty\), constant-\(T\) limit, due to the Anderson localization of the Floquet states. This can be avoided by taking \(L\to\infty\) and \(T\to\infty\) together, keeping \(L/T^{\theta}\) constant, where \(\theta\) is a disorder-dependent critical exponent. Although we expected \(\theta=2\) based on the corrections to adiabaticity, we found that this is not the case, rather, \(\theta\) depends on disorder strength continuously. It is an interesting open problem to find an analytical explanation of this phenomenon.
Our work is a starting point for a more systematic investigation of the relation between Anderson localization in the Thouless pump and the "levitation and annihilation" of extended states in Chern insulators. By finding a suitable thermodynamic limit, we open the way to studying numerically the conduction in the disordered Thouless pump, as well as the mechanism of its breakdown as disorder is increased. Our preliminary results, shown in the SM [27], suggest that the current-carrying states here have a fractal nature, but it is an open question whether in the thermodynamic limit it is only a single Floquet state per band that carries the current.
From a broader perspective, the Thouless pump is the oldest among a large family of topological pumps, which by now go well beyond Chern insulators. Topological pumping has been associated with a wide range of topological insulators and superconductors [28, 29, 30], and has been proposed to occur also between topological defects [31, 32]. It has been further extended to higher-order topological phases [33, 34, 35], which can lead to dipole or to quadrupole pumps [36]. Very recently, topological pumping between the corners of a two-dimensional sample has been shown to occur both theoretically and experimentally, with the pump working either via bulk states [37, 38], or via edge states [39]. Our work opens a new direction of research in this field, consisting in the study of the thermodynamic limits associated to this large family of pumps and the critical exponents characterizing them.
_Acknowledgements_ We acknowledge support from the National Research, Development and Innovation Office (NKFIH) through the OTKA research grants No. K138606 and 132146, and within the Quantum National Laboratory of Hungary program (Grant No. 2022-2.1.1-NL-2022-00004). ICF acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy through the Wurzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter - _ct.qmat_ (EXC 2147, project-ids 390858490 and 392019).
|
2309.03757 | Compact metric spaces with infinite cop number | Mohar recently adapted the classical game of Cops and Robber from graphs to
metric spaces, thereby unifying previously studied pursuit-evasion games. He
conjectured that finitely many cops can win on any compact geodesic metric
space, and that their number can be upper-bounded in terms of the ranks of the
homology groups when the space is a simplicial pseudo-manifold. We disprove
these conjectures by constructing a metric on $\mathbb{S}^3$ with infinite cop
number. More problems are raised than settled. | Agelos Georgakopoulos | 2023-09-07T15:00:46Z | http://arxiv.org/abs/2309.03757v1 | # Compact metric spaces with infinite cop number
###### Abstract
Mohar recently adapted the classical game of Cops and Robber from graphs to metric spaces, thereby unifying previously studied pursuit-evasion games. He conjectured that finitely many cops can win on any compact geodesic metric space, and that their number can be upper-bounded in terms of the ranks of the homology groups when the space is a simplicial pseudo-manifold. We disprove these conjectures by constructing a metric on \(\mathbb{S}^{3}\) with infinite cop number. More problems are raised than settled.
**Keywords:** cops and robber, geodesic metric space, pursuit-evasion.
**MSC 2020 Classification:** 91A44, 05C57, 91A24, 91A05, 49N75.
## 1 Introduction
The game of Cops and Robber is one of the most studied games on graphs due its implications for the structure of the host graph, e.g. its tree-width [13] or genus [9]. Well-known open problems include Meyniel's conjecture that \(O(\sqrt{n})\) cops can catch the robber on any graph on \(n\) vertices, and Schroder's conjecture that \(g+3\) cops suffice on any graph of genus \(g\)[12], while Mohar conjectures that \(\sqrt{g}\) is the right order of magnitude [9]. See [3] for an extensive survey of the literature.
Mohar [10, 11] introduced a variant of the game taking place on an arbitrary compact geodesic metric space. This is similar to a game studied by Bollobas, Leader and Walters [2]. Other pursuit-evasion games like this taking place on
continuous spaces have a rich literature, with motivation coming from control theory and several practical applications; see [7, 11, 8] and references therein.
Mohar's aforementioned game is a common generalisation of many other pursuit-evasion games previously studied. It is played on an arbitrary compact geodesic metric space \(X\), and involves a number of cops trying to capture, or approach arbitrarily close, a single robber. Cops and robber can travel the same distance \(\tau(n)\) in each step \(n\in\mathbb{N}\), decided by the robber under the sole restriction that \(\sum\tau(n)=\infty\). The _cop number_\(c(X)\) is the minimum number of cops that can win the game on \(X\), i.e. be able to come to arbitrarily small distance to the robber. The precise definitions are given in Section 2.2. Mohar [11] made the following conjecture:
**Conjecture 1.1** ([11]).: _Every game space \(X\) has a finite cop number._
Our first result is a counterexample to this, obtained by glueing together a sequence of graphs with diverging cop numbers, after appropriately re-scaling their edge-lengths. Irsic, Mohar & Wesolek [5] had previously proved that the unit ball in \(\ell^{2}(\mathbb{N})\) has infinite _strong_ cop number, defined analogously but requiring a cop to 'catch' the robber, i.e. coincide with his position.
I perceive this counterexample as good news for the concept of cop number, as it shows that finiteness of \(c(X)\) is a non-trivial property of a metric space \(X\). It could be useful to relate \(c(X)\) with other properties of a metric space, and we pose some problems in this direction in Section 6. It would be particularly interesting to find implications of the finiteness of \(c(X)\) on the structure of \(X\). The best example of such a result I am aware of says that if a single cop of speed strictly less than that of the robber can catch the latter on a finite graph \(G\), then \(G\) is Gromov-hyperbolic [4]. It would be interesting to extend this result to an arbitrary compact geodesic metric space.
Mohar obtained upper bounds on the cop numbers of compact surfaces, which naturally led to another conjecture that we will disprove:
**Conjecture 1.2** ([11]).: _Suppose that \(X\) is an \(n\)-dimensional simplicial pseudomanifold, whose \(ith\) homology group \(H_{i}(X)\) has rank \(r_{i}\) for \(i=1,\ldots,n\). Then \(c(X)=O(n\sqrt{r_{1}+\ldots+r_{n}})\)._
A _simplicial pseudomanifold_ is a simplicial complex in which each simplex is contained in an \(n\)-simplex, and each \((n-1)\)-simplex is contained in at most two \(n\)-simplices.
We provide counterexamples \(X=X_{k},k\in\mathbb{N}\) to Conjecture 1.2, where \(X\) is a simplicial \(3\)-complex homeomorphic to a \(3\)-manifold, in fact to \(\mathbb{S}^{3}\). Thus \(r_{i}=0\) for every \(i\), yet we have \(c(X_{k})>k\). In our counterexamples to both aforementioned conjectures the robber can afford to choose a constant _agility_ function \(\tau\).
These counterexamples contrast the following interesting theorem of Irsic, Mohar & Wesolek [6]: If \(M\) is a compact manifold (of any dimension) with constant curvature \(-1\), then \(c(M)=2\). This suggests that given a compact
topological space \(X\), the metric we put on \(X\) can affect \(c(X)\) more than its topology.
If \(X\) is a simplicial complex, we can endow it with the following natural metric \(d\), called the _simplicial metric_: we start by giving each \(k\)-simplex \(S\) of \(X\) a metric \(d_{S}\) that makes \(S\) isometric with the standard \(k\)-simplex, and let \(d\) be the length metric on \(X\) induced by the \(d_{S}\). The aforementioned counterexamples to Conjecture 1.2 can in fact be metrized like this. In particular, they have a piecewise linear metric. If we drop this restriction of having a simplicial metric, then we can endow \(\mathbb{S}^{n},n\geq 3\) with a metric that makes its cop number infinite (Corollary 5.1).
The following is a modest alternative to Conjecture 1.1:
**Problem 1.3**.: _Does every compact metrizable topological space \(X\) admit a metric \(d\) such that \(c((X,d))\) is finite?_
The following could be a first step towards Problem 1.3:
**Conjecture 1.4**.: _Suppose that \(X\) is a simplicial complex homeomorphic to a compact manifold, endowed with its simplicial metric. Then \(c(X)<\infty\) (and even \(c_{0}(X)<\infty\))._
This might be easy to prove using the well-known idea of guardable sets [1].
Recall that in our counterexamples to Conjecture 1.2 we can achieve arbitrarily large \(c(\mathbb{S}^{3})\) with a PL metric, or infinite \(c(\mathbb{S}^{3})\) with an unrestricted metric. This raises
**Problem 1.5**.: _Does every finite-dimensional, compact, Riemannian manifold \(M\) have finite \(c(M)\)?_
The results of this paper suggest that it is not possible to bound \(c(M)\) here in terms of the volume of \(M\) even if we fix the topology and diameter of \(M\). But it would be interesting to obtain upper bounds on \(c(M)\) involving other parameters, e.g. the curvature, injectivity radius, etc.
## 2 Definitions
### Metric spaces
Let \(X=(X,d)\) be a metric space. The _ball_ of radius \(r\) around \(x\in X\) is the set \(B(x,r):=\{y\in X\mid d(x,y)<r\}\).
Given a topological path \(p:[0,1]\to X\), we define its _length_\(\ell(p)\) to be the supremum of \(\sum_{0\leq i<n}d(p(t_{i}),p(t_{i+1}))\) over all finite sequences \(0=t_{0}<t_{1}<\ldots t_{n}=1\).
We say that \(d\) is a _length metric_ (and that \((X,d)\) is a length space), if \(d(x,y)=\inf\ell(p)\) holds for every \(x,y\in X\), where the infimum ranges over all topological \(x\)-\(y\) paths. When \(X\) is compact, then it is not hard to see that
this infimum is always realised by some \(x\)-\(y\) path, which path we call an \(x\)_-\(y\) geodesic._ In this case, i.e. when every \(x,y\in X\) are connected by a geodesic, we say that \(X\) is a _geodesic metric space_.
If \(d\) is not a length metric, we can still use it to define one when \(X\) is path-connected: define the _(intrinsic) length metric \(d^{\prime}\) induced by \(d\)_ by \(d^{\prime}(x,y):=\inf\ell(p)\), where the infimum ranges over all topological \(x\)-\(y\) paths.
### The game
Let \(X=(X,d)\) be a compact, geodesic metric space, called the _game space_, and let \(k\geq 1\) be an integer. The _Game of Cops and Robber_ on \(X\) with \(k\) cops is defined as follows. The first player, who controls the robber, selects the initial positions for the robber and for each of the \(k\) cops. Formally, this is a pair \((r^{0},c^{0})\in X^{k+1}\), where \(r^{0}\in X\) is the robber's initial position and \(c^{0}=(c^{0}_{1},\ldots,c^{0}_{k})\in X^{k}\) are the initial positions of the cops. The same player selects the _agility function_, which is a map \(\tau:\mathbb{N}\to\mathbb{R}_{+}\) and will specify the distances that the players can travel in each step. The agility function must allow for the total length travelled to be infinite, i.e. \(\sum_{n\geq 1}\tau(n)=\infty\). After the initial position and the agility function are chosen, the game proceeds as a discrete game in consecutive steps. Having made \(n-1\) steps (\(n\geq 1\)), the players have their positions \((r^{n-1},c^{n-1}_{1},\ldots,c^{n-1}_{k})\in X^{k+1}\). In the \(n\)th step, the robber moves to a point \(r^{n}\in X\) at distance at most \(\tau(n)\) from his current position, i.e. \(d(r^{n-1},r^{n})\leq\tau(n)\). The destination \(r^{n}\) is revealed to the second player who is manipulating the cops. Then each cop \(C_{i},i\in[k]\) is moved to a position \(c^{n}_{i}\), also at distance at most \(\tau(n)\) from its current position, i.e. \(d(c^{n-1}_{i},c^{n}_{i})\leq\tau(n)\). The game stops if \(c^{n}_{i}=r^{n}\) for some \(i\in[k]\). In that case, the value of the game is \(0\) and we say that the cops have _caught the robber_. Otherwise the game proceeds. If it never stops, the _value_ of the game is
\[v:=\inf_{n\geq 0}\min_{i\in[k]}d(r^{n},c^{n}_{i}).\]
If the value is \(0\), we say that the cops won the game; otherwise the robber won. Note that the cops can win even if they never catch the robber.
Given a game space \(X\), let \(c(X)\) be the smallest integer \(k\) such that \(k\) cops win the game on \(X\) for every strategy of the robber. If such a \(k\) does not exist, then we set \(c(X)=\infty\). We call \(c(X)\) the _cop number_ of \(X\). Similarly we define the _strong cop number_\(c_{0}(X)\) as the smallest cardinal \(k\) such that \(k\) cops can always catch the robber.
## 3 The counterexamples
We now present our example of a game space \(X\) with infinite cop number, disproving Conjecture 1.1:
**Counterexample 1:** Let \((G_{n})_{n\in\mathbb{N}}\) be a sequence of finite graphs with \(c^{\prime}(G_{n})\to\infty\), as provided e.g. by Aigner and Fromme [1], where \(c^{\prime}(G)\) denotes
the graph-theoretic variant of the cop number of \(G\), i.e. where the players must move to an adjacent vertex or stay still in each step. We can think of \(G_{n}\) as a \(1\)-complex, endowed with its simplicial metric as defined in the introduction. Let \(G^{\prime}_{n}\) be the re-scaling of \(G_{n}\) where each edge is given length \(\frac{1}{n\cdot\text{diam}(G_{n})}\), and note that \(\text{diam}(G^{\prime}_{n})\to 0\). Form a metric space \(X\) by picking a vertex \(w_{n}\) in each \(G^{\prime}_{n}\) and identifying them into one vertex \(w\). Notice that \(X\) is compact, as the \(G^{\prime}_{n}\) converge to \(w\). We claim that \(c(X)=\infty\). Indeed, given \(k\in\mathbb{N}\), the robber can win the game against \(k\) cops as follows. He picks \(n\) such that \(c^{\prime}(G_{n})>k\), and sets his agility function to be the constant equalling the length \(\ell\) of the edges of \(G^{\prime}_{n}\). He then plays according to his winning strategy for the discrete game on \(G_{n}\) as follows.
Whenever any cop \(C_{i}\) leaves \(G^{\prime}_{n}\), the robber pretends that \(C_{i}\) idles at \(w\). (The robber never leaves \(G^{\prime}_{n}\) himself.) Whenever \(C_{i}\) moves to an interior point \(x\) of an edge, the robber pretends that \(C_{i}\) moved to the vertex closest to \(x\), unless \(x\) is the midpoint of an edge. In the latter case, the robber picks a geodesic \(\gamma\) from the previous pretended position of \(C_{i}\) to \(x\). If \(\gamma\) passes through a vertex \(z\), then the robber pretends that \(C_{i}\) moved to \(z\). Otherwise, he pretends that \(C_{i}\) stays at his previous pretended position. If we start the game with pretended positions coinciding with the actual ones, it is straightforward to check by induction on the number of steps that
\[\begin{array}{l}\text{the pretended position $p_{i}^{n}$ of $C_{i}$ at step $n$ is within distance at most $\ell/2$}\\ \text{from the actual position,}\end{array} \tag{1}\]
and therefore that \(p_{i}^{n}\) is in the closed neighbourhood of \(p_{i}^{n-1}\) in \(G_{n}\). Thus moving from one pretended position to the next is a legal move in the discrete game.
Therefore, by playing the game on \(G^{\prime}_{n}\) according to his strategy for \(G_{n}\) based on the pretended cop positions, the robber will never get caught, and in fact will avoid ever being approached to distance less than \(\ell/2\), by (1).
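For concreteness, the rounding rule used above can be phrased as a short Python function; this is only an illustration of the rule (with the cop's position given by an edge of \(G^{\prime}_{n}\) and its distance from one endpoint), not part of the proof.

```python
def pretended_position(u, v, s, ell, prev, dist):
    """Rounding rule from Counterexample 1: the cop sits on edge {u, v} at
    distance s from u (0 <= s <= ell, with ell the edge length), `prev` is
    its previous pretended vertex, and `dist` gives distances in G'_n.
    Returns the robber's new pretended vertex for this cop."""
    if s < ell / 2:            # strictly closer to u
        return u
    if s > ell / 2:            # strictly closer to v
        return v
    # Exactly at the midpoint: follow a geodesic from `prev` to the
    # midpoint; if it enters the edge through an endpoint other than
    # `prev`, pretend the cop moved to that vertex, otherwise stay put.
    via_u = dist(prev, u) + ell / 2
    via_v = dist(prev, v) + ell / 2
    if via_u <= via_v and prev != u:
        return u
    if via_v < via_u and prev != v:
        return v
    return prev

# Tiny usage example: a single edge of length 1, previous pretended vertex u,
# cop now standing exactly at the midpoint (the pretended position stays at u).
dist = lambda a, b: 0.0 if a == b else 1.0
print(pretended_position("u", "v", 0.5, 1.0, "u", dist))
```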
Our counterexample disproving Conjecture 1.2 is more involved, and needs some preparation. Given a game space \(X\), and a compact subspace \(S\subset X\), we will show that it is possible to 'cap \(S\) off' by attaching to it a metric space \(\text{hat}(S)\) that does not decrease \(c(X)\) and such that \(S\) deformation-retracts to a point through \(\text{hat}(S)\). For example, when \(S\) is homeomorphic to \(\mathbb{S}^{1}\), then \(\text{hat}(S)\) is a topological disc with boundary \(S\). More generally, \(\text{hat}(S)\) is homeomorphic to the cone over \(S\), but it is important to endow it with the right metric. We define it without reference to our game, or the ambient space \(X\), as follows. We will only make use of this definition for \(S\) homeomorphic to \(\mathbb{S}^{1}\) or \(\mathbb{S}^{2}\) later on, so the reader will lose nothing by assuming that this is the case.
**Definition 3.1**.: Given a compact metric space \((S,d_{S})\), and \(h\in\mathbb{R}_{+}\), we define the _\(h\)-hat_\(\text{hat}(h,S)\)_over_\(S\) as the following metric space. We form \(\text{hat}(h,S)\) out of two parts, the _cylinder_ and the _top_ (Figure 1 below might be helpful). The cylinder \(\text{cyl}(h,S)\) is the cartesian product \(S\times[0,h]\), metrized by the corresponding \(\ell_{1}\) metric, i.e. \(d_{1}((s,t),(s^{\prime},t^{\prime})):=d_{S}(s,s^{\prime})+|t-t^{\prime}|\) for any \(s,s^{\prime}\in S\) and
\(t,t^{\prime}\in[0,h]\). The top \(\operatorname{top}(h,S)\) is obtained from a cylinder \(S\times[0,h+\operatorname{diam}(S))\) after adding a 'cone point' \(z\), and endowing it with a metric \(d_{2}\) such that \(d_{2}((s,t),z)\) converges to \(0\) as \(t\to h+\operatorname{diam}(S)\) and \(d_{2}((s,0),(s^{\prime},0))=d_{S}(s,s^{\prime})\) for every \(s,s^{\prime}\in S\). This is easy to achieve with a definition similar to \(d_{1}\) after appropriate scaling.
We obtain \(\operatorname{hat}(h,S)\) by identifying the top layer \(S\times\{h\}\) of \(\operatorname{cyl}(h,S)\) with the bottom layer \(S\times\{0\}\) of \(\operatorname{top}(h,S)\). We endow \(\operatorname{hat}(h,S)\) with the length metric \(d^{\prime}\)_induced_ by \(d_{1}\) and \(d_{2}\), i.e. we let \(d^{\prime}(x,y)\) be the infimum of the lengths of all \(x\)-\(y\) paths \(p\), where the length of a subpath of \(p\) contained in \(\operatorname{cyl}(h,S)\), respectively \(\operatorname{top}(h,S)\), is measured with respect to \(d_{1}\) (resp. \(d_{2}\)). Note that \(d^{\prime}(s,s^{\prime})=d_{S}(s,s^{\prime})\) for every \(s,s^{\prime}\in S\), and \(\operatorname{hat}(h,S)\) is compact.
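As an illustration of these metrics (this particular choice of \(S\) is not used below), suppose that \(S\) is a circle of circumference \(1\) with its arc-length metric, and let \(s,s^{\prime}\in S\) be antipodal. Then within the cylinder,

\[d_{1}((s,t),(s^{\prime},t^{\prime}))=d_{S}(s,s^{\prime})+|t-t^{\prime}|=\tfrac{1}{2}+|t-t^{\prime}|,\]

whereas in \(\operatorname{top}(h,S)\) the same horizontal displacement becomes arbitrarily cheap near the cone point, since \(d_{2}((s,t),(s^{\prime},t))\leq d_{2}((s,t),z)+d_{2}(z,(s^{\prime},t))\to 0\) as \(t\to h+\operatorname{diam}(S)\); this is what turns \(\operatorname{hat}(h,S)\) into a cone over \(S\) rather than a longer cylinder.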
The following is immediate from the construction of \(\operatorname{hat}(h,S)\):
**Observation 3.2**.: _If \(S\) is homeomorphic to the \(n\)-sphere \(\mathbb{S}^{n},n\geq 1\), then \(\operatorname{hat}(h,S)\) is homeomorphic to the \((n+1)\)-disc \(\{x\in\mathbb{R}^{n+1}\mid d(x,(0,\dots,0))\leq 1\}\)._
**Definition 3.3**.: _Given a game space \((X,d)\) and a compact subspace \(S\subseteq X\), we let \(X_{h,S}\) be the metric space obtained from \(X\cup\operatorname{hat}(h,S)\) by identifying \(S\subseteq X\) with the bottom layer \(S\times\{0\}\) of \(\operatorname{cyl}(h,S)\). We endow \(X_{h,S}\) with the length metric induced by \(d\) and \(d^{\prime}\) as above._
Note that \(X_{h,S}\) is still compact, and that its metric is an extension of \(d\). Using the compactness, it follows that \(X_{h,S}\) is a geodesic metric space, i.e. a game space. A similar, but much more restricted, construction was used by Mohar [11, §8.1].
**Theorem 3.4**.: _Let \(X\) be a game space and let \(S\subseteq X\) be compact. Then there is \(h\in\mathbb{R}_{+}\) such that \(c(X_{h,S})\geq c(X)\)._
Before proving this let us see how it can be used to disprove Mohar's Conjecture 1.2:
**Counterexample 2:** Given \(k\in\mathbb{N}\), we let \(\mathcal{S}_{k}\) be an appropriately metrized surface with \(c(\mathcal{S}_{k})>k\); such surfaces have been constructed by Mohar [11]. Applying Theorem 3.4 we will transform \(\mathcal{S}_{k}\) into a homeomorph \(X\) of \(\mathbb{S}^{3}\) with \(c(X)\geq c(\mathcal{S}_{k})\), which disproves Conjecture 1.2.
For this, embed \(\mathcal{S}_{k}\) topologically into \(\mathbb{S}^{3}\) in a standard way, i.e. so that the image --which we still denote by \(\mathcal{S}_{k}\)-- separates \(\mathbb{S}^{3}\) into two components \(C_{1},C_{2}\), each homeomorphic to a handlebody. Let \(T^{\prime}\) be a triangulation of \(\mathcal{S}_{k}\), and let \(T\) be a triangulation of \(\mathbb{S}^{3}\) extending \(T^{\prime}\) such that all \(0\)-cells of \(T\) lie on \(\mathcal{S}_{k}\).
By repeatedly applying Theorem 3.4 to the \(1\)-cells, then the \(2\)-cells, and then the \(3\)-cells of \(T\backslash T^{\prime}\), we will produce a homeomorph of \(\mathbb{S}^{3}\), while keeping the cop number greater than \(k\) throughout. To make this precise, pick a \(1\)-cell \(e\) of \(T\backslash T^{\prime}\), with end-vertices \(x,y\) say, and apply Theorem 3.4 with \(S=\{x,y\}\) and a large enough \(h\). Note that \(S\) is homeomorphic to \(\mathbb{S}^{0}\), and therefore \(\operatorname{hat}(h,S)\) is homeomorphic to \(e\) by (the \(n=0\) case of) Observation 3.2.
By repeatedly applying Theorem 3.4 to the remaining \(1\)-cells of \(T\backslash T^{\prime}\), we obtain a homeomorph \(\mathcal{S}^{1}\) of the union of \(\mathcal{S}_{k}\) with the \(1\)-skeleton of \(T\), with cop
number at least that of \(\mathcal{S}_{k}\). After this, we go through the 2-cells of \(T\backslash T^{\prime}\), and proceed similarly: for each such 2-cell \(f\), we apply Theorem 3.4 with \(S\) being the boundary of \(f\), which is homeomorphic to \(\mathbb{S}^{1}\), and therefore \(\operatorname{hat}(h,\partial f)\) is homeomorphic to a disc. This yields a homeomorph \(\mathcal{S}^{2}\) of the union of \(\mathcal{S}_{k}\) with the 2-skeleton of \(T\). Finally, we proceed similarly with the 3-cells of \(T\), to obtain a homeomorph of \(\mathbb{S}^{3}\) with cop number at least that of \(\mathcal{S}_{k}\).
Conjecture 1.2 does not explicitly clarify whether we are allowed to put an arbitrary geodesic metric on \(X\) (which seems to me to be the intended meaning), or whether we must use the metric induced by that of the standard simplex. Without any restriction on the metric, we can even achieve infinite cop number; see Section 5. But our construction can be easily modified to yield counterexamples even with the aforementioned restriction.

For this, we start by constructing our own \(\mathcal{S}_{k}\) as follows. Let \(G_{k}\) be a graph with graph-theoretic cop number \(c^{\prime}(G_{k})\) greater than \(k\). Mohar [11, Theorem 8] proved that we still have \(c(G_{k})>k\) in our metric-space version of the game when we think of \(G_{k}\) as a 1-complex with each 1-cell isometric to the real interval \([0,1]\) (and this holds if we force the constant agility function \(\tau\equiv 1\)). Let \(f\) be an embedding of \(G_{k}\) into an orientable surface \(\mathcal{S}^{\prime}_{k}\) of minimal possible genus. A classical result of Youngs [14] says that each face \(F\) of \(f\) must be homeomorphic to an open disc. We may assume that the closure \(\overline{F}\) of \(F\) is homeomorphic to a closed disc, because otherwise we can embed a few edges inside \(F\), with end-vertices in \(\partial F\), to separate \(\overline{F}\) into closed discs. By Theorem 3.4, if we subdivide these new edges into long enough paths, then \(c(G^{\prime}_{k})\geq c(G_{k})\) holds for the resulting graph \(G^{\prime}_{k}\).

We then apply Theorem 3.4 to each face-boundary \(S\) of the resulting embedding of \(G^{\prime}_{k}\). We thereby modify the construction of \(\operatorname{hat}(h,S)\), to make it isometric with a simplicial complex each simplex of which bears the standard metric. This is possible by cutting \(\operatorname{hat}(h,S)\) into small enough 2-simplices. Here we use the fact that the cylinder \(S\times[0,r]\), where \(S\) is a circle of integer length \(n\) and \(r\) is the height of an equilateral triangle of side-length 1, can be tiled into \(2n\) equilateral triangles. Notice that we are free to choose \(h\) to be a multiple of \(r\) in any application of Theorem 3.4.
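To illustrate the tiling fact just quoted, take \(n=3\): then \(S\) is a circle of length 3, \(r=\frac{\sqrt{3}}{2}\), and we give the cylinder \(S\times[0,r]\) the flat product metric. Parametrize \(S\) by \([0,3)\), place vertices at the points \(0,1,2\) of the bottom circle \(S\times\{0\}\) and at \(\frac{1}{2},\frac{3}{2},\frac{5}{2}\) of the top circle \(S\times\{r\}\), and join each vertex to the two nearest vertices of the opposite circle. Each slanted edge has length

\[\sqrt{\left(\tfrac{1}{2}\right)^{2}+\left(\tfrac{\sqrt{3}}{2}\right)^{2}}=1,\]

so the cylinder decomposes into \(2n=6\) equilateral triangles of side-length 1, and stacking such bands tiles \(S\times[0,mr]\) for any \(m\in\mathbb{N}\).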
After doing so for every face, we obtain a simplicial complex homeomorphic to \(\mathcal{S}^{\prime}_{k}\), with the desired kind of metric, i.e. a simplicial metric.
We then continue with our construction from above, again modifying the construction of \(\operatorname{hat}(h,S)\) to make it isometric with a simplicial complex; this is possible by Remark 3 below. This completes our counterexamples to Conjecture 1.2.
**Remark 1:** We can continue applying Theorem 3.4 in a similar fashion, to obtain a homeomorph \(X_{n}\) of \(\mathbb{S}^{n},n>3\) with arbitrarily large \(c(X_{n})\): note that \(\mathbb{S}^{n-1}\) separates \(\mathbb{S}^{n}\) into two \(n\)-discs, and so applying Theorem 3.4 twice with \(S=X_{n-1}\) we obtain a homeomorph \(X_{n}\) of \(\mathbb{S}^{n}\) with \(c(X_{n})\geq c(X_{n-1})\).
**Remark 2:** In Counterexample 2 the robber can afford to choose a constant agility function \(\tau\). This is because in each application of Theorem 3.4 the robber just applies the same \(\tau\) on \(X_{h,S}\) that he was using for \(X\), and he starts with \(\tau\equiv 1\) on \(G_{k}\).
## 4 Proof of Theorem 3.4
We now prove our main technical tool, Theorem 3.4.
Let \(k<c(X)\), and let \(\tau\) be an agility function that the robber can use to win against \(k\) cops on \(X\). Mohar [10] proves that it is always in favour of the robber to use decreasing agility functions, and so we may assume that \(\tau\) is decreasing, and in particular \(M:=\max_{n\in\mathbb{N}}\tau(n)\) exists and is finite. We set \(h:=M+\operatorname{diam}(X)\) and construct \(X^{\prime}:=X_{h,S}\) as in Definition 3.3 (any larger \(h\) would do as well).
Let us first introduce some notation. Each point \(x\) in \(\operatorname{hat}(h,S)\) except for the cone point \(z\) comes with two coordinates \((s,t)\). We define its _trace_ \(\pi(x):=s\), and its _height_ \(hei(x)\) to be \(t\) if \(x\) lies in \(\operatorname{cyl}(h,S)\), and \(h+t\) if \(x\) lies in \(\operatorname{top}(h,S)\). We extend \(\pi\) and \(hei\) to \(X\) by letting \(\pi(x)=x\) and \(hei(x)=0\) there, and we also let \(hei(z):=2h\) and \(\pi(z)=s\) for an arbitrarily chosen \(s\in X\). Easily,
\[d(x,\pi(x))=hei(x)\text{ for every }x\in X\cup\operatorname{cyl}(h,S). \tag{2}\]
Each topological path \(p:[0,1]\to X\cup\operatorname{cyl}(h,S)\) (which is as much generality as we shall need) gives rise to a path \(\pi\circ p\) in \(X\). It is straightforward to check that
\[\ell(\pi\circ p)\leq\ell(p). \tag{3}\]
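Indeed, writing \(d_{S}\) for the restriction of \(d\) to \(S\), which is the metric over which \(\operatorname{hat}(h,S)\) is formed in Definition 3.3, the trace map does not increase distances on the cylinder:

\[d_{1}((s,t),(s^{\prime},t^{\prime}))=d_{S}(s,s^{\prime})+|t-t^{\prime}|\geq d(\pi((s,t)),\pi((s^{\prime},t^{\prime}))),\]

and \(\pi\) is the identity on \(X\); so no subpath of \(p\) becomes longer under \(\pi\), and (3) follows by summing over subpaths.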
We will modify the robber's winning strategy against \(k\) cops on \(X\) into such a strategy for \(X^{\prime}\); this implies that \(k<c(X^{\prime})\), proving our statement.
For this, we will play a variant of the game, where in addition to the \(k\) cops \(C_{i}\) we have \(k\)_shadow cops_\(C^{\prime}_{i}\), the positions \(s_{i}^{n}\) of which are chosen by a third player, the (robber's) _accomplice_. The moves of the accomplice can be thought of as part of the robber's strategy. The shadow cops will help us prove that the (true) cops win nothing by placing themselves in \(X^{\prime}\backslash X\). The robber and shadow cops will only move within \(X\).
This game evolves as follows. The robber initiates the game following his winning strategy against \(k\) cops on \(X\), with the agility function \(\tau\) chosen above, placing each shadow cop \(C^{\prime}_{i}\) at the position \(s_{i}^{0}:=\pi(c_{i}^{0})\). From then on, the robber disregards the true cops \(C_{i}\), and pretends to be playing against the shadow cops \(C^{\prime}_{i}\), playing according to his aforementioned strategy.1 (Thus the robber ignores \(X^{\prime}\backslash X\).)
Footnote 1: There is a well-known cop strategy of Aigner and Fromme [1] for guarding a shortest path in a graph, and we have stolen, and adapted, this strategy on behalf of the robber.
The cops \(C_{i}\) disregard the shadow cops, and play on \(X^{\prime}\) according to their favourite strategy for trying to win against the robber.
After each cop move, from \(c_{i}^{n-1}\) to \(c_{i}^{n}\) say, the accomplice moves \(C^{\prime}_{i}\) as follows:
1. if \(c_{i}^{n}\in X\cup\operatorname{cyl}(h,S)\), then move within \(X\) as close to \(C_{i}\) as possible, i.e. let \(s_{i}^{n}\) be a point in the closed ball \(\overline{B(s_{i}^{n-1},\tau(n))}\) in \(X\) minimising the distance to \(\pi(c_{i}^{n})\) (it might be that \(s_{i}^{n}\in X\backslash S\));
2. if \(c_{i}^{n}\in\operatorname{top}(h,S)\), then sit, i.e. let \(s_{i}^{n}=s_{i}^{n-1}\).
This completes the description of the game play. We claim that
if \(c_{i}^{n}\in X\), then

\[s_{i}^{n}=c_{i}^{n} \tag{4}\]

holds for every \(i\in[k]\) and \(n\in\mathbb{N}\). This claim already implies that the true cops cannot catch the robber, i.e. that \(c_{0}(X^{\prime})>k\), because the robber follows a strategy that guarantees he is not caught by the shadow cops. In order to show our stronger statement that \(c(X^{\prime})>k\), we strengthen claim (4) as follows:

\[d(s_{i}^{n},\pi(c_{i}^{n}))\leq hei(c_{i}^{n})\text{ holds for every }i\in[k]\text{ and }n\in\mathbb{N}. \tag{5}\]
To see that this implies \(c(X^{\prime})>k\), notice first that as the shadow cops are restricted to \(X\), and the robber follows his winning strategy for \(X\) against them, there is \(\epsilon>0\) such that no shadow cop ever comes \(\epsilon\)-close to the robber, i.e. \(d(s_{i}^{n},r^{n})>\epsilon\); we may assume that \(\epsilon<h\). If a true cop comes \(\epsilon/3\)-close to the robber, i.e. \(d(c_{i}^{n},r^{n})\leq\epsilon/3\), then \(d(c_{i}^{n},X)\leq\epsilon/3<h\), so \(c_{i}^{n}\in X\cup\operatorname{cyl}(h,S)\) and \(hei(c_{i}^{n})=d(c_{i}^{n},X)\leq\epsilon/3\). But then (5) yields \(d(s_{i}^{n},\pi(c_{i}^{n}))\leq\epsilon/3\), which combined with \(d(c_{i}^{n},\pi(c_{i}^{n}))=hei(c_{i}^{n})\) from (2), and the triangle inequality, contradicts that \(d(s_{i}^{n},r^{n})>\epsilon\) (Figure 1). This means that the robber wins against the true cops in \(X^{\prime}\).
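Spelled out, the contradiction just obtained is the chain

\[\epsilon<d(s_{i}^{n},r^{n})\leq d(s_{i}^{n},\pi(c_{i}^{n}))+d(\pi(c_{i}^{n}),c_{i}^{n})+d(c_{i}^{n},r^{n})\leq\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}=\epsilon.\]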
Thus it only remains to prove (5), and we do so for each \(i\in[k]\) by induction on \(n\). It is true for \(n=0\), as \(s_{i}^{0}=\pi(c_{i}^{0})\) by the choice of initial positions for the shadow cops, so that \(d(s_{i}^{0},\pi(c_{i}^{0}))=0\leq hei(c_{i}^{0})\).
Assuming (5) holds for \(n-1\), we prove it for \(n\) by distinguishing the following cases.
1. \(c_{i}^{n-1}\in X\). In this case our inductive hypothesis yields \(s_{i}^{n-1}=c_{i}^{n-1}\). Let \(p\) be a \(c_{i}^{n-1}\)-\(c_{i}^{n}\) geodesic in \(X^{\prime}\). By (3), we have \(\ell(\pi\circ p)\leq\ell(p)\), and therefore \(d(s_{i}^{n-1},\pi(c_{i}^{n}))\leq d(c_{i}^{n-1},c_{i}^{n})\) because \(\pi(c_{i}^{n})\) is the endpoint of \(\pi\circ p\). Thus
it would be an allowable move for \(C_{i}^{\prime}\) to move to \(\pi(c_{i}^{n})\), and therefore \(C_{i}^{\prime}\) chose \(s_{i}^{n}=\pi(c_{i}^{n})\), and so \(d(s_{i}^{n},\pi(c_{i}^{n}))=0\). Here we used the fact that \(p\), and in particular \(c_{i}^{n}\), lies in \(X\cup\operatorname{cyl}(h,S)\): indeed \(c_{i}^{n-1}\in X\), \(\ell(p)\leq\tau(n)\leq M<h\), and \(\operatorname{top}(h,S)\) is at distance at least \(h\) from \(X\).
2. \(c_{i}^{n-1}\in\operatorname{top}(h,S)\) or \(c_{i}^{n}\in\operatorname{top}(h,S)\). In the former case we have \(hei(c_{i}^{n-1})\geq h\geq\tau(n)+\operatorname{diam}(X)\). This implies \(hei(c_{i}^{n})\geq\operatorname{diam}(X)\), as a player cannot decrease their height by more than the distance they are allowed to travel in a step. The latter inequality also holds trivially if \(c_{i}^{n}\in\operatorname{top}(h,S)\). Thus (5) holds since \(s_{i}^{n},\pi(c_{i}^{n})\in X\).
3. \(c_{i}^{n-1}\in\operatorname{cyl}(h,S)\) and \(c_{i}^{n}\in X\). Let again \(p\) be a \(c_{i}^{n-1}\)-\(c_{i}^{n}\) geodesic in \(X^{\prime}\), and let \(x\) be the first point of \(p\) in \(X\). Let \(p_{1},p_{2}\) be the two subpaths into which \(x\) separates \(p\). We claim that \(d(s_{i}^{n-1},c_{i}^{n})\leq\ell(p)\leq\tau(n)\), and so \(C_{i}^{\prime}\) must choose \(s_{i}^{n}=c_{i}^{n}\). To see this, note first that \(p\) avoids \(\operatorname{top}(h,S)\), because the latter is at distance more than \(\tau(n)\) from the endpoint of \(p\). This implies that \[d(c_{i}^{n-1},x)\geq d_{1}(c_{i}^{n-1},x)=hei(c_{i}^{n-1})+d(\pi(c_{i}^{n-1}),x)\] by the definitions of our metrics. By the inductive hypothesis, we have \(hei(c_{i}^{n-1})\geq d(s_{i}^{n-1},\pi(c_{i}^{n-1}))\), and combining this with the above inequality we obtain \(d(c_{i}^{n-1},x)\geq d(s_{i}^{n-1},\pi(c_{i}^{n-1}))+d(\pi(c_{i}^{n-1}),x)\). By the triangle inequality we have \(d(s_{i}^{n-1},c_{i}^{n})\leq d(s_{i}^{n-1},\pi(c_{i}^{n-1}))+d(\pi(c_{i}^{n-1 }),x)+d(x,c_{i}^{n})\), and combining with the previous inequality we obtain \(d(s_{i}^{n-1},c_{i}^{n})\leq d(c_{i}^{n-1},x)+d(x,c_{i}^{n})\). Since \(p\) is a geodesic, the latter sum equals \(\ell(p)\), proving our claim. Thus (5) holds in this case as well.
4. \(c_{i}^{n-1},c_{i}^{n}\in\operatorname{cyl}(h,S)\). In this case we have \[d(c_{i}^{n-1},c_{i}^{n})\geq d_{1}(c_{i}^{n-1},c_{i}^{n})=|hei(c_{i}^{n-1})-hei (c_{i}^{n})|+d(\pi(c_{i}^{n-1}),\pi(c_{i}^{n})).\] In other words, the distance that \(C_{i}\) travelled in step \(n\) is at least the sum of the differences that his move has made to each of the two sides of inequality (5). Thus \(C_{i}^{\prime}\) can choose an \(s_{i}^{n-1}\)-\(\pi(c_{i}^{n})\) geodesic \(g\), and move to a point \(s_{i}^{n}\) on \(g\) at distance \(d(c_{i}^{n-1},c_{i}^{n})\) from \(s_{i}^{n-1}\) (or to \(\pi(c_{i}^{n})\) itself if \(g\) is shorter than that), to ensure that (5) remains valid.
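For completeness, here is the estimate behind the last sentence, in the case where \(g\) has length at least \(D:=d(c_{i}^{n-1},c_{i}^{n})\) (if \(g\) is shorter, then \(C_{i}^{\prime}\) reaches \(\pi(c_{i}^{n})\) and (5) holds trivially); note that the rule by which the accomplice actually moves \(C_{i}^{\prime}\) brings it at least as close to \(\pi(c_{i}^{n})\) as this choice does. We have

\[d(s_{i}^{n},\pi(c_{i}^{n}))\leq d(s_{i}^{n-1},\pi(c_{i}^{n}))-D\leq d(s_{i}^{n-1},\pi(c_{i}^{n-1}))+d(\pi(c_{i}^{n-1}),\pi(c_{i}^{n}))-D\leq hei(c_{i}^{n-1})-|hei(c_{i}^{n-1})-hei(c_{i}^{n})|\leq hei(c_{i}^{n}),\]

using the triangle inequality in the second step, and the inductive hypothesis together with the displayed lower bound on \(D\) in the third.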
**Remark 3:** The conclusion of Theorem 3.4 remains valid as is if we 'inflate' our metric \(d\) on \(\operatorname{hat}(h,S)\), i.e. if we replace it with any metric \(d^{+}\) on \(X^{\prime}\) with the following properties: (A) \(d^{+}(x,y)\geq d(x,y)\) for every \(x,y\in X^{\prime}\), (B) \(d^{+}(x,y)=d(x,y)\) for every \(x,y\in X\), and (C) \(d^{+}\) generates the same topology as \(d\). Indeed, such a change of metric restricts the moves of the true cops, while the robber and accomplice can ignore the change of metric since they are moving within \(X\) only.
## 5 A common counterexample
We now go one step further from Counterexamples 1 & 2, and produce a game space homeomorphic to \(\mathbb{S}^{3}\) with infinite cop number, thus obtaining a strong simultaneous counterexample to Conjectures 1.1 and 1.2:
**Corollary 5.1**.: _There is a game space \(X\) homeomorphic to \(\mathbb{S}^{3}\) with \(c(X)=\infty\)._
Proof (sketch).: We modify the construction in Counterexample 2, to produce game spaces \(X_{k}\) homeomorphic to the closed 3-disc instead of \(\mathbb{S}^{3}\), by letting \(T\) be a triangulation of a portion of \(\mathbb{S}^{3}\) instead of all of it. By rescaling the metric of \(X_{k}\), we may assume that \(\lim_{k\to\infty}\operatorname{diam}(X_{k})=0\).
Embed all \(X_{k},k\in\mathbb{N}\) topologically into \(\mathbb{S}^{3}\), so that the images are disjoint, and each \(X_{k}\) intersects the 'equator' \(\mathbb{S}^{1}\subset\mathbb{S}^{3}\) along a subarc. Let us arrange these images so that they have a single accumulation point on \(\mathbb{S}^{1}\). Let \(U\) be the union of \(\bigcup_{k\in\mathbb{N}}X_{k}\) with the points of \(\mathbb{S}^{1}\) that do not lie in the image of \(\bigcup_{k\in\mathbb{N}}X_{k}\). It is easy to see that the intrinsic length metric \(d\) on \(U\) induced by the metrics of \(\mathbb{S}^{1}\) and the \(X_{k}\) is a geodesic metric, and that \(d\) coincides with the original metric of \(X_{k}\) when restricted to that subspace. Therefore, following the arguments of the proof of Theorem 3.4, with \(S\) being a pair of points \(\partial X_{k}\cap\mathbb{S}^{1}\), we obtain \(c(U)=\infty\); indeed, the robber can choose to stay inside one of the \(X_{k}\), chosen after the number of cops is fixed. Here, we are assuming the diameter of each \(X_{k}\) to be much smaller than that of \(\mathbb{S}^{1}\).
It is easy to find two 2-discs \(S_{1},S_{2}\) in \(\mathbb{S}^{3}\), bounded by circles \(C_{1},C_{2}\), such that the complement of \(S_{1}\cup S_{2}\cup U\) consists of two components \(F_{1},F_{2}\), each homeomorphic to a 3-disc. Indeed, we can obtain \(C_{1}\) from \(\mathbb{S}^{1}\), by replacing the subarc inside the image of each \(X_{k}\) with an appropriate arc contained in the boundary of \(X_{k}\) with the same end-points. Applying Theorem 3.4 four times starting with \(U\), and with \(S\) being \(C_{1},C_{2},\partial F_{1}\) and \(\partial F_{2}\), in that order, we end up with the desired game space \(X\).
## 6 Further problems
We conclude with some open problems that seek to make connections between the cop number of a space and some of its classical topological and metric properties.
The _doubling constant_ of a metric space \((X,d)\) is the minimal \(k\in\mathbb{N}\cup\{\infty\}\) such that for all \(x\in X\) and \(r>0\), the ball \(B(x,r)\) can be covered by at most \(k\) balls of radius \(r/2\).
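For instance (a standard example, not needed in what follows), the real line with its usual metric has doubling constant at most 3: for any \(x\in\mathbb{R}\) and \(r>0\),

\[B(x,r)=(x-r,x+r)\subseteq B(x-\tfrac{r}{2},\tfrac{r}{2})\cup B(x,\tfrac{r}{2})\cup B(x+\tfrac{r}{2},\tfrac{r}{2}).\]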
**Conjecture 6.1**.: _Let \(X\) be a game space with finite doubling constant. Then \(c(X)<\infty\)._
**Problem 6.2**.: _Let \(X\) be a compact topological space such that \(c((X,d))<\infty\) holds for every geodesic metric \(d\) compatible with the topology of \(X\). Must \(X\) have topological dimension at most 2?_
Recall that Iršič, Mohar & Wesolek [5] proved that the unit ball \(B\) in \(\ell^{2}(\mathbb{N})\) has \(c_{0}(B)=\infty\) but \(c(B)=1\). This motivates
**Problem 6.3**.: _Suppose a game space \((X,d)\) satisfies \(c_{0}(X)=\infty\) but \(c(X)<\infty\). Must \(X\) have infinite topological dimension?_
Call a metric space \((X,d)\)_homogeneous_, if for every \(x,y\in X\) there is an isometry \(i:X\to X\) with \(i(x)=y\).
**Conjecture 6.4**.: _Let \(X\) be a homogeneous game space. Then \(c(X)<\infty\)._
## Acknowledgement
I thank George Kontogeorgiou for a discussion that triggered Section 5.